Machine Learning, the usual Bat and deflationary epistemology

What does it feel like to be a mechanical Batman?
Original image by Andrew Martin [CC0 1.0].

This is a quick, semi-serious follow-up to my first Twitter poll. In a rare moment of impulsivity, I recently posted a deliberately awkward question on Twitter. A few respondents did notice that something was amiss, and indeed, an explanation is due, hence this post. The subject demands a lengthier treatment, which is in my plans; for today, I’m hoping that what follows will not sound entirely ungrounded.

I rarely act impulsively, but maybe I should do it more often? Predictably, my poll did not collect many votes; however, I could not have hoped for better results: adding my own vote, we get a perfect 50-50 split. There appears to be no agreement on the matter, so perhaps the question was worth asking…

The Question itself

Here is the original tweet:

Why did I pose the question?

In extreme synthesis: I guessed the reactions would be thought-provoking, for me at least.

I wasn’t wrong. I was also hoping not to find too much agreement, as a split opinion in this case would give me a chance to propose some additional lucubrations.

My interest can be summarised as follows:

  1. To my eyes, the question can only make proper sense if one is aware of two distinct debates. In philosophy of mind, most discussions revolve around foundational questions such as: how does phenomenal experience get generated? Is it reducible to physical mechanisms?
    On the other hand, as real-life applications of Artificial Intelligence become quasi-ubiquitous, other questions are becoming important and even urgent: there is a pressing demand to make machine-learning algorithms auditable, accountable and/or generally “explainable”. Thus, I was curious to see what my Twitter bubble would make of my mix’n’match provocation. I think I left out the “huh?” option in order to force people to try harder and see if they could figure out what the connection might be. In hindsight, it perhaps wasn’t a bad choice.
  2. I was also being a bit mischievous, because by forcing people to double-check their reaction (by not allowing them to answer “huh?”) I sort-of forced some to make an incorrect choice. The only way I can see to make sense of the question is by recognising (at least at the level of intuition) that there is a connection. If someone saw no connection at all, then the “correct” answer would indeed have been “huh? question is malformed, can’t figure out why it’s worth asking”. Thus, knowing that within my Twitter reach there are plenty of very clever people, I was semi-consciously curious to see if anyone would call me out. At least two did, to my great satisfaction! (With my apologies.)
  3. Both debates (point 1 above) are, IMVHO, informed by mistakes. I wanted to explore the intuition that these mistakes share a common root. Which then immediately becomes the reason why my answer is “No, it isn’t a coincidence“.

This leads me to the second part of this brief response: it’s time to spill the beans and write down what I think.

My answer: no, it isn’t a coincidence.

My position has to do with what it means to know/understand something and how my own deflationary epistemology allows me to make sense of a good number of problems. I’m pointing at some sort of illusionism about knowledge (as in: “knowledge isn’t what we think it is“). I’m not planning to fully unpack the above in here, but I will use my question to explain a little.
[Note. I will do so from one angle only: a full exploration requires showing how the same manoeuvre works along many different paths and leads to more or less the same conclusions.]

The route I’ll pick today is about the mistakes I mentioned above. In AI (or better: Machine Learning – ML), (informed) people are both rightly and mistakenly(!!!) asking us to work towards producing ML systems that can be “explained”. Specifically, because of the enormous importance that ML-based decision-making is acquiring in our society, (informed) people want the ML algorithms to be auditable. When a given machine makes a non-trivial choice, we want to be able to know “why did this system pick A and not B?”. The reason to demand such “transparent” ML systems is obvious, important and entirely correct: after all, we *need* to be able to detect and correct mistakes.

However, I fear that it is impossible to fully satisfy this demand. This has to do with reduction and with our epistemological limits. Starting with the latter: if the question is “why did this system pick A and not B?”, the set of what could count as acceptable answers does not, by definition, contain the correct answers. ML systems are built to deal with an otherwise unmanageable number of variables, each having the potential of contributing to the output, and usually the final result is indeed determined by small contributions from a very high number of input variables. Thus, saying “the machine picked A because…” requires listing the contributions of many factors, explaining how they influenced the training phase as well as their relative weight in the current choice. Unfortunately, no human can make sense of such an answer! What we’d like instead are answers like “…because the training set was biased towards A” or “…because most training data points to A”. Trouble is, both kinds of answers are oversimplifications, to the point of being wrong and pointless.
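To make the worry concrete, here is a minimal sketch, assuming nothing more than a toy linear scorer with made-up numbers (none of this comes from the original post or from any real system): even for this deliberately simple model, the five “most important” factors account for a negligible share of the decision.

```python
# A toy sketch (my own illustration, not from the post or any real system):
# a linear scorer over 10,000 features, where the decision is the sum of
# thousands of tiny per-feature contributions.
import numpy as np

rng = np.random.default_rng(0)
n_features = 10_000

# Pretend these are the learned weights of a classifier and one input example.
weights = rng.normal(scale=0.01, size=n_features)
x = rng.normal(size=n_features)

contributions = weights * x      # per-feature contribution to the score
score = contributions.sum()      # pick A if score > 0, otherwise B

# How much of the decision would a "top 5 most important features" story cover?
top5 = np.sort(np.abs(contributions))[-5:].sum()
total = np.abs(contributions).sum()
print(f"decision score: {score:+.3f}")
print(f"share of total influence covered by the top 5 features: {top5 / total:.1%}")
# Typically well under 1%: a human-sized explanation throws away almost everything.
```

Real ML systems are far less transparent than this toy linear model, so the gap between what actually determined the output and what a short verbal explanation can carry only gets wider.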

To put it another way: when we are applying ML to a domain that justifies the use of ML, the complexity of the domain in question guarantees that the easiest way for us to learn what the ML system will output is to let the system compute the response. If we had an alternative, “better” (simpler) way of doing it, we would use that simpler system directly and leave intractable ML systems alone, right?

Looking at the same scenario in terms of reduction, what we find is that ML is used precisely when reducing a problem to a handful of tractable variables simply doesn’t work (or we don’t know how to make it work). Thus, the interesting/useful results provided by ML are exactly those we are currently unable to reduce to simpler, more explainable algorithms. QED: we can’t know why the machine picked “A” precisely because we asked the machine in the first place!

In terms of deflationary epistemology: we can only fully “understand” simple stuff. Most of us (including me) can hold fewer than ten variables in working memory; working out how they interact without the aid of external mind-extensions (pen and paper, calculator, spreadsheet, ML systems, etc.) is simply not possible. In other words, we can’t understand ML-driven choices because we ask ML to operate on domains that we can’t reduce to stuff we can consciously process.
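For a rough sense of scale, here is a back-of-the-envelope sketch; the network size and the “seven items” figure are my own illustrative assumptions, not numbers from the post:

```python
# Back-of-the-envelope: parameters of a tiny image classifier (a one-hidden-layer
# MLP over 28x28 greyscale images) versus the handful of items we can hold in
# working memory. All sizes are illustrative assumptions.
input_size = 28 * 28     # 784 pixel intensities
hidden_size = 128
n_classes = 10

# weights + biases for each of the two layers
params = (input_size * hidden_size + hidden_size) + (hidden_size * n_classes + n_classes)
working_memory_items = 7  # the classic "seven, plus or minus two"

print(f"parameters in a tiny classifier: {params:,}")                            # 101,770
print(f"ratio to working-memory capacity: {params // working_memory_items:,}x")  # 14,538x
```

Even this toy network sits some four orders of magnitude beyond what we can consciously juggle, and production image-recognition models add several more.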

This leads me to our bat – or better, a bit closer to our (mis)understanding of phenomenal consciousness. Image recognition is the typical domain where only ML systems can match our own abilities (we could not design any “simpler” way of doing it). [Coincidence? No!] Of course humans are, according to their own standards, quite good at image recognition. However, not a single one of us has a clear (and demonstrable) idea of how we do it. We do it all the time, but we do it unconsciously. Yes, we “recognise” red among other colours, which leads us to say that there is a specific “what it is like” to perceive redness. But how we recognise redness (or anything at all) is entirely obscure to us. Introspection can tell us exactly nothing about the mechanism that allows us to discern colours. Neuroscience is starting to produce some incomplete answers, but it is merely scratching the surface.
[Reminder: colour perception is spectacularly complex, do I need to mention “the dress“?]

Thus, we must conclude that humans (and probably mammals, if not most animal forms), just like ML systems, are able to make discriminations that rely on contributions from a high number of variables. I hope we can now agree that humans are unable to consciously explain exactly how such tasks are performed, whether by machines or by biological organisms. This inability is a function of the complexity of the task, not of what system performs it.

[Note: I am not talking of what counts as “scientific explanations”, I am referring here to what we can grasp and feel without external aids.]

In the case of biological image-recognition, we don’t know how the mechanisms in question work, but we do know that even if we did (in scientific terms), we would not be able to produce explanations simple enough to be understood by most humans (not without years of laborious study). In the case of ML, we know everything about the mechanisms, but we still can’t find the answers we’re seeking. This is because we want “simple” answers, simple enough to be understood, at least. The simplicity of the desired answers is the common factor between the two “unknowns” mentioned in my poll.

Thus, we reach my conclusion. We can’t (consciously) know how it feels to be a bat: even if we knew the mechanism (as per ML), we would not have the capacity to reason all the way up to forming the correct idea (such an idea, in order to be correct, would include too many variables, so we wouldn’t be able to hold it in our limited conscious minds).
The answer to my question is therefore (from my own perspective!) a definitive “No, not a coincidence”. The common factor is how limited our conscious understanding can be.

Conclusion

My own hunch may well be wrong; however, the fact that the poll results are split (based on a tiny sample size!) is hopefully an indication that the question is not as absurd as it may appear at first sight. Please do feel free to add your own thoughts in the comments (or via Twitter, if you prefer). Thanks for reading, and thanks to all the poll respondents!

4 comments on “Machine Learning, the usual Bat and deflationary epistemology”
  1. David Duffy says:

    So you’re arguing that I can’t know what it is like for me myself to recognize a colour or an image? Or perhaps to know what it is like to perceive unconsciously, by reviewing an experience where it can be demonstrated that my behaviours or thoughts changed in response to a subliminal stimulus? I see art appreciation as the latter, to some extent.

  2. Sergio Graziosi says:

    Hello David,
    my apologies for the extremely slow reply!
    First of all, I’m just following intuitions in a way that’s aimed at solidifying them and/or seeing if they fall apart upon inspection, so most of what I’ve written here (above and below) is not to be taken as a firm position.

    I wasn’t pointing to unconscious effects, so I’ll stick to:

    So you’re arguing that I can’t know what it is like for me myself to recognize a colour or an image?

    In a way, yes, but overall perhaps I’m arguing that we don’t know what “knowing” means!

    I can recognise things/qualities, which means “I” as the sum of my conscious (amenable to introspection) and unconscious (inaccessible to “conscious me”) abilities do “know” how to recognise lots of things. The same holistic “I” can also recognise the act of recognising. Thus, if recognising = knowing, then my whole being knows what it is like to “recognize a colour or an image”.
    But that’s where it stops – if pressed, I suppose I could say that the feeling of redness is actually the second-(or third?)order introspective ability of knowing that I’ve recognised the red feature present in my visual field. The crucial bit is acknowledging that conscious me cannot describe the state of perceiving red, because I have no access to how I am recognising it as such. I guess this chimes with Peter Hankins’ emphasis on recognition, peppered with my own unpacking/version of Blind Brain Theory (Scott Bakker’s).
    Digging deeper: conscious me doesn’t know what it is like to feel redness, only holistic me does. But we normally (in philosophical discourse, at least) assume that knowing something entails the ability to communicate it (for deliberate communication, being accessible to conscious me is a necessary condition), while this is rarely the case: for most of our abilities, we can’t really produce verbal accounts.
    Hence my devious hint about our (mis)understanding of knowledge. People are usually very confused about what we mean by “knowledge”, and I think we should stop acting as if it were one single thing. There are many different forms of knowledge, implemented in many different structures (not even limited to neural structures, I suspect!).

    Does this help?

  3. Peter says:

    It doesn’t seem to me that ‘knowing how it feels to be a bat’ in the sense usually understood is the same kind of knowing as a machine learning program ‘knowing’ a face it recognises. For example, in the latter case the machine is correct or incorrect, and there’s one right answer. Is it imaginable that, if we sat an exam in bat phenomenology, some of us would get the same, correct answer, while all the others were incorrect?

    I think ‘knowing what it’s like’ is a mere metaphor, not knowledge any more than carnal knowledge is.

  4. Sergio Graziosi says:

    Peter, it’s so nice to see you here!

    I think ‘knowing what it’s like’ is a mere metaphor, not knowledge any more than carnal knowledge is.

    Agreed! (Kudos for the choice of analogy!) The question then becomes: how many people disagree with us and would say that ‘knowing what it’s like’ is not that different from knowing how to calculate 316/13?

    Also: perhaps I’m not understanding the first part of your comment, or maybe I’ve failed to make my starting point clear. The analogy I was making in my mischievous question was between “knowing how it feels to be a bat”, which could then become “recognising a feeling as a bat-feeling” (I’m loving this bat-thing, and trying to think about how to sneak in the bat-shark-repellent! :-)), and knowing (as a human can know) why an ML system produced a given output. The only thing I’m claiming the two have in common is that both would require our conscious selves to be able to consciously manipulate way too many variables: we just can’t.
