One should learn from mistakes, I hope we can all agree on this. But what if you never notice certain mistakes, and what if the ones you don’t notice are the biggest and most visible? Some of the things I’ve explored in this blog do suggest that this may be the case. Most importantly, in the process of writing, I’ve collected first-hand experience that I do make mistakes that look pretty silly to an external eye. Luckily, there are plenty of lessons that I can learn. In this post I will dissect one particular error I’ve made: it may have been inconsequential, but it was painfully obvious, and can therefore serve as an exemplary case study.
The error is there, hiding in plain sight in my post on Chalmers’ annual lecture. And I wouldn’t know it was there, if Chalmers himself had not decided to point it out to me. He was kind enough to privately send a couple of comments on my essay; I reproduce the crucial one below (with permission). But first, my mistake: in the blog post mentioned above I use the example of the mind-body problem as a prototype of an issue that is (still) philosophical because science can’t tackle it (yet). It seemed obvious to me that if a theory could be used to create a conscious machine, dualism would instantly disappear. But it’s not so obvious! In fact Chalmers simply wrote:
I don’t think that creating a functioning brain from matter will settle the controversy over the mind-body problem.
For brevity’s sake, I omit the context, but I assure you that his message was sympathetic enough to make the sentence above sound casual and neither confrontational nor dismissive; we’ll see below why this is important.
Did that little remark stop me in my tracks? Of course not! I was foolish enough to challenge it, and it took another, equally friendly message to make me see how right Chalmers is. Of course dualists will keep supporting their view; in fact, many forms of dualism already contain the arguments required to accommodate such results with little effort. What would look like final and incontrovertible proof to me can legitimately be met by a simple “meh, so what?” if you start from very different premises.
More on this follows, but for now, it’s important to explain why I could happily ignore this obvious fact, write it down in public, get corrected, and still be unable to recognise my mistake. It should have been obvious: after all, the history of ideas can be seen as a series of fights between different interpretations of evidence, where each interpretation seems obvious to its proponents.
The short answer is: I’m stupid. As we all are. Following Cipolla’s insights, you can never underestimate stupidity. That’s true, but it doesn’t help, so I’ll produce a long answer as well.
An explanation that promises to be at least marginally useful is the following: existing cognitive attractors constrain my own thoughts. In this specific case, I happen to have a very well rehearsed, clear and carefully developed idea of how theory can drive the creation of a conscious machine. I may be right or wrong, but what matters here is that the existence and perceived solidity of such ideas automatically led me to believe that creating a conscious machine would wipe out all residual doubts. The seed of the error is this: if my own theory makes such a machine possible, I will personally believe that my theory is true; I will finally be convinced. And I may well be alone! But because I’ve invested so much thought and effort in it, the emotional impact that the thought of such a “proof” has on me makes me blind to the fact that it may have little or no consequence on someone else’s beliefs.
In other words, the unsurprising conclusion is that I’m blinded by my own passions, so why all this fuss? Because there is a parallel here between my own small-scale mistake and one or another philosophical theory of mind in the bigger picture. There are hundreds of alternative theories of consciousness; some are somewhat compatible, others are not. But people who strongly believe in alternative and incompatible theories will automatically use their own views to interpret new evidence, and will reach different, and probably incompatible, conclusions. Following almost exactly the same pattern that fooled me, their own cognitive attractors will constrain their possible interpretations of the evidence. Assuming that some theories are indeed incompatible (monism and dualism, for example), it is fair to predict that “creating a functioning brain from matter” will not by itself convince the dualists that they are wrong.
But there is hope, because after all I could see my own error, demonstrating that at least in some cases cognitive attraction can be escaped. Why is it so? And what are the limits of this “escaping possibility”?
First of all: the error was shown to me, stated in a clear and unmistakable way; more importantly, thanks to the right balance of clarity and kindness, it could breach my first line of defences. Also, the message arrived from someone for whom I have a huge amount of respect. In terms of cognitive attraction, this means I was provided with plenty of potential energy, enough to escape the particular mistake-generating attractor that fooled me. A small result (given the trivial conditions), but it was still possible only because of a series of favourable circumstances.
Second: my cognitive space is full of different attractors; the strength of one can be used to escape the influence of another. The metaphor I have in mind looks pretty much like the cosmos, where each star and planet generates its own gravitational field. I received a push, and my understanding could stop orbiting around my own theory of consciousness and move into the area of influence of a different attractor. In this case, the receiving attractor is the one that informs this post: the notion of cognitive attraction itself, naturally enriched with corollaries about human fallibility, implicit biases and the like.
At another explanatory level, what happened is that I could conceptualise (and therefore recognise) my mistake by using more abstract concepts (in terms of cognitive attraction, these are the concepts that are closer to the root beliefs). This can be seen as a confirmation (direct and indirect) of the usefulness of the cognitive attraction concept itself: in direct terms, it allowed me to recognise my mistake; indirectly, the idea is validated because it provides a (predictive) explanation of the whole process.
This rather trivial episode therefore allows us to identify some important take-home messages about making, detecting, managing and correcting cognitive mistakes:
1) Making mistakes and not noticing them. This happens when and because strongly held beliefs are in play [it may happen in other, yet-to-be-identified circumstances, but today we focus on this particular case]. The strength of such beliefs is the main and all-powerful reason why errors may go undetected. The corollary is that the stronger the beliefs are, the bigger the blind spot is, allowing macroscopic mistakes to pass through. It follows that strongly held beliefs are dangerous (human history provides plenty of confirmatory examples) and that doubts are helpful. Interestingly, this applies to all beliefs; it doesn’t seem to matter how true they are.
2) When this sort of error happens, the best chances of spotting it always come from the outside. But the task of making the error visible to its author is prohibitively difficult: add a hint of confrontation and the mistaken person will go into defence mode, and may ferociously protect her blind spot, frequently to comical (or tragic) extremes.
3) From the position of someone trying to avoid falling into this sort of ever-present trap, a few heuristics offer the only countermeasures that I can think of. They have to be heuristic checks, precisely because one cannot know a priori when they should be used.
- Cultivating a vigorous distrust of your own strongest intuitions is the healthiest option. The more an opinion or idea is supported by what seems very obvious, the more one should be worried about it. Of course (!!), this is especially true when the idea in question is also highly controversial and debatable. My exemplary error falls beautifully under this description, and all theories of consciousness seem to qualify as well.
- As a result, one should seek criticism, preferably from trusted sources. A critic whom you perceive as unreliable, biased or mischievous will find it very difficult to give your mistaken idea a strong enough shake. Therefore, whenever one produces a controversial idea that nevertheless looks obvious, the best possible strategy is to discuss it with a trustworthy person who happens to disagree. Explain your position and let the criticism sink in. It’s neither easy nor pleasurable, but it may work.
- The mirror-image perspective applies as well: if you wish to show someone their own (glaringly obvious) mistakes, you need to gain their trust first. Paradoxically, the more obvious the mistake is to you, the more you will need to be trusted. [Note that these are heuristics; plenty of exceptions are to be expected]
- This is the reason why I strongly believe (!!) we should value disagreements. Having reciprocal trust and intellectual respect for people who don’t hold your own fundamental beliefs is the best way to keep your philosophical arguments in good order. It’s not a surprise, but it’s worth repeating, because it’s difficult to act according to this principle. The key is that you need not consider those who hold alternative views your (intellectual) enemies; they are in fact your best allies.
- Academic controversies are supposed to work in this way, but in my experience they very rarely do. This is a direct consequence of the fact that academics are expected to compete with one another. They compete for grants, tenure, students, staff, most/all funding, visibility and prestige. As a result, intellectual U-turns are almost unheard of, and the effect is that everyone protects their own theories (most of which, of course, will turn out to be wrong), slowing down progress in a very marked way. The best ideas do tend to emerge, but they need generations of scholars to battle over them, while in an ideal, non-competitive setting they could emerge in an afternoon or little more.
4) Correcting already-made mistakes is even harder. First, one needs to clearly recognise the error; second, one needs to find its source, which is usually painful, because the source is almost certainly a cherished belief. As if that weren’t hard enough, I confess that I have absolutely no idea how to set the record straight. If the idea you wish to recall is already out there, making sure everyone notices your change of mind can sometimes be impossible, but it’s always going to be hard. This is another reason why we need open science, but we also need an open, centralised structure to act as a hub. If ideas, theories, interpretations and models are dispersed across multiple media, in multiple forms, but generally in the static form of papers, they will gain their own independence, and their authors’ changes of mind may be almost inconsequential.
In my case, I do have a mistake to correct. My initial comments on Chalmers’ lecture should be revisited: I will write a follow-up post for this purpose as soon as I can.