After a longish pause, it’s time to start looking at some genuinely disturbing stuff: how easy it is to make intellectual mistakes, and how hard it is to notice them. I’ve been putting off this discussion for many reasons, but mostly because it’s the toughest topic that I can imagine: if you accept that intellectual mistakes are the norm and that whoever makes them is the least likely to see them for what they are, how can you pretend that what you are saying is not a big steamy pile of rubbish? I don’t think I can, but I believe I still have to try, because I’m aware of the problem, and turning a blind eye to it is obviously not the right solution.
But where should I start? It’s probably best to proceed in small, methodical steps, and, as a general approach, try not to be clever at all. The starting hypothesis is that clever arguments have the ability to obfuscate mistakes with a smoke-screen of complexity. We’ll see whether this hunch is confirmed along the way, but for now, I’m starting my exploration with a clear methodological constraint: I will try to proceed in the most linear fashion and avoid, as much as I can, the use of my own reasoning; I will rely instead on external sources whenever possible.
But first, a lighter note: if you have been following me, you have probably noticed (in the about section) the reference I’ve made to human stupidity and to the semi-serious work of Carlo Cipolla. Although his essay on stupidity does not have any real scientific aspiration, it does highlight many interesting observations, and one in particular is very relevant here: Cipolla explores the relation between stupidity (a quality, like intelligence, that is notoriously very difficult to define) and mistakes. He starts by noting that a mistake is an action made to obtain some result (justified by a purpose, I would say) and that this action harms the actor by having the opposite effect. This moves stupidity out of the acting subject and defines “stupid actions” instead. Extending the original thought a little, I can add that when the actor had enough information to make a correct prediction, the wrong/harmful action can safely be described as stupid. This approach is both amusing and useful because it allows us to observe a sad truth: everybody, with no exception whatsoever, makes silly mistakes. It doesn’t matter how clever a given individual is, s/he will eventually do something utterly moronic, and in fact, it is not uncommon to see very clever people (or better, people who are very good at some truly difficult discipline) doing astonishingly stupid things in their everyday life.
In my case, Cipolla’s outlook applies in the following way: I’m trying to use a combination of pure thought and direct or indirect experience to describe as faithfully as possible the world around me, including my inner life and aspirations; the aim is to find out how to better fulfil these aspirations, with the expectation that whatever I may find should be useful to (some) other people as well. Following Cipolla, I can predict that I will eventually generate some thoughts/writings that actually divert me from the truths I’m seeking, and drive me deeper and deeper into falsehood instead. Crucially, as we’ve seen with Boris Johnson’s ethical delusions, when this happens, it will happen because I am blind to the error I’m making. Oh dear, it still seems there is no hope: I will make this sort of mistake, may have made plenty already, and I am, by definition, unable to find out whether that’s the case. From now on, I will refer to this situation as the “stupidity catch“. And because of it, I might conclude that the whole project is hopeless and stop wasting my time writing here.
… But …
Where is the fun in that? Mistakes or not, I like what I’m doing here, I get pleasure from the effort, and I am determined to keep going. The overall understanding of the human condition has not been static, or randomly changing, since cognition and culture first appeared: there have been wrong turns, but also unmistakable progress and obvious results that we can all see. At least in the fields of science and technology, this progress cannot be questioned: we understand our needs, and how to fulfil them, much better than 100, 200 or 1000 years ago. Hence, despite the stupidity catch, some progress is possible, and I am not ready to withdraw from the race.
… But …
I can’t simply ignore the catch, now that I know it’s there. It is clearly a good idea to give it a long, deep look, and see if there is any way one can minimise its effects, if not avoid them entirely. To do so, here is the plan: I will look at known sources of error, and for each one I will ask three questions:
- How can one detect them “a priori”? That is, how can one find out whether his/her own thoughts are likely to be influenced by a given source of error, regardless of the contents of the specific thoughts?
- Whenever a source of error is identified, how does one check whether it is relevant (i.e. whether it is indeed influencing one’s own thoughts)?
- If it’s likely that the stupidity catch is generating errors, how does one detect and correct them? [I can anticipate that this last question has an easy, always-available answer: one needs to ask for help. Other people may not be subject to the same stupidity catch, and may be able to influence the affected thinker enough to break the spell.]
Result: I need to build a catalogue of the mechanisms that produce intellectual errors. Not a task that can be completed in a couple of days! To start on solid ground, I will begin by looking at what science can tell us, and will do so following two distinct approaches. In the first, I will explore what is known about cognitive biases and similar mechanisms; in other words, I will search for known and verified sources of error. In the second, and somewhat more intriguing, stream, I will look at scientific errors, that is, examples where science has taken a wrong turn, and try to see what we can learn from past mistakes. Note: this second stream is not a science-bashing exercise; if anything, it is the opposite. It will certainly highlight the self-correcting quality of science, precisely because it will explore known errors!
Please also note that I do not think I will be able to proceed in a strictly systematic way: I will start with sources of error that I’ve already identified, and then add to the catalogue as and when I smell the presence of a new, previously undetected one. To allow browsing the catalogue, I will use the “Stupidity” category, so that all posts will be available at this address. Overall, I like the prospect: starting from truly depressing considerations, I seem to have found a promising way to proceed. I have the feeling this exploration will be fun!