In the previous posts in this series I’ve explored the ethical stance that underlies Boris Johnson’s political world-view. The result is harsh; my last conclusion was: “Boris’ position looks bankrupt: it doesn’t work in theory, it doesn’t work in practice and shows a very convenient type of selective blindness”. What interests me today is the convenience of this selective blindness, because it is an ability that seems central to my own exploration in many ways:
- Reason can support selective blindness by providing explanations (‘rationalisations’) and narratives that allow one to fit anomalies into a pre-existing world view. This provides apparently convincing arguments in support of world views that are in fact macroscopically flawed.
- Individuals do this all the time, and the fact that this is ‘convenient’ seems to make it ethically questionable. However, because of cognitive attraction (the kind that influences reality in a self-sustaining form), I believe that one should be careful in passing negative moral judgement on this sort of mistake. Boris’ case looks like an indefensible example of selective blindness, but I suspect it is not so clear-cut.
- If I’m right, it follows that we are all potentially prey to this mechanism, including myself. I’d better learn how to detect this issue before proceeding in any new direction.
- In science (and I claim to be inspired by a genuine scientific attitude) the same kind of phenomenon can be observed, and may even be formally defined. Ignoring evidence outright is discouraged, so a more sophisticated workaround is required:
- on the macro-scale, proponents of a given theory will generate new explanations to account for anomalies, actively avoiding questioning the usefulness of their preferred theory. I’ve discussed this process at some length in a previous post.
- This process can be described as ‘overfitting’ (a fuller discussion will follow in a future post); it happens both at the level of general theories and at the level of single experiments.
- The danger of overfitting is therefore clear and present in all explanatory intellectual endeavours: it is a good idea to identify where it tends to happen and to design some counter-measures.
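To make the analogy concrete, the statistical notion of overfitting can be shown in a few lines. The sketch below is purely illustrative and entirely of my own invention (the data, the polynomial degrees and the error measures are toy choices, not part of the argument above): a model flexible enough to ‘explain’ every anomaly in the data it has already seen typically ends up predicting fresh data worse than a simpler, more honest model.

```python
# Toy illustration of overfitting: patching a model until it accounts
# for every anomaly in the known data, at the cost of predictive power.
# All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Underlying reality: a simple linear trend plus noise.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.1, size=8)

# A degree-7 polynomial has one coefficient per data point: it can absorb
# every fluctuation, like a theory amended to fit each individual anomaly.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)
honest = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

# On the data already seen, the overfitted model looks superior...
train_err_over = np.mean((overfit(x_train) - y_train) ** 2)
train_err_honest = np.mean((honest(x_train) - y_train) ** 2)

# ...but on fresh observations from the same reality it typically does worse:
# its flexibility was spent on memorising noise, not on capturing the trend.
x_new = np.linspace(0.05, 0.95, 50)
y_new = 2 * x_new + rng.normal(0, 0.1, size=50)
new_err_over = np.mean((overfit(x_new) - y_new) ** 2)
new_err_honest = np.mean((honest(x_new) - y_new) ** 2)

print(f"overfit: train error {train_err_over:.1e}, new error {new_err_over:.1e}")
print(f"honest:  train error {train_err_honest:.1e}, new error {new_err_honest:.1e}")
```

The overfitted model’s near-zero training error is precisely what makes it seductive: judged only on the evidence it was built to accommodate, it looks flawless.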
In this post I will explore the personal domain, showing why it is so difficult to avoid this sort of error (providing some redeeming arguments in favour of Boris Johnson). I will then observe that my own view is likely to be undermined by similar mechanisms, calling for some caution.
I would love to use myself as our ‘test-case’, but I can’t: if I am selectively blind to some real mechanisms, ‘by definition’ I can’t directly observe them. Therefore, I can’t proceed introspectively, and need to use someone else instead. Boris Johnson will work well, as the previous posts laid the foundation: we have a good theory of what side of reality he is inclined to ignore.
Let me spell out the details: Boris neglects the importance of human collaborative attitudes; this is convenient because it permits him to hold a view that shows a surprising level of internal coherence. More: it allows him to consider his own stance as ethically sound and, importantly, affects his own environment in a way that reinforces the whole system of beliefs.
I will start from this last consideration: if one (our idealised, prototypical Boris Johnson) starts by believing that competition is the most significant driving force, he or she will naturally tend to compete in most situations (and vice versa: this is the first positive feedback loop!). Anyone who interacts with such a person has two obvious strategies: match the all-round competitiveness or refuse to engage. In both cases, Boris’ beliefs will not be explicitly challenged: his environment will be enriched with competitors and deprived of those who tend to collaborate instead. This generates a second positive feedback loop that reinforces the initial beliefs. It’s the basic mechanism of self-sustained cognitive attraction; a paradigm example, in fact.
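The two feedback loops just described can be caricatured as a toy simulation. Everything below is a hypothetical model of my own making (the update rule, the numbers and the starting beliefs are all invented, not taken from the argument): an agent reads its belief about how competitive the world is off an environment that its own competitiveness has already filtered.

```python
# A minimal, invented model of the self-sustaining feedback loop described
# above. The update rule and constants are illustrative, nothing more.

def run_feedback(belief, collaborators, competitors, rounds):
    """belief: perceived share of interactions that are competitive (0..1)."""
    for _ in range(rounds):
        # The stronger the belief in competition, the more competitively
        # our prototypical agent behaves (first loop)...
        competitiveness = belief
        # ...which drives away a matching share of would-be collaborators
        # (they 'refuse to engage'), while competitors happily stay around.
        collaborators *= (1 - 0.5 * competitiveness)
        # The agent then reads its belief off its own, now skewed,
        # environment (second loop).
        belief = competitors / (competitors + collaborators)
    return belief, collaborators

# Start from a mildly competitive outlook in a perfectly balanced environment.
final_belief, remaining = run_feedback(belief=0.6, collaborators=50.0,
                                       competitors=50.0, rounds=30)
print(f"final belief: {final_belief:.3f}")  # drifts towards certainty
print(f"collaborators left nearby: {remaining:.6f}")
```

Even in this crude sketch the outcome is the one described above: the belief becomes self-confirming not because it was correct, but because it reshaped the environment that was supposed to test it.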
Ethically, the consequence is that one needs to generate a world-view that allows people to be ‘good’ while recognising the (seemingly undeniable) fact that competition is ubiquitous, that life is a race. The conservative solution is the market (or market-like mechanisms): it allows single actors to compete selfishly, generating secondary effects that are good for all. This produces a third self-reinforcing loop: as long as the market-regime is genuine, acting selfishly is ethical, a good thing to do.
The forces that challenge this view are minimised and rare, with the exception of a class of cases that would shatter the whole construct, but happen to be very useful, and easy to fit in as an “irrelevant and welcome exception”. In the kind of world that an ultra-competitor builds for himself, collaboration will happen, but mostly between actors in artificially market-like environments: to remain in theme, politicians of all parties will have personal interests in common, such as defending their own privileges, and will be happy to collaborate towards the common aim (the same applies to all corporatist interests: think of bankers, solicitors, taxi drivers, etcetera). Forgetting this significant detail is easy because collaborating feels right; it is instinctively recognised as a good thing. That’s fine, but it also makes it easy to selectively forget that within-market collaboration destroys the whole world view: market-like systems turn selfish behaviours into results that are useful for the whole community only because market actors compete; when actors collaborate, the results are generally useful to the actors alone.
To retain this contradictory belief, one needs to modify (over-fit?) the foundations and state the following (with emphasis added to the bits that are false):
- Collaboration is good, but usually rare.
- Competition is everywhere, it’s unavoidable and frequently destructive, but it can be productively exploited via market-like, artificially sustained mechanisms.
- Markets allow selfish actors to act in an inherently ethical way.
- Individuals normally compete. The occasional collaborative behaviours can be considered marginal because they are rare, and collaboration is good* anyway.
[*I do maintain that collaboration is usually a good thing. The problem here is that collaboration between market competitors is the one significant exception: when it happens, it is counter-productive.]
Of course, the world view above is a load of rubbish, but it took me four weeks and 5,000 words to expose it, and I could do it only because I am not under the influence of this particular cognitive attractor.
The conclusion is: we can’t condemn whoever holds this stance. Given the premises (one starts off believing that competition trumps collaboration, or one happens to be born with a very competitive attitude), it is natural and predictable that some people will end up developing this sort of selective blindness. As per the title of this post, Boris’ view is delusional precisely because it serves him well: it maximises self-esteem, provides practical advantages (not discussed here; they should be obvious) and does so while minimising the chances of stumbling on disproving evidence.
I may be redeeming Boris a little, but more interestingly, this exposes a huge problem for my own endeavours. The whole theory generates an easy prediction: these self-sustaining drives towards selective blindness may happen to anyone, because they can. And when they happen, whoever is under their influence will not be aware of it. This means that I’m likely to be writing all this under the influence of some self-reinforcing belief that is also able to hide from my view the very evidence that could challenge it. Ouch. Piling on more pure thought is not a solution: if one is smart enough, it is always possible to conjure rationalisations to justify any conceivable absurdity.
My own approach is to revert to evidence, and use it to keep me anchored to reality; after all, I am trying to use a scientific approach! But this helps only so much: as hinted above, science itself is not immune to this sort of self-delusion. However, science does tend to self-correct, and as a consequence it offers some examples that can be used to identify one or two tell-tale signs.
In the next posts I would love to proceed onto new adventures of ethical empiricism, but the considerations above suggest more caution: in an effort of self-doubt, I will look at the significance of over-fitting in science, and how we can hopefully use it to learn to avoid self-serving intellectual mistakes. [Update 01/02/2014: actually, the next post is about how I plan to investigate sources of errors: overfitting is one of them, but just one case in what I hope will be a long catalogue.]