In the previous two posts I’ve gone through the depressing exercise of exploring the misconstrued world-view that led the UK to leave the EU. The picture that emerges is not limited to the UK: it is a paradigm example of how misinformation and systematic lying are enough to derail the whole mechanism of democracy. On one side, a well-established web of lies is enough to make electors consistently vote against their best interests; on the other, it forces politicians onto an ever-narrowing path that leads to authoritarian forms of government.
We should be afraid and we should take action.
The problem is, of course: how? To explore this question, we need to take a closer look at how the web of lies affects the people who buy into it.
It goes without saying that in the case of Brexit, many people believed the lies; we’ve seen the effect this has on politicians who exploit such lies, but what about the people who happen to believe in them? The mechanism of cognitive attraction will unfortunately take its toll: the longer the lies are believed, the harder it will become to recognise them as false. Furthermore, because of the importance of politics in society, and in particular, in shaping our public and private persona, it is likely that the views justifying our political choices rapidly acquire foundational value. They become deeply embedded in our web of beliefs, triggering all the defence mechanisms that are explored by the prolific research on human biases.
In detail, the mechanism that is likely to take hold is usually described with the catch-all “confirmation bias” label. In layman’s terms, it is widely accepted that we all tend to actively seek evidence that confirms our beliefs and concurrently overlook evidence that undermines them. For a good review of the concept, along with a hint that it may be a broad label that actually covers numerous distinct mechanisms, see this paper (Nickerson 1998). In short, it is fairly uncontroversial to say that a set of beliefs, once accepted, becomes harder and harder to dislodge as time passes. In memetic and fragility/antifragility terms, this is self-evident: if a set of beliefs is good at persisting, it is probably because new experiences tend to reinforce it. If this weren’t the case, those beliefs would be likely to disappear sooner rather than later. Thus, the age of a given set of beliefs strongly correlates with its tendency to be reinforced by the accumulation of experience. This tautology isn’t particularly informative, but it does help us recognise the crucial point: some elements of the outside world must be able to reinforce any widely spread web of beliefs; whether such beliefs are true or false becomes secondary.
Realising this is important, because it points to one way of opposing the overall trend; but before looking into that, we need to explore what is likely to happen if any web of lies gets widely accepted as true. We’ve seen that it frequently has the effect of constraining the options available to politicians. At the same time, a member of the public caught in the same web will become progressively less likely to change her mind, and therefore more and more likely to vote for whoever publicly upholds the mendacious world-view in question. The degeneration of autocratic states is an enduring testimony to this mechanism: as time passes, the web of lies grows, and those who hold power become more and more constrained by it. Meanwhile, a considerable proportion of the public becomes more and more entrenched in its support for the collective fiction. Old examples abound, but more recent ones include Putin’s Russia, Erdoğan’s Turkey, and naturally North Korea. The result is that entire nations seem to progressively detach from reality, with predictably dire consequences which I will not explore. Saying that the prospect doesn’t look appealing should suffice.
The question thus becomes: how do we stop this mechanism before it leads us towards self-destructive autarchy? The first thing to note is that in the UK we already have a large proportion of people who have bought into the web of lies: two consecutive elections and Brexit testify to it. Thus, there is political capital available to whoever is willing to publicly uphold the web. In other words, expecting politicians to stop lying is delusional: in the current circumstances, lying brings votes, so if candidate 1 refuses to do so, candidate 2 will be able to reap the benefits. Once a web of lies is established, the democratic system makes it self-sustaining. Does this mean that we should restrict democracy? I don’t think so, for pragmatic reasons: autocratic systems have an even worse record in this respect, so they don’t seem to be a decent solution. In other words, the self-sustaining element of any web of lies entrenched in a democratic system is the kind of problem that is good to have: the alternatives are worse. That’s not to say that it is good to have one or more webs of lies established in public discourse; it is not. We should try to dissolve the web, precisely because we know that it is somewhat self-sustaining.
Unfortunately, the whole discussion above demonstrates that dissolving the web is not going to be easy. For starters, the supply of people willing to publicly uphold it is likely to be endless: as long as lying comes with positive political value, someone will. Therefore, the only general strategy available is to try reducing the political value attached to the web, which in turn means trying to reduce the number of people who believe in it. Simple, right? Of course not, so we’d better try to figure out how.
Let’s start with the basics. As long as the lies get aired and remain unchallenged, it will always be possible to believe them: remember, all lies that confer political capital are believable by definition. First rule of thumb: when exposed to a lie, we should do what we can to denounce it. This series of posts is one attempt, and more won’t hurt – social media is there for you to use, it’s cheap and always on. However, this strategy alone can’t be very effective: at best it can reduce the appeal of the lies, perhaps instil some doubt in whoever is still sitting on the fence, but not much more. Worse, this strategy also comes with a genuine danger: whoever is already caught in the web is likely to react defensively, which won’t help in any way. That’s right: the real danger is polarising opinions, reinforcing misbeliefs as a defensive reaction against what might be perceived as an aggression. I can’t stress this enough: how do you expect people to react if you tell them that their political choices are wrong because they have been conned? They will tell you to eff off, that’s what they’ll do, and they will reinforce whatever story they use to justify their beliefs.
Not convinced? Think again: the accumulated evidence looks overwhelming to me. Facts and rational discourse don’t change minds; they polarise opinion. Google it, seriously! A few pointers: flat-earthers regard themselves as ultra-rational, but hey, they are crazy. The effect of evidence in entrenching opinion, however, has been observed many times, starting at least with Lord et al. (1979), all the way up to Corner et al. (2012). [Note that the latter paper is also useful for seeing that these mechanisms are not straightforward: Corner and colleagues find a meaningful distinction between opinion polarisation and biased assimilation of evidence. Fascinating stuff!]
Moreover, both the manifest and the scientific images clearly indicate that political discourse is getting more and more entrenched. Worryingly, social media does not counteract this; in fact, there are strong indications that it enables the creation of partisan bubbles. For details, see Nikolov et al. (2015; a summary is here), or the evidence about political discourse in Conover et al. (2012). [Note: both papers come from the same group; their home page is full of good resources and well worth a visit.]
Overall, one thing is clear: simply denouncing as false the various lies used to justify public policy is not enough. Moreover, even directly challenging false beliefs in person won’t work, not if the only strategy is to use evidence and rational discourse. Depressing. What else can be done? Well, we need to learn the art of persuasion, which we’ll explore in the next post.
The rise of populism is no accident. The Front National in France, the long Italian thread running from Berlusconi through the Lega Nord to the Five Star Movement, Jobbik in Hungary, the Freedom Parties in Austria and the Netherlands, Trump himself, as well as the pro-Brexit rhetoric and the unashamed fascistic turn (see: 1, 2, 3) of Theresa May’s government are all part of a pattern visible in most, if not all, Western democracies. It is a dangerous path: I see no reason to doubt that it leads towards autocratic regimes. Our current destination looks very much like Putin’s Russia or Erdoğan’s Turkey (see also the previous post). Recognising the pattern and doing nothing would make us complicit. The problem is: what can be done? My proposal is, first and foremost, that whenever systematic lies are used to uphold public policy (including lies used to support policies we agree with), we should use all the channels we have to challenge at least the most evident BS. This won’t invert the trend, but it should help slow it down. To go further, the only option is, unfortunately, effortful and slow: as wonderfully expressed by Ben Walters, we need to “make art that wins hearts and arguments that win minds”, in short: make it personal and take action. In the next post, I will look at what is known about how to win minds (spoiler: it ain’t easy).
Bibliography and disclaimer.
Please note: this post draws on a few peer-reviewed papers which explore a handful of psychological mechanisms and effects. Most of my readers will be already well aware that the world of Psychology research is in turmoil, following the replication crisis. For an in-depth, comprehensive summary of the whole charade, see this post by Andrew Gelman.
Obvious question: if the science I’m citing is, as a whole, not necessarily trustworthy, why did I bother? I did because habits are hard to dislodge, but also because the bibliography I’m including does provide one important contribution: the cited studies should not be considered evidence that I’m definitely right, but they do provide some reason to believe that I’m not unquestionably wrong. That’s good enough for me: my general approach is that I should nurture self-doubt and never take my views as obviously correct (even, or especially, when it feels that they are). In this case, since reviewing the evidence that may challenge my point would be a prohibitively long task, I’ve settled for the second-best option and merely presented some evidence that (weakly) supports my argument.
[Explicit thanks are due to Artem Kaznatcheev for always providing food for thought, valuable sources, and for keeping me constantly on my toes.]
Nickerson, R. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220. DOI: 10.1037/1089-2680.2.2.175
Lord, C., Ross, L., & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098-2109. DOI: 10.1037/0022-3514.37.11.2098
Corner, A., Whitmarsh, L., & Xenias, D. (2012). Uncertainty, scepticism and attitudes towards climate change: Biased assimilation and attitude polarisation. Climatic Change, 114(3-4), 463-478. DOI: 10.1007/s10584-012-0424-6
Nikolov, D., Oliveira, D., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. PeerJ Computer Science, 1. DOI: 10.7717/peerj-cs.38
Conover, M., Gonçalves, B., Flammini, A., & Menczer, F. (2012). Partisan asymmetries in online political activity. EPJ Data Science, 1(1). DOI: 10.1140/epjds6