In the previous post, I concluded, with Ben Walters, that we need to “make art that wins hearts and arguments that win minds”. The aim is to counter the rise of populism or, in the current slang, to fight post-fact politics. This is a hard thing to do, and unfortunately a reactive endeavour as well. However, I do think that the challenge posed to democracy by the establishment of any web of lies is the kind of problem we have to manage rather than solve: it comes with democracy itself, and I don’t see how to eliminate it without undermining democracy in the process.
Since there can be no general formula telling us how to make art that wins hearts, and since I know little about art in general, I will concentrate on what we know about what kind of arguments actually do win minds.
One could (probably should) start with the study of classical rhetoric. However, my inclinations are biased towards philosophy (typically modern) and standard science. Thus, I’ll draw from a range of ideas I’ve encountered in the last few years, and see if a coherent picture emerges.
To start from a very general point of view, this EGG article (with discussion) by the usual Artem Kaznatcheev offers a good launchpad. Seeing arguments through the metaphor of war reliably leads to disaster. That’s because it shifts the objective: instead of trying to improve knowledge (or to get closer to the truth, if you can bear the hyperbole), in a war-like argument the aim is to show that your counterpart is wrong – there are winners and losers, and knowledge might improve only as an accidental by-product. More promising approaches focus on constructive strategies; however, as I write in the comments, the metaphor of midwifery doesn’t satisfy me in full: it still encourages me to consider my own position as privileged, creating a dangerous asymmetry. From where I stand, a better approach should incorporate the notion that both my debating partner and I might be wrong (indeed, the assumption is that we are always somewhat wrong!) and that the aim is therefore for both of us to learn something from any given disagreement. Easy, huh? Not at all, but for now I’m inclined to conclude that good old Socrates is useful, but not enough. Better strategies are needed. Where can we find them?
One place is this excellent article by Tom Stafford. Stafford draws on a wide range of primary sources; what we learn, among many other useful things, is that one effective strategy is to ask for mechanistic explanations. If someone holds a belief that you consider false, a good way to find out who is right is to ask for a detailed, mechanistic justification of that belief. If the belief is unfounded, such a justification will be hard to construct, and as a consequence it is likely that your counterpart will start doubting their own position (Fernbach et al. 2013). Otherwise, you will get the chance to revise your own beliefs (one would hope). Result: someone should learn something either way…
Moreover, a recent study (Tuller et al. 2015) hints at an even more profound mechanism: apparently, being asked to make your opponent’s case measurably shifts your own position towards reconciliation, but only if you feel accountable to the opponent herself. This chimes powerfully with my beliefs (bias alert!): in order to have any hope of improving each other’s beliefs, it is necessary to start from a position of mutual trust. Tom Stafford himself makes a very similar argument, offering a convincing explanation of why expert opinion had little effect, if not a counterproductive one, in the case of the Brexit referendum.
The common thread is symmetry, and when symmetry is unachievable, mutual trust. In other words, to debate constructively, one needs to shift away from the default “I’m right, you are wrong” position, and at the very least try to figure out who is less wrong. Ideas in both debaters may shift, hopefully improving along the way. Fine, but isn’t this in direct contrast with my current aim? If I’m claiming that a web of lies has been established and that we need to disassemble it, how can I then claim that we should approach the task by assuming that we may be wrong, and there is no web of lies? Well, I don’t know, but I also don’t see any other way (I may be wrong, after all!), so let’s see if I can find some more helpful ideas.
In philosophy, it is frequently assumed that progress is made via an ever-evolving argument: someone proposes a thesis, someone else objects, the thesis is refined to account for the objections, and so forth. In this context, Daniel Dennett, in “Intuition Pumps and Other Tools for Thinking” (2013, page 33 in my hardcover), advocates four rules of criticism (an approach first spelled out by Anatol Rapoport). The key point is that criticism needs to start by re-expressing the idea you are criticising in the best possible light. As Dennett himself specifies, the power of this method is that “your targets will be[come] a receptive audience of your criticism”, but to me the even more important point is that proceeding in this way gives me a chance to fully appreciate what makes the idea I’m opposing convincing to some. You have to start by accepting the possibility that there might be something valuable in the idea you find disagreeable, which means that instead of producing a counterargument you might end up shifting your own position. In other words, this strategy is an honest way of earning the trust of your debating counterpart: what could have been your opposition becomes a partner.
This leads me to an interesting detour: in Bayesian approaches to psychology, what counts are priors. How people evaluate new evidence is a function of what they already believe. Let’s go back to Brexit: a well-known interpretation is that people rejected the experts’ opinion and voted against the status quo. Could this strategy be wrong, but nevertheless rational? It certainly could. Imagine you’ve led a life in which your birthplace and the social status of your family meant that (honestly earned) success was almost impossible to achieve. The reality you have experienced is that the elites (including teachers, university professors and politicians) constantly assume they know better. The same people are also evidently busy protecting the status quo and their own social standing. Under these circumstances, would it be irrational to assume that all advice to vote Remain (offered by the same people who have demonstrably enjoyed the upside of an uneven playing field) cannot be trusted? Perhaps not! From a Bayesian perspective, the Brexit result immediately becomes less surprising and shows that playing the anti-establishment card was decisive. The Brexit camp successfully managed to be perceived as anti-establishment, and by doing so it mischievously earned the trust of too many people. Naturally, I don’t believe this trust was justified: thinking that people like Boris Johnson, Michael Gove, Nigel Farage and Iain Duncan Smith are anti-establishment is like believing that the Pope is Buddhist. Nevertheless, this view lets us see why the web of lies constrained those who defended the status quo while simultaneously enabling those who didn’t. It also lets us see why certain life experiences would automatically make people more susceptible to this particular set of misbeliefs, without our having to conclude that most of those who voted to leave are stupid or despicable bigots. I still think they are mistaken, but mistaken for very understandable reasons.
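The Bayesian point can be made concrete with a toy calculation (the numbers are purely illustrative assumptions of mine, not drawn from any study): how much the same piece of expert advice moves you depends entirely on your prior model of the source.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) for a hypothesis H given evidence E."""
    numerator = p_e_given_h * prior_h
    return numerator / (numerator + p_e_given_not_h * (1 - prior_h))

# H: "remaining in the EU serves my interests".
# E: "the experts advise voting Remain".

# If you trust the experts, their advice is far likelier when H is true,
# so observing it moves you a lot:
trusting = posterior(prior_h=0.5, p_e_given_h=0.9, p_e_given_not_h=0.2)
# ≈ 0.82

# If your life experience says elites defend the status quo regardless of
# whether it serves you, the same advice is almost equally likely either
# way, and so it barely moves you:
distrusting = posterior(prior_h=0.5, p_e_given_h=0.85, p_e_given_not_h=0.8)
# ≈ 0.52
```

Both agents apply Bayes’ rule correctly to the same evidence; the near-uninformative posterior of the second agent comes purely from a likelihood model in which expert advice doesn’t track the agent’s interests. Distrust, in other words, can make ignoring experts perfectly rational.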
I can also recognise how I could have been misled in the same way (quite some time ago). Moreover, this reading is in full accord with Stafford’s evidence-based (if speculative) explanation, and at the same time it allows one to approach a debate following Rapoport/Dennett’s recommendations.
Before painting the final picture, I wish to mention another short essay, by Deepak Malhotra: “How to Build an Exit Ramp for Trump Supporters” (paywall alert; HT SelfAwarePatterns). Malhotra’s academic profile specifies that he “is a professor of Business Administration at the Harvard Business School. His teaching, research and advisory work is focused on negotiation, deal-making and conflict resolution”. Sounds promising! Reassuringly, Malhotra starts by noticing that:
[H]aving facts and data on your side is not enough. If someone’s ego or identity is on the line, overwhelming them with evidence will do little good.
I couldn’t agree more. I do have some reservations about the “building an exit ramp” metaphor (not symmetric enough for my taste), but nevertheless his 7 rules feel exactly right to me. In his case, the bottom line is that you ought to avoid direct confrontation at all costs (the whole approach looks entirely compatible with the midwifery view).
Overall, it seems to me that the art of (honest) persuasion is hard but not impossible. A few general principles emerge:
- Don’t assume the debate is one-sided. Being ready to learn will help at each step.
- Avoid confrontation and earn the trust of your debating partner instead.
- Try hard to understand the position you oppose. Don’t hide your effort.
To dispel misbeliefs via argument alone, it is necessary to be trusted. Furthermore, instead of offering evidence in favour of your own beliefs, it seems more useful to (honestly) ask your opponent to explain in detail what grounds theirs. This should be especially efficacious when you can’t find these grounds yourself: it gives you a chance to learn something while concurrently building mutual trust. Finally, swapping roles and asking each other to explain what the other believes is likely to foster better mutual understanding.
Because trust is a prerequisite, it’s important to approach this kind of exchange with an open mind: if your opponent is led to believe that nothing she says could ever change your mind, trust will be withdrawn, jeopardising the whole enterprise. Overall, the art of persuasion looks very much like the art of mutual understanding: if it seems that I’m asking you to become a Zen master, it’s because I am. The best way to win minds is to stop trying, and to try to learn from disagreements instead.
Bibliography and disclaimer.
Please note: this post draws on a couple of peer-reviewed papers which explore psychological mechanisms and effects. For the full disclaimer, see the previous post.
Fernbach, P., Rogers, T., Fox, C., & Sloman, S. (2013). Political extremism is supported by an illusion of understanding. Psychological Science, 24(6), 939–946. DOI: 10.1177/0956797612464058
Tuller, H. M., Bryan, C. J., Heyman, G. D., & Christenfeld, N. J. S. (2015). Seeing the other side: Perspective taking and the moderation of extremity. Journal of Experimental Social Psychology, 59, 18–23. DOI: 10.1016/j.jesp.2015.02.003