If science can provide a trustworthy look into ethics, one could reasonably hope that it will provide new tools to prevent and resolve conflict. In fact, it is not hard to find scientists who are eagerly exploring this possibility. In this post I’ll use what I’ve built so far (the definition of volatility of scientific endeavours and the distinction between the two sciences of ethics) to explain why this hope is largely misplaced, or at least worryingly optimistic.
In the previous post, I briefly looked at some of the evidence that makes it possible to study our natural moral inclinations scientifically. This allows me to substantiate a claim I made in the first post of this series, namely that psychology (in a broad sense) can look into the cognitive processes that drive our understanding of right and wrong. More importantly, I’ve also asserted that this endeavour can aspire to gain some scientific solidity, precisely because there are good reasons to believe that we can isolate and study the biological foundations of our ethical dispositions. If this is true, it is tempting to predict that this effort will become a game-changing tool in conflict prevention and conflict resolution: after all, if it becomes possible to understand our most profound needs, and to put some objectivity behind empirically informed moral claims, we will end up with a new moral compass, one that for the first time in our history has at least the potential of being universally recognised, precisely because it is grounded in evidence, not in an a priori assumption about what really is “the Good” (see the first post of the series for a discussion of this). The hope would be that this new compass (or coordinate system) will defuse many conflicts simply by pointing to the “right” resolution, as defined by the new science of ethics.
Personally, I believe this hope is misplaced, or at the very least far too optimistic. First of all, it rests on a conceptual mistake: it uses the (potential) solidity of the first science of morality (the one that looks at our innate moral compass) to claim that the second science (the one that tries to objectively understand what is good) will generate results solid enough to convince both parties of a dispute. Thomas Nagel (of bat fame) has recently reviewed Joshua Greene’s latest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, and in doing so has made my point in a different way. Nagel seems to fully accept Greene’s claim that we are indeed learning how our innate moral compass works, but he also strongly argues that this is not enough to jump to the conclusion that a fully utilitarian outlook will be able to guide the resolution of conflicts. Yes, we are learning about the psychological origins of the kind of right/wrong claims that are frequently used to justify conflicts large and small, but it will take a lot more thought to use this new insight to produce answers that are acceptable to all. I essentially agree with Nagel’s conclusion, and have only one thing to add: what Greene and others are finding includes not only the causes of conflict, but also the reasons why a rational discourse, no matter how solid, cannot by itself play a significant “conflict-management” role.
To explain why, I’ll need to connect the threads of what I’ve explored so far.
We have seen before that humans come with some innate moral preferences, and that these allow us to develop complex and fully formed moral systems that are apparently built on rationality, but are in essence driven and inspired by our primeval intuitions. This general view is backed by Bloom, Greene, Haidt and many others. But what does it tell us? Although the authors mentioned above may not be fully aware of this, it gives rationality an ancillary role, one responsible for creating intelligible and defensible descriptions of what we instinctively “believe” is true: that our moral compass is right, and that if someone disagrees, they must be wrong. Interestingly, one can even find evidence that humans do evaluate moral questions in two ways: Molly Crockett has recently published a paper in which she argues that we use two systems, one “model-based”, the other “model-free”. Remember my definition of knowledge as a collection of models? The pieces of the puzzle keep falling into place: rationality builds models, and Crockett’s model-based ethical reasoning in humans is strikingly similar to the utilitarian, “rational” stance. Crucially, she points out that “there is evidence that stress shifts control from model-based to model-free systems”, so that under stress, human beings will make their ethical decisions based on what we normally call gut feeling, not on cold reasoning. Obvious, right? Sure, but in the context of this post it tells us one clear thing: you can’t always resolve a conflict by providing a rational resolution. It just doesn’t work like that. People will use their guts, and showing them why their intuitions are misleading (one way of describing Greene’s hope) is not going to help much, because intuitions provide our drive, and rationality merely interprets it.
One needs to find a way to redirect the gut feelings themselves, and yes, doing so requires understanding them (the first science of morality), but finding useful ways of applying this understanding (the second science of morality) is a much, much harder endeavour. Even more: because the second effort deals with how people react and interact, it is guaranteed to produce only strictly provisional, not universally generalisable, answers: the moment one person generates a new thought, a new way to explain, justify, build on, or interpret his/her own “gut feelings”, the way s/he interacts with other moral actors will change, potentially re-shuffling all the cards in the “second science” deck.
This final thought does justice to the position of Jerry Coyne, which I substantially refuted in my first post on ethics. In this sense, he is right: an objective science of morality (one that aims to become a universal moral guide) is always going to stand on shaky ground. Nevertheless, I do think he is wrong in concluding that this is enough to declare such a science impossible. The disagreement probably originates in our different understandings of what counts as science: Sam Harris and I have a very inclusive view of science, one that relies on the solidity of its method. Coyne’s definition apparently relies much more on the solidity of results. This is a common position, and it’s the reason why I’ve spent so many words discussing the epistemology of science: I believe that results can separate science from pseudoscience only in a limited number of (simple) cases. Instead, I’ve claimed that a better discrimination can be obtained by looking at the attitude that inspires the methodology, precisely because this view is abstract enough to inform the trickiest discrimination problems. This leads me to the main point I’ve been trying to make in the current series: a science that tries to rate different actions within a universal “moral landscape” is legitimate, but condemned to a high-volatility status (the landscape is guaranteed to change in frequent and unpredictable ways). Whether this is a good or a bad conclusion, I will leave for the reader to decide.