Having described and discussed my methodological approach, it’s now time to see it in action. Because my final aim is to understand humans, and thus to avoid making stupid errors (I’m trying to write an evidence-based user manual for my own life), ethics has to be a very relevant topic in my book. My method is going to be scientific (or empirical), in the sense I’ve discussed previously. As a result, one big question stands out: can you use science to solve moral dilemmas? Or even: can we treat ethics as just another scientific subject? Can we have a science of morality?
Most people will be quick to answer “No, we can’t”. The common understanding of Hume’s law is that “you can’t derive ought from is”; hence, since science is in the business of describing what “is”, it is and always will be unable to produce “ought” claims*. Of course, I think this position is rubbish, and I’m in good company, so I won’t spend too much time discussing the theoretical dilemma. Instead, I will explain my position briefly, point to accredited opinions (on both sides), and then move on to what really interests me.
There is no doubt that you can derive a non-moral ought from empirical observations: if you want to learn to play the guitar, you ought to practise. In this case, you have a predefined goal, and your experience of reality can clearly tell you what you ought to do to achieve it. The trouble with moral arguments is that they imply a universally shared goal, “the Good”, an entity that is difficult to pinpoint, especially because the evidence seems to suggest that there is no such thing.
Want the evidence? Here it is: ask five people what they think is universally accepted as good and you’ll probably get fewer than five answers (somebody will likely refuse to answer), and the answers you do get will most likely not be the same. You may then dig even deeper and ask what is the most important universally accepted good, and you’ll get even fewer answers, and not always the same one. This is already interesting, because our own intuition frequently points in another direction: whatever you believe is good usually feels universal, and if others disagree, they are, by some twisted and intuitive definition, wrong. Of course, most of us know that this gut feeling is misleading and will allow various degrees of ethical relativism, but the point remains: there doesn’t seem to be a founding, universally recognisable “Good” that can be used to derive all the other “lesser goods” that drive our choices. Hence, at first sight, it doesn’t seem possible to create a universally valid “scientific system” able to generate all the moral oughts that we may need.
Fair enough, but since when has this been a problem? From where I stand, this impossibility of generating universally valid models that are definitive and error-free is shared by all sciences, and nobody tells us that you can’t do physics because, by definition, all measurements and predictions will lack infinite precision. Nor does anybody say that physics is no good because it only defines how things work in certain conditions and that, outside those conditions, it may know nothing at all. So no, the standard is-ought problem isn’t even remotely an issue; the real issue lies elsewhere, rooted in our fallacious intuitions. Either there is a founding, universally recognisable “Good”, and therefore we should try to define it (and there never was a shortage of alternative definitions), or there isn’t. If the latter is true, then a moral science will need to deal with contingent goods, possibly with distributions of different, often contrasting and competing goods, and study their interaction.
Clearly, I place my bets on the second scenario, but I must admit that it opens another problem. Dealing with distributions of different, contrasting and competing goods can be strictly descriptive: you may use this approach to create a model of conflicts and interactions, and if this model works well, it will allow you to make predictions: if agent A does X and agent B does Y, Z will happen. And then what? Is Z any good? To answer this final question you need a non-contingent, non-subjective reference point for “good”. You can derive the particular optimum for our A & B interaction (the place in the space of possible outcomes where the aims of both A and B, pooled together, reach maximum satisfaction), but by doing so you are applying a criterion (satisfaction maximisation), and this criterion is itself a definition of a universal good.
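To make this concrete, here is a toy sketch (every agent, action and score below is invented purely for illustration, not taken from any real model): two agents each rate every joint outcome, and the “satisfaction maximisation” criterion picks the optimum.

```python
from itertools import product

# Purely illustrative: two agents, two actions each, and integer
# satisfaction scores (0-10) that each agent assigns to every joint outcome.
actions_a = ["cooperate", "defect"]
actions_b = ["share", "hoard"]

satisfaction_a = {("cooperate", "share"): 8, ("cooperate", "hoard"): 2,
                  ("defect", "share"): 9, ("defect", "hoard"): 3}
satisfaction_b = {("cooperate", "share"): 7, ("cooperate", "hoard"): 6,
                  ("defect", "share"): 1, ("defect", "hoard"): 4}

def pooled_satisfaction(outcome):
    # The criterion itself: pool both agents' satisfaction by summing it.
    return satisfaction_a[outcome] + satisfaction_b[outcome]

# The "optimum" is whichever joint outcome maximises the pooled score.
optimum = max(product(actions_a, actions_b), key=pooled_satisfaction)
print(optimum, pooled_satisfaction(optimum))
```

The point of the sketch is not the numbers but the last two lines: choosing `max` over the summed scores is itself a moral commitment, smuggled in as a criterion.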
This is exactly what Sam Harris is doing; he
Urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible.
He is defining a Universal Good as the upward direction in the imaginary “moral landscape”. I have no problem with this vision, as long as we accept that this direction is itself an approximation: it reflects our own understanding of an ideal (even “invented”!) Universal Good and is guaranteed to change as we refine the models that let us define this ideal good with increasing precision. Luckily, this process matches my own definition of science very well: science is the process of creating more and more precise models of a given phenomenon; it doesn’t claim to reach perfection, only to strive towards it. It doesn’t even require the models to include things that actually exist: they can contain (and often do) invented entities that are justified by our aim. If a model produces good predictions, we should use it, and if it includes concepts that we created to allow the model to work, so be it. We don’t need to say “there is one universal good”; all we need to say is that postulating its existence is useful for the task at hand. Hence, my own approach to the demarcation problem (between science and pseudoscience) can be applied to answer the following question: can science inform ethics? Yes it can, and the first consequence of this application is that it ought to, because it contains the promise of helping us produce more precise and reliable ways to avoid making mistakes.
As far as I can tell, all critics of this position use a variation of the following argument:
You can’t have a science of morality because you can’t define a universal good. Even if you could define it, there are too many variables and therefore you will never produce scientific facts that can be universally regarded as true.
If you look around, you will find gazillions of people who take this stance; I will cite (and link) two of them, arbitrarily chosen based on my tastes. The first is Sean Carroll, useful in this context because he engaged in a comprehensive debate with Harris. It starts with a TED talk by Harris, Carroll replies on Discover Magazine, Harris follows up, and so on (you can spend a whole day digging, if you wish).
Jerry Coyne also replied, in his characteristically uncompromising way, stating that:
A lot of the philosophers and thinkers I respect are coming around to the view that there can be an “objective” morality, which I take to mean this: rational consideration of the world’s facts will reveal criteria whereby things can be seen objectively as either right or wrong. It may be hard to get those facts, but once you do the moral path would, it seems, be clear.
I still don’t accept this, and for the reason that, unlike science, morality also includes “add ons”. That is, after you divine the consequences of any action, one still has to add on the stipulation that those consequences comport with some standard of “rightness” or “wrongness.” Now people like Sam Harris claim that those standards are objective, too (his is “does an act increase general well being?”) but I don’t think it’s so simple, and neither do other philosophers.
The interesting side of Coyne’s position is that he is right: you can’t find criteria that will objectively classify actions as either right or wrong. What you can do is create and refine criteria that may, as science progresses, move towards more objective classifications; you can find and improve a set of criteria that will generate more and more universally acceptable classifications of rightness and wrongness. Also, in a process that is intriguingly analogous to the one I use to rate competing models, this science aspires to produce relative ratings of rightness, as in “action A is better than action B”, without needing to claim that action A is right and that all other possible actions are wrong. This second claim is impossible to make: it requires knowing all possible actions (and their consequences), so I hope that no proponent of a science of ethics will ever try to make such claims.
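A tiny sketch of the difference (the actions and scores here are invented placeholders): a relative rating only orders the candidates we happen to know about, under our current, revisable criterion; it never labels any of them absolutely right.

```python
# Invented placeholder scores produced by whatever criterion we currently
# use (say, some estimate of well-being); the criterion itself is revisable.
estimated_rightness = {
    "action A": 0.62,
    "action B": 0.35,
    "action C": 0.51,
}

# Relative ranking: "A is better than C, which is better than B".
# This is a claim only about these known candidates, never a verdict that
# the top action is "right" and all possible alternatives are "wrong".
ranking = sorted(estimated_rightness, key=estimated_rightness.get, reverse=True)
print(ranking)
```

Swapping in a better criterion changes the scores, and possibly the order, which is exactly why the ranking can improve over time without ever becoming absolute.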
Coyne’s reply is interesting also because he presupposes the possibility of objective claims: by now you should know that I don’t think this is sensible, but I will have to discuss this stance in a separate post.
So, let’s wrap up. My claim, in short: we ought to use reality (direct and indirect evidence) to try to clarify what is right and what isn’t. We ought to, but we know from the start that this can be done only within certain well-defined boundaries:
- We can only rate rightness and wrongness in relative terms, our results will never be absolute.
- While doing so, we accept that the criteria we’ll use are themselves subject to evaluation and bound to be improved as we progress.
[Side note: the second boundary is what allows this endeavour to be a scientific effort, so one could say that Coyne’s position enables the scientific-ness of the effort, quite the opposite of what I think Coyne wanted to claim.]
These boundaries are what interest me, because they clearly define two separate but interlinked domains of enquiry. The first domain concerns the criteria we plan to use. In order to find and refine them we will have to understand human beings, and all creatures that have their own, self-acknowledged purposes. The working hypothesis is that “what is good is what allows creatures to fulfil their needs”. Hence, we ought to study their needs. The second domain is the science that applies what the first may find, and studies how agents animated by distinct (and ever-changing) purposes interact. This study is necessary to make predictions about the consequences of different choices and should eventually allow us to rate different actions within the “moral landscape”.
I am making this distinction because the two domains can and should be seen as two separate and distinct sciences. This is because they will use different methods, and are inherently bound to different “solidity” constraints. The first can try to become a hard science; I see no reason why it shouldn’t. The second, however, is inherently limited: the consequences of moral actions are always significantly influenced by how people/agents will react, and this variable knows no theoretical boundaries, as it ultimately depends on creativity (another point that Coyne makes very well). The consequence is that this second science is condemned to a certain degree of volatility, but this doesn’t mean that it shouldn’t try to solidify as much as possible.
In the next post in this series I will investigate these two sciences in more detail, along with the role of philosophy: if ethics enters the science domain, does this curtail the traditional role of philosophy? For now, I’m happy to conclude that the two domains can exist and should regard themselves as sciences, while acknowledging that they both risk degenerating into fluff/flatulence.
*Ah! The unintentional irony: I’m pretty sure that most of the people who take this stance will be happy to extend it further, and claim that, since science can’t (an “is” statement), it should not (an “ought” statement)! [back]