How can we prepare for the unexpected? One of the central claims of Nassim N. Taleb’s book “Antifragile: Things That Gain from Disorder” is that the great big world is essentially unpredictable, and that most attempts to tame it are ultimately destined to fail. Therefore, in order to preserve whatever good we are interested in, we should increase its antifragility instead of trying to make it robust, or worse, trying to modify the environment so as to make sure the external world will not injure it. Similarly, to be successful, it is more useful to be antifragile than to try to predict the exact consequences of our actions.
According to Taleb, a good way to achieve both aims is to adopt a heuristic approach to decision-making:
- If we know that it is impossible to predict the character and size of future events, it is safer to use fuzzy predictive systems that are known to be useful most of the time and, perhaps more importantly, are not prone to occasionally producing catastrophically wrong predictions.
- Hence the role for heuristic “rules”: they don’t try to be perfectly accurate, but simply to guide our decisions in the right direction, minimising the chances of getting it spectacularly wrong. At the same time, it is possible to adopt rules that are designed to sacrifice a little predictive precision in favour of a desired asymmetry of outcomes: when we choose the wrong option, the loss is minimised; when we get it right, the advantage is comparatively bigger.
- In this way, one maximises antifragility: finding rules of thumb that prevent catastrophic errors while maximising the chances of scoring spectacular wins is possible even in an unpredictable world. More than that: it is desirable, because it transforms unpredictability into an advantage.
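The asymmetry described above can be made concrete with a toy Monte Carlo sketch. Everything here is invented for illustration (the shock distribution, the probabilities, the cap of 1 unit): the point is only that a strategy with a capped downside and an open upside cannot do worse than full exposure, and gains from the rare large events rather than being destroyed by them.

```python
import random

random.seed(42)

def shock():
    # A crude fat-tailed world: mostly small moves, occasionally a huge one.
    # (Illustrative distribution, not taken from Taleb's book.)
    if random.random() < 0.01:
        return random.choice([-1, 1]) * random.uniform(50, 100)
    return random.uniform(-1, 1)

def fragile_bet(x):
    # Fully exposed: gains and losses both scale with the shock.
    return x

def antifragile_bet(x):
    # Capped downside, open upside: lose at most 1 unit, keep the big gains.
    return max(x, -1.0)

trials = [shock() for _ in range(100_000)]
exposed = sum(fragile_bet(x) for x in trials)
capped = sum(antifragile_bet(x) for x in trials)
print(f"fully exposed total:   {exposed:10.1f}")
print(f"capped-downside total: {capped:10.1f}")
```

Since `max(x, -1.0) >= x` for every shock, the capped strategy is guaranteed to end at or above the exposed one over any sequence of events: that guarantee, not a precise forecast, is what the heuristic buys.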
Once again, this is my own reading of Taleb’s ideas, and I have no trouble at all with them. I was trying to live this way well before I had heard of him. His book is peppered throughout with useful heuristic rules, most of which make perfect sense and can/should be filed in the “useful strategies” folder. However, I once more found reasons to get frustrated: from my point of view, Taleb overlooked the opportunity/necessity to discuss two related concepts: the limits and intrinsic risks of heuristic approaches, and the flip side, the advantages and possibility of generating accurate predictions.
I’ll start with the latter, because it is the business of science, and hence close to my heart. Fact: science has been able to produce fantastically accurate predictive models. Most of the technology we enjoy is at least in part a direct consequence of this success. Taleb is absolutely right in pointing out that complex non-linear systems (for example, climate and weather), as well as biological and super-biological (social, economic) systems, are inherently very difficult to model accurately (we know this from chaos theory, but that’s a subject for another post). Taleb even goes as far as saying that these systems are impossible to predict, and that they should therefore be approached with a heuristic mindset, abandoning all hope of precision. He may be right, I honestly don’t know, but I am sure that the only way to find out is to keep trying. A typical example may help: some diseases (or biological anomalies) can be reliably predicted by looking at genetic clues (trisomy 21 invariably generates the same phenotype); others show a clear relation between the genetic anomaly and the severity of risk and symptoms (Huntington’s disease). In both cases, the causal role of genes is certain, even if we can’t claim to have understood the complete causal chain. However, the far more common situation is that we are only able to establish a weak correlation between genetic markers and one or another disease: some alleles will increase the risk of this or that condition, suggesting that a heuristic approach to management and prevention is the best option we have. That’s all right, but it is not a good reason to stop trying to identify reliable causal links and/or predictive clues. We may find them, and the only way forward is to look for them. If we were to follow Taleb’s advice to its extreme consequences, we might never find the useful and reliable truths that would eventually allow us to eradicate, cure or manage important diseases.
It’s OK to observe that science may in some cases be unable to pinpoint the exact causes of some phenomena, but one should also note that it is largely impossible to predict when this is true (for exactly the same reasons!), and therefore conclude that it is always a good idea to try anyway (especially when the benefits of succeeding are very high). In this way, one avoids de-legitimising most ongoing, still work-in-progress scientific efforts, and unsurprisingly, I think it’s an important addendum.
The other side is about the limits and risks of heuristic approaches. The first thing to note is that when a deterministic model is available (typically as the result of ‘traditional’ science), it should be considered superior to mere rules of thumb, simply by virtue of its increased precision (with one important caveat: we don’t know when a predictive model will get it wrong, so one should always also consider the possible consequences of errors!). Taleb’s concern is that in our information-driven society we are usually too quick to award this superiority to models that are in fact not deterministic: his main argument is about economic models that are good at predicting standard “business as usual” situations but consistently fail to predict anomalies. We can all agree that he is right: the risk is always present and should NOT be underestimated. However, a lot of the progress made by society can be seen as the result of ditching well-established rules of thumb (of the “that’s the way it’s done” kind) in favour of more reasoned “new approaches” that are the direct consequence of deterministic models and/or more precise understanding. Taleb spends a lot of words praising the value of traditional wisdom, of notions and practices that have been consolidated by time and cultural evolution. Once more, he is superficially right: notions that survived the test of time must have some strong justification. However, this consideration alone risks being dangerously regressive, and for very empirical reasons. An “eye for an eye” approach to social justice is the typical example: it is a traditional way to administer justice, and finds its strong motivation in our own moral inclinations, largely shaped by evolution. But it is spectacularly harmful, and societies that moved away from it, deploying a legal system that actively represses personal retribution in favour of impersonal/impartial justice, clearly enjoy much higher degrees of social cohesion.
The reason is Darwinism: information (in this case, established practices) that favours its own persistence will tend to survive the test of time. Family feuds can last for generations because hate and resentment spread and have remarkable self-sustaining qualities. They do not get selected because they are useful; they get selected because they self-sustain. It is exactly the same mechanism that can be seen in biological evolution: genes can be seen as selfish, because they will survive multiple generations if/when they facilitate their own propagation; whether or not they are actually beneficial to their host is utterly irrelevant. In other words, genes favour their own propagation and have no direct drive towards maximising the well-being of their host; in fact, some directly make their host miserable (it would be a long, and largely personal, story, but I think the perfect example here is the genetic predisposition for bipolar disorder).
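The selection dynamic described above can be sketched as a toy simulation. The meme names and rates below are entirely invented: the model deliberately makes persistence depend only on how well a belief spreads and how rarely carriers abandon it, while any benefit to the host plays no role at all, which is exactly the point.

```python
# Two "memes" competing over generations. Persistence depends only on
# transmission (spread vs. drop rates), not on whether the meme helps
# its host. All names and numbers are invented for illustration.
memes = {
    # spread: chance a carrier converts someone new, per generation
    # drop:   chance a carrier abandons the meme, per generation
    "useful but quiet":  {"spread": 0.10, "drop": 0.15, "carriers": 1000},
    "harmful but viral": {"spread": 0.20, "drop": 0.05, "carriers": 1000},
}

for generation in range(100):
    for meme in memes.values():
        n = meme["carriers"]
        # Expected carriers next generation: converts minus defectors.
        meme["carriers"] = max(0, round(n * (1 + meme["spread"] - meme["drop"])))

for name, meme in memes.items():
    print(f"{name}: {meme['carriers']} carriers after 100 generations")
```

After a hundred generations the “useful but quiet” meme has nearly died out while the “harmful but viral” one has exploded: being good for the host is neither necessary nor sufficient for surviving the test of time.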
In other words, traditional wisdom (and the heuristic rules it usually contains) always has some self-sustaining qualities. These qualities may or may not be beneficial for their host (the people who hold such beliefs), and simply praising ancient wisdom “because it survived the test of time, and hence is antifragile” is downright dangerous. This obviously applies to religious beliefs, which can be (and often are) seen as systems that are optimised for self-sustainability (think of the importance they give to evangelism, the requirement to raise children in accordance with the given religion, and the prevalence/encouragement of having many children) and only secondarily (if at all) useful for their hosts. Clearly, being useful is beneficial to any belief system, but it is by no means the most important requirement.
Taleb, in short, is at risk of perpetuating a new and sophisticated version of the naturalistic fallacy: he does somehow suggest that what resists the test of time is good, because it will invariably be antifragile. He doesn’t quite spell it out in so many words, but the whole book suggests this idea, even if it does mention exceptions (when antifragility is obtained by fragilising human beings). What he doesn’t explicitly point out is that a great number of traditional heuristics do exactly this: an eye for an eye generates self-sustaining feuds; diffidence towards the unusual generates bigotry and xenophobia; the grand intellectual architectures of different religions justify conflicts and frequently make some of their adherents systematically miserable (think of Catholic homosexuals, if you want a cheap example); and on and on, the list could continue for another thousand words.
In conclusion, while I agree with the foundations of Taleb’s thoughts, I wish to point out two distinct situations where the straightforward application of antifragility concepts is problematic and/or directly harmful:
1) The chaotic component of biological and social systems does not mean that all efforts aimed at producing deterministic and predictive models of such domains are hopeless and should always be avoided. On the contrary, scientific efforts should always be aware of the relevant sources of uncertainty, but that’s precisely because the general aim is to reduce uncertainty.
2) Centuries-old wisdom is usually antifragile, but that doesn’t make it morally superior or inherently useful. Sometimes it is both; frequently it is exactly the opposite.