Random thoughts on Antifragility (Part 2): the role of heuristics

How can we prepare for the unexpected? One of the central claims of Nassim N. Taleb’s book “Antifragile: Things That Gain from Disorder” is that the great big world is essentially (?!?) unpredictable, and that most attempts to tame it are eventually destined to fail. Therefore, in order to preserve whatever good we are interested in, we should increase its antifragility instead of trying to make it robust, or worse, trying to modify the environment so as to make sure the external world will not injure it. Similarly, to be successful, it is more useful to be antifragile than to try to predict the exact consequences of our actions.

According to Taleb, a good way to achieve both aims is to adopt a heuristic approach to decision-making:

  1. If we know that it is impossible to predict the character and size of future events, it is safer to use fuzzy predictive systems that are known to be useful most of the time and, perhaps more importantly, are not prone to occasionally producing catastrophically wrong predictions.
  2. Hence the role for heuristic “rules”: they don’t try to be perfectly accurate, but simply to guide our decisions in the right direction, minimising the chances of getting it spectacularly wrong. At the same time, it is possible to adopt rules that are designed to sacrifice a little predictive precision in favour of a desired asymmetry of outcomes: when we choose the wrong option, the downside is minimised; when we get it right, the advantage is comparatively bigger.
  3. In this way, one maximises antifragility: finding rules of thumb that prevent catastrophic errors and at the same time maximise the chances of scoring spectacular wins is possible even in an unpredictable world. More: it is desirable, because it transforms unpredictability into an advantage.
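
The asymmetry described in point 2 can be made concrete with a toy simulation. Everything below is my own illustration, not Taleb’s: the fat-tailed shock distribution, the 0.5 “premium” and the two strategies are arbitrary assumptions, chosen only to show the shape of the trade-off. A “precise” strategy is fully exposed to the shocks; the heuristic pays a small fixed cost to cap its downside.

```python
import random

random.seed(42)

def shock():
    # Fat-tailed shocks: mostly small, occasionally enormous (a crude Pareto draw).
    magnitude = random.paretovariate(1.5) - 1.0
    return magnitude * random.choice([-1.0, 1.0])

def precise_bet(x):
    # Fully exposed: wins and loses in direct proportion to the shock.
    return x

def heuristic_bet(x):
    # Pays a small fixed premium, but the downside is capped at that premium.
    premium = 0.5
    return max(x, 0.0) - premium

trials = [shock() for _ in range(100_000)]
worst_precise = min(precise_bet(t) for t in trials)
worst_heuristic = min(heuristic_bet(t) for t in trials)
print(worst_precise)    # catastrophically negative
print(worst_heuristic)  # never below -0.5
```

The heuristic loses a little most of the time, but its worst case is bounded; the “precise” strategy’s worst case grows with the fattest tail in the sample. That is exactly the asymmetry of outcomes that point 2 aims for.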

Once again, this is my own reading of Taleb’s ideas, and I have no trouble at all with them: I was trying to live this way well before I heard of him. His book is peppered with useful heuristic rules, most of which make perfect sense and can/should be filed in the “useful strategies” folder. However, I once more found reasons to get frustrated: from my point of view, Taleb overlooked the opportunity/necessity to discuss two related concepts: the limits and intrinsic risks of heuristic approaches, and the flip side, the advantages and possibility of generating accurate predictions.

I’ll start with the latter, because it is the business of science, and hence close to my heart. Fact: science has been able to produce fantastically accurate predictive models. Most of the technology we enjoy using is at least in part the direct consequence of this success. Taleb is absolutely right in pointing out that complex non-linear (for example, climate and weather), biological and super-biological (social, economic) systems are inherently very difficult to model accurately (we know this from chaos theory, but that’s a subject for another post). Taleb even goes as far as saying that these systems are impossible to predict, and that they should therefore be approached with a heuristic mindset, abandoning all hopes of precision. He may be right, I honestly don’t know, but I am sure that the only way to find out is to keep trying. One typical example may help: some diseases (or biological anomalies) can be reliably predicted by looking at genetic clues (trisomy 21 invariably generates the same phenotype); some others show a clear relation between the genetic anomaly and the severity and risk of symptoms (Huntington’s disease). In both cases, the causal role of genes is certain, even if we can’t claim to have understood the complete causal chain. However, the far more common situation is that we are only able to establish a weak correlation between genetic markers and one or the other disease: some alleles will increase the risk of this or that condition, suggesting that a heuristic approach to management and prevention is the best option we have. That’s all right, but it is not a good reason to stop trying to identify reliable causal links and/or predictive clues. We may find them, and the only way forward is to look for them. If we were to follow Taleb’s advice to its extreme consequences, we might never find the useful and reliable truths that would eventually allow us to eradicate, cure or manage important diseases.
It’s OK to observe that science may in some cases be unable to find and pinpoint the exact causes of some phenomena, but one should also note that it is largely impossible to predict when this is true (for exactly the same reasons!), and therefore conclude that it is always a good idea to try anyway (especially when the benefits of succeeding are very high). In this way, one avoids de-legitimising most ongoing, still work-in-progress scientific efforts; unsurprisingly, I think it’s an important addendum.

The other side is about the limits and risks of heuristic approaches. The first thing to note is that when a deterministic model is available (typically as the result of ‘traditional’ science), it should be considered superior to mere rules of thumb, simply by virtue of its increased precision (with one important caveat: we don’t know when a predictive model will get it wrong, so one should always also consider the possible consequences of errors!). Taleb’s concern is that in our information-driven society we are usually too quick in awarding this superiority to models that are in fact not deterministic: his main argument is about economic models that are good at predicting standard “business as usual” situations but consistently fail to predict anomalies. We can all agree that he is right: the risk is always present and should NOT be underestimated. However, a lot of the progress made by society can be seen as the result of ditching well-established rules of thumb (of the “that’s the way it’s done” kind) in favour of more reasoned “new approaches” that are the direct consequence of deterministic models and/or more precise understandings. Taleb spends a lot of words praising the value of traditional wisdom, of notions and practices that have been consolidated by time and cultural evolution. Once more, he is superficially right: notions that survived the test of time must have some strong justification. However, this consideration alone risks being dangerously regressive, and for very empirical reasons. An “eye for an eye” approach to social justice is the typical example: it’s a traditional way to administer justice, and finds its strong motivation in our own moral inclinations, largely shaped by evolution. But it is spectacularly harmful, and societies that moved away and deployed a legal system that actively represses personal retribution in favour of impersonal/impartial justice clearly enjoy much higher degrees of social cohesion.
The reason is Darwinism: information (in this case, established practices) that favours its own persistence will tend to survive the test of time. Family feuds can last for generations because hate and resentment spread and have remarkable self-sustaining qualities. They do not get selected because they are useful; they get selected because they self-sustain. It is exactly the same mechanism that can be seen in biological evolution: genes can be seen as selfish, because they will survive multiple generations if/when they facilitate their own propagation; whether or not they are actually beneficial to their host is utterly irrelevant. In other words, genes favour their own propagation and have no direct drive towards maximising the well-being of their host; in fact, some directly make their host miserable (it would be a long, and largely personal, story, but I think the perfect example here is the genetic predisposition for bipolar disorder).

In other words, traditional wisdom (and the heuristic rules it usually contains) always has some self-sustaining qualities. These qualities may or may not be beneficial for their host (the people who hold such beliefs), and simply praising ancient wisdom “because it survived the test of time, and hence is antifragile” is downright dangerous. This obviously applies to religious beliefs, which can be (and often are) seen as systems that are optimised for self-sustainability (think of the importance that they give to evangelism, the requirement to raise children in accordance with the given religion, and the prevalence/encouragement of having many children) and only secondarily (if at all) useful for their hosts. Clearly, being useful is beneficial to any belief system, but it is by no means the most important requirement.

Taleb, in other words, is at risk of perpetuating a new and sophisticated version of the naturalistic fallacy: he does somehow suggest that what resists the test of time is good, because it will invariably be antifragile. He doesn’t quite spell it out in so many words, but the whole book suggests this idea, even if he does mention exceptions (when antifragility is obtained by fragilising human beings). What he doesn’t explicitly point out is that a great number of traditional heuristics do exactly this: an eye for an eye generates self-sustaining feuds; diffidence towards the unusual generates bigotry and xenophobia; the grand intellectual architectures of different religions justify conflicts and frequently make some of their adherents systematically miserable (think of Catholic homosexuals, if you want a cheap example); and on and on, the list could continue for another thousand words.

In conclusion, while I agree with the foundations of Taleb’s thoughts, I wish to point out two distinct situations where the straightforward application of antifragility concepts is problematic and/or directly harmful:
1) The chaotic component of biological and social systems does not mean that all efforts aimed at producing deterministic and predictive models of such domains are hopeless and should always be avoided. On the contrary, scientific efforts should always be aware of the relevant sources of uncertainty, but that’s precisely because the general aim is to reduce uncertainty.
2) Centuries-old wisdom usually is antifragile, but that doesn’t make it morally superior or inherently useful. Sometimes it is both, but it frequently is exactly the opposite.

Posted in Ethics, Evolution, Philosophy, Science
9 comments on “Random thoughts on Antifragility (Part 2): the role of heuristics”
  1. […] misfire and/or fail to fire when appropriate. In fact, in perfect accordance with my analysis of Taleb’s take on heuristics, it is a good rule of thumb, that usually produces “rational” behaviours, but […]

  2. […] is not, in any way or form, optimised to promote the well-being of the people who embrace it. Sure, as I’ve said before, beliefs that are beneficial to the believer are also favoured, but they are a side consequence, […]

  3. gvdr says:

    I really enjoyed this excursion on Taleb’s Antifragility. Like you, I found the book both provoking and, at the same time, dull: Taleb appears to ignore all the complex-systems, statistical-physics and non-linear-dynamics results about stability, resilience and pattern emergence (and he never actually defines his main concept rigorously). Enough to drive a mathematician crazy.

    That said, I find really naive the idea that the “best” phenomena, the most important for our species, are antifragile: a glorious counterexample is co-operation. The absence of co-operation is far more antifragile. For the development of co-operative behaviours a society needs peculiar mechanisms—i.e., a certain level of assortativity in social relationships—or fine parameter tuning. Co-operative groups are hard to maintain, and easily break apart into smaller groups or solitary individuals. Yet individuals (genes) in co-operative groups often have a higher informal fitness compared to individuals in non-co-operative groups. That’s enough for Lady Natural Selection.

    I would, hence, object to your claim that “natural selection produces antifragility” and propose a milder phrasing such as “natural selection tends to avoid fragility, if it is not worth the risk”. [The two claims seem asymptotically the same: in a scenario where evolution has enough time to test each strategy for long enough, fragile strategies are never worth the risk, but that’s not the case in our world.]

    I’m sure one can rephrase either the narrative of co-operation emergence or the heuristic definition of antifragility to cover also this case. This is the virtue of Just So Stories and hand waving.

    • Sergio Graziosi says:

      thanks for the feedback and critique, always appreciated!

      I guess this discussion would probably fit better under the previous post on the subject, where I explain my point on “natural selection produces antifragility”, but it really doesn’t matter. I’ll answer here to keep things simple.

      Taleb’s Antifragility is a frustrating book indeed. In a weird way you could see this as a positive quality, as it forces the reader to engage her brains, re-adapt what’s there, plus fill in the gaps. It certainly worked in this way for me: I had to think about it and then write down my thoughts (here), so as to be able to make the core concepts mine and put the mind to rest.
      I’m not a mathematician, I guess the sources of frustration are different for you and me… Have you read Taleb’s papers on the mathematical side? I’ve skimmed them, but I can’t easily appreciate the details, so for me it’s a lot of effort with little return. You’ll certainly have the tools to grasp them more easily, so I’d be interested to hear your thoughts.

      As for your main point/criticism, for a minute you managed to convince me, and I found myself thinking “gvdr is right: I’ve got it all wrong”. But it was just a moment, I’m back to my usual position now, so I’ll try to explain why (in plain language, I’m afraid).

      My claim is even stronger than what you attribute to me here: in the previous post I’ve written that “Natural Selection is the source of all antifragility”. After temporarily accepting your view, I now fully stand by my claim once more, and even more firmly, thanks to your challenge.
      In short, this is because you are attributing the fragility-antifragility quality to the wrong kind of object/system/level. To me the pattern is clear. Let’s assume we have some sort of atomic replicator (AR): by this I mean a replicator that doesn’t contain smaller replicators; if you disassemble an AR, you find no replicators among its components. In the typical hypothesis of how life started, this would be a self-replicating strand of RNA, for example.

      Being a replicator, this kind of structure tends to accumulate over time, so the structure itself (not the single instantiation) is already getting closer to being antifragile: in our primordial RNA example, an RNA sequence that has some self-replicating qualities will resist the challenge of time far better than an equivalent sequence (same % of nucleotides) that doesn’t self-replicate. Thus, it’s the sequence that is somewhat antifragile, not the single molecule, in this case.
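
This step (the sequence persisting while single molecules come and go) can be sketched with a toy simulation. Everything in it is my own invention for illustration, and the decay and copy rates are arbitrary: every molecule may degrade each generation, but self-replicating sequences also produce copies, so the replicating sequence accumulates while the inert one dies out.

```python
import random

random.seed(0)

def simulate(self_replicating, generations=30, decay=0.1, copy_rate=0.3, start=100):
    """Toy molecule count: every molecule may decay; replicators also copy themselves."""
    n = start
    for _ in range(generations):
        survivors = sum(1 for _ in range(n) if random.random() > decay)
        copies = 0
        if self_replicating:
            copies = sum(1 for _ in range(survivors) if random.random() < copy_rate)
        n = survivors + copies
    return n

inert = simulate(self_replicating=False)
replicator = simulate(self_replicating=True)
print(inert)       # a handful of molecules left, at best
print(replicator)  # thousands: the sequence persists even though no single molecule does
```

No individual molecule survives the thirty generations in either case; what resists the challenge of time is the self-replicating sequence as a type, which is the sense in which the sequence, not the molecule, is somewhat antifragile.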

      The story then continues with the sequence getting longer, so that it acquires other enzymatic functions, and eventually promotes the formation of liposomes (or similar encapsulating structures). Once this happens, RNA strands and/or enzymes that simply replicate other RNA strands may start appearing (they do not self-replicate!), supporting the formation of things that start looking like unicellular organisms. At this stage, the single molecules of RNA, even the ones necessary for replication, become expendable. The whole proto-cell can keep working as long as at least one copy of each type of sequence is present. Note that you could probably already say that the different (now expendable) components “collaborate”…

      This little story (very hypothetical, to the point of being science fiction) shows the general pattern: a system (in our case, a proto-cell) enhances its own antifragility when its internal single components (in this case the single molecules of RNA) are made somewhat expendable. If the RNA strands were conscious entities, genuinely selfish in human-like terms, this would be bad news: suddenly they became expendable.

      Skipping many (hypothetical) steps, we can now move to DNA-based genes, and see that they generally cooperate (to produce fully-developed prokaryotes, in the first instance). In the bacteria we know, not one of their DNA sequences is fixed and immutable; at least some mutations are possible without destroying the whole machinery. Why? Because the same pattern operates again: the whole machinery is the cell, the single (cooperating) genes are individually expendable, and this is why the cell is antifragile. In very concrete terms: the system can evolve and survive new, unprecedented challenges because not all mutations are lethal. (It’s already complicated, because this “system” isn’t a single bacterium: it’s made of more bacteria and spans multiple generations.)

      A few more jumps and we reach the level you mention: cooperating humans. In a truly cooperative society (can be a small group of hunter-gatherers) the single individual is expendable. Once again, this is where the strength of the group comes from: the resulting cooperating structure (the whole group) becomes antifragile once it exploits the expendability of the individual by creating self-preserving measures. Enter moral inclinations, an eye for an eye, ostracism, crime and punishment and all that.

      The story continues: villages and cities will aggregate smaller groups, making the original groups more fluid (the single groups themselves become expendable) and thus becoming more and more resilient/antifragile. You then get states, empires, law enforcement, money, multinational companies and whatnot. All pumping or exploiting the fragility of lower levels of complexity/organisation.

      I can now tackle your counter-example: cooperation requires a lot of ad-hoc mechanisms, that’s agreed. But it doesn’t mean it is fragile: if it was, how do you explain the ubiquity of the multi-layered cooperation I’m describing, with a number of layers that keeps increasing? We can see the same conceptual pattern on all the major evolutionary revolutions:
      Prokaryotes become eukaryotes by incorporating more individuals into one, consequently making each single mitochondrion expendable.
      Colonies make the survival and reproductive success of a single cell almost completely irrelevant.
      Proper multicellular organisms quickly make their single cells utterly irrelevant, to the point where most organisms rely on the ordered discharge of old cells, or on cell-self-destructing mechanisms.
      Sexual reproduction makes it possible that even if you reproduce, some of your genes will not pass on (irrespective of their utility); thus the diversification it produces (more optionality equals less fragility) is obtained at the expense of your single genes.

      However, as you hint, each cooperative layer does create a new source of risk: it generates new niches for exploitation: free-riders, cancer and all forms of parasitism. But the new niches will necessarily be self-limiting: they rely on the existence of the cooperative structure they exploit. Thus, parasitism tends to evolve into symbiosis and ultimately contributes to the larger pattern.

      You may note that the “story” I’ve sketched above is far too neat and well encapsulated; this is what I think confused you. In reality, most of the time the boundaries between nested cooperation layers are fuzzy, and overlap across scale, spatial and temporal dimensions. Think of the structures that sustain the existence of currencies for a clear example of something that escapes boundaries on all dimensions, or sexual reproduction for a less extreme case.

      In the case of cooperation within (small) human groups, it really is difficult to say that selection happens at the group level and thus cooperation is favoured because of natural selection. Groups are not replicators, but self-preserving superstructures at best; in some cases they are also very imprecise replicators, in the sense that they may divide into smaller groups which retain more or less the same intra-group organisation. Thus, the role of cooperation and cooperation-enhancing structures in favouring the propagation of single genes (the only proper replicators that we can find) is only visible across generations and functions in very indirect ways. IOW, natural selection favours systems that promote cooperation in a way that is very slow (agreed!), and difficult to describe in rigorous (quantifiable) terms. This doesn’t make its action less real, though.
      It’s a very fuzzy, difficult to pinpoint and most likely a just-so story: I say “most likely” because I’m not even sure I fully understand what the “just so” (dis)qualifier really means.

      So, no, I wouldn’t settle for “natural selection tends to avoid fragility, if it is not worth the risk”: your rephrasing contains some truth (but I could also say it’s somewhat contradictory: if something is worth the risk, it probably means it reduces fragility), but it doesn’t capture the pattern I’m interested in. I prefer “Natural Selection is the source of all antifragility”, but if you can see why or where I’m wrong, please do share!

      On the other hand, all of the above is not supposed to be anthropocentric, thus, it does not contain any “value” judgement. Yes, Taleb does seem to suggest that “antifragile = good” and therefore that “natural = good”, and I do agree with you that this isn’t justified (in my story above, even the distinction between natural and artificial doesn’t make much sense). Cooperation in the multi-layered, multi-domain fashion that I discuss here just-happens, it has no intrinsic moral value. From a human perspective, sometimes it is good, sometimes bad, and many other times it depends on contextual factors.

      I fear you might receive all of the above as hand waving… It may well be! At best, I’m trying to show you why it would be extremely difficult to formalise it mathematically. Certainly I don’t have the know-how, so I can only hope that someone like you might someday take on the challenge! 😉

  4. gvdr says:

    I didn’t read your answer till now (I vainly tried to subscribe to the notifications…).

    I agree with most, yet I’m preparing a long follow up. 🙂



  5. Sergio Graziosi says:

    looks promising: long follow ups are my thing.
    And don’t worry about time: if you want to discuss with me, you’ll quickly find out that I like (and have) to take my time as well. I have precious little spare mental energy and I know that allowing my thoughts to settle for a while is usually productive.
    Having said that, you probably have already noticed that I look at disagreements as a resource, so there is no need to sacrifice clarity in favour of courtesy.

  6. […] These might seem minor quibbles, but such cases are all too common. Think of office practices, established long ago by people who are now happily retired and were responding to needs that may since have disappeared. Frequently, new technological developments allow us to do things in new, much more efficient ways. These improvements may be there, available at our fingertips, but never implemented just because we never spotted the possibility. Another (only partially surpassed) example would be global warming: before finding out about the greenhouse effect, we already had a problem, but didn’t know – we just kept going into deeper and deeper trouble, unwillingly and unaware (yes I know, we are still going, even if willingly and aware 😦 ). The list of problems of this kind is just endless: I could go on and on for days. Unlike the previous classes, realising this is quite disruptive. In the first two cases, the conclusion to be drawn is a little conservative: the status quo isn’t perfect, but the alternatives are worse, so we should learn to live with the imperfection. In this case, the opposite is true: the status quo isn’t perfect and we should significantly improve it. The problem is that we don’t know about the imperfection, so how can we tackle it? Being a meta-problem, it requires a meta-solution: the only way to respond to this kind of challenge is to be challenging. The issue here is over-reliance on habit and/or received wisdom. Thus, the solution lies in cultivating curiosity and encouraging experimentation. The problem of unknown problems is the single strongest reason why conformism should never be encouraged (most humans long for acceptance, so we can be sure that some degree of conformism will always be preserved). Sure, a society animated with too much anti-conformism will be plagued with inefficiency: valuable energy will be wasted exploring dead ends.
However, a society with too much anti-conformism is inherently unstable: if everyone does things differently, you may not even call it a society! Thus, too much anti-conformism inevitably self-corrects and will, without exceptions, tend to generate new self-sustaining norms (because, you know, natural selection operates on all structures). Thus, the waste that comes with too much experimentation is a clear example of the second type of surprising problems: the ones we don’t want to solve; we want to manage them. In order to avoid always repeating the same errors without even noticing that something is wrong, we should try different routes as a matter of routine: in this way we maximise our chances of actually making progress while keeping the downside automatically under control (I hope you all spotted the antifragile angle). […]

  7. VS says:

    I like your second concluding point. I’ve always found Taleb to be self-absorbed. He invents terms for stuff that scientists and statisticians have known for centuries and passes them off as his original ideas. Be it the black swan and fat-tailed distributions, or issues of stability/predictability in chaotic systems and “antifragility”. His books are also peppered with historical anecdotes that seldom serve any purpose other than to communicate to the reader that the author is intelligent and well-read. In the end, there is nothing in his ideas that is revolutionary or even potentially useful to guide scientific progress. Everything that you said in your responses in the comments about sizes and connectivity of groups, expendability of individual nodes, etc. leading to antifragility is well known to network scientists. A huge part of graph theory is devoted to mathematical investigations into those questions. So in the end, what Taleb does is repackage ideas that are well known to scientists, pass them off as his own, and make millions of bucks in book sales. He is obviously an intelligent dude and he’s tried very hard to convince people of that, and he has been successful.

  8. Sergio Graziosi says:

    Thanks VS.
    I find the trajectory followed by Taleb to be even more frustrating than the book. Not a good reason to ignore ideas that I believe could be useful. Of course you are right: all of the building blocks of antifragility have been found, described and utilised in many different theoretical ways already.

    I still (stubbornly, perhaps) hang on the concept of antifragility because of three reasons:
    1. The focus on persistence/survival of given structures works very well in linking structure/superstructure and information, I find it aesthetically pleasing.
    2. In terms of understanding the world around me (forget official science for a minute, if you may), the fragile/robust/antifragile triplet works much, much better than just the fragile/robust couple – it captures a concept for which we didn’t have a name. Furthermore, the reason why heuristic strategies work so well is immediately apparent and in turn it becomes easier to find how to best manage my everyday (and/or exceptional) risks.
    3. The relationship between Evolution (and/or natural selection) and antifragility looks fascinating to my (somewhat naive) eyes. I can’t make up my mind: are they two ways to describe the same processes? Are they two sides of the same coin? Do they feed off each-other? I don’t know! But I love not being sure and being forced to think and re-think, it feels fruitful.
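
One way to see why the triplet in point 2 captures something the fragile/robust couple misses is to probe how a payoff responds to symmetric shocks. This is only a sketch based on Taleb’s broader convexity heuristic, not a rigorous definition, and the three payoff functions below are my own arbitrary examples:

```python
def jensen_gap(f, x=0.0, d=1.0):
    """Average payoff under symmetric shocks minus payoff with no shock.
    Negative: concave, harmed by volatility (fragile-ish).
    Zero: linear, indifferent to volatility (robust-ish).
    Positive: convex, gains from volatility (antifragile-ish)."""
    return 0.5 * (f(x + d) + f(x - d)) - f(x)

fragile = lambda x: -x * x           # concave: volatility always hurts
robust = lambda x: 2.0 * x           # linear: shocks cancel out on average
antifragile = lambda x: max(x, 0.0)  # convex: capped downside, open upside

print(jensen_gap(fragile))      # -1.0
print(jensen_gap(robust))       # 0.0
print(jensen_gap(antifragile))  # 0.5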

    Be reassured: I’m not trying to convince you of anything at all. Since you’ve (very gently) rattled my cage, I’m merely grabbing the chance to perform a super-quick “status check” on where I am and whether my position has shifted while I was busy on other stuff. It seems I’m still agreeing with myself, for now.


All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Please feel free to re-use and adapt. I would appreciate if you'll let me know about any reuse, you may do so via twitter or the comments section. Thanks!
