Begging for trouble: vaccine resistance and the UK

Right now, people living in the UK should be terrified, because our government is gambling with our lives and well-being, and doing so with astonishingly bad odds. Everywhere else, people should try hard to ensure their own governments understand how dangerous this gambling is, so that they can avoid repeating it, while mitigating the equally high danger that the UK’s experiment poses to the whole world [1].
The gamble I’ll focus on is this: the UK government is acting as if there were no chance that the virus will learn to infect vaccinated people, or people who were infected and survived. This is utter madness, because in fact it is very likely that the virus will evolve in this direction, and alas, current policies can only speed up the process, multiplying the risk many, many times.

In the words of Dr. Mike Ryan, from the WHO: “the logic of ‘more people being infected is better’ is, I think, logic that has proven its moral emptiness and its epidemiological stupidity”, but alas, this is exactly what the UK government is producing with its explicit policies [2].
In response, a number of people, medical doctors, epidemiologists and scientists, led by the Independent SAGE, are sounding the alarm (Gurdasani et al., 2021). Their focus is on the residual risk of death (which isn’t close to zero, even for vaccinated people) and the vastly underestimated impact of long COVID. Their focus is entirely valid! Both aspects, even in isolation, are grave enough to demonstrate the utter stupidity of the official policies. Not attempting to reduce R below 1, and instead hoping that surging infections will not have too bad an effect, given the number and type of people vaccinated, is manifestly reckless on those grounds alone.

However, these valuable efforts are relegating one argument to the sidelines. The risk of breeding a new variant able to escape existing immunity (immunity due to either vaccination or natural infection) is consistently mentioned, but not highlighted. I think this is wrong, for two reasons:

  1. If such a variant appears, the damage it will do (in the UK and worldwide) will be much, much greater than the direct (and deeply concerning) effects of the current policies.
  2. The current policies are making the appearance of such a variant (or of multiple such variants) much more likely. If such variants can exist, they will appear sooner and in greater numbers.

I don’t think point 1 needs much supporting evidence. If a vaccine-escaping variant appears, we’ll once more be defenseless. The only strategy available would be to lock down again, hard. But this time, we’ll be doing so from a weaker starting point: strained populations, weaker economies, health systems and hospitals already under huge and long-lasting pressure, declining credibility of governments and scientists, and so forth. Perhaps new “counteracting” vaccines will become available relatively quickly, but as we know, distribution will still be slow in most of the world and, of course, anti-vaxxers will inevitably gain traction, becoming even more dangerous than they already are.

This leads me to the golden rule of risk-management under high uncertainty:

You don’t, ever, gamble with known existential risks.

In high-uncertainty situations, when an event is known to be possible, but is rare enough that its probability of occurring is impossible to estimate, the only sensible policy is to do whatever is possible to minimise its likelihood.

Willfully doing the opposite matches a definition of insanity quite well.

And yet, this government is breaking this golden rule, overtly, knowing full well that if vaccine resistance appears, many of us will die as a consequence. It’s as simple and terrifying as that.

Point 2 above is less obviously true, though. In what follows I will claim that it is, in fact, obviously true, but only after putting together enough separate bits of existing knowledge. Thus, I think it is useful to piece these bits together in one coherent argument.
Given the urgency [3], I do not have the time to collect multiple references supporting each claim; thus I will merely point to some supporting evidence, picking from the least controversial options I can find quickly.

Obvious fact number one: this virus can mutate, and each new person it infects is a new chance to mutate. Thus, high numbers of infections produce higher chances of mutations.

Obvious fact number two: this virus mutates often (Hudson et al. 2021). We know this from the important sequencing work done in the UK and elsewhere. But even as non-specialists, we can infer this fact, because the various variants of concern (Alpha to Delta and more) did appear and spread. They spread because they are better than the original virus at spreading (this is a tautology: it can only be true). But this implies that the virus mutated in innumerable other ways, of which only a handful actually made it better at spreading. This follows because each single mutation is random, and finding a mutation that increases the infectivity of an already very infective virus is clearly an extremely low-probability event.

Obvious fact number three: the protection provided by vaccines is fragile. We need many different vaccines for each different pathogen because vaccinations are specific. They teach our immune system to neutralise a specific pathogen, and only that specific one. If the pathogen changes enough, vaccines stop being effective. It’s why we need to get yearly jabs against the flu. The flu virus mutates at a pace that is high enough to make this necessary.

Obvious fact number four: COVID-19 has mutating capabilities (understood as “abilities to find effective mutations”) that are comparable to the flu’s. The appearance of several strains, which are better than the original virus at spreading, provides incontrovertible proof.

Effects of all these obvious facts:

  1. The random appearance of some degree of vaccine (and natural-immunity) resistance is more likely than the appearance of each one of the “named” variants (Alpha to Delta). This is because, to escape vaccines and immunity, the virus merely needs to change. Unlike the named variants, it doesn’t also need to become better at spreading than the currently prevalent form.
  2. Allowing more infections to happen multiplies the probability that new variants will emerge. Each new infection thus makes it incrementally more likely that vaccine resistance will appear (see the sketch after this list).
  3. For a vaccine-resistant variant to appear and spread, one of two things needs to apply: the variant needs to be better at spreading on top of its resistance qualities, which is a rare event; and/or it needs to have access to a large-enough proportion of vaccinated people. Until now, the latter case was rare or impossible, because too few people were vaccinated. But as more people get their jabs, this case becomes not only possible, but more and more likely (if the virus is allowed to spread).
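To make the multiplying effect in point 2 concrete, here is a minimal sketch in Python (the per-infection escape probability p is entirely made up, since nobody knows the real value; only the shape of the relationship matters):

```python
# Probability that at least one vaccine-escape mutation arises among
# n independent infections, each with a (made-up) per-infection
# probability p of producing one.
def p_escape(n_infections: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n_infections

p = 1e-7  # illustrative only: the real value is unknown
for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} infections -> P(at least one escape) = {p_escape(n, p):.3f}")
```

While p times the number of infections stays small, the overall probability grows almost linearly: ten times the infections means roughly ten times the risk. That is the sense in which allowing infections to surge “multiplies” the danger.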

These three effects are not controversial. They are the inevitable consequence of the obvious facts I’ve listed.

Nevertheless, these days, people expect evidence. Mere reasoning, however inescapable, has somehow lost its powers of persuasion, at least when it comes to informing policies (for well-known, and understandable, reasons!).
Thus, I went through the trouble of demonstrating my assertions with numbers(!). This, however, is not something that can be done in a scientifically sound manner [4], because there are too many unknowns.
We don’t know the exact mutation rate, and still less the rates of the different types of mutation. We don’t know well enough how the virus spreads, and thus we know even less about the ways in which its spreading abilities can change. We don’t know which mutations will make it better or worse at spreading, nor which mutations will make it better at escaping immunity. However, we do know that each mutation inevitably makes it somewhat better at escaping immunity, because it makes it a little different.

Still, apparently numbers are more convincing than (inescapable) arguments, so I’ve made a toy model to produce some. In it, we have a set of 4 inputs, which I’ve labelled as “assumptions”:

  1. Given an infected person, the probability that an interaction with one uninfected, non-immune person will transmit the virus.
  2. Given an infected person, the probability that an interaction with one uninfected, already immune person will transmit the virus.
  3. The proportion of immune / non-immune people in the population.
  4. The number of interactions that an infected person has while infected.

I don’t need to convince anyone that this is a toy model. It oversimplifies everything! But it is complex enough to account for changes in infectivity, and thus for the likely effects of mutations (1, 2); changes in vaccination rates (3); and changes in social-distancing policies and practices (4).
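For those who would rather read code than a spreadsheet, the whole model boils down to one line of arithmetic: R is the number of interactions times the average per-interaction transmission probability, weighted by the immune fraction. Here is a minimal Python sketch (the function name and the example figures are mine, not taken from the spreadsheet):

```python
def toy_r(p_naive: float, p_immune: float, immune_frac: float,
          n_interactions: float) -> float:
    """Expected secondary infections (R) caused by one infected person.

    p_naive:        P(transmission) per interaction with a non-immune person (1)
    p_immune:       P(transmission) per interaction with an immune person (2)
    immune_frac:    proportion of the population that is immune (3)
    n_interactions: number of interactions while infectious (4)
    """
    per_contact = (1 - immune_frac) * p_naive + immune_frac * p_immune
    return n_interactions * per_contact

# Illustrative figures only, not the spreadsheet's:
print(toy_r(p_naive=0.30, p_immune=0.03, immune_frac=0.6, n_interactions=10))  # -> 1.38
```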
Here is the summary of the findings:

Summary results of my toy model. Second column shows the estimated R value for various scenarios/cases.

Note: you can download the Excel spreadsheet I’ve used to calculate these figures here. Hopefully, it is organised well enough to allow tinkering with it. It will allow you to check my numbers (please do!) and also to play with the parameters to see how the simulated R value changes accordingly. If you need “more serious” science, please see this preprint (Gog et al. 2021), which supports my (obvious) conclusions, since its authors demonstrate that “the highest risk for vaccine escape can occur at intermediate levels of vaccination”.

We start with CASE 1, in which I’ve (arbitrarily) picked a combination of variables that results in an R number comparable to the present situation in the UK.
I then changed one or more variables to see what effect they have on the all-important (but imaginary: this is a toy model/simulation!) R number. In yellow, I’ve highlighted all cases where the resulting R remains above one. What this toy demonstrates is that it is possible, and in fact extremely easy, to find conditions where a mutation that reduces infectivity, but confers some degree of vaccine/immunity resistance, can proliferate. The families of cases 2 and 3 assume that a mutation conferred some, but far from perfect, vaccine resistance, at the cost of reduced infectivity. Cases of family 4 are “limit” cases, where immunity escape is perfect and obtained at an extremely high cost in terms of infectivity. Thus, cases 2 and 3 are clearly much more likely than the emergence of each one of the known variants of concern (Alpha to Delta), while case 4 might be of comparable likelihood (but we really can’t know!).
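To see the logic of case families 2 and 3 in code form (again with invented figures, chosen only to make the pattern visible), compare the original virus against a hypothetical escape variant that transmits less well per contact but partially bypasses immunity, in a mostly immune population:

```python
def toy_r(p_naive, p_immune, immune_frac, n_interactions):
    return n_interactions * ((1 - immune_frac) * p_naive + immune_frac * p_immune)

f, n = 0.7, 10  # a mostly immune population; all figures invented

# Original virus: highly infectious, but largely blocked by immunity.
print("original R:", toy_r(0.30, 0.02, f, n))  # -> ~1.04
# Hypothetical escape variant: worse at infecting the non-immune, but
# immunity only halves its per-contact probability instead of nearly
# eliminating it.
print("escape R:  ", toy_r(0.20, 0.10, f, n))  # -> ~1.30
```

With these made-up numbers the escape variant out-spreads the original despite being worse at infecting the non-immune: exactly the pattern of case families 2 and 3.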

This is why we should all be terrified: my model (limited as it is) already demonstrates that, when a significant proportion of the population is indeed vaccinated and/or “naturally immune”, vaccine-resistant variants, able to spread at exponential rates, are MORE LIKELY than the appearance of each of the known variants of concern. [For those at the back: the variants of concern were likely enough to actually happen; it follows that anything that has now become more likely than those will, oh, probably happen!]
From here, one has to conclude that, precisely because we now have a partially vaccinated population, allowing the virus to spread is utter madness. It multiplies the probability of a catastrophic event, which has, by definition, the potential to kill millions of us.

Moreover, the same toy model is already powerful enough to also show what governments should do.

Looking at “CASE 1.1: high immunity, high infectiveness, more immune population” is all we need. To minimise the existential risk of producing a vaccine-resistant variant, whenever a significant proportion of the population is vaccinated, governments should do all they can to keep infection rates as low as possible. They should also keep vaccinating the population until they reach a situation equivalent to CASE 1.1, where R is naturally below 1 and COVID-19 is therefore dying out, making it finally possible to reduce social-distancing measures, SAFELY!
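Using the same toy formula (and the same invented figures as above), one can also solve for the immune fraction at which R falls below 1, which is the CASE 1.1 situation in miniature:

```python
# Solve n * ((1 - f) * p1 + f * p2) = 1 for the immune fraction f.
p1, p2, n = 0.30, 0.03, 10  # invented figures, as before
f_threshold = (n * p1 - 1) / (n * (p1 - p2))
print(f"R < 1 once roughly {f_threshold:.0%} of the population is immune")  # ~74%
```

Beyond that threshold, every additional vaccination pushes R further down and the epidemic shrinks on its own.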
Following this route, with a bit of luck (i.e. if vaccine resistance doesn’t emerge despite our sensible efforts to prevent it), would lead to the extinction of COVID-19. I believe we all agree that’s desirable.

However, the UK government is doing precisely the opposite of what is, beyond doubt, the optimal strategy. We can, and perhaps should, admit that the optimal strategy might be impractical: we need some freedoms, after all… However, the current situation would not require us all to go back to a hard, “do not leave home”, prolonged lockdown: any social-distancing measure will reduce R by some amount. Thus, any social-distancing measure is, right now, better than nothing. But that’s a million miles from “let’s remove all restrictions”, which is, if we don’t stop them, what the UK government is going to do.

Overall: the UK government is doing exactly all it can to produce vaccine-resistant variants.
Which counts as something worse than being “morally empty and epidemiologically stupid”; I’ll leave it to your imagination to figure out what I call this government in the privacy of my own mind. The same names apply to Prof Chris Whitty and Sir Patrick Vallance, whose duty is to explain the above to our ministers and to the population. They are clearly failing to do both things, and thus, given that they are not resigning (nor threatening to), they are overtly complicit in all this madness.

Now please go and sign this document, where the specialists from Independent SAGE (and more) express their overall (entirely correct) disagreement with the current plan.

Notes and Bibliography

[1] This post is very UK-centric, in a way I find uncomfortable. It is important to note a few things, from a more global perspective: first, what is happening in the UK poses a danger to everyone on earth, because the virus does not respect borders, given that border controls are not watertight in most cases. Second, many more countries are making, or are about to make, similar/equivalent mistakes, which are, or will be, equally dangerous. Third, the argument I’m making here can and should be used to reassert the need for a fairer distribution of vaccines worldwide. For reasons of length, I have, with regret, decided not to unpack any one of these points here.

[2] For those who don’t know: starting on July 19th, the UK government is planning to remove all restrictions on businesses, remove all social-distancing rules (including the legal obligation to wear masks), reduce border controls and withdraw the availability of free lateral-flow tests, while not issuing obligations or even guidelines promoting the provision of adequate ventilation in indoor spaces.

[3] Urgency: yes, this is really urgent. We need to find a way of making the UK government perform a spectacular, and presumably unpopular, U-turn. Unfortunately, it is a race against time. Every day that passes in the current situation multiplies the probability of the appearance of vaccine resistance. Thus, every minute counts.

[4] Scientific manner: there simply is too much we don’t know. For example, producing a “non-toy” model of how selective pressure may change as vaccine-resistance mutations appear would, in order to be “accurate” (and therefore not a toy), require accurate estimates of mutation rates, as well as an idea of how many mutations will produce vaccine resistance and to what extent. Presently, we are still debating how much vaccine resistance the Delta variant has, and that is a variant that does exist! Thus, such attempts, interesting as they are, cannot and should not be taken too seriously.

Gurdasani D, et al. Mass infection is not an option: we must do more to protect our young. The Lancet. July 07, 2021 DOI: 10.1016/S0140-6736(21)01589-0

Hudson B S, Kolte V, Khan A, Sharma G. Dynamic tracking of variant frequencies depicts the evolution of mutation sites amongst SARS-CoV-2 genomes from India. J Med Virol. 2021 Apr; 93(4):2534-2537. DOI: 10.1002/jmv.26756.

Gog JR, Hill EM, Danon L, Thompson R. Vaccine escape in a heterogeneous population: insights for SARS-CoV-2 from a simple model. medRxiv. 2021 Jan 1. DOI: 10.1101/2021.03.14.21253544

Posted in COVID, Politics, Science, Stupidity

Correction: on what I did not understand about Chalmers’ concept of “Strong Emergence”.

In my previous post, I claimed (semi-seriously) that Chalmers’ argument about Strong Emergence was incoherent, and mercifully peppered the post with warnings that I was probably missing something important. Turns out I was right, because indeed, my argument was wrong.

Indeed, I was missing something important, which means my argument was based on a misrepresentation of Chalmers’ position. So this is a quick (rushed!) follow-up post to put the record straight. My previous attempt should be considered (for the most part) retracted; however, I will not delete it, because:

  • It does still aim at a possible conceptualisation of strong emergence (which does not describe what Chalmers thinks).
  • Judging by the responses I got, that conceptualisation does describe how many people read Chalmers’ words (I wasn’t the only one making the same mistake).
  • I am now even more confused than before, and I can’t explain why, without keeping the old post in place.
  • I do want my own mistakes to remain “on record” (hiding them smells of cowardice, to me).

Instead, I’m writing this post and will add prominent pointers to it in the original post.

What did I get wrong and how did I learn about my mistake?

Starting from the second question: a few days after publishing my post, I managed to convince myself to send it to Chalmers himself (with some help from Travis R). Chalmers really is as approachable and generous as his reputation suggests[1], which in this case manifested as a short exchange of short remarks[2].

The exchange revolved around (and ended with) the following (reproduced with permission):

[Chalmers]: strongly emergent with respect to physics, weakly emergent with respect to physics-plus!
[Me]: OK, therefore we don’t have any known case of genuine “strong emergence”, right? (To be strong, emergence needs to be strong across any possible domain, even those we haven’t yet discovered or invented, I assume we agree on this.)
[Chalmers]: No, emergence is always relative to a domain. there’s nothing that is strongly emergent with respect to all possible domains.

At which point I remarked something like “Thanks! This is likely to be what I was missing” and retired back into my den, to think about it a little longer. After sitting on it for more than a week, I’m still puzzled, but I do now think that:

  1. There is no real disagreement between Chalmers and myself with respect to what I call “Strong Emergence”,
  2. There is a disagreement about what “Strong Emergence” refers to, which might be just a semantic quibble,
  3. I still haven’t a clue regarding what Chalmers’ original distinction between Strong and Weak emergence points to (and much more!).

How we agree (perhaps)

As per the conversation quoted above, for me strong emergence is a concept that has to apply across all possible domains. Strong emergence may happen if, given complete knowledge of a system’s parts and laws about how these parts interact, it still is impossible to deduce (or better: predict) some properties of the whole system. These unpredicted properties are “strongly emergent”. My main point then becomes “there is no way to identify any phenomenon as strongly emergent, because we can never know for sure that our knowledge of a system is complete”. Indeed, the presence of something that looks like strong emergence should be understood as an indication that our knowledge is likely to be incomplete.

When Chalmers says “there’s nothing that is strongly emergent with respect to all possible domains”, I understand it in this way: “domain knowledge about X is incomplete by definition, that’s because it is knowledge filtered through the lens of the chosen domain”, and also: strong emergence, as I (yours truly) intend it, is indeed impossible.
Thus, what looks like strong emergence when viewed through a given lens can look like weak emergence through a different lens, or even like something totally obvious and not at all surprising.

There really is a high probability I’m still misunderstanding/misrepresenting Chalmers’ position, though[3]. It is very clear to me that we think in very different ways about these subjects, which makes communication extremely difficult. If I’m right, however, it means that Chalmers and I agree on something which seems crucial.

The residual disagreement

I see epistemological domains (for example: fundamental physics, branches of philosophy, any branch of the humanities, biology, chemistry, etc. etc.) as necessary and handy labels, useful for giving a rough understanding of different people’s backgrounds and specialisations. I also think that these labels are opportunistic, imprecise, always shifting and inevitably fuzzy (clear boundaries cannot be established). As a result, I believe that giving them an ontological status (reifying them) and expecting sharp and fixed boundaries between domains is actively harmful. It builds a picture of humankind’s epistemological landscape as sharply separated into isolated silos. This might point to the worst consequences of how academia works, but it is neither a necessary condition nor an accurate description of actual human knowledge. Thus, I think that reasoning based on the expectation that different epistemological domains exist and are objectively distinct is both fruitless and harmful, and I do not see how the concept of “(strong) emergence relative to a domain” can be sustained without doing so.

I thus think that Chalmers did make this mistake, but I wouldn’t bet more than £1 on it, given how often I misunderstand his thinking (see also [3]).

Anyway, given that “Strong Emergence” for Chalmers is something that strictly refers to a given (epistemological?) domain and never to all possible domains, and given that I don’t think epistemological domains actually exist as separate and “objectively well defined” entities, it’s clear why I would never even dream of defining Strong Emergence as Chalmers did (as I currently understand it – and I’m still probably wrong!), and it’s also clear why Chalmers wouldn’t bother with my definition (because indeed, see the agreement above: my notion of strong emergence fails to point to anything we can properly identify).

Not having a clue

I find it remarkable how badly I misunderstood Chalmers’ article. Re-reading it after our exchange, I can see that he always included the “with respect to the domain of [x]” qualifier. Nevertheless, I did misread his point quite spectacularly, and I still don’t understand it! He mentions “the game of life” and the emergent patterns it creates as a clear-cut example of weak emergence. However, if we leverage the notion that emergence is always relative to a domain, then, oh, stuff becomes confusing.
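As an aside, for readers who have never met it: the game of life’s entire rule set fits in a few lines. Here is a minimal Python sketch (my own, not from Chalmers’ article) of the simple rules whose emergent patterns are at issue:

```python
from itertools import product

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    def neighbours(x, y):
        return {(x + dx, y + dy)
                for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)}
    candidates = live | {n for cell in live for n in neighbours(*cell)}
    # A live cell survives with 2 or 3 live neighbours;
    # a dead cell becomes live with exactly 3.
    return {c for c in candidates
            if len(neighbours(*c) & live) == 3
            or (len(neighbours(*c) & live) == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(glider))  # the famous "glider" pattern, one generation on
```

Everything interesting the game does (gliders, oscillators, even universal computation) emerges from nothing more than this.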

We know for a fact that if we take a functioning computer running some software and try to figure out what it does using only our knowledge of electronics (and the best conceptual “tools” from neuroscience), we wouldn’t have a chance of properly understanding what’s going on (yes, we know this, because very clever people famously tried it: see Jonas E. and Kording KP, 2017). Indeed, in a domain that does not include the notion of software, and of how software can be used to instantiate the simple rules of the game of life, the patterns the game creates will look strongly emergent. So, with respect to the domains of software engineering (or mathematics, or computer science), the patterns generated by the game of life are weakly emergent, but with respect to electronics (which, I remind you, neatly reduces to the domain of fundamental physics) the same things look strongly emergent.

Dear, dear, I am losing it! It appears that the game of life exhibits strong emergence with respect to the one domain that does reduce to fundamental physics. Yet Chalmers uses it as a clear example of how weak emergence exists, and then argues that weak emergence has effects opposite to those of strong emergence. But wait: if both concepts only apply relative to a domain, what’s weak in one domain is strong in another. So, given all of the above, I could feel justified in claiming that Chalmers’ argument entails that the complex patterns of activity generated by the game of life demonstrate that “our conception of nature needs to be expanded to accommodate them”. Saying so is, however, obviously wrong, because Chalmers explicitly claims the opposite; therefore, I am lost. Clearly, I’m still missing something important.

Guess what? It’s OK. I can live with the knowledge of not having a clue. It describes the human condition, after all!


Notes and Bibliography

Chalmers, D. J. (2006). Strong and weak emergence. In The re-emergence of emergence (Clayton P and Davies P, Eds.), 244-256.

Jonas, E., & Kording, K. P. (2017). Could a neuroscientist understand a microprocessor? PLoS computational biology, 13(1), e1005268.

[1] I do my best to treat disagreements as a chance to learn something, and, I believe, Chalmers does as well. This is in stark contrast to so much of philosophical debate, where the game appears to be “winning the argument”. I wish more people actively avoided the second kind of game, as it is the correct attitude only when one faces fascists and their ilk.

[2] Shortness here is important to note, as it delimits the boundaries of what I can or cannot conclude. I have already exploited Chalmers’ generosity, so I’m writing this follow-up to set the record straight, even if I am also unable to keep my subsequent thoughts to myself.

[3] I’d put my money on the following: I treat the concept of “domain” in an uncompromisingly epistemological way, but Chalmers does not. Coming from biology, I’m keenly aware of how different specialisations bring people to pick diverging ontologies: what is a system to someone is a tiny cog inside a system for someone else, and vice-versa. Which is why I understand Chalmers’ statements about domains in the way I do. However, I do see that one could consider “a domain” a statement about an ontological fact: something like “the fundamental particles as defined by physics belong to, and define, the physical domain”. This leads to drawing a sharp line between what does and does not belong to the physical domain, and assigns to this line an ontological role. Following through, you would then say that consciousness requires “physics plus” because the “physical domain” cannot generate a theory of consciousness (AKA “consciousness is strongly emergent with respect to physics”). This might work, but I’m unable to form a stable opinion, because I can’t draw that ontological line myself. If I try, I get blinded by my automatic reaction, which keeps telling me: “Who are you trying to fool? Drawing that line is an epistemic (and opportunistic) decision, it serves the purpose of allowing the development of different specialisations. Change the purpose and the line will shift or cease to make sense. Thus, the line carries no ontological weight”. That’s my own intellectual limit, right here: when a path feels wrong to me, I can peer around a corner to get a glimpse of where it leads, but I’m unable to follow it for more than one or two steps. Be that as it may, I now suspect that my failure to fully understand what Chalmers means might stem from this disagreement and my inability to understand its consequences.

Posted in Consciousness, Philosophy, Stupidity

Strong emergence is metaphysically incoherent

Emergence is a slippery and confusing concept, one that has the capacity to produce endless debate (see Taylor 2015 for a useful and enjoyable summary[1]). In this post, I will claim that the concept of “strong emergence” as proposed by Chalmers (2006) is incoherent and, by implication, not very useful.

[Update (05/12/20): it really is confusing! Please see my correction here. After receiving Chalmers’ feedback, I can confirm that indeed, I did not understand his position, leading me to misrepresent it. Thus, most of this post should be considered retracted. More details are in the “correction” post.]

Yes, you read that right. I am indeed pulling Chalmers’ leg. On one hand, much of his work relies on metaphysical “possibility”, which requires coherence; on the other, his work on consciousness is facilitated by the presupposition that strong emergence exists, at least in the case of consciousness.

Waves are not discrete “things”, even if they look like it, to us. They are an emergent phenomenon. Image: coastline on the Wilsons Promontory, Victoria AU – © Sergio Graziosi.

This second pillar of Chalmers’ approach is explicitly tackled in “Strong and Weak Emergence” (2006), which is what I’ll use as my starting point.

His definition of weak emergence is:

We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain.

Strong emergence is (initially) defined as follows:

We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.

This definition is then used to propose that yes, strong emergence does indeed exist, because we know of one clear instance of it: consciousness.

even if consciousness is not deducible from physical facts, states of consciousness are still systematically correlated with physical states. In particular, it remains plausible that in the actual world, the state of a person’s brain determines his or her state of consciousness, in the sense that duplicating the brain state will cause the conscious state to be duplicated too. That is, consciousness still supervenes on the physical domain.

What Chalmers is pointing at (supervenience) is often considered a requirement for “strong emergence”: given how the universe works, an emergent property of a given system is determined by the state of the system. Change this state appropriately, and the property changes or disappears. The phenomenon then qualifies as strongly emergent if, even if we do know all there is to know about the state of the system (its structure and the lawful behaviours of its parts), it is still impossible for us to deduce the appearance and properties of the emergent phenomenon, even in principle.

I must confess that I do suspect that I’m misunderstanding Chalmers’ claims, because in my view, he quickly and directly undermines his own argument by stating:

If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them.

And:

This suggests that the lawful connection between physical processes and consciousness is not itself derivable from the laws of physics but is instead a further basic law or laws of its own.
[…] I think this account provides a good general model for strong emergence.
[…] In any case like this, fundamental physical laws need to be supplemented with further fundamental laws to ground the connection between low-level properties and high-level properties.

Let’s unpack this a little. Cases of strong emergence imply that we have a known “system” and a complete map of its internal parts, including all laws governing the interactions between them. Having this, there are still properties or phenomena of the whole system which we can’t deduce from our existing knowledge. The interesting part is Chalmers’ suggestion about what we should do next: we need to “supplement” our existing laws with additional ones, which would specify how high-level properties supervene on low-level ones. Implicitly, it seems to me, Chalmers is claiming that, if we succeed in doing so, our job will be done, and we will have “explained” our strongly emergent phenomenon.

What puzzles me is that, if indeed we manage to achieve what Chalmers suggests, we will have concurrently demonstrated that we were dealing with a case of weak emergence. After all, our final epistemological situation is that we have “discovered” some new “fundamental laws”, which in fact allow us to deduce the “emergent facts”, given our knowledge of the system state, its parts and the laws that describe how these parts interact.

It seems to me that Chalmers is claiming that the correct way to handle cases of strong emergence is to admit our ignorance and start looking for fundamental laws that we haven’t yet discovered. I have no problem with this, but I also don’t see how it can be compatible with Chalmers’ own definition of strong emergence.
In other words, I think that Chalmers’ own reasoning is incoherent, and very manifestly so [2]. Which is puzzling, because the likelihood that Chalmers would commit such a macroscopic mistake is minute. Thus, my mind immediately generates a follow-up question: what could possibly explain how such a mistake could go undetected?

My answer to this question points to the interaction of two sources of error:

  1. A misunderstanding of how scientific progress happens.
  2. The incorrect assumption that “levels” (the distinction between a system and its components) are ontologically fundamental, while they are entirely epistemological.

Dealing with the first mistake feels extremely easy to me (this usually indicates I’m wrong). Scientific discovery, in my view, proceeds exactly as Chalmers proposes: we start from a point where we think we have the full picture. We claim that we understand all properties and relations that occur within a system. But in fact there is a handful of additional properties that we can’t yet account for. So we accept our ignorance, scratch our heads, think, experiment some more and eventually figure out what we were missing. Sometimes this requires the formulation of new fundamental laws (think of electromagnetism, or of the laws regulating the relations between speed, time and mass). How we can find and formulate these additional laws typically changes from case to case, but I do not think it’s controversial to claim that for a good number of “previously unresolved” scientific problems, we eventually did find them.
What I’m observing here is that every time we do, we also demonstrate that what initially looked like a case of strong emergence (if applicable) was in fact weak emergence. It looked like strong emergence because we had no knowledge of some missing piece, which in some cases might be best described as a new “fundamental law”. If I’m right, this implies that Chalmers is claiming that consciousness is, hopefully, a case of this kind, which however is not what Chalmers explicitly claims… (The interim conclusion is that I must be missing something, but I don’t know what it is, since I’m missing it!)

The second source of confusion is about levels. I think that perhaps, with a fair amount of effort, it is possible to unpack what Chalmers claims in a coherent way, if we start from the assumption that the distinction between a system and its components isn’t arbitrary. Accepting this view would allow us to draw a sharp distinction between the “level” at which “the fundamental laws of physics” apply, and a separate one that concerns “fundamental psychophysical laws”. Looking at this in detail, however, uncovers a problem: the levels of explanation used in science are manifestly epistemic. We decide what counts as a system and what counts as its parts in an entirely opportunistic way, depending on what it is that we’re trying to achieve (our methods are always somewhat reductionist, because that’s how our own minds work). On the other hand, everyone knows that nature does not make any such distinction: some animals have sensory capacities that rely on weird quantum effects, for example. I think it is self-evident that natural phenomena operate on all possible levels “at once”, and that we begin trying to understand how nature operates by “slicing it up” into different levels, picking our distinctions based on the regularities they allow us to uncover[3].

Put another way, the distinction that Chalmers draws, between fundamental laws of physics and fundamental laws of psychophysics, is a distinction that refers to how we conceptualise reality and not to how reality works[4]. This in turn may generate some confusion and lead to the idea that something can be a case of strong emergence in terms of fundamental physics, while otherwise being explainable with the aid of ontologically distinct laws of psychophysics. However, it is self-evident (to me!) that if something is “strongly emergent” from one specific point of view, but explainable from another, then the phenomenon in question cannot be considered “strongly emergent” in an all-encompassing metaphysical sense.

[Side note: assuming that it is possible to reconstruct someone else’s mental processes leading to what seems to be, in my eyes, a mistake, is an act of extreme arrogance that I am not comfortable with. I’m doing it here because one of my purposes is to try to identify what it is that I’m missing, and I cannot do so without exploring my own reasoning in full.]

Be that as it may, I find myself forced to conclude that strong emergence, as described by Chalmers, is indeed a concept that fails to point to anything properly conceivable. Once one does, as in Chalmers’ case, include the “supervenience” side (given a state of the system, the emergent phenomenon will necessarily be associated with it), such a situation implies only our current inability to deduce the emergent phenomenon from what we know of the system, and therefore immediately suggests that we do not know everything there is to know about it.

Conclusion

This leads me to a final consideration, which departs from Chalmers’ view, but which I think is nevertheless useful to enunciate. I’ve mentioned above the opportunistic nature of scientific theorising: levels of analysis and theoretical frameworks are ultimately picked on the basis of how useful they prove to be, not on the basis of some stable and well-understood ontological principle. For this reason, when I find myself criticising a given concept, I think it’s necessary to also ask the following question: irrespective of what it refers to, and regardless of how concrete this referent is, is the concept useful in one way or another?

My answer for the case of strong emergence is: yes, the concept is somewhat useful, but for one very limited reason only. The reason is that in practice, whenever something looks like a strongly emergent phenomenon, we can and probably should deploy a modicum of induction. We’d then realise that the history of science can be summarised as a series of repetitions of a broadly similar pattern. We start from a situation where it feels like we have learned everything there is to know about a given subject, except for one or a few secondary aspects, some of which, upon close inspection, look strongly emergent: given our otherwise “complete” knowledge, we still can’t explain their existence. Naturally, people will start concentrating on those few outliers, and eventually someone will propose new conceptualisations, or perhaps design new instruments that allow us to measure things we didn’t even know existed. These new advances will be promoted as “superior” specifically because they also explain one or more of the previously mysterious emergent phenomena.

The important side effect of realising this is practical: even if we assumed that strong emergence can exist, whenever we are presented with a situation that suggests strong emergence, the only thing we should do is to proceed as if we were dealing with a case of weak emergence, accepting that the appearance of strength is usually a function of our own ignorance.

We would thus respond by looking for alternative ways to conceptualise and analyse the phenomenon at hand, or, if you prefer, we would redouble and renew our theoretical efforts, explicitly searching for new clues about the mechanisms we don’t fully understand. The alternative isn’t viable: should we accept something as strongly emergent, we would implicitly declare it unexplainable, even in principle, and I fail to see any reason why we would want to.

This points to a twisted kind of negative usefulness for the concept of strong emergence: labelling something as such is functionally equivalent to declaring it “inexplicable, even in principle”; however, doing so also shows that we do not possess the intellectual tools required to reliably identify something as “strongly emergent”. From our limited epistemic position, we have no way to distinguish between strong and weak emergence (assuming that strong emergence is indeed a coherent concept), and therefore we should always behave as if all emergence were weak. Failing to do so is equivalent to giving up, which happens to be the exact opposite of what both science and philosophy try to achieve.

[Update (05/12/20): please see my correction here. After receiving Chalmers’ feedback, I can confirm that indeed, I did not understand his position, leading me to misrepresent it. Thus, most of this post should be considered retracted. More details are in the “correction” post.]

Notes and bibliography


Chalmers, D. J. (2006). Strong and weak emergence. In The Re-Emergence of Emergence (Clayton P and Davies P, Eds.), 244-256. DOI: 10.1093/acprof:oso/9780199544318.003.0011

Taylor, E. (2015). An explication of emergence. Philosophical Studies, 172(3), 653-669. DOI: 10.1007/s11098-014-0324-x

[1] If you enjoy reading the present post, I would highly recommend also reading Taylor’s article. It is very well written and will provide a good description of the landscape. It will also count as a much-needed correction to all the over-simplifications that I’ve scattered in here.

[2] It’s important to note here that I am not claiming that all possible formulations of “strong emergence” are necessarily incoherent. I’m claiming “only” that what Chalmers proposes in his paper is. Specifically, his treatment of the one (supposedly) known case of strong emergence implies it is not (or that it hopefully isn’t) a case of strong emergence.

[3] An important disclaimer is due: the “reductionism” concept is, just like emergence, a slippery and ambiguous one. In this case, I am pointing to a methodology and not a metaphysical stance. I claim that scientific understanding always relies on “slicing up reality” in some way, and that this happens because it is how human cognition operates. It is entirely possible to proceed in this way without assuming an all-encompassing reductionist metaphysics, and indeed, in my opinion, that’s probably what we should do.

[4] To be fair, it should be noted that the concept of emergence does in itself rest on the assumption that different levels exist. For those of us who regard emergence as “obviously” epistemic (as I do), this detail is crucial. Given that the different levels are in our minds and not in the world out there, “emergence” becomes a side effect of how cognition works. On the other hand, if one assumes that the different levels have an ontological status, then perhaps a more metaphysical view of emergence may start making more sense (ignoring what to me looks like a foundational mistake).

Posted in Consciousness, Philosophy, Science

Labour, the media and the cultural battle

If you are interested in UK politics and self-identify as left-leaning, reading my previous post might have got you screaming “Yes, yes, but WHAT ABOUT the overt hostility shown by THE MEDIA?”

This reaction would be predictable and somewhat justifiable, but also a sign that my point about the cultural battleground was right.


Roger Waters’ Amused to Death because it’s brilliant and relevant.

Why? Because failing to see the direct link between whether or not Labour’s cultural foundations are recognised as legitimate and how Labour policies are represented in the public sphere is a big part of the problem[1]. This link is a root cause of why so much of the political battles that Labour needs to win are profoundly asymmetric and tilted heavily against Labour’s side.

In the previous post I claim that it is extremely important to establish some solid cultural foundations able to provide legitimacy to left-wing policies[2]. In practice, this means that it is necessary to have ready-made arguments in favour of social and policy solutions that rely on cooperative and collaborative approaches, whenever we have proof, or strong reasons to believe that market-based solutions won’t work. The current problem of Labour in the cultural battlefield[3] is that these cultural foundations are nowhere to be found, at least not within anything that resembles the mainstream.

Why is it so? The short and oversimplified answer is: Blair’s legacy.

I wasn’t in the UK during Blair’s tenure, but my impression is that he appeared on the scene during a rare moment when a Labour government could get elected even if it vacated the cultural battleground. It’s possible that doing so even gave it a decisive advantage, at the time. Be that as it may, I think it’s uncontroversial to claim that Blair’s success relied on proposing some state-driven, non-market-based solutions to problems that had been becoming more and more acute during the previous decade, while also taking an openly laissez-faire stance regarding many other “established markets”. There is, in my view, nothing wrong with this approach, but unfortunately, there was something very wrong with the way that approach was justified. The situation allowed him to propose some unapologetically left-wing policies and justify the choice simply by claiming: well, we know the current system is failing, so let’s try something different.

It worked, and it worked well for quite a while, but Blair made the fatal mistake of not catering for the long-term consequences of this move. I believe that, in his case, he was genuinely convinced he had found a long-lasting, solid middle ground, and for this reason, perhaps he thought (and probably still thinks) that there was nothing to be done or gained by investing in the cultural battleground (as I’ve defined it).

We can still see the effect of this disastrous error today: Labour is still paying the price. How? Well, Brown and Darling first, and Miliband after, simply had no argument available to propose anything different from “austerity-light”[4].
Having witnessed their efforts in person, I am sure they thought they had already lost the cultural battle and tried to produce a strategy based on this fact. They might have been right, as well. Proposing something that relied on a worldview that values cooperation more than competition might indeed have hurt their electoral prospects. What they did instead, however, clearly did not work either. This, in a nutshell, is why I am claiming that the cultural battlefield is of critical importance to Labour. Without making sure that people recognise some “cooperation first” perspectives as legitimate and defensible, the work required to justify and defend left-leaning policies in the media is simply too much. It cannot be done effectively in the time allocated (be it an interview, an op-ed in some newspaper, or even a full electoral campaign). As I’ve said previously, without those cultural foundations, most genuinely left-leaning policies look like well-meaning wishful thinking, which is a polite way of saying that they lack a minimum amount of credibility.

Back to Blair (no, I won’t mention the war, yet): if my analysis is right, the generation of politicians formed during his governments was self-selected amongst those who did not feel an urge to defend the “cooperative” foundations of left-wing thought. This is what I mean when I say that the cultural battlefield was vacated. People just left – there remained no-one with influence willing to defend the set of ideas that make Labour what it is (or what it is supposed to be). This being the case, the poor performances of Brown and Miliband become predictable and very easily explained – see for example Simon Wren-Lewis‘s recent analysis of that period.
Looking at the relationship with the media, you can then notice that their policies were actively ridiculed, pretty much like Corbyn and his policies were, because they had no well-known and respected intellectual line of defence. “Why is a little less austerity better, if austerity is what we need?” There is literally no credible answer to this question that does not rest on the recognition that competitive markets are not the solution to all problems. It’s as simple as that, I’m afraid.

Moving over to Corbyn, the situation changes in some ways, but once again, the lack of recognised and publicly respected foundations is one of the reasons why Corbyn was never going to succeed. When Corbyn was elected, for a brief period I was hopeful and fully onboard – he quickly showed his ability to challenge the hegemony of neoliberal assumptions, and did so, with some success, on mainstream TV. Thus, for a little while I could cultivate the hope that Labour was going to re-engage in the cultural battlefield, which is, if I’m right, the precondition for fulfilling its social function. What a fool I was – I was hoping that Corbyn understood the need to establish solid cultural footholds! As we’ve seen, as soon as the Brexit referendum happened, Corbyn lost both his (visible) interest in this endeavour and the legitimacy necessary to actually succeed. Ignoring the Brexit position of the majority of Labour members made it clear that his declared intentions to democratise the party structures were nothing but nice words, while pandering to anti-immigration sentiments made it impossible to propose a coherent world view centred on the value of collaboration.

The awful treatment of Labour by all of the mainstream media was the direct result. Why? Because not a single policy proposal made by Labour now rested on solid foundations (maybe disputed foundations, but at least widely known and recognised as legitimate). The neoliberal outlook is still seen by most as the only legitimate position, meaning that every progressive policy Labour offered needed to be defended from scratch; no shortcuts were available and, generally, proposals like spending on social policy and investing in infrastructure and well-being were treated as unproven, wishful and optimistic at best, and actively dangerous otherwise. In other words, during the 2019 campaign, Labour needed to fight and perform extremely well on both the cultural and political battlegrounds, and to do so in just six or eight weeks. It was impossible; it would have been prohibitively difficult even for much better communicators than Corbyn.

Of course the media battleground is, and always will be, asymmetric: rich media owners will always be hostile to genuine left-wing policies and ideas; the BBC will always have some pro-government bias (especially as a consequence of the Iraq war fallout). It is true that during the 2019 general election the BBC expressed this bias with unprecedented clarity, but again, this was predictable: on this occasion the BBC did not need to balance two worldviews, because only one was available – the other went AWOL when Labour decided to allow the triggering of Article 50 and to appease xenophobic undercurrents.

The upshot of all this is that the precondition for making the media scene a little less biased against Labour is to start promoting, defending, elaborating and developing the foundations of left-wing thought. Still unconvinced? Fine, here is some supporting evidence.

First of all, some raw data: I’ve claimed above that Corbyn did initially engage on the cultural ground and that he later went AWOL. I also claimed that later on, it was too late to even try. If I’m right, there should be a clear difference in Corbyn’s performance and effectiveness when interviewed at different times. We can look at three examples, all involving the same interviewer, to aid comparability.
Here is Corbyn in 2015, explaining to Marr the reasons behind his proposed policies. We then jump to 2019, interviewed by Marr again, during Labour’s conference. Something big changed: almost all the second interview is used up by topics that concern infighting, discontent and the “elusive” stance of Corbyn himself towards Brexit (these are the consequence of his other mistakes). He then gets about 5 minutes to talk about policies, during which he has no chance to try explaining their rationale.
Marr and Corbyn would meet again shortly after, for an interview during the election campaign; here is the transcript. This time, after spending even longer exposing the inherent contradictions of Corbyn’s position towards Brexit, Marr tries to formulate a question by saying (emphasis is mine):

[Y]our instinct in every area seems to be that where there’s a choice the state can always do something better than the private sector, whether it’s broadband or dentistry or anything else. And I wonder is there any part of the economy which is completely safe from the threat, as they would see it, of nationalisation.

Boom. This passage clearly indicates that Corbyn at this point can rely on exactly zero recognised foundations for his policies: he’s being asked to explain their rationale from scratch. Moreover, his policies are concurrently described as a threat. What happens next is worse, depressing and predictable: just as Corbyn tries to rise to the challenge, Marr interrupts with “we’re out of time”. QED – Corbyn never had a chance, once he abandoned the approach he showed back in 2015.

If this isn’t enough, maybe now Anna Turley’s anger can be understood in a new light. She writes (emphasis is still mine):

Despite 10 years of Tory austerity that has led to Dickensian levels of poverty, and the end of 175 years of steel-making in my constituency, people didn’t believe Labour would be any better.

Why? If I’m right, it’s because, having vacated the cultural battleground for decades (with one short exception), Labour had no way to build its own credibility in time. The preparatory work should have been done incessantly from 2015, but alas, was abandoned as a consequence of Brexit.

If you’re still unconvinced, please read this recent report by Luke Pagarani on his direct experience of canvassing, and of how the problem of credibility kept recurring. He writes (emphasis added):

With such voters, retired or coming towards the end of their careers, Corbyn’s collectivist language of what we could build together left them sceptical and uncomprehending. It seemed more zero sum to them, where one person’s gain must be another’s loss.

What’s the message here? That they didn’t know the fundamental reasons why not all games are zero-sum, and why cooperation can and sometimes must work. On young people, Pagarani again (added emphasis):

I also canvassed many young, working-class people who were not engaged with politics. Many had never heard about class politics at all […]. The idea of voting for a party to tax the rich to pay for redistribution and public services was completely novel, and generally immediately attractive. It was amazing to see how quickly and instinctively they grasped a leftwing agenda while saying they had never thought about it before.

Surprise! Actually articulating the reasons why left-wing policies are sorely needed does work, after all. Phew!

In conclusion, while figuring out what went wrong, it is essential for Labour to recognise the immense long-term damage done by not spending enough resources and effort on building and promoting its cultural foundations. Without this background work, Labour will always be vulnerable to the unavoidable hostility of much of the media.

Notes

[1] If you don’t know what I’m referring to, you may have to check out my previous post (sorry). In a nutshell, I’m saying that society is currently organised around the idea that competition is inevitable and also good. This vision forgets the virtues of cooperation, which are in turn fundamental to understanding the merit of Labour’s policies and aspirations.

[2] Careful readers are likely to notice that I’m not using the word “socialism” anywhere in this series. That’s because you do not need to identify as a socialist to grasp the merits of cooperative societies. Moreover, the world is changing and while the S word is a negative trigger to many, the policies and solutions that are needed right now look quite different from the ones normally associated with traditional socialism (collective ownership of means of production, central planning, etc.).

[3] I am deliberately filling these posts with war metaphors, even when I am promoting the value of cooperation. This is because even if war does require two sides, only one needs to have destructive aims in order to trigger a fight. Framing the struggle of Labour as a violent confrontation against irreducible enemies is, I believe, correct. On one hand, the political game requires it; on the other, global capitalist forces are indeed hostile and trying to destroy left-wing thinking as well as all its political expressions. Talking about class war, and a war that the left did not initiate, wouldn’t be wrong, but might alienate whoever sees socialism as a dangerous ideology.

[4] Here I refer to the rationale used to justify the policies proposed in their electoral manifesto, as well as their decisions about how to present them to the public. I think that Brown, Darling and Miliband (Ed) would all be quite on board with my approach and its emphasis on collaboration and non-zero-sum games. However, I also believe that, for electoral purposes, they thought that being seen to do so would be punished at the ballot box.

Posted in Politics

The recurring errors of the left

I’ve been scared for three and a half years. That wasn’t enough. Now I’m also angry.

Protests in Chile, Plaza Baquedano – because it’s the same fight. Image by Hugo Morales (CC BY-SA 4.0)

Yes, I am somewhat angry at Tory voters, but not ferociously so. The anger that hurts me the most is the one I’m directing against those who led the Labour party, especially the Corbynites, but also Corbyn’s predecessors. I also burn with regret, for not having spoken loudly enough and for letting scraps of hope justify my own inaction. No more: I will use my anger to do something. My doing starts here, with good old-fashioned critique, directed at my own team.

[If you are not scared, it may be worth reading back to where I explain the danger of Fascist and Authoritarian rhetoric. If you are not angry, please keep reading.]

I’m angry at the Labour leadership because I predicted the present catastrophe – years before it happened. I was proven right. If I, a foreigner with no relevant background, could clearly see what was coming, failing to see it as a professional politician must count as inexcusable. Fatal errors were made, starting in 2016. They were errors that had been made before, making their consequences predictable. They were errors that should have been avoided. Thus, I will use my rage and put it into words. You’d better listen – I might be right.

The summary of the errors made is:

  1. Neglecting the cultural battleground. Without engaging with the rationale supporting our ideas, without establishing some firm and recognised cultural foundations, left-leaning parties are condemned to get decent electoral results only if and when many external factors all align in their favour. This is the reason why the left is constantly the underdog.
  2. Forgetting that the political battle is thoroughly asymmetric. Some of the strategies that work for the right and/or for Neoliberals do not work for the left and progressives. Co-opting the dirty methods of our opposition is harmful and does not work.
  3. Political actions made today will invariably reduce your options in the future. Every decision has a long-term cost that needs to be accounted for. Shortcuts for immediate political gains are like borrowing: you get the money now, and return more of it later. Like debt, these costs need to be predicted and managed wisely.

I believe it’s worth exploring these errors before doing anything practical, because I’ve seen them committed too many times. I must point at them, now.

The cultural battleground.

Left-leaning policies and ideologies make sense only in light of the interdependency that underlies any modern society, as well as all of life on Earth. This is particularly true for the climate emergency, but is crucial also when setting the aims of most political decisions.
In terms of society, there is one intuitive view that tends to see most social interactions as zero-sum games. If you got that job, it follows that I didn’t. You won, I lost, end of story. This is what I call Naïve Darwinism, the idea that everything that matters can be understood through the lens of competition, of winners and losers. Frequently, this outlook is paired with the idea that competition is the natural state of affairs, and therefore inherently good. Except it isn’t[1].

Be that as it may, a worldview that sees competition as both the default state and a good thing is wrong, but it has two very strong selective advantages: it is simple (if you got the job, it’s because I didn’t, and that’s that) and it is self-sustaining. It self-sustains because if I view my peers as competitors, chances are that I will be recognised as a bad collaborator. As a consequence, people will tend not to collaborate with me, and therefore my own world will genuinely become dominated by competition. Moreover, if I struggle to put food on my table, the fact that you got that job instead of me is indeed really bad news for me, and it does not matter that the larger society may somehow benefit: I didn’t![2]
As a consequence, politicians find the idea of all-encompassing competition easy to sell. It does not help that it can be joined with the idea that competition is inevitable and that it can always be used to produce “efficient markets”, which in turn are beneficial[3]. This then produces what Simon Wren-Lewis labels “neo-liberal overreach”, which is the policy effect of the “competition-first” cluster of beliefs: whenever there is a problem to be solved, the role of the state is reduced to creating markets. This cluster of closely related ideas is what justifies the perennial growth race, is what’s destroying our planet’s ecosystems and, if left in place, will eventually destroy society as we know it. In terms of intellectual foundations, it boils down to the idea that competition is the one and only lens required to understand the social and natural worlds.

The cultural battleground is the result: foundational to left-leaning ideologies is the idea of interdependency. If a society cares for everyone, cooperation is maximised, making our world better for everyone – it’s the recognition that we can design our own games, and that adopting the ones where we can all win is not only desirable, it is also possible. That’s because social interactions are not usually zero-sum games: if I’m generous to you, it’s more likely you will be generous to me sometime in the future. If your kids go to school and gain access to better jobs, they will also be better at their jobs, and that’s good for me too: I’ll get better doctors and better teachers for my kids, so they will in turn have better opportunities. More cooperation means we waste less energy fighting one another, releasing more resources in the pursuit of the common good. Unfortunately, one thing that left-leaning politicians keep forgetting is that, if one is not sympathetic to the “cooperation is possible and good” view, left-leaning ideas and policies make no sense. They look hopelessly optimistic, ideological and unjustified. To make matters worse, the lower your place in society, and the more the world around you is shaped by the “competition first” worldview, the less you can see and experience the benefits of cooperation – which means that it becomes more and more “rational” to espouse the neo-liberal worldview[2].
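To see why repeated interactions stop being zero-sum, a toy model helps. Below is a minimal sketch, in Python, of the classic repeated prisoner’s dilemma; all the payoff numbers and names are hypothetical, chosen purely for illustration, not taken from any particular study:

    # Illustrative payoffs only (hypothetical numbers): my move and your
    # move jointly decide my score. "C" = cooperate, "D" = defect.
    PAYOFF = {
        ("C", "C"): 3,  # mutual cooperation: both gain
        ("C", "D"): 0,  # I cooperate, you defect: I am exploited
        ("D", "C"): 5,  # I defect, you cooperate: I exploit you
        ("D", "D"): 1,  # mutual defection: both lose out
    }

    def play(strategy_a, strategy_b, rounds=100):
        """Play repeated rounds; each strategy sees the opponent's last move."""
        score_a = score_b = 0
        last_a = last_b = "C"  # both open with cooperation
        for _ in range(rounds):
            move_a = strategy_a(last_b)
            move_b = strategy_b(last_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            last_a, last_b = move_a, move_b
        return score_a, score_b

    tit_for_tat = lambda opponents_last: opponents_last  # reciprocate generosity
    always_defect = lambda opponents_last: "D"           # "competition first"

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): everybody wins
    print(play(always_defect, always_defect))  # (100, 100): everybody loses

Two reciprocators end up with 300 points each; two die-hard competitors end up with 100 each. The game is manifestly not zero-sum: the total payoff depends on how much cooperation actually happens. Note, though, that in this payoff table a lone cooperator facing a defector scores worst of all – which is precisely why espousing cooperation feels irrational to those with the least room for losses.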

In turn, this is why the right finds it so easy to persuade turkeys to vote for Christmas. Labour and similar parties struggle to keep the support of working-class electors because the more competitive a society is, the less a “cooperation first” outlook is justifiable. The more one struggles to get by, the more all of life is a struggle, the less any promise of a bright cooperative future looks credible[4].

Thus, if left-leaning parties are ever to play on equal terms against their opponents, they have to establish a cultural background where the value of cooperation is recognised as at least on a par with the value of competitive markets. But left-leaning politicians and strategists keep forgetting this. Frequently, they actively work against their best interests by espousing thoroughly neoliberal views: see the infamous “there’s no money left” note, or Corbyn’s claim that the “wholesale import of labourers” damages local labour markets. Both positions presuppose the primacy of markets. The first implies that state borrowing is always bad because the market will impose higher interest rates; in the second case, the implicit message is that, to keep wages from falling, the state has to intervene by reducing the supply of labourers, while ignoring a number of much more useful strategies (all well known to self-respecting economists)!

The battleground is asymmetric.

I’ve mentioned above that the belief in the “primacy of competition” is self-sustaining. This means that the cultural battlefield is skewed in favour of Neoliberals. As a consequence, it should never be neglected. Left-leaning and progressive organisations should always invest more energy than their adversaries in the cultural arena – failing to do so awards a permanent tactical advantage to their foes. But the asymmetry cuts more deeply than this, and strategists on the left regularly fail to recognise the implications. The fact is that if you espouse the view that competition shapes everything, then selfish acts are not only normal, they are also the only reasonable strategy available, to electors and politicians alike. The effect is that when a politician is caught acting in his/her own interest first, placing the common good second, the act will be perceived as normal and not newsworthy if said politician is right-wing and/or Neoliberal. On the other hand, if the politician promotes or presupposes the cooperative worldview, being seen to put self-interest first, to the detriment of the common good, is genuinely newsworthy. The politician has just exposed her/himself as a dangerous and inexcusable hypocrite. Right-wing politicians have more leeway in terms of acting selfishly because such acts conform with their ideological position; left-wingers have no such luxury – none at all, to be precise!

The result is a recurring mistake, which is single-handedly responsible for incalculably high costs in terms of actual votes. I’ve seen this happening over and over in Italy, when Berlusconi dominated the political landscape, only to witness the same pattern again and again in the UK. No lessons were learned!

It goes like this: we start from a situation where both left and right work under the assumption that a certain level of decency, at least on the surface, is a hard requirement. Everyone assumes that if a politician or a party is seen to engage in dirty tricks below a given threshold, the electorate will turn against them and punish them harshly at the next election. Disruption happens: one politician or a whole party decides to break the established rules of decency and visibly goes where no one dared before. If they are on the right, the unimaginable happens: not only are they not punished, they may even gain in popularity because of their manifest immorality. At this point, without known exceptions, left-leaning and progressive strategists start thinking “OK, if they can stoop so low, surely we can relax our own constraints and play a little bit dirtier. As long as we’re seen to be clearly better than them, we’ll be fine“.

I have news: it NEVER works.

Why? Because the battleground is asymmetric. A right-winger can be selfish, since right-wingers found their political credibility on the idea that selfishness is ubiquitous, unavoidable and somewhat good. Left-wing and progressive politicians can’t, because their fundamental promise is that cooperation is good and possible. Unfortunately, it takes only one free-rider to undermine a cooperative system. Thus, they cannot expose themselves as aspiring free-riders, because by doing so they demonstrate that their promise is unachievable (at least when they are in the driving seat).
People are surprised that accusations of anti-Semitism, and more importantly, accusations of not taking the problem seriously, could hurt Labour so much, while everyone knows that the Tories are racist, misogynistic and classist. I am sure that left-leaning politicians are incessantly frustrated by this kind of mechanism; they find the situation insufferably unfair and tend to neglect it because of its unfairness. Too bad: these are the rules of the game, and the asymmetry comes with the team you’ve chosen. If you find the need to act (somewhat) selflessly unfair and restrictive, that’s because you’ve picked the wrong team.

Here is another example: Tories break their electoral promises with clockwork regularity. They never met their immigration targets, the economy always disproves their growth projections, crime increased while they were in power, there will be no “£350 million per week” for the NHS, etcetera, etcetera. And yet, here they are, broken promise after broken promise, still in power, with a bigger majority than before. How could they possibly improve their position? They did, because they now have a leader who unapologetically embodies the ideology they promote – as selfish and devious as you ought to be in a dog-eat-dog world. Compare with the LibDems: nine years ago they broke one single (crucial) promise and they are still paying the price. I would feel pity if it weren’t so easily predicted. If you brand yourself as progressive, you can’t act like a selfish crook. But if you convince people that selfishness is both good and inevitable, then you can, and you gain credibility by acting in your own interest!
I wish left-leaning and progressive politicians could learn this lesson once and for all…

Your actions today will invariably reduce your options in the future.

I have explored this principle before. Its importance, however, seems to escape politicians, strategists and commentators alike. One example I’ve used before is the (utterly predictable) ineffectiveness of Cameron’s campaigning for Remain. It was literally impossible for him to be credible. Why? Because the discontent that was weaponised by the Leave side was the direct consequence of Cameron’s own policies. The way to neutralise Leave’s scapegoating of the EU was to say: you’re being held down by austerity and Tory policies, not the EU. It goes without saying: Cameron himself was exactly the wrong person for the job.
Unfortunately, the same kind of trap later started to apply to Labour… After the referendum, once they decided to try appeasing Leave voters, their course was set. They started down a route that would eventually hurt them, but at any given time, the immediate cost of a U-turn made it increasingly harder to correct the course. I predicted the present defeat on this basis, back in January 2017. Around that time, Labour took a couple of catastrophic decisions. Perhaps they made some sense at the time, as they allowed Labour to retain some support in the leave-leaning Labour heartlands in the North of England (or so I’m told), but they also made the current catastrophe almost inevitable. I mention this here because what was done summarises all three of my key messages.

All three mistakes, in one go!

First of all, the position that “Labour respects the referendum result” was adopted. The implications are:
1. The referendum result is legitimate. [The clearly untrue and devious promises didn’t matter.]
2. The resulting mandate is compelling. [Forget checks and balances, along with the fundamental tenets of representative democracy.]
3. Exercises in direct democracy should trump ordinary representative democracy[5]. [Yes, that’s the foundation of Fascism, but hey, if the BBC says so…]

Thus, their own decision undermined the legitimacy of all efforts to use parliamentary mechanisms to keep the actions of the government in check. Bad move. Even worse was the decision to try appeasing the anti-immigration sentiment that is believed to underlie much of the support for Leave. As hinted above, this move meant that Labour vacated the cultural battlefield altogether; they just went AWOL overnight. This decision instantly validated the assumptions made by the worst part of the Leave campaign – they provided fuel to Labour’s opponents! Concurrently, it undermined the foundations of left-leaning thought and policies. Instead of fostering and valuing solidarity amongst labourers, it pitted labourers against one another, depending on where they come from. The effect is that the cultural battle was lost without even fighting it: it became impossible for Labour to credibly espouse the policies that would indeed solve many of the problems affecting their working-class, Leave-voting Labour supporters. Utter madness! But it gets worse: the move made no intellectual sense whatsoever. Thus, it was widely (and rightly) seen as a somewhat self-interested move: a hard-to-defend position was taken in order to retain the support of some Labour voters, not because it was “right” in itself. This breaks my second principle: being seen to act selfishly is critically harmful to left-wingers, even if or when it is neutral or advantageous to right-wingers.
What happened, with a snowballing cost that has kept growing ever since, is that well-meaning voters, well-meaning Labour members, well-meaning Labour MPs and a good number of public intellectuals suddenly found it much, much harder to enthusiastically support Labour policies, politicians and strategies. The effect is cumulative: in the absence of a U-turn, it becomes harder and harder to approve of Labour, while the vanishing support means that fewer and fewer electors will perceive Labour as a viable and credible alternative to the status quo.

Finally, the move meant that every single day, both the perceived and the actual cost of changing course kept increasing (this is my third point in action). The direct effect of such mindless decisions is that the number of people still actively promoting Labour kept decreasing. Self-selection kicked in: only those who were comfortable with very un-Labour policies remained on board. Thus, changing course became harder: one would predict that those who still supported Labour would not approve of a U-turn, while nobody could be sure whether the people who left would obediently return to the fold (and in fact, overall, they didn’t). Moreover, the same selection also operated on decision-makers: those who disagreed abandoned ship, one by one. Thus, those who remained were precisely the ones least likely to advocate for a change of course. In this way, the initial decision, taken for selfish, short-term purposes in the first place, made it almost impossible for Labour to remain credible and to correct its course in time. Even though a sort of half-hearted U-turn was indeed made at the last possible moment, the cost in terms of both talent and credibility has now hit Labour in full. The reputation of the current Labour leadership has crumbled, and rightly so. They have been fools, and while I don’t doubt they had good intentions, the price they will pay personally is commensurate with their hubris and recklessness. My anger remains, because the country will pay a much bigger price.

Conclusion.

This analysis is partial, naturally. I did not list all the mistakes that were made by Labour in recent years. I did not even list all the recurrent mistakes made by left-leaning politicians elsewhere. I limited myself to the mistakes that are almost always repeated, and considered only those that tend to be catastrophic. I don’t have much good news to offer; the battle ahead is uphill, from beginning to end. But still, not fooling ourselves must be a start. In the next post we’ll see how we can change gear to handle hostility from the media and hopefully achieve something better in the future.


Notes.

[1] Competition is emphatically not enough to describe “the natural state of affairs”: cooperation is equally ubiquitous and always emerges naturally, without exception. In fact, competition and cooperation naturally constrain one another, with the effect that every naturally occurring system will have a bit of both, but that’s another story…

[2] This is the main reason why my anger is not primarily directed at working-class Tory voters. The harder your life currently is, the closer it is to a genuinely zero-sum game. Thus, as life gets harder, a dog-eat-dog attitude becomes genuinely justified, given the evidence available.

[3] Sometimes, even often, they are. But this doesn’t mean they always are. My own definition of “Neoliberalism” follows: it is the belief that competitive markets are the best possible solution to pretty much every problem of society. Needless to say: it’s a risible belief – how could the same solution apply to all problems? It is also an extremely popular belief, alas.

[4] The important exception is, naturally, bona fide Fascism. The reason why it is appealing is that it promises to produce in-group harmony and cooperation by identifying and actively fighting both internal and external enemies. In this context, it is the identification of common enemies that makes the promise of cooperation credible.

[5] Importantly, the idea that a single “Will of the People” exists and is knowable is completely false. This is why Fascist and Authoritarian ideologies are dangerous: they are founded on an idea that is both appealing and wrong.

Posted in Politics, Stupidity

The wrong kind of activism

As I start writing this, I am sitting in the inner courtyard of a beautiful hotel in Santiago (Chile); what I’m planning to write has been stirring in my mind for much too long…

© 2019 Ana Tijoux. Cacerolazo: a form of protest that cannot be ignored…

Today’s curfew will start at 10 PM; for now, most of the sounds come from road traffic. The unmistakable rhythm of the ongoing struggle is intermittent – clang, clang, ta-ta-ta – it surfaces at random. Later today, it will become pervasive, an impossible-to-neglect statement of how this part of town sees things. No more procrastination! The Chileans are speaking and I will not waste the chance I’m being given.

In my ordinary life, as I grow older, I’ve started to notice more and more the signs of ongoing struggles: the never-receding bigotry of small-scale, relentless and deceptively polite racism that pervades all of British society; the #MeToo movement; the ever-present signs of why proactive feminism is still sorely needed at home and everywhere else; homophobia, transphobia, the class system; the plight of people fleeing unliveable conditions, war and persecution; the seemingly unstoppable rise in inequality, and more – I can’t even make myself put together a comprehensive list, let alone put these struggles in any discernible “order”. Everywhere I look, there is progress to be made and yet, although I would like to consider myself an “ally” for all of the above (and then some), I do close to nothing about anything. I can write, though, so I will.

Whenever I make my views known, there is one kind of response that keeps recurring. What’s more, while exploring ongoing “conversations” about this or that attempt to make this world a little better, the same kind of reaction pops up over and over. I (think I) know where it comes from: not too long ago, it would have been my natural response as well.
It goes like this:

“[This person] is doing it wrong: she’s alienating the very people she should be convincing. Bridges need to be built, but she’s too blunt, polarising even.”

[This person] is typically an activist, and frequently one who is directly affected by the issue she’s trying to resolve.

My current position is peculiar: all my instincts are (or perhaps have been?) geared towards building bridges; however, fairly recently I’ve come to believe that the sentiment above is not only wrong, it is actively harmful. Spectacular changes of mind are rare, more so after reaching middle age. Thus, I’m going to explore what I believe are the reasons for my U-turn.

There are two core reasons: the historical characteristics of successful social movements, and the inevitable differences between those who are affected by a given issue and those who are not.

Effective Social Movements.

You don’t need to be a historian to notice that all well-known cases of successful social movements (excluding violent revolutions) have one element in common: they included, and were usually driven by, a core of irreducible, uncompromising activists. I can see why: if a given ’cause’ is clearly just, but resisted by the status quo / powers that be, you need to inject a significant amount of energy in order to disrupt the established order. If the desired changes are also going to erode someone’s privileges, then it’s likely that it will be necessary to overcome both active and passive resistance. None of this is possible without a die-hard core of activists who will simply refuse to back down or compromise.

Change will start to happen once the “silent majority” realises that these people will never shut up, no matter what.

Only when the discomfort generated by such campaigners becomes noticeable, and it becomes obvious that it will not go away, does conceding something start to look appealing, even to people motivated exclusively by self-interest.
The way I understand it, this is the point at which “allies” and bridge-builders can become useful, if not indispensable. Advocating for change as a third party with no direct stake in the dispute can, and usually does, provide the last push. At that point, whoever is resisting change will find herself in an uncomfortable position, with no way to ameliorate it without conceding something.

Thus, to achieve social change you need:

  1. A core of irreducible activists, who are determined enough to convince most people that they simply cannot be silenced (the Activists).
  2. Enough sympathetic outsiders who broadly agree with the main concern (the Sympathisers). It’s frequently this second group which will become the negotiating party and which will win incremental “concessions”.

If you are not convinced, we can look at a recent (and somewhat surprising) example: Brexit. The UK Independence Party (UKIP) was founded in the early nineties and started growing significantly when Farage became its leader. Two decades later, despite never having elected a single MP, they were still there, still advocating for the same change and showing no sign of decline. That’s when their sympathisers within the Tory party could start to make a difference1. The relentless annoyance produced by UKIP campaigners is what allowed them to influence Tory party policies. Concessions started to be made, such as Cameron first trying to renegotiate the UK’s place within the EU (to appease the growing influence of Tory Eurosceptics), and eventually calling the referendum.
As expected, the key elements I mention above are present: a core of irreducible campaigners and a number of external, less committed sympathisers. Crucially, it’s this second group which was able to exert direct influence and negotiate incremental changes to the policies of their party.
This example shows that these mechanisms are quasi-universal: they don’t depend on the kind of change that is being sought; whether they succeed or fail depends on the presence and size of the two kinds of groups2.

Different people, different roles.

The pattern I’ve sketched above points to the different and complementary roles that people might play. These in turn are strongly influenced by self-identification and/or visible, and therefore somewhat inevitable, group membership. If a UKIP member demands Brexit, that’s news to exactly no one, but when someone belonging to a different party does, then people start to notice. Similarly, if a person of colour demands the end of racism, few will take notice (alas), but when a white MP stands up and proclaims “She’s right”, that’s when newspapers may start developing an interest. If I’m happy to make a stand for a cause that does not directly affect me (or is not directly linked to my perceived identity), my support will carry significant weight precisely because people will not be able to dismiss it as mere self-interest. This is why having a large enough group of the second type is usually necessary to make the first steps in the desired direction.

Interestingly, it’s possible to argue that this whole mechanism rests on the errors of the “resisting” parties. If and when concessions are made, they are made by negotiating with the sympathisers, hoping to placate those annoying activists. This usually is a mistake: the effect will be exactly the opposite. Activists will raise the stakes, and the ranks of sympathisers will start to grow, their credentials having been validated.
The important thing to note here is that people respond to campaigners of the two “types” in radically different ways. Activists are usually listened to by those who are sympathetic enough – namely, members of the second group, as well as those likely to join in. The silent majority, however, will notice the (annoying) existence of group one, but will eventually listen and engage with those people whom they perceive as “reasonable”. Not the activists, but the sympathisers.
Moreover, some people do not really have a choice about which group to join. Being Italian, I will be perceived as a pro-European activist whenever I speak against Brexit. When a gay person speaks about gay rights, would you label her a sympathiser? This is important, because how my position is perceived informs who my natural interlocutor should be, if and when I actually want to make a difference. As an activist, I have two roles to play: I should be a visible annoyance to the silent majority, and simply someone who happens to have a valid point to all possible sympathisers. As a sympathiser, I can amplify the visibility of the activists and can also persuade (build bridges, at last) anyone who currently does not care about the issue at hand.

The wrong kind of ally.

We thus reach the reason why I maintain that the typical reaction to activism is wrong. Saying “you are too blunt, you are alienating people” to an activist is not just wrong, it’s harmful. First of all, most activists didn’t really choose to be activists: people don’t go around shopping for worthy causes and simply pick one. I’m a passionate Remainer (also) because Brexit is a clear and present danger to me (as well as utterly stupid). Of course I’m angry about Brexit, what else could I be? If you tell me that I should not show my anger, how am I supposed to react? Should I repress my anger, make yet one more effort for my cause, and thus remove3 myself from the ranks of the all-important activists? Nope, I don’t think so.

If you really want to help, here is the thing: you could actually help, instead of issuing counter-productive advice. You are sympathetic? Great! Go out and make your sympathies visible. That is precisely how you can help. Trying to dissolve the hard-core group of irreducible activists and replace it with a “reasonable” bunch of bridge-builders simply does not work – there would be nothing to build the bridge to. You, the sympathiser, are the one who can build bridges; you are, manifestly, the possible link between those who are minding their own business and those who are trying to make change happen. You can be an ally, and you do have a role to play. Criticising activists for being activists is not that – it’s the (entirely understandable) sign of not understanding how activism works (at best). Otherwise, it’s a malign attempt to look sympathetic, just to save appearances. If you are annoyed by the unwillingness to compromise shown by activists for a cause you find agreeable, believe me, I feel you – I know why. But it still is the wrong reaction, so please – suck it up and try to use your frustration productively. Go build that bridge, or else, go away – in silence.

Notes:

1. Yes, I know. Some Tory members are and always have been extreme Eurosceptics. That’s OK, count them as activists, if you wish. The point here is that change was achieved, because a die-hard core existed (for decades), along with a growing crowd of sympathisers. Take one out of the picture, and little or nothing would have happened.

2. Still unconvinced? In the Chilean uprising I’ve witnessed, the same dynamic was obviously at play. The “violent protesters”, happy to clash with the police and to cause impossible-to-ignore disruption and tangible damage, allowed the majority of peaceful protesters to negotiate with the government. Remove one group and not even small progress would have been made, alas.
For a good description of the situation (good: it matches well the impressions I’ve formed by talking to some of the locals, and didn’t make me cringe!) I recommend this NYT article (via @idshemilt). The one thing that the article misses is that declaring the state of emergency, deploying the army and imposing curfews had the effect of focussing minds: people were clearly more inclined to protest as a consequence. In my eyes, it also gave new legitimacy to violent (deliberately clashing with the army/police) and destructive (torching, looting) protests.

3. I am temporarily promoting myself to the rank of activist. That’s a lie, told here for dramatic effect. In reality, I’m little more than a passive sympathiser.

Posted in Ethics, Politics, Psychology