Essentialism and Compassion

What’s the relation between identity and essentialism? Why is compassion so rare? What links the TERF wars with the UK Labour Party’s alleged Antisemitism? I can’t stop lucubrating on such questions.


Gender Neutral Restroom, because some problems do admit simple solutions. (Source)

Thus, I need to write my thoughts down. As usual, I hope the exercise will help me clarify my own (somewhat unstable) position and test it in the real world. With a good deal of luck, it might also be useful to some of my readers.

I cannot doubt my own pain. When I feel it, the feeling itself is what warrants my claim of being in pain. When someone else denies it, their denial can mean only one of two things: either I’m lying, or I mistakenly believe I’m in pain.

Denying someone else’s pain is to call them liars or deluded. There is no escape from this apparently simple conclusion, is there? I don’t think so; nevertheless, denying someone else’s pain happens all the time, even in medical settings.
Hang on, surely I’m overstating my case, right? Doctors are there to help, and if a patient comes in claiming to be in pain, they will take them seriously and do everything they can to help. Right? Right?


Many women know this is wrong. The case of endometriosis makes it glaringly obvious and is well documented. As reported by Huntington and Gilmour as far back as 2005, “The time period between initially seeking medical help to a diagnosis being made typically took 5–10 years”, or even:

Characteristically, diagnosis was a time of relief […] after years of having experiences negated by medical authorities and being told the pain was a normal part of menstruation. Women’s feelings that their pain was being dismissed as imaginary have also been noted in other studies

Or take the reports collected by Elaine Denny as far back as 2004:

The same survey reports that over half the respondents felt that their general practitioner (GP) did not take their symptoms seriously

Here is the crucial point: endometriosis is common. About 10% of women suffer from it. And yet, according to Morassutto et al. (2016), it’s quite possible that about 6 out of 10 cases still go undiagnosed.
Endometriosis is a very common and debilitating illness; nevertheless, it is still trivialised and dismissed on a regular basis. So much for Evidence-Informed Medicine. But it doesn’t end there: you can pick pretty much any pathology that disproportionately affects women and find similar patterns: misdiagnosis, lack of understanding, dismissal, mistreatment (do I need to mention the sorry business of the infamous vaginal mesh?).
So why is this happening? Most likely because medicine is still a male-dominated field. Blokes can’t empathise with menstrual pain, not very well, at least. When presented with difficult cases of chronic pain that they can’t even begin to imagine, trivialising and/or normalising it offers the practitioner a relatively easy escape route; as a result, too many take it.

The fact that doing so puts the patient in an unmanageable psychological situation does not prevent this kind of error: the patient ends up having to accept the authoritative conclusion (your pain is normal, not a concern) while being unable to do so (the pain is real and debilitating). It’s an insufferable injury: it adds psychological distress of the worst kind on top of chronic pain, and it nevertheless happens all the time.

Going back to my starting point: my pain, when present, is undeniable. Like “Cogito ergo sum”, it is not up for discussion. Why? Because it happens in my mind, and as such, it has a genuine essence. My own description[1] of that essence is: perceived pain is a physical sensation that comes with an avoidance imperative. The essence of pain is that it comes with the desire to make it stop. This makes it a special phenomenon: when it happens to us, it’s one of the few things of which we can (at least sometimes) be absolutely sure.

Nevertheless, doubting someone else’s pain is common practice – in fact, denying the undeniable, simply because it happens in someone else’s mind, generates much unnecessary suffering.

Some time ago I wrote an essay where I claimed that physical things don’t have an essence, only concepts do. Our cognitive structure generates the illusion that the concept “Tiger” picks out something objective: the ‘essence’ of Tigerness. But in the real world, there is no such thing. There is no point in time when the “Tiger” species started to exist, not objectively – when a tiger dies, there is no objectively identifiable point in time where it ceases to be a tiger. We can be sure that 2+2=4, as it’s a completely abstract claim (the elements thereof admit precise definitions, because they do have an essence), but claiming that a given creature is, objectively, a “Tiger” is, if taken literally, nonsensical. To make such a claim meaningful, we need to do extra work: such claims can be understood to mean that the vast majority of able-minded humans will recognise the creature in question as an instance of the concept “tiger”[2].
All claims involving physical things should be understood in this way. This is important: since such considerations apply to all claims of a certain kind, in common practice we can productively shortcut the whole thing, stop making the distinction between a concept and its instances, forget all this malarkey and just talk. In fact, we do. We forget the distinction, and by forgetting it, we create huge problems.

Here is one: right now, there is a fight consuming two communities which would (actually: should) otherwise be ‘natural allies’. Trans-Rights activists clash with other Feminists in a fight that has become nasty and a cause of much suffering. I want to propose that, much like the doctor dismissing pain caused by endometriosis, the clash in question is caused by the common confusion about how and when something has an essence, as opposed to when such an essence should instead be recognised as a useful (shorthand) fiction.

Let’s start from a Trans Woman. She self-identifies as a woman[3], but her physical appearance is such that, on seeing her, the vast majority of able-minded humans will recognise the person in question as an instance of the concept “man”. As in the case of pain, her feeling of being a woman is something that happens in her mind; negating it can have only two implications: she’s either delusional or lying. As in the case of pain, her feeling isn’t questionable: she feels like a woman because she feels like a woman. The feeling is not something she can change, and when she says “my essence is womanhood”, every single reply of the “No, it is not” sort comes with the “You are lying” implication. The “delusion” implication doesn’t or shouldn’t really work, because our woman knows for a fact how she feels. The result is nasty: all such reactions are either accusations of bad faith or of mental illness, accusations that ultimately negate the legitimacy of feelings which, from within, are unquestionable. How would you react? With a strong defensive reaction, that’s how. Such “No, it is not” responses are inevitably psychologically and existentially threatening.

It should be obvious, but alas, it isn’t. Why? Because the vast majority of able-minded humans will (instantly and automatically) recognise the person in question as an instance of the concept “man”. Thus, the automatic and uncontrollable (most common) reaction gets erroneously equated with “objective reality”. You will hear/read things like “this person is biologically male, and that’s a fact”, or “the intricacies of sex-biology are irrelevant, he has a penis, therefore he’s a man”. Here is the news: such claims are not only wrong, they are catastrophically wrong. They cause unnecessary pain and are plain stupid. For starters, the intricacies of sex-biology are relevant beyond reasonable doubt. Mismatches between anatomy, genetic material, self-image, sexual orientation and development are common[4] – whoever claims to care about “facts” should start by acknowledging them. Moreover, if you are affected by any of those intricacies of sex-biology, they will probably define a huge part of your existence; anyone claiming they are irrelevant is claiming “you are irrelevant”. That is cruel, without any counterbalancing benefit, and also manifestly false: nobody is irrelevant to themselves. Finally, it is counter-productive: when the stated aim is to eliminate systematic oppression (which is why I consider myself a feminist), negating the very existence of people who are very visibly subject to systematic oppression doesn’t quite facilitate reaching the stated objective.

To clarify:

(1) A claim such as “I self-identify as a woman” refers to the concept of womanhood. Since it refers to a concept, there are situations where it can be asserted with absolute certainty. [Therefore, it is acceptable to negate such claims only if and when one has strong reasons to believe that whoever makes them is deliberately lying.]
(2) Claims such as: “this person has a penis, therefore he’s a man“, refer to objective reality and as such should always be understood as approximations – they imply an essence, which is a useful shortcut, but does not exist outside our own minds.

It is a counter-intuitive reversal, but I find it extremely useful, essential(!), even.

It follows that using (2) to rebuke (1) does not work. You can’t negate (1) without implying that it is a delusion or a wilful lie. Claim (2) is irrelevant to (1) as it refers to a separate domain. It is also ineffective, as claim (2) (properly understood) is inherently weaker than claim (1).

Oh. So why does it happen? Because my main argument cuts both ways, that’s why. If you are an outspoken feminist, invested in actively trying to reduce the systematic oppression of women, chances are you have endured an endless stream of unpleasant, threatening and perhaps physically damaging interactions with men. Thus, when confronted with a person whom you immediately and automatically recognise as a man, you probably wouldn’t see a natural ally; you’d see a potential threat.

Responding “fine, but *this* particular person self-identifies as a woman, so your feeling is wrong and should be ignored” is stupid and harmful, pretty much as objection (2) is harmful to Trans people. If somebody feels pain, they are in pain. If somebody feels like a woman, they are a woman (to themselves). If someone feels threatened, they feel threatened.

How this translates into actual situations does matter. For example, Trans activists would like sporting activities to be organised by self-identification and not by claims about (non-existent) objective biology. I have (personally) little doubt: their argument looks logically unassailable. If you self-identify as a woman, you’d like to share the dressing room with other women and would feel out of place competing against men. Moreover, if you dress, behave and actively try to look like a woman, frequenting male dressing rooms is likely to be genuinely dangerous; I dare anybody with half a brain to deny it.
But other women see (inevitably perceive) you as a man, and some would (objectively!) feel threatened if you were to use the same dressing room; many would also feel disadvantaged when competing against you. Moreover, asking them to ignore their (actual, real and well-identified) feelings and act as if they didn’t exist is a form of oppression, which gets justifiably and predictably resisted.

It is not clear to me how to solve such problems, but one thing can be said: deploying arguments of the type exemplified by (2) is harmful and ineffective – no matter who they are aimed at. Calling someone a TERF when they use arguments like (2) is, consequently, formally correct; at the same time, it is also harmful, because it reinforces the (justified) negative reactions which created the problem at hand. In my view, both approaches (calling someone a TERF, as well as being one) are afflicted by the same source of error: we mistake what we perceive for reality itself. We need to, in order to function, but we should be very aware of how and when this can cause problems.
I see no universal antidote, but a general rule of thumb does apply: compassion works. It really is that simple. One needs to start by acknowledging the reality of the other person’s feelings – failing to do so inevitably generates new enemies, it creates more problems, without solving any.

All this is quite depressing, even if it rests on an absurdly optimistic assumption. So far, I’ve constructed my argument as if we could take it for granted that all claims made were sincere. That’s not always the case. All people lie, sometimes.
In the case of Trans people vs (some) Radical Feminists, a common objection to Trans-rights requests is that (3) predatory males might exploit any system based on “self-identification” to prey on, or otherwise harm, women. If it can happen, given enough chances it eventually will, which makes this argument non-dismissible. Conversely, we know that (4) Trans people are regularly harassed and harmed when no weight is granted to self-identification claims. Unfortunately, this situation is pretty much the status quo; thus, arguments of type (3) cannot legitimately be used to stop attempts to change the status quo based on (4). However, considerations based on (3) can and should be taken into account when discussing how to change the current arrangements.

I do not wish to claim that such problems are easily solved[5], far from it, but I can propose an interim conclusion: starting from a compassionate stance does make such horribly difficult problems a little bit more tractable. Or, in negative form: trying to address such problems without starting from a compassionate stance, makes the problem harder, because it inevitably creates a conflict (or exacerbates the pre-existing one).

Does this depend on the ratio between honest and deceitful claims of type (1)? In the case of Trans Women, self-identifying as such comes with such an obvious and terrible cost that one would be inclined to think that very few men would consistently self-identify as women for predatory (or otherwise unsavoury) reasons. However, in cases where we can expect many people to wilfully misrepresent their perceptions, the situation might change. Does this invalidate my claim about the need for compassion? I don’t think so.

One such case is the row about Antisemitism within the British Labour Party. According to many, Labour has a problem with Antisemitism, a problem which the Party has consistently failed to address, even if, undeniably, a vast number of words has been spent trying to do just that. Many people, including me, are convinced that many accusations of Antisemitism have been hurled in bad faith: political opponents of Labour abound (naturally), and even within Labour, factions opposed to the current leadership have a clear incentive to hurl (possibly untrue, but very damaging) accusations. It’s politics; you should expect all sorts of foul play.
Does this mean that Labour does not have a problem with Antisemitism? Nope. It would mean so if and only if all accusations were lies. The existence of any self-identified Jew who feels threatened by some Labour policy or the utterances of some Labour officials makes the problem real. Anyone claiming that “Labour does not have a problem with Antisemitism” is either lying or failing to apply due compassion.

Personally, I could produce, on command, a number of claims, knowing full well that:

(a) I’d be saying things I believe to be true,
(b) I do not intend them as Antisemitic,
(c) I can detect no trace of Antisemitism in my whole being, but,
(d) they will be perceived as Antisemitic by a non-negligible number of people.

This is the case because “Antisemitic” means different things to different people – even if something does have an essence, we still can’t be sure it is the same essence for everyone! To a non-Jewish white bloke like myself, Antisemitism means some things; to a Jew, it necessarily means much, much more. Thus, compassion: one should sometimes shut up and listen. Ask what is perceived as Antisemitic and why, learn the nuances, and only then venture into the delicate business of making claims you believe to be true, if you really must.

The crux remains the same: if hearing an utterance makes you feel threatened, I have no right to say that it doesn’t or shouldn’t. Whether it should is irrelevant, because it does. Nobody has control over their immediate emotional responses, in the same way in which, when I see someone with a penis, I inevitably see them as a man. Thus, refusing to acknowledge the perceived threat is precisely what didn’t work in the whole Labour Antisemitism row. Responding by claiming innocence, on the basis of (a, b, c) and/or the alleged bad faith of some players, is wrong and harmful. That’s because arguments of this sort are arguments of type (2): they negate the undeniable. If people are in pain, nothing anyone can tell them will convince them of the contrary. Claiming that “Attacking Israel isn’t Antisemitic” or that “One can denounce Zionism as a racist ideology without being Antisemitic” negates the reality of the effects that such claims do have on real people. It does not work, it harms both parties, and it is utterly stupid because the lack of compassion makes the problem harder to solve. It’s a way to manufacture new enemies, instead of facilitating the creation of a shared understanding.
Importantly, all this holds even if lots of people willingly deploy the Antisemitism card exclusively for their own political aims. Given that we have reason to believe that many people do feel threatened, and that this feeling does impact their capacity to participate in the activities of the Party, the presence of liars does not remove the requirement for compassion.

In conclusion, I argue that claims involving one’s own feelings refer to mental states, therefore they have an essence[6]. Having an essence frequently allows such claims to be accurate and undeniable (with the exception of lies), which is why, when tackling these problems, a compassionate stance is necessary. Denying such claims, when they are sincere, can frequently imply an (objectively?!) existential threat to the claimant. This fact inevitably generates a confrontational reaction. Thus, if the aim is to solve the problem at hand, such denials are usually extremely harmful and should be avoided at all costs.

Now go out there and be nice to one another, that’s an order.

Notes and Bibliography

I wish to thank Abeba Birhane for her support, inspiration and for providing useful feedback on this article.

[1] Note that I’m explicitly referring to my own best effort to describe the essence of (my own) pain. The fact that my pain has an essence doesn’t automatically mean I can accurately describe it in words; more importantly, it doesn’t mean that I can safely assume that such essence is shared between me and all other human beings. In fact, I’m pretty sure that different people feel pain in different ways…

[2] If you think this is too abstract and complicated, a typical case of philosophical nonsense, then I urge you to keep reading. If I’m right (big if), this stuff matters and has important implications on how we live and for everyone’s well-being.

[3] My learned readers might wonder why I’ve decided to avoid referring to the distinction between sex and gender. I did, because the intuition it fosters is the opposite of what I’m promoting. If something is seen as “socially constructed” many (automatically and immediately) feel that it makes this something “less real”. Which is the central problem I’m trying to address, so I will steer away from a framing that is likely to backfire (in the case of this present essay).

[4] Yes, really, even if it’s hard to pinpoint precise numbers. We’re talking about stuff you can see and touch in the real world: it doesn’t have an essence and therefore you can’t objectively pin it down. Moreover, nobody can claim to know how biological factors influence mental ones, apart from knowing that they obviously do have some influence. To get an idea of the prevalence, we can look at estimates of anatomical anomalies, which according to Lee et al. (2016) are not uncommon at all: “When all congenital genital anomalies are considered, including cryptorchidism and hypospadias, the rate may be as high as 1:200 to 1:300”.

[5] The case of Trans Rights could indeed be used to discuss other common and harmful mechanisms. First of all, given a well-defined problem, most people expect that it is possible and desirable to identify a set of rules that defines the correct way to solve it. This might explain why reactions such as (2) are so common, despite being wrong. Some problems can’t be solved, some others should not be solved, and some don’t admit a single set of rules that applies to all cases. Moreover, the sub-problem regarding Trans Women in professional Sport is plagued by additional issues. Sport, as commonly practised, benefits from competition as a stimulus to keep improving. Professional Sport is different: competition, winning and losing, is what generates money. This makes the sub-problem different from the common one (the one that applies to Sport as leisure), and it should therefore be treated separately. (I’d also like to add that part of the problem is precisely how much importance we tend to give to winning and losing.)

[6] I regard this point as crucial, because it requires people to make a counter-intuitive inversion. Claims about things that don’t have an essence (the actual stuff out there in the world) should be understood as less certain than claims regarding one’s own mental states. Which produces a bit of a mess with respect to the concept of ‘objectivity’, but that’s another story…

Denny, E., 2004. Women’s experience of endometriosis. Journal of advanced nursing, 46(6), pp.641-648.

Huntington, A. and Gilmour, J.A., 2005. A life shaped by pain: women and endometriosis. Journal of clinical nursing, 14(9), pp.1124-1132.

Lee, P.A., Nordenström, A., Houk, C.P., Ahmed, S.F., Auchus, R., Baratz, A., Dalke, K.B., Liao, L.M., Lin-Su, K., Looijenga 3rd, L.H. and Mazur, T., 2016. Global disorders of sex development update since 2006: perceptions, approach and care. Hormone research in paediatrics, 85(3), pp.158-180.

Morassutto, C., Monasta, L., Ricci, G., Barbone, F. and Ronfani, L., 2016. Incidence and estimated prevalence of endometriosis and adenomyosis in Northeast Italy: a data linkage study. PloS one, 11(4), p.e0154227.


Changing my mind: the big list

How do we change our minds? I find this question fascinating and important. To start with: only fools never change their minds. Secondly, it is surprisingly hard to remember what I actually did change my mind about, and equally difficult to pinpoint why. Finally, understanding how and why our opinions change is of fundamental importance in political discourse, because of the obvious link to the art of persuasion. [A delightful tangent also arises when considering the history and philosophy of science: apparently, it’s not all about logic and “facts”.] In this post, I’ll start building a list of things on which I have changed my mind, and I encourage every reader to do the same: it’s a fascinating endeavour.

Image by AZQuotes (Quote Source).

Some time ago Adam Elkus tweeted a self-challenge: he would tweet one thing on which he changed his mind for each “like” he received. [Unfortunately, the impressively long thread that resulted appears to be deleted.] Shortly after, Artem Kaznatcheev directed my attention to it: we ended up agreeing that it’s a good idea to keep such records.

I am not aiming at producing a complete list; instead, I’ll try to stick to subjects I do care about. The “strong feelings” filter is important to me: on one side, I’m convinced that our core beliefs are the least likely to change. On the other, it’s in our interest to change our views, if/when they happen to be wrong or harmful.

To try to impose some order on an otherwise messy list, I’ll group entries by broad topic, starting with what feel like the most important changes. I’ll conclude with a summary of the elements that appear most frequently across the disparate instances. In time, I hope to extend the present list with new entries, and perhaps with “feature-length” posts about one specific case or another.

Big, world-view changes:

I will start with the most remarkable flip-flops, where changing my mind required re-adjusting a significant proportion of linked beliefs. Unsurprisingly, it’s a short list. Moreover, only one change in this section started well after I reached adulthood.

What it means to be an adult: with effects on my understanding of competence.

As a child, I displayed a remarkable case of cognitive dissonance. I believed that:

A: Being an adult means “knowing what you are talking about”, and
B: Most adults are idiots, commanded by their emotions and unaware of this fact.

Growing up, I tried hard to achieve A, striving to learn what felt important and trying to develop reasonable ways to know when it was OK for me to express my thoughts (e.g., when I wasn’t risking being badly wrong). Growing old, I’ve realised that nobody knows what they are talking about – instead, some rare and noteworthy individuals are able to express opinions while acknowledging and accounting for their own ignorance. In the process, I’ve developed a keen interest in epistemology.
This change is interesting to me on multiple fronts. First and foremost, it’s a prime example of why the ability to concurrently hold incompatible beliefs is useful: cognitive dissonance is, at least sometimes, an asset – in my case, what I now consider a false belief (A) helped me become what I am now (in what feels like a useful way). Secondly, this change consolidated my idea of competence: you have achieved competence in a given domain when you can attach reasonably reliable confidence intervals to your own predictions. It’s a view founded on the acceptance that what we don’t know always surpasses what we do know. Finally, it’s the reason why I maintain this blog: I write to test, clarify and improve my ideas, trying to be mindful of my own ignorance.

Overall, this change began during adolescence (if not before), as I became articulate enough to try expressing beliefs like (A) and (B). As I grew more and more convinced that (B) is fundamentally correct (as a rule of thumb: we are all slaves to our emotions), (A) had to give way.

Apparently, this path is quite common, here is proof (recommended soundtrack for this post):

Politics – how to achieve change and the role of radical positions/rhetoric.

For as long as I can remember, I have always preferred building bridges to winning. I like to compete, but only on the condition that the confrontation is seen by all participants as a mutual way to help each other become better at whatever it is we’re competing on. I think this is a useful attitude to maintain, but in my case, I can claim no credit for it: it’s not something I’ve learned with effort and dedication, it comes naturally to me – I’ve got it for free. As a result of this inclination of mine, I always had an instinctive dislike for radical and uncompromising political stances. In many cases, I still do. However, in recent years I’ve changed my stance by introducing a very important class of exceptions. Specifically, I’ve realised that when a given group of people is marginalised, deemed irrelevant, and/or otherwise oppressed, the power imbalance that sustains the situation makes it impossible to change the status quo by deploying only persuasion and bridge-building strategies. This change of mind may be subtle, but it has remarkable consequences. For example, as a young adult, I loathed radical feminism: I thought it was unquestionably counter-productive, in that it facilitated a self-sustaining and fruitless confrontation. The same applied to probably most positions (see the quote in the picture above: it’s really hard to remember what I thought before changing my mind) where the weak side in a power struggle advertised itself as combative and intransigent.

Right now, my view could not be more different. When one wants to eliminate a long-lasting power imbalance, if historical precedents are any indication, it seems to me that it is necessary to deploy a fair amount of intransigence. Specifically, it’s necessary to have a group of people who very clearly, and very publicly, won’t accept anything less than the complete elimination of such an imbalance (even if, or maybe especially when, a complete resolution is manifestly impossible). Only if such a group is very visible and stubbornly refuses to be silenced can a separate group of more conciliatory activists (those who will accept or even seek small improvements as a form of progress) become effective. I still believe that the first group will have the negative effect of fostering confrontation and entrenchment, and that in most cases it is the second group which can more effectively achieve desirable results. However, I now think that activists of the second kind can be effective only if, when and while the first kind is well established (crucially: when their common opposition believes that the radical activists will never cease and desist). Thus, even if my own predispositions force me to deploy and/or endorse the second strategy, I’m now fully convinced that the first approach is necessary and has to coexist with the second. (There is a parallel here with my views on Cognitive Dissonance, perhaps worth a future post.)

How did I make this change? This story is too long to fit in here, but I’ll tickle my readers’ curiosity by mentioning the two crucial elements that contributed: the new atheism movement (yes, I know!) and the patience of many women, especially Abeba Birhane.

Science and epistemology – objectivity.

I presume that when I started my BA, I thought I was in the business of understanding how the world works in a fairly straightforward manner. However, I can’t really be sure (see the pic on top): while I was changing my mind about competence, I now guess that another change was also happening. Right now, I don’t believe in objectivity as normally understood, and it feels as if I had always held this view. I doubt that’s the case. I do know that when I started blogging the big revolution had already happened, as is clear from reading this early post. I also know that my view has kept changing, but it seems that it’s merely becoming richer, not changing in a radical way. To spill my beans in full: right now, I believe that any single method of understanding the world around us must have limitations. It will be suited to picking out certain features of reality, but will also hide some others. Thus, by necessity, no single assertion about the world out there should be considered “objective” to the point of being unquestionable. As a result, I’m developing a deep dislike for the canonical writing style in science and philosophy. To my eyes, it looks designed to hide the elements of subjectivity that inevitably inform any piece of research (or thinking). In other words, it requires authors to deceive, which feels counterproductive to me.


All these changes contributed to something that continues to puzzle me. I once believed that coherence was a genuine indicator of value – in practical terms, I still do. However, I’m also growing more and more convinced that cognitive dissonance is a necessary ability of well-formed human beings. Without the ability to hold incompatible beliefs, humans would find it very hard to thrive. This topic is huge and controversial: hopefully, I’ll find the energy to write about it explicitly. For now, it’s amusing to notice that “changing one’s mind” implies a break in coherence across the time domain, which, I argue, is both necessary and generally a good thing.


Music is important to me. Making and listening to music helps me a great deal in staying sane.


Until my late twenties, I was convinced that light, apparently simple and mainstream pop songs had no value (not even those which had resisted the test of time). I then started playing in a band: as it happens, most of us wanted to play the stuff I thought wasn't interesting. Playing it made me change my mind. In short, I now realise that much of the value of music is that it makes people happy, at a negligible cost. Even the things I (still) don't like make someone happy. Overall, I can't imagine what could be considered more useful than making people happy. Thus, suddenly, the awe inspired by some performers of mainstream pop starts making sense. A better understanding of the technical skill, the sophisticated sensibility and the performance discipline involved helps solidify my current view.


In my early teens I lumped them amongst the “pop, pointless” lot. My best mate disagreed, and kept telling me I was wrong (for probably a couple of years, or perhaps less: time flows slower when you’re young). He succeeded in changing my mind, but only obliquely: he persuaded me to give them a chance. Listening to their music, especially their less famous tunes, did the rest. I am now convinced they were and still are underestimated. I mention this because it’s the only case I can recall where I was actively persuaded by someone else.

People and society:

As my professional career meandered in new directions, it forced me to change my mind in quite spectacular ways, twice. These are both special cases, because I can link these changes to specific situations.

“Bad” people are inevitably unhappy.

I used to believe that selfish, vindictive and mean people (Jerks, if you prefer) must be living in a never-ending nightmare. Despised or, at best, feared by most, they deprive themselves of what truly matters. I now think that I was (badly) wrong: genuine psychopaths don’t give a toot and can be (often are?) as happy as any human can be.

How did this change happen? At the start of my professional career, I was in close contact with a genuine, highly successful psychopath: very smart, extremely charming and completely amoral. When things worked for him, he was genuinely happy. He did care for his family, so he had a source of human warmth, and apparently that was enough: for all the rest, personal success was all that mattered – on top of that, exercising power for its own sake appeared to give him genuine pleasure. Having professional success and a reasonably stable family, he was as happy as any human can hope to be.

Selfish and ruthless people inevitably end up in managerial roles.

In the first 10+ years of my professional life, this rule held true. I then started working at SSRU and changed my mind. Since moving to London, I've been managed by extremely smart, caring and well-rounded human beings. Within the limited reach of my current work environment, managers actually care for the people they manage: thus, I have to admit that my belief was wrong. I still can't really figure out what makes it possible, though. With a consistent pattern involving multiple individuals and lasting 10+ years, I'm pretty sure it isn't chance, but the necessary and sufficient conditions elude me: it still feels a little bit like a miracle.

Specific People:

Perhaps surprisingly, I usually change my mind about people in a fairly predictable way. Like most people, I instantly form an opinion about everyone I meet, in the super-rapid, "automatic" way. If this opinion is negative, I am usually aware that I might be wrong, so it happens fairly often that, by learning more about the person in question, I succeed in changing my (conscious/explicit) initial judgement. However, with surprising frequency, over time I switch back to a negative view. This is another pattern that might be worth considering separately.


I’ll use Corbyn as the paradigmatic example of the process I’ve observed numerous times. I met him almost a decade ago: he did a short intervention at a rally I was co-organising. My immediate reaction was: this chap reasons by applying rigid ideological positions to everything – i.e., he does not think, he merely applies pre-existing rules to new situations. I didn’t like the man at all.
Sometime after his election to the leadership of the Labour party I changed my mind. Finally, there was someone saying what needed to be said: he was the only politician in my lifetime able to expose the failings of neoliberalism in a way that could actually reach the masses. I did like that, and I still do. So I concluded that there must have been more to him than met my eye at that first encounter. Since then, I've flipped back, with a vengeance: I do accept that his intellect is more elastic than I initially thought, but now I also think that he is far more self-interested than is generally appreciated. I can make sense of his political actions over the last three years only by accepting that he's not as interested in the common good as he would like us to believe. He appears happy to act in ways designed to preserve his position in the Party instead.

Boris Becker.

As a kid, for one or two years, I hated him – I couldn't watch him play. For reasons I've forgotten, I then forced myself to watch an entire match anyway. A couple of hours later I had flipped, and I have enjoyed watching his tennis ever since. I mention this here because it is perhaps the oldest instance of a change of mind (on something I cared about) that I can recall – it is also the only abrupt change that I can actually recollect.

Steven Pinker.

Given my original interest in cognitive science, even if I always disliked his way of presenting himself, I did have a bit of an intellectual crush on his views and work on cognition. I now think that he's a self-righteous impostor, because he's not at all interested in intellectual honesty. In his case, I know exactly what made me change my mind: it's this discussion, in which he uses rhetorical tricks to obscure and deceive – the opposite of intellectual honesty. He then started working on his more popular books, and all my bad impressions were confirmed multiple times. This change did take some time: I was already contemplating it quite seriously when the discussion in question was published. In my mind, the specific trigger counts as the classic "last straw".


That’s it! This is my provisional list, limited to some of the things I actually do care about. The most common element in the big and small changes listed above is that, aside for one case, all of these changes took time. Years, to be precise; ten or more, for the bigger ones. This matters to me, because it informs my actions, especially online. It makes no sense to debate anyone with the aim of changing their mind. On the internet, by definition, it all happens too quickly. I do debate online, every now and then, but I try to do it when I have a genuine interest in understanding what justifies a position I disagree with. I do hope that (my) online/offline activism can change some people’s mind (for the better, hopefully!), but I’m also convinced that if it does work, I won’t be there to witness the change. Significant rethinks simply take too long.

Another common element is that many of the changes above have a simple (and somewhat reassuring) ’cause’: learning. As I learned more and more about a given subject, my views changed. Sometimes radically; more often, subtly. This is probably healthy (and is perhaps uninteresting), so I suppose I should be happy about it, even if it’s disappointingly predictable.

Posted in Ethics, Philosophy, Premises, Psychology

Corbyn, #Brexit and the “clever” long game that doesn’t exist.

The Brexit clock is ticking. As David Allen Green correctly states, Brexit will automatically happen unless "something exceptional" prevents it. It is still possible to #StopBrexit – many routes are still open and public opinion is probably shifting – but exceptional things take time to materialise, so it's time to make a collective effort to bring them into being.

Image by See Li (CC BY 2.0) source.

Question is: how? There certainly are many ways, but in my opinion, the best option we have is to "help" Labour to stop being "clever" and listen to its base instead. The reasons are simple: it would be good for the country, good for the party, it would honour Corbyn's commitment to make his party more democratic, and, above all, pure maths tells us that it could work. This brings up a second question: why on earth isn't Labour trying to stop Brexit? The reason, I'm afraid, is that Corbyn and McDonnell are thoroughly Eurosceptic, as their behaviour amply demonstrates.

I have been on record arguing that the supposed "constructive ambiguity" displayed by Labour is a self-inflicted trap, and I'm glad to note that I'm not alone (see this piece by Simon Tilford). Here, I will make my argument explicit and suggest lines of action for all my fellow Remainers.

Corbyn’s position.

To understand what is going on, I am convinced that we need to discount words (everyone agrees that politicians lie) and look at real deeds instead. All evidence points in one direction. Moreover, a long time ago, I did briefly meet the man himself (he gave a short speech at a rally I helped organise), and my first-hand impression confirms what the facts say. Corbyn is a mainstream social-democrat. What makes him slightly peculiar is that his broadly socialist positions have not changed significantly since he started his political career. This is important, because it tells us a few things:

  1. Corbyn is almost certainly sincere. Unlike most politicians, his opinions don’t follow the polls. He has principles and advocates for them.
  2. The other side of the same coin is that his principles are fixed. He apparently discovered the best ideology in his youth and will stick to it until he dies.

This is important, because his principles make him a natural Eurosceptic (if we forget tiny details like international cooperation and peacemaking…), and it is likely impossible to make him change his mind on purely theoretical grounds. What remains to be seen is whether an argument based on the consequences of the current situation can. If, as I believe, the choice is between stopping Brexit and catastrophic political obliteration, perhaps we can convince him to do the right thing. The evidence that we can comes from the referendum itself: he did reluctantly back #Remain, after all.

Corbyn’s opposition.

A legitimate question that needs answering is: why bother? If Corbyn is a staunch Eurosceptic, wouldn’t it be better to replace him? My answer is no, for two reasons.

First, replacing him soon enough is impossible. The PLP tried to remove him multiple times and failed consistently. The unexpected gains of Labour at the last general election sealed the deal. Only a catastrophic electoral result can currently trigger a successful leadership challenge, and that’s one catastrophe that is not desirable, as well as one that won’t happen in time.
Second, Corbyn is the only (semi)successful politician who is openly opposing the failed, but ubiquitous, neo-liberal ideology. We need him to continue doing so. Ten years of austerity have made his point, and he suddenly found himself able to make his (never-changing) case convincingly (or almost convincingly, considering that he did not win the last GE!). In the long run, if we want to stop out-of-control inequality and avoid widespread conflict or ecological suicide, we need to crush the prevailing neo-liberal delusions. Right now, Corbyn is the only credible politician who tries to do so, and does it with reasonable success. The side effect is that whoever may replace him will almost inevitably offer something different, i.e., something more like New Labour. We need to promote social democracy, not neo-liberalism-lite!

All things considered, we can’t hope to replace Corbyn, and, moreover, we should not wish to. Therefore, our best option is to “help” him to change course and stop Brexit. In my opinion, this is possible specifically because the current path leads to a very predictable disaster. To see why, we need to look ahead.

The grim prospect.

If Labour does not oppose Brexit, many different things may happen, but only two scenarios look likely.

Scenario 1: the EU gets all / most of the blame.

This is obviously what the current government hopes to achieve. It is possible that they will find a way to dilute the (now undeniable) economic downsides of Brexit, or that they will somehow manage to negotiate a soft-enough way out. If the ensuing hardships are limited or well hidden, it is possible that the Conservatives will manage to claim victory while assigning all the blame to the EU. What this does to international cooperation is ominous, but that's a different story. In this scenario, Labour will (rightly) take the bigger hit, as most Remainers will see through the government's lies and blame Labour for not doing anything about it. We'll get more Tory governments, more austerity, fewer human rights, etcetera.

Scenario 2: an unmitigated disaster.

If Brexit goes ahead and is hard enough, discontent will shoot through the roof. The Labour membership and electorate will inevitably blame the current leadership, forcing and winning a leadership challenge. We'll get a new Labour leader, chosen from among those who did fight to remain, and, most likely, we'll be offered a re-hashed New Labour-like set of policies. In other words, both major UK parties will again espouse neo-liberalism to a greater or lesser extent. Within Labour, Corbyn will be blamed and relegated to a well-deserved state of irrelevance.

Naturally, neither scenario represents what Corbyn hopes to achieve. What he presumably wishes for is a third option:

Scenario 3: wishful thinking.

In this case people will magically forget that Labour did not oppose the Brexit disaster and will instead turn against the Tory government. How this could even be possible, if Labour continues to vote with the Conservatives on all major Brexit-enabling matters, is a total mystery. It is pure pie-in-the-sky, have-your-cake-and-eat-it, pink-unicorn delusion. One could hold a faint hope of achieving such a result by starting to oppose Brexit, but only when it is already too late. But this can't be Corbyn's plan, can it? After all, he *is* a man of principle, and wouldn't actively deceive the whole nation, right?

Be that as it may, I can't see any other option (please do suggest more likely scenarios, if you see any!), which leads me to the expected conclusion.

We need to change Corbyn’s mind.

I’ve tried many times, but I am unable to make sense of the current Labour’s stance on Brexit. A “jobs-first Brexit” is a chimera, it is now undeniable that the UK would be better-off within the EU. Moreover, a supposedly clever long game does not exist: if Corbyn does have a plan, it is failing. Insisting on the current line of (in)action is the best strategy to commit political suicide. It has stop now!

If Labour wants to retain any hope of winning a general election in the foreseeable future, and if Corbyn wants to be around when it happens, the only possible strategy is to oppose Brexit. This can be done by forcing the Government's hand and making their shambolic handling of the negotiations untenable. The current government is very obviously the most incompetent and farcical one the UK has had in living memory. It doesn't even have a Tory majority. How hard can it be to make them stumble one more decisive time? Dealing them a fatal blow may not be the easiest thing to do, but it is not the hardest either. Enough Tory MPs would be happy to stop Brexit, so it's possible that all that's required is for Labour to stop voting with the government. It isn't rocket science – it's called "opposition"!
If a new general election can be triggered soon enough, and if Labour fights it on a solidly Remain platform, it is quite possible to win an outright majority this time. After all, we have to admit that the current government could not be helping much more, not even if they tried…

Alternatively, Labour could take the Solomonic escape route and start campaigning for a second referendum (take the best deal on offer OR remain: a choice between deal and no deal is neither meaningful nor sensible). Having hated the first referendum with all my heart, this isn't my preferred option (referendums do promote the venomous "will of the people" rhetoric), but it is still incommensurably better than both likely scenarios.

What to do?

Luckily, if you found this article at all convincing, you can help.
You could write to your local Labour MP (and MEP, or candidate, and/or branch) expressing your wish to see a change in Labour's official policy. All Labour members should get in touch with their local branch and inform them that they will rescind their membership unless Labour starts fighting against Brexit (I would recommend mentioning a cut-off date!). On social media, you can make your position known to Labour MPs, MEPs and local Labour parties, as well as journalists, editors and opinion leaders. At the next local elections, you can visibly offer some financial support to any candidate who openly supports #Remain, and then vote for them (vote tactically and make your choice known!). You can (and should) sign the relevant petition, leverage the #FBPE crowd to do the same, etcetera. Once a clear objective is agreed (force Labour to fight against Brexit – properly, this time), there is no limit to what could be achieved.
As for Corbyn, I do hope he can be persuaded: after all, if he does want to make the UK a better place (as I believe), fighting against Brexit is the only promising strategy.

Please feel free to share and republish/repackage this article as widely as possible. My blog is licensed under Creative Commons for a reason!

Posted in Politics

Inequality and progress, privilege and barnacles

I’ve recently been to Cape Town, for work. It is impossible for someone like me to visit South Africa without asking “What am I doing with my privileges?” (see previous post). The scale of abject poverty would be overwhelming in itself, but in Cape Town it is paired with unapologetic displays of eye-watering, heavily-guarded wealth. To my eyes, Cape Town is the embodiment of everything that’s wrong with our current world, even if I’m told that Johannesburg is worse.

Darwin and the Barnacle: The Story of One Tiny Creature and History’s Most Spectacular Scientific Breakthrough. By Rebecca Stott.

Also recently, I read a delightful book: Darwin and the Barnacle, by Rebecca Stott. This little gem tells the story of what Darwin was up to before writing the Origin, but after conceiving the idea of evolution by natural selection. It turns out he dedicated about eight years of his life to the classification of barnacles (Cirripedia). Why? Because he was a clever chap and, in my own interpretation, wanted to achieve two primary objectives. First, he needed to establish himself as a highly respected zoologist – he knew very well that a solid reputation was necessary to ensure his big idea would get a fair hearing. Second, he wanted to see how his idea on the origin of species would influence the meticulous work of classification.

It is impossible, for me, to read this book without noticing how much privilege Darwin enjoyed, but at the same time, one cannot possibly miss how well he used it.

The picture of Darwin that Stott paints is that of a wealthy individual who could spend all his time on something as unproductive as cataloguing barnacles. He earned a living by investing in land and stocks, after receiving an education at his family's expense. Moreover, his work depended on a world-wide network of fellow zoologists, most of whom shared a similar lifestyle. Naturally, this network was possible only because of the postal system, which in turn was kept together by the systematic exploitation on which colonial empires were built. If that wasn't enough, the first chapters of the book also provide an insight into the cultural milieu in which Darwin conceived his idea. Besides famous predecessors like Lamarck, Stott mentions Darwin's university teachers and peers, their own influences, and so forth. All white men (most likely), all wealthy enough to pursue intellectual endeavours which afforded little or no obvious economic return, spanning generations. In other words, it is impossible to read Darwin and the Barnacle without gaining an insight into how much privilege and inequality were necessary to allow one of the most important scientific breakthroughs in the history of humanity. Take away Darwin's privileges, and the global system of inequality on which the British Empire (along with the other colonial powers) was built, and Darwin would not have been able to do the groundwork which secured his standing and thus ensured his ground-breaking book could not pass unnoticed. Take away the similar privilege of the countless thinkers who influenced him, and you can safely bet that Darwin could not have conceived the idea of Natural Selection at all.

What this tells us (Stott’s writing is good enough to let you feel it in your bones) is that, in the world as it was organised back then, inequality enabled discovery and (scientific/technological) progress. I have little doubt the argument can stretch back for countless centuries, so I’ll leave it to my readers to look into their preferred breakthrough and see if privilege and inequality played an enabling part.

The result is as distasteful as it is obvious. Intellectual progress requires time to think. Time to collect and exchange ideas, time for debate, time spent studying, collecting or generating evidence, and so forth. Crucially, it requires time to develop gargantuan numbers of new ideas, most of which will turn out to be inadequate (remember Lamarck, to remain within the topic), but which will nevertheless enable further improvements. Making mistakes (and lots of them!) is how we learn most of the important lessons. Ground-breaking thinking requires being able to make mistakes safely, without risking too much of one's livelihood, at least. None of the above can be done in significant amounts if everyone has to invest most of their time actively earning a living. In the present world, people like me can spend a lot of time doing just that, but it is undeniable that I enjoy an amount of privilege that could probably make Darwin himself green with envy. Most people in academia are paid to study the stuff they love: how can you beat that?

In short: if we want progress, we need some privilege and inequality. Ugh.

Not nice, but not groundbreaking either. We all knew it already, right? Perhaps, but I get the feeling that we are happy to ignore this shared knowledge. First of all, most of us feel that we've earned our current standing. I know I do. This feeling, however, is wrong – and not just a little wrong: it is positively delusional, as summarised in this comic strip by Toby Morris (required reading!).

Moreover, inequality is increasingly prominent in public discourse: scholars and commentators agree that it is growing in uncontrollable and dangerous ways. However, I fear that the situation is frequently discussed superficially. Reading the news, I get the impression that inequality is treated as somewhat inevitable. It is indeed inevitable, but its shape and effects aren't. If we could start to collectively question the otherwise invisible neoliberal assumptions that underpin most of our society, I am pretty sure that we would find better ways to manage, or even exploit, the inevitable inequality that comes with complex social organisations.

There are, in the public discourse, some voices able to challenge said assumptions, but again, most of them seem to forget a couple of details. On one hand, all of the public voices who are able to challenge the established order can do so because of the privileges that such an order confers on them. On the other, most of these voices tend to work on the assumption that inequality is bad and should be eliminated. Too bad that such a vision doesn't work: it is indefensible. Take an idealised society where everyone is equal. How does it look? It doesn't, that's how. It is not possible. Not even hunter-gatherers have perfectly flat societies: elders have privileges, good looks confer unearned status, and so forth. If a society allows specialisation – and specialisation is required in every technology-based society – different specialisations will confer different opportunities. These differences will propagate along kinship and relational networks (if I know about computers, my kids will probably grow up familiar with computers, etc.). Thus, no social organisation can be based on absolute equality: organising entails unequal distribution of opportunities.

Once again, this is all obvious to the point of being boring, but it needs to be tackled head-on. Doing so paints the problem of inequality in a different light: our problem is not the existence of inequality. Our problem is twofold: excessive accumulation and misuse. On one side, there simply is too much inequality. On the other, if we don't accept that privilege can be useful, we can easily fail to reap its potential benefits. As a result, too much privilege goes to waste. That, I'd argue, is the actual problem that we face.


According to Oxfam:

The 1,810 dollar billionaires on the 2016 Forbes list, 89% of whom are men, own $6.5 trillion – as much wealth as the bottom 70% of humanity.

I would guess that roughly 99% of the privileges that such wealth could confer are going to waste. It is quite fair to assume that most of these people spend their days trying to maximise their wealth. After all, in a capitalist society, this is what they are supposed to be doing. [The Forbes Billionaires list, used to collect the figures above, makes it perfectly clear: in the link above, the list is preceded by a "Today's Winners & Losers" section, emphasising our inclination to frame such matters in competitive terms.]
Some of these people do spend some of their time trying to facilitate progress, that much is true. But is it enough? Nah – it can't be: billionaires have disproportionate amounts of power, and very few constraints. At the very least, their concerns will be biased by their own position and cannot possibly be diverse enough to approximately reflect the needs and desires of humanity as a whole.


If we don’t recognise that privilege is supposed to be useful (for society as a whole, not just the privileged!), we can (and do) encourage people to waste it. We return to Cape Town. What good are enormous mansions used a few weeks per year, for the holidays of some billionaire? None or almost none. What does society gain by building enormous yachts that sail across the world to be used in the week-end (and be reached by private jet)? Close to nothing. Who needs the latest Ferrari? Nobody. What does society gain from the existence of few individuals who have enough spending power to buy entire countries?

I think and hope that this state of affairs is not necessary or inevitable. What facilitates it is a collection of self-serving ideas: that privilege is earned, that the act of earning it guarantees something useful was done in the process, and that therefore privilege can be spent/used without second thoughts.

I disagree. Privilege is usually the result of luck (95%, or thereabouts). Moreover, as Darwin's encounter with barnacles testifies, privilege should be used, not squandered. Thus, we get two streams of consequences, and an overarching question.

The question is: when is inequality too much? If some privilege is both inevitable and potentially useful, it follows that there probably exists a Goldilocks zone for inequality. Enough to allow an adequate number of people to take risks, write novels, play music, invest in apparently idle intellectual quests, etc., but not much more than that.

[Note: it would be wrong to assume that such a Goldilocks inequality zone is fixed. It is probably a function of how society is organised and of general availability of resources. It would be equally wrong to assume that such an ideal status always correlates with periods of high cultural or technological advancement. High inequality will generally allow some people to pursue intellectual endeavours; however, the results might be visible even when an unnecessarily high amount of privilege goes to waste.]

Consequences: global.

Forbes’ figures leave little room for doubt. This article (by David Leonhardt) tells us that it’s getting worse. If there isn’t too much inequality right now (there is, but let’s pretend) and the trend isn’t inverted, there soon will be. Thus, we reach the predictable conclusion that we need to invert the trend. That’s OK, only problem is that we don’t know how to do it. To be fair, we have plenty of reasons to worry that we can’t: the current political developments suggests that we are actually busy electing politicians who are determined to facilitate the rise of inequality (even when they are not directly funded by those who want to protect existing privileges)… If something is to be done, I’d start by NOT electing such people.

Consequences: personal.

What is a little less predictable is what all of this means for you and me. If you are reading this, you have an internet connection and some time to read. Thus, almost without exception, every person reading this article probably has some spare time and some mental energy to invest – I do, clearly… The question is: how are we using our spare resources? Are we allowing our privilege to be wasted? Probably.

Thus, we reconnect with my previous post. I am incredibly lucky. I am trying to use my luck: I have the presumption that I am not bad at thinking, therefore I try to do just that. I think (and write), in my own idiosyncratic way.
Is it enough? Probably not, but at least I’m trying.

Posted in Evolution, Philosophy, Politics, Science

Naïve philosophaster: on method and privilege

In which I start counting my privileges and reflect on how I am trying to use them.

That’s correct, I did write “use [my privileges]”. This expression is questionable and relies on important implicit assumptions. In fact, one purpose of this post is to allow writing the follow-up, where I’ll unpack these assumptions and discuss them explicitly. My overarching aim is to explain why I think that privileges should be kept under control, but used, not eliminated. However, before tackling the core argument, I think I need to spend some time looking at my own circumstances.

The Cathedral of Learning. Intimidating and self-referential? Image by Thomas J. Grant (CC BY-SA 3.0)

A few years ago I wrote a self-describing blurb for my Twitter profile. The full text is:

Former (molecular) neurobiologist, now software developer. Science junkie, evidence seeker, naïve philosophaster, music lover.

With so little space at my disposal, it still surprises me how long this text has lasted, and how comfortable I still am with it. It is ageing well. Most of it is factual: it describes what I do, where I come from and what makes me tick. The odd element is in the title of this post: naïve philosophaster. This expression is my attempt to capture the serendipitous method that I found myself following, which in turn screams of privilege. I find this circumstance interesting and problematic. I also think that it can form the basis of a wider set of ideas, giving me an excuse to pretend that what follows is not only about me, me, me[1].

My blessings.

I was born white, male, heterosexual, into a middle-class, intellectual and caring family (if a somewhat troubled one). Financial worries were limited to the long term: food, clothes and shelter were not a pressing or recurring concern. I arrived into this world as a European, just in time to fit into the last generations for whom it was normal to end up better off than their parents. I could go through university without needing to work, and could therefore complete my studies in reasonable time while also cultivating extra-curricular interests. As a result, I have never been unemployed, not even for a day.

How lucky is that? Considering luck alone, I must be among the top 0.0001% of individuals that have ever roamed the planet. That's right: remember the rhetoric about the 1%? Forget it. I got it much, much better than that. The only thing that isn't quite perfect about the situation I was born into comes from a good dose of family trouble in my childhood. I was not a happy child, but luckily(!), I was not scarred for life. What the early trouble left me with is some awareness of what trauma can do to people, no matter how privileged. I also learned that logic and rationality are never the main drivers of human actions.

Overall, on a global scale, considering the whole history of humanity, virtually no-one has had it better than me.

I have an extremely satisfying job, which allows me to put the bulk of my mental energies into something that is very clearly contributing to the common good. I am not rich in the 1% sense, not if the 1% is limited to the Western block, but I don't need to worry about pennies either. That's one of the ways in which I can use my rare set of privileges: I could perhaps find a better-paying job, but I would have to compromise on the perceived utility of the job itself, so I choose not to even look. But is it enough? No, I don't think so. The additional bit is implied by what I mean by "naïve philosophaster".

The value of naïvety.

My natural inclinations are quite intellectual. I'm at home amongst abstractions, and when I'm facing some difficulty, my instinctive reaction is to sit down and think. Not having kids, I get to enjoy some extra time, and when I'm lucky(!), I may even have some spare brainpower. It took me almost 40 years to realise that my lucubrations might be (with some luck!) somewhat valuable. That realisation left me facing the question: OK, so how should I invest my spare mental energies? I wanted something that wouldn't drain me (the real job should remain my priority), but would hopefully complement my professional persona. This blog is the result, and naïve philosophastry is how I do it.

Some of the "how" comes by deliberate choice, some of it from luck(!) or serendipity. If the question was "how should I spend the spare resources that I have?", then "build on your strengths" was my not entirely conscious answer. I'm good at grasping the big picture (AKA: I get quickly bored if I have to look at minute details!), I know something about biology, computers, neuro- and cognitive-science, and I also have an interest in politics (in case you didn't notice!). Given these "strengths", it came naturally to me to try to use them by thinking and writing. The hard part was accepting that my thinking, no matter how imperfect, needs to be made public, if (and only if) the hope is that it may be useful[2].

The serendipitous part is the method I have apparently settled into. It goes like this: I find a problem that interests me. I allow and encourage myself to think and read about it in whichever way happens to be possible and relatively easy. Then I sit down and write my thoughts. This forces me to construct some kind of discourse that appears to be sufficiently coherent. If such coherence fails to materialise, I can supplement with some extra reading. However, at this stage I normally stop as soon as I manage to put some order in my thoughts. After doing so, I publish the result here and/or in whichever medium appears to be suitable (mostly here!).

This solidifies my starting position, and comes with the important side effect that it hopefully allows some originality to slip in. At this stage, I can and usually do notice weak spots in my reasoning; I might also naturally grow an interest in points of view that appear to challenge it – with luck(!) I might even receive valuable criticism and feedback. This is where (hopefully) my naïvety starts decreasing. More reading happens, directed by what I perceive as gaps or weaknesses in my own position. I can then iterate: starting from a little less ignorance, see which subset of my original ideas still seems to make sense and repeat the process until boredom supervenes[3].

Why am I inflicting this on my readers? Because what I am doing seems to work, at least in the sense that it is not guaranteed to be a waste of time[4]. The trouble is, once again, that I can do what I do only because of luck. I'm a philosophaster because I don't do philosophy professionally. However, this circumstance allows me to do philosophy a little differently, and thus comes with the hope of doing it, in some very limited respect, better (ugh!).

Professional philosophers, like most/all academics, don't normally have the privilege of indulging in their own idiosyncratic thought processes. In order to make a living, they ought to start by securing their BA, which mostly consists in eliminating every visible naïvety. They then have to get a PhD, which requires building some original thoughts, but on the mandatory condition of demonstrating that such thoughts are grounded in a thorough understanding of the pre-existing ones. Publishing papers and books then follows the same pattern: there simply isn't any room for naïvety (a side effect is, I fear, some promotion of intellectual dishonesty: admitting that one's reading/understanding doesn't cover every possible aspect of a given topic is academically unacceptable – but, alas, some ignorance is, IMO, inevitable). I know that the official approach does make sense: it prevents people from producing the same old ideas (or mistakes) over and over again. However, it also creates new problems. To earn the right to be taken seriously, modern-day philosophers need to immerse themselves in pre-existing frameworks. They have to accept and build upon one paradigm or another (in most cases). This inevitably has two undesirable consequences:

  1. It allows people to concentrate on underdeveloped corners of a given framework, providing a seemingly endless supply of low-gain, low-risk routes to securing a career in the field.
  2. By promoting hyper-specialisation, current expectations also constrain and stifle the appearance of big-picture, out-of-the-box, paradigm-changing new ideas.

In other words, I do think that professional philosophy is indeed disproportionately geared towards promoting the study of Chmess. Luckily(!), since I earn my money elsewhere, I have the possibility of playing the naïvety game, which neatly side-steps what I perceive as the most common pitfalls of professional philosophy.

This is not to say that naïvety is a virtue; it comes with obvious drawbacks – in fairness, my approach only makes sense in light of how it differs from the mainstream – it is somewhat parasitic. In terms of drawbacks, I am certainly maximising my chances of wasting time by reproducing unoriginal ideas or mistakes. I can afford this risk: if I have fun in the process, no real waste is involved. A secondary risk is wasting the time of my tiny readership: on this, I am selfishly happy to let you take your chances!
Moreover, I make my mistakes in public and I visibly refuse to tick most or all of the official "seriousness" boxes. In fact, I am now advertising my refusal. By doing so, I am lowering my chances of being taken seriously. This may become a problem if I stumble on an idea that is both valuable and new. Given the low probability of such an eventuality, I guess I can afford to leave this problem unsolved, for now, even if I do harbour the worry that my current method is self-defeating, for sociological reasons.


I am lucky to a point that defies comprehension. Even in my attempt to use my luck in a productive way I end up exploiting it even more. Naïve philosophastry certainly isn't the best or most parsimonious method around; however, it suits my situation, comes effortlessly to me and ensures I have fun along the way. At the same time, it allows me to sidestep some of the major flaws of the mainstream method (in my perception). Thus, I exploit my privilege by being a Naïve Philosophaster. What remains to be seen is whether I should acknowledge my self-serving biases and do something radically different.
Instead of exploiting my privileges, shouldn't I renounce them? It seems logical. My excuse for not doing so will come in the next post. Stay tuned, and please do feel free to have a go at me in the meantime.

Notes and Bibliography:

[1] Being about me, writing this post has been harder than usual. I want to thank some of my Twitter friends (Stuart Boardman, Paul Harland and Abeba Birhane) for their kind and useful pre-publication feedback, as well as for collectively providing the courage to click “Publish”.

[2] “Useful” is left underspecified on purpose. I don’t need to know exactly how my ideas might be useful. They may make someone look at their own beliefs in a new light, spark some debate, help a student with some coursework, inspire a new train of thoughts, expose a mistake, etcetera. In all cases, even the ones I can’t imagine, for an idea to be useful to anyone but me, it needs to be available outside my own head.

[3] Boredom is an involuntary heuristic that (hopefully) signals diminishing returns. Once I stall and fail to detect the potential for significant progress, boredom automatically steps in, forcing me to move on. This is also where I think the process risks failing: I may, and probably do, fail to follow up; once my ideas feel settled and well informed, I might lose interest before writing down the result of the whole charade.

[4] I do know that some of my posts here are (or have been) used in university courses; moreover, my blogging experience is helping me at work. Thus, I do have some reason to believe that what I'm doing is not entirely wrong.


Dennett, D. C. (2006). Higher-order truths about chmess. Topoi, 25(1), 39-41.

Posted in Ethics, Philosophy, Premises

Predictive Processing: the long road ahead.

In the previous posts in this series I've proposed an extreme synthesis of the Predictive Processing (PP) idea, as proposed by Andy Clark in "Surfing Uncertainty". I concluded with a post that summarised why I think PP is the most promising idea currently on offer in the entire field of neuroscience. In this post I will do the opposite: exciting and extremely powerful ideas should never go unchallenged. Thus, I will produce a short list of what I see as the main problems that PP either fails to solve or even generates of its own accord.

Audience: who is this post for?

If PP is true, why so many different neural structures? Image by Thomas Schultz. CC BY-SA 3.0

This post is significantly different from the previous ones in the series. Previously, I tried to summarise my understanding of the PP framework. First of all, I wanted to check whether my understanding was good enough, at least according to my own standards(!): by trying to put together a decent summary I forced myself to see if the picture fitted together and covered enough ground. Secondarily, I thought this exercise could be useful to newcomers. PP isn't exactly the most approachable framework. Thus, I was (/am) hoping that my effort could double up as a useful introduction to PP; at the very least, it could help readers decide whether and how PP is worth deeper scrutiny. Having done the above, however imperfectly, it's time to change gear and move on to criticism. Once again, this helps me understand what I should look out for: a neat list might direct my future readings, based on their potential to address what I think are the most important shortcomings and/or gaps in the PP story.

In terms of audience, this means that my ideal reader has changed. I would like to receive scrutiny and counter-criticism from people who are already invested in the PP framework. In return, my list might help PP specialists to see their topic from a fresh perspective, which may be useful to spot weak points (if I’m doing it right) and/or areas that require more accessible explanations (if I’m not!).

Method: what do I think I’m doing?

Given my high ambitions, it's worth adding some warnings, in the form of an explicit admission of why what follows is necessarily far from ideal. I write here because I enjoy it, but I have a quite demanding day job, which has nothing to do with neuroscience or PP itself. Thus, I cannot, nor do I wish to, systematically read most or all of the literature on the subject. What I do is approach the topic with a flâneurish attitude: I do actively look for things to read, but only invest my limited spare time in reading what happens to attract my attention, for whatever reason.

As a consequence, I expect that many of the points I’ll address below have been raised before, and that many possible solutions have been proposed already. What I will mention is selected (out of a longer list) either because I think that a given issue really needs to be addressed as soon as possible (and in great detail) or because I think that there is no available consensus on the possible “solutions”. In both cases, I might be wrong, in which case I would greatly appreciate some feedback (specifically: pointers about what I should read next!).

Predictive Processing: a list of issues and upcoming challenges.

I will list my criticisms in the shortest manner I am capable of, but I will try to explain at least why I think each question should count as a genuine problem. I do plan to expand on several or all points in follow-up posts. The following list is roughly organised from the more technical/specific to the more general/conceptual.

1. Does PP require filtering and partitioning?

If PP layers specialise in predicting certain kinds of features, does this require filtering incoming sensory streams and segregating the results of different filters along separate PP pathways? Starting from the periphery, PP states that there must exist a "Level-0" which receives "a prediction" from Level-1 and matches it against the raw sensory input. Let's imagine that Level-1 specialises in predicting the direction of edges/lines/segments in the visual field (to make my point understandable – in fairness, any kind of feature might be the specific concern). Even if the prediction is 100% accurate, a lot of the original signal will not be predicted, for Level-1 only deals with a limited set of features; hence, most of the original input to Level-0 will always travel up to Level-1. In PP terms, this would/should count as an Error Signal (ES). However, if the job of Level-1 is to deal with edges/lines/segments alone, the signal it receives from Level-0 will never consist of errors alone. Level-1 will therefore need to be able to discern between residual sensory input that could not have been predicted (at this level) and the error signal that results from wrong predictions.
This simple observation calls for an additional element: at Level-0 or Level-1, some filtering mechanism, on top of vanilla PP, is required. This filtering could be used to distinguish the ES from the residual sensory signal. Alternatively, the filtering may happen upstream, ensuring each level receives only the kind of signal that fits its particular role. Moreover, what is filtered out at one level needs to be directed to some different PP unit.

Thus, we end with:

  1. At least one additional mechanism, dedicated to filtering. More importantly, different filters will apply at different levels and PP units; thus, at each/most/many levels, different filters are likely to entail very different neural architectures.
  2. Re-routing of signals so as to apply different filters to the same raw signal, leading to parallel PP pathways as well as instances of level skipping, where a particular filtered signal jumps one or more levels to serve as input at the appropriate layer.
  3. If I'm getting this right, it is then possible that an additional system is required to reconstruct the bigger picture, once different features have been segmented and predicted (or not).

In other words, I don’t think that the purely perceptual part of PP, as proposed/synthesised by Clark, is complete – some additional element is missing.
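To make the worry concrete, here is a toy sketch in Python. Everything in it is an illustrative assumption of mine (the "edge" and "texture" components, the array sizes, the numbers), not part of Clark's proposal: it merely shows that, in vanilla PP, even a perfect Level-1 prediction leaves an upward signal made of pure unmodelled residue, which is formally indistinguishable from a genuine misprediction.

```python
import numpy as np

def level0_upward_signal(raw_input, prediction):
    """Vanilla PP at the lowest level: the upward signal is input minus prediction."""
    return raw_input - prediction

# Toy sensory input: an "edge" component (the feature Level-1 models)
# plus a "texture" component (a feature outside Level-1's remit).
rng = np.random.default_rng(0)
edge = np.sin(np.linspace(0.0, np.pi, 8))   # the feature Level-1 predicts
texture = rng.normal(0.0, 0.3, 8)           # unmodelled residue

raw_input = edge + texture
perfect_prediction = edge                   # Level-1 gets its feature exactly right

error_signal = level0_upward_signal(raw_input, perfect_prediction)

# The "error" is non-zero despite the perfect prediction: it is exactly the
# unmodelled texture. Nothing in the signal itself marks it as residue
# rather than misprediction – hence the need for some extra filtering.
assert np.allclose(error_signal, texture)
```

The final assertion is the whole point: the upward signal equals the unmodelled residue even when nothing was mispredicted, so a filter (or a re-routing scheme) is needed to tell the two apart.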

2. Attention.

The account of attention proposed by PP is remarkably elegant, extremely powerful, and strikingly comprehensive. Attention is very hard to pinpoint, due to its dual (or multifaceted) nature. To start with, attentional mechanisms can be triggered both bottom-up (a sudden noise) and top-down (trying to decipher the handwriting of a medical practitioner). At first sight, precision weighting (PW) accounts for this dichotomous triggering, which is one reason why PP looks so promising. However, I do not think that PW alone can account for all the observable phenomena – from my own (perhaps idiosyncratic) point of view, both the bottom-up and top-down stories seem incomplete, at best.

Bottom-up: a sudden and unpredicted loud bang is certainly able to generate bottom-up attention. The problem is: attention to what? Not the noise itself: being sudden and brief, by the time the attention mechanism is fully activated, the noise has probably already ceased. In fact, what happens is that our attention is diverted towards the rough spatial location where we estimate the noise originated. This makes adaptive sense, but, as far as I can tell, nothing in the proposed PP mechanisms explains how. Yes, a big ES was received (we failed to predict the sudden noise), but somehow attention then becomes focused on multiple channels, directed to specific features of those channels, and perhaps won't even involve the channel where the original strong ES was generated.
Top-down: similarly, if I'm trying to decipher my doctor's hasty scribbles, PP suggests that I do so by enhancing some error signal (requiring more time and effort to minimise it). Once again, the obvious question is: how does my brain decide which error signal should be amplified? In this particular case, it will involve a relatively high (conceptual) layer in the visual pathway, but most definitely not the whole visual processing hierarchy. For example, detecting the exact hue of the scribbles isn't very relevant to the task (it doesn't matter much whether they were made with a blue or a black pen), and motion detectors (predictors) won't be very useful in this particular case. It follows that attention needs to be able to focus not only on specific areas of the receptive field (in this case, specific parts of the visual "scene") but also on particular (context-dependent) layers in the PP pathway. Simply enhancing the strength of the error signal along the visual pathway (which is what is possible according to the vanilla interpretation of PP and PW) is very obviously not enough. We need to be able to enhance a specific error signal, identified in a coordinate space which is at least three-dimensional. The architecture of the visual pathway may allow enhancing only a particular area of the visual field; this would account for two dimensions, leaving the third (what kind of feature attention is supposed to focus on) unaccounted for.
Once again, it seems to me that some fundamental ingredient is missing. It is possible that this ingredient is identical or closely related to the one I’ve identified in the previous section.
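A minimal sketch of precision weighting may help here. All names and numbers below are my own illustrative assumptions: the point is only that a scalar precision acts as a gain on an already-separated error channel, so PW can select between channels but cannot, by itself, pick out a feature within one.

```python
import numpy as np

def weighted_error(observed, predicted, precision):
    """Vanilla precision weighting: scale the raw error by its estimated reliability."""
    return precision * (observed - predicted)

# Two hypothetical error channels for the scribble-reading task:
# letter shape (task-relevant) and ink hue (task-irrelevant).
shape_residual = np.array([0.9, 0.4, 0.7])  # arbitrary residuals
hue_residual = np.array([0.8, 0.7, 0.9])

# Top-down attention as precision: turn up the gain on shape, down on hue.
shape_error = weighted_error(shape_residual, 0.0, precision=2.0)
hue_error = weighted_error(hue_residual, 0.0, precision=0.1)

# PW selects *which channel* dominates downstream processing. The hidden
# assumption: "shape" and "hue" must already live on separate channels.
# Within a single mixed channel, one scalar gain cannot single out the
# task-relevant feature – the third, unaccounted-for dimension.
```

The design choice worth noticing is that all the selectivity lives in how the channels were carved up beforehand, which is exactly the filtering/partitioning worry of point 1.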

3. Heterogeneity.

PP describes a general processing style which is supposedly able to account for most of our mental abilities: from perception (of internal and external states), to attention, sense-making, planning and action control. Very little does not fit in. This is one reason why PP is so attractive, but it comes at a cost. If the same processing style is deployed across more or less all brain functions, the variability of neural structures within the brain becomes an anomaly that requires a PP-specific explanation. As far as I can tell, this explanation is currently only sketched. If my first worry (above) is justified, I suspect that whatever counts as a solution to filtering and partitioning might end up accounting for the various specialised structures that are particularly optimised for filtering and/or predicting specific features of incoming signals.

4. Development.

Once again, the versatility of the PP story generates its own new questions. If PP circuits (whatever they turn out to be) are able to handle most brain functions, it follows that to enhance the abilities of a given brain, what is needed is simply more of the same. This is not what is observed during the development of brains (synaptic pruning, anyone?). There is a lot that needs to be explained in terms of how different structures develop in a PP-specific way. Once this is done, one also needs to explain related phenomena such as the repurposing of specialised areas (reading, for example) and proceed to figuring out how specialised areas change in size following training, exercise or disuse. Perhaps this is also where synaesthesia enters the picture.

5. Learning.

What specific systems allow the PP circuitry to adjust predictions in order to learn how to quash error signals? It seems to me that PP should be enriched with one or more hypotheses tackling how, given ErrorA (corresponding to PredictionA), the layer that received it will produce a new PredictionA1 which better reduces the ES. This is an extremely complicated business. To start with, PredictionA and ErrorA might both contain clues on how PredictionA should be modified, but other clues could be present in virtually any other processing layer. Presumably, the brain has some system for fishing out the relevant information, but nothing in PP gave me even a glimpse of what such a mechanism might be. Timing issues also get in the way: by the time PredictionA1 is issued, new sensory input will have been generated, making it even harder to produce the right prediction for the new (still-to-be-assessed) situation. If a prediction is badly wrong, how does the brain get back on track, instead of getting progressively more wrong?
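Clark's presentation stays mostly silent on the update rule, but the predictive-coding literature PP draws on typically treats the update as gradient descent on squared error. A toy sketch under that assumption (the learning rate, drift and iteration counts are all arbitrary choices of mine) also illustrates the timing worry: a prediction that chases a moving target never catches it.

```python
def update_prediction(prediction, observation, lr=0.2):
    """One gradient step on squared error E = 0.5 * (obs - pred)**2."""
    error = observation - prediction      # the error signal for this cycle
    return prediction + lr * error        # nudge the prediction to shrink E

# A stationary input is quickly predicted away...
pred = 0.0
for _ in range(30):
    pred = update_prediction(pred, observation=1.0)
# pred is now very close to 1.0: the error has been quashed.

# ...but a drifting input is never caught: each update chases the previous
# observation, so the prediction settles into trailing the signal by a
# constant gap rather than ever matching it.
lagged, signal = 0.0, 0.0
for _ in range(200):
    signal += 0.1                         # the world moves on every cycle
    lagged = update_prediction(lagged, signal)
```

Under these assumptions the steady-state gap is drift × (1 − lr) / lr (here 0.4): a hint of why timing, and not just the update rule, needs its own PP-specific story.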

6. Evolution.

Some elements of the PP story are well placed within a bigger evolutionary outlook. Perhaps too well! The perspective offered by Friston via the deployment of Markov Blankets is in fact able to extend the PP lens all the way back to unicellular organisms… Thus, more new questions emerge! If PP-like mechanisms are possible (or even necessary!) within single cells, what different function were neurons selected for? When/where exactly in the phylogenetic tree did neurons start to organise around error minimisation? Does that coincide with the point where error signals became relegated to signals between neurons? Speaking of which: why should error signals be transmitted exclusively between neurons? Are we sure they don't involve other types of cells?
If PP circuitry is so versatile, what accounts for the vast difference in abilities across different species (and even different individuals – see also points 1, 3, 4 and 5 above)? Looking at humans: what explains our special abilities (for example, the unprecedented specialisations which allow language and mental time travel)? If PP accounts for them, does that mean PP circuitry is absent in organisms that show no trace of such abilities? If it does not, what additional ingredient enables some faculties, especially in humans?

7. Plants, sleep and dreaming.

Sleep and perhaps dreaming seem to be conserved features, present (in variable forms) across most animals, probably even insects. PP proposes an architecture that solves the basic "persistence" problem of all living forms (via the Free Energy Principle – FEP, see conclusion): is PP present in some form in plants too? If not, why not? Assuming we can leave vegetables aside, can we expect PP to be present across most animals? If we can, should we conclude that sleep and dreaming correlate with PP? In that case, what is the relation supposed to be? Does PP itself produce the requirement for sleeping and dreaming? How? If not, why not?
[In fairness, Clark does address some of the questions above in his book. As far as I’m concerned, I would guess that learning, sleep and dreaming will eventually be accounted for by one single “solution”, see my ancient thoughts, to learn why.]

8. Pain and pleasure.

If we are describing the signal exchanges to/from and within brains, I would expect any such account to somehow cover pain and pleasure signals. In PP, perhaps pain can be conceptualised as an error signal that refuses to be predicted away (thus being well placed to attract our conscious attention). This idea seems promising to me (it would imply some structural hyper-prior, forcing pain "errors" to never be fully predicted). If so, how does PP allow for such a "systematic" inability to predict something? Especially with chronic pain, such predictions should be quite easy to produce! Even if this idea is on track, how do we explain pleasure? It can't simply be the opposite: a signal that always gets predicted away. That's because we are all quite good at giving attention to pleasurable sensations… In other words, I can't see how PP can directly account for the special qualities of painful and pleasant stimuli, or even start explaining what distinguishes one from the other.

9. Consciousness.

This issue follows from the point above. It doesn't seem that PP itself is able to account for the phenomenal element of consciousness (the "what is it like" aspect, or phenomenal consciousness – PC). Once a brain has produced a good-enough global prediction of the last train of inputs, what exactly determines which features we consciously perceive and which we don't? How does attention influence what we are conscious of? What constitutes the undesirability of painful sensations? What makes pleasure desirable? Are all PP-based systems conscious? If not, what additional system produces PC? What accounts for the loss of PC during dreamless sleep?
In short, PP appears to remain solidly outside the scope of Chalmers' Hard Problem of consciousness. This is perhaps the biggest problem that I see. If PP explains perception (but does it? If PP does not include an explanation of why we perceive some things and not others, does it account for perception at all?), attention, mental time travel, planning and action, but in no way accounts for PC, what function does PC fulfil? If accepting PP entails epiphenomenalism, then, as far as I'm concerned, PP must be nonsensical, pretty much like epiphenomenalism itself.


The list above is incomplete. It took me a very long time to write this post, partly because I had to find a way to organise my thoughts and establish some reasonable criteria for deciding what could be left out. The biggest omission concerns the Free Energy Principle. This is because criticising the FEP requires a full book; it cannot be done in a few lines. Secondarily, such criticism might be aimed at too broad a target, and thus fail to be constructive. [For the gluttons: I've covered the brightest side of the FEP here, while some hints of criticism are in this discussion.]

Overall, it seems pretty obvious that PP, as a theoretical framework (and/or, depending on your preferences, a scientific paradigm or programme), is far from complete. This is expected and entirely justified. As anyone with some familiarity with the history of science should know, new ideas require time to reach maturity; they necessarily start off incomplete, sometimes directly contradicted by pre-existing evidence, and not necessarily self-consistent either. That's normal. Thus, this post is not intended to curb our enthusiasm; it is intended to focus it in (hopefully) useful ways. My quasi-arbitrary critique above might help direct our attention in interesting directions. Or at least, it might help me: I will appreciate all feedback, and in particular reading suggestions in response to any of the points raised here. Thank you!

Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford Scholarship. DOI: 10.1093/acprof:oso/9780190217013.003.0011

Posted in Neuroscience, Philosophy

All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Creative Commons Licence
Please feel free to re-use and adapt. I would appreciate if you'll let me know about any reuse, you may do so via twitter or the comments section. Thanks!
