Inequality and progress, privilege and barnacles

I’ve recently been to Cape Town, for work. It is impossible for someone like me to visit South Africa without asking “What am I doing with my privileges?” (see previous post). The scale of abject poverty would be overwhelming in itself, but in Cape Town it is paired with unapologetic displays of eye-watering, heavily-guarded wealth. To my eyes, Cape Town is the embodiment of everything that’s wrong with our current world, even if I’m told that Johannesburg is worse.

Darwin and the Barnacle: The Story of One Tiny Creature and History’s Most Spectacular Scientific Breakthrough. By Rebecca Stott.

Also recently, I've read a delightful book: Darwin and the Barnacle, by Rebecca Stott. This little gem tells the story of what Darwin was up to after conceiving the idea of evolution by natural selection but before writing the Origin. It turns out he dedicated about eight years of his life to the classification of barnacles (Cirripedia). Why? Because he was a clever chap, and, in my own interpretation, he wanted to achieve two primary objectives. First, he needed to establish himself as a highly respected zoologist – he knew very well that a solid reputation was necessary to ensure his big idea would get a fair hearing. Second, he wanted to see how his idea on the origin of species would influence the meticulous work of classification.

It is impossible for me to read this book without noticing how much privilege Darwin enjoyed; at the same time, one cannot possibly miss how well he used it.

The picture of Darwin that Stott paints is that of a wealthy individual who could spend all his time on something as unproductive as cataloguing barnacles. He earned a living by investing in land and stocks, after receiving an education at his family's expense. Moreover, his work was dependent on a world-wide network of fellow zoologists, most of whom shared a similar lifestyle. Naturally, this network was possible only because of the postal system, which in turn was kept together by the systematic exploitation on which colonial empires were built. If that wasn't enough, the first chapters of the book also provide an insight into the cultural milieu in which Darwin conceived his idea. Besides famous predecessors like Lamarck, Stott mentions Darwin's university teachers and peers, their own influences, and so forth. All white men (most likely), all wealthy enough to pursue intellectual endeavours which afforded little or no obvious economic return, spanning generations. In other words, it is impossible to read Darwin and the Barnacle without gaining an insight into how much privilege and inequality have been necessary to allow one of the most important scientific breakthroughs in the history of humanity. Take away Darwin's privileges and the global system of inequality on which the British Empire (along with the other colonial powers) was built, and Darwin would not have been able to do all the groundwork which secured his standing and thus ensured his ground-breaking book could not pass unnoticed. Take away the similar privilege of the countless thinkers who influenced him, and you can safely bet that Darwin could not have conceived the idea of Natural Selection at all.

What this tells us (Stott’s writing is good enough to let you feel it in your bones) is that, in the world as it was organised back then, inequality enabled discovery and (scientific/technological) progress. I have little doubt the argument can stretch back for countless centuries, so I’ll leave it to my readers to look into their preferred breakthrough and see if privilege and inequality played an enabling part.

The result is as distasteful as it is obvious. Intellectual progress requires time to think. Time to collect and exchange ideas, time for debate, time spent studying, collecting or generating evidence, and so forth. Crucially, it requires time to develop gargantuan numbers of new ideas, most of which will turn out to be inadequate (remember Lamarck, to stay on topic), but will nevertheless enable further improvements. Making mistakes (and lots of them!) is how we learn most of the important lessons. Ground-breaking thinking requires being able to make mistakes safely, without risking too much of one's livelihood, at least. None of the above can be done in significant amounts if everyone has to invest most of their time actively earning a living. Moving to the present world, people like me can spend a lot of time on exactly this kind of work, and it is undeniable that I enjoy an amount of privilege that could probably make Darwin himself green with envy. Most people in academia are paid to study the stuff they love; how can you beat that?

In short: if we want progress, we need some privilege and inequality. Ugh.

Not nice, but not groundbreaking either. We all knew it already, right? Perhaps, but I get the feeling that we are happy to ignore this shared knowledge. First of all, most of us feel that we’ve earned our current standing. I know I do. However, this feeling is wrong, but not just a little wrong: it is positively delusional, as summarised in this comic strip, by Toby Morris (this is required reading!).

Moreover, inequality is increasingly prominent in public discourse: scholars and commentators agree that inequality is growing in uncontrollable and dangerous ways. However, I fear that the situation is frequently discussed superficially. Reading the news, I get the impression that inequality is treated as somewhat inevitable. It is indeed inevitable, but its shape and effects aren't. If we could start to collectively question the otherwise invisible neoliberal assumptions that underpin most of our society, I am pretty sure that we would find better ways to manage, or even exploit, the inevitable inequality that comes with complex social organisations. There are, in the public discourse, some voices who are able to challenge said assumptions, but again, most of them seem to forget a couple of details. On the one hand, all of the public voices who are able to challenge the established order are able to do so because of the privileges that such order confers on them. On the other, most of these voices tend to work on the assumption that inequality is bad and should be eliminated. Too bad that such a vision doesn't work. It is indefensible. Take an idealised society where everyone is equal. How does it look? It doesn't, that's how. It is not possible. Not even hunter-gatherers have perfectly flat societies. Elders have privileges, good looks confer unearned status, and so forth. If a society allows specialisation, and specialisation is required in every technology-based society, different specialisations will confer different opportunities. These differences will propagate along kinship and relational networks (if I know about computers, my kids will probably grow up familiar with computers, etc.). Thus, no social organisation can be based on absolute equality: organising entails unequal distribution of opportunities.

Once again, this is all obvious to the point of being boring, but it needs to be tackled head-on. Doing so paints the problem of inequality in a different light: our problem is not the existence of inequality. Our problem is twofold: excessive accumulation and misuse. On one side, there simply is too much inequality. On the other, if we don't accept that privilege can be useful, we can easily fail to reap its potential benefits. As a result, too much privilege goes to waste. That is, I'd argue, the actual problem that we face.

Accumulation:

According to Oxfam:

The 1,810 dollar billionaires on the 2016 Forbes list, 89% of whom are men, own $6.5 trillion – as much wealth as the bottom 70% of humanity.

I would guess that roughly 99% of the privileges that such wealth could confer are going to waste. It is quite fair to assume that most of these people spend their days trying to maximise their wealth. After all, in a capitalist society, this is what they are supposed to be doing. [The Forbes' Billionaires list, used to collect the figures above, makes it perfectly clear: in the link above the list is preceded by the "Today's Winners & Losers" section, emphasising our inclination to frame such matters in competitive terms.]
Some of these people do spend some of their time trying to facilitate progress, that much is true. But is it enough? Nah – it can't be: billionaires have disproportionate amounts of power, and very few constraints. At the very least, their concerns will be biased by their own position and cannot possibly be considered diverse enough to approximately reflect the needs and desires of humanity as a whole.

Misuse:

If we don't recognise that privilege is supposed to be useful (for society as a whole, not just the privileged!), we can (and do) encourage people to waste it. We return to Cape Town. What good are enormous mansions used a few weeks per year, for the holidays of some billionaire? None or almost none. What does society gain by building enormous yachts that sail across the world to be used at the weekend (and reached by private jet)? Close to nothing. Who needs the latest Ferrari? Nobody. What does society gain from the existence of a few individuals who have enough spending power to buy entire countries?

I think and hope that this state of affairs is not necessary or inevitable. What facilitates it is a collection of self-serving ideas: that privilege is earned, that the act of earning it guarantees something useful was done in the process, and that therefore privilege can be spent/used without second thoughts.

I disagree. Privilege is usually the result of luck (95%, or thereabouts). Moreover, as Darwin's encounter with barnacles testifies, privilege should be used, not squandered. Thus, we get two streams of consequences, and an overarching question.

The question is: when is inequality too much? If some privilege is both inevitable and potentially useful, it follows that there probably exists a Goldilocks zone for inequality. Enough to allow an adequate number of people to take risks, write novels, play music, invest in apparently idle intellectual quests, etc., but not much more than that.

[Note: it would be wrong to assume that such a Goldilocks inequality zone is fixed. It is probably a function of how society is organised and of general availability of resources. It would be equally wrong to assume that such an ideal status always correlates with periods of high cultural or technological advancement. High inequality will generally allow some people to pursue intellectual endeavours; however, the results might be visible even when an unnecessarily high amount of privilege goes to waste.]

Consequences: global.

Forbes' figures leave little room for doubt. This article (by David Leonhardt) tells us that it's getting worse. If there isn't too much inequality right now (there is, but let's pretend) and the trend isn't inverted, there soon will be. Thus, we reach the predictable conclusion that we need to invert the trend. That's OK; the only problem is that we don't know how to do it. To be fair, we have plenty of reasons to worry that we can't: current political developments suggest that we are actually busy electing politicians who are determined to facilitate the rise of inequality (even when they are not directly funded by those who want to protect existing privileges)… If something is to be done, I'd start by NOT electing such people.

Consequences: personal.

What is a little less predictable is what all of this means for you and me. If you are reading this, you have an internet connection and some time to read. Thus, almost without exception, every person reading this article probably has some spare time and some mental energy to invest – I do, clearly… The question is: how are we using our spare resources? Are we allowing our privilege to be wasted? Probably.

Thus, we reconnect with my previous post. I am incredibly lucky. I am trying to use my luck: I have the presumption that I am not bad at thinking, therefore I try to do just that. I think (and write), in my own idiosyncratic way.
Is it enough? Probably not, but at least I’m trying.

Posted in Evolution, Philosophy, Politics, Science

Naïve philosophaster: on method and privilege

In which I start counting my privileges and reflect on how I am trying to use them.

That's correct, I did write "use [my privileges]". This expression is questionable and relies on important implicit assumptions. In fact, one purpose of this post is to lay the groundwork for the follow-up, where I'll unpack these assumptions and discuss them explicitly. My overarching aim is to explain why I think that privileges should be kept under control, but used, not eliminated. However, before tackling the core argument, I think I need to spend some time looking at my own circumstances.

The Cathedral of Learning. Intimidating and self-referential? Image by Thomas J. Grant (CC BY-SA 3.0)

A few years ago I wrote a self-describing blurb for my Twitter profile. The full text is:

Former (molecular) neurobiologist, now software developer. Science junkie, evidence seeker, naïve philosophaster, music lover.

Given how little space was at my disposal, it still surprises me how long this text has lasted, and how comfortable I still am with it. It is ageing well. Most of it is factual: it describes what I do, where I come from and what makes me tick. The odd element is in the title of this post: naïve philosophaster. This expression is my attempt to capture the serendipitous method that I found myself following, which in turn screams of privilege. I find the circumstance interesting and problematic. I also think that it can form the basis of a wider set of ideas, giving me an excuse to pretend that what follows is not only about me, me, me[1].

My blessings.

I was born white, male, heterosexual, into a middle-class, intellectual and caring family (if somewhat troubled). Financial worries were limited to the long term: food, clothes and shelter were not a pressing or recurring concern. I arrived into this world as a European, just in time to fit into the last generations for which it was normal to end up better off than one's parents. I could go through university without needing to work, and could therefore complete my studies in reasonable time while also cultivating extra-curricular interests. As a result, I have never been unemployed, not even for a day.

How lucky is that? Considering luck alone, I must be among the top 0.0001% of individuals that have ever roamed the planet. That's right, remember the rhetoric about the 1%? Forget it. I got it much, much better than that. The only thing that isn't quite perfect about the situation I was born into comes from a good dose of family trouble in my childhood. I was not a happy child, but luckily(!), I was not scarred for life. What the early trouble left me is some awareness of what trauma can do to people, no matter how privileged. I also learned that logic and rationality are never the main drivers of human actions.

Overall, on a global scale, considering the history of humanity, virtually no-one has had it better than me.

I have an extremely satisfying job, which allows me to put the bulk of my mental energies into something that is very clearly contributing to the common good. I am not rich in the 1% sense, not if the 1% is limited to the Western bloc, but I don't need to worry about pennies either. That's one of the ways in which I can use my rare set of privileges: I could perhaps find a better-paying job, but I would have to compromise the perceived utility of the job itself, so I choose not to even look. But is it enough? No, I don't think so. The additional bit is implied by what I mean by "naïve philosophaster".

The value of naïvety.

My natural inclinations are quite intellectual. I'm at home amongst abstractions, and when I'm facing some difficulty, my instinctive reaction is to sit down and think. Not having kids, I get to enjoy some extra time, and when I'm lucky(!), I may even have some spare brainpower. It took me almost 40 years to realise that my lucubrations might be (with some luck!) somewhat valuable. Having realised it, I found myself facing the question: OK, so how should I invest my spare mental energies? I wanted something that wouldn't drain me (the real job should remain my priority), but would hopefully complement my professional persona. This blog is the result, and naïve philosophastry is how I do it.

Some of the "how" comes by deliberate choice, some of it from luck(!) or serendipity. If the question was "how should I spend the spare resources that I have?", then "build on your strengths" was my not entirely conscious answer. I'm good at grasping the big picture (AKA: I get quickly bored if I have to look at minute details!), I know something about biology, computers, neuro- and cognitive science, and I also have an interest in politics (in case you didn't notice!). Given these "strengths", it came naturally to me to try to use them by thinking and writing. The hard part was to accept that my thinking, no matter how imperfect, needs to be made public, if (and only if) the hope is that it may be useful[2].

The serendipitous part is the method I have apparently settled into. It goes like this: I find a problem that interests me. I allow and encourage myself to think and read about it in whichever way happens to be possible and relatively easy. Then I sit and write down my thoughts. This forces me to construct some kind of discourse which appears to be sufficiently coherent. If such coherence fails to materialise, I can supplement with some extra reading. However, at this stage I would normally stop whenever I manage to put some order into my thoughts. After doing so, I publish the result here and/or in whichever medium appears to be suitable (mostly here!).

This solidifies my starting position, and comes with the important side effect that it hopefully allows some originality to slip in. At this stage, I can and usually do notice weak spots in my reasoning; I might also naturally grow an interest in points of view that appear to challenge it – with luck(!) I might even receive valuable criticism and feedback. This is where (hopefully) my naïvety starts decreasing. More reading happens, directed by what I perceive as gaps or weaknesses in my own position. I can then iterate: starting from a little less ignorance, see what subset of my original ideas still seems to make sense and repeat the process until boredom supervenes[3].

Why am I inflicting this on my readers? Because what I am doing seems to work, at least in the sense that it is not guaranteed to be a waste of time[4]. Trouble is, once again, I can do what I do only because of luck. I'm a philosophaster because I don't do philosophy professionally. However, this circumstance allows me to do philosophy a little differently, and thus comes with the hope of doing it, in some very limited respect, better (ugh!).

Professional philosophers, like most/all academics, don't normally have the privilege of indulging in their own idiosyncratic thought processes. In order to make a living, they ought to start by securing their BA, which mostly consists in eliminating every visible naïvety. They then have to get a PhD, which requires building some original thoughts, but on the mandatory condition of demonstrating that such thoughts are based on a thorough understanding of the pre-existing ones. Publishing papers and books then follows the same pattern: there simply isn't any room for naïvety (a side effect is, I fear, some promotion of intellectual dishonesty: admitting that one's reading/understanding doesn't cover every possible aspect of a given topic is academically unacceptable – but, alas, some ignorance is, IMO, inevitable). I know that the official approach does make sense: it prevents people from producing the same old ideas (or mistakes) over and over again. However, it also creates new problems. To earn the right to be taken seriously, modern-day philosophers need to immerse themselves in pre-existing frameworks. They have to accept and build upon one or the other paradigm (in most cases). This inevitably has two undesirable consequences:

  1. It allows people to concentrate on underdeveloped corners of a given framework, providing a seemingly endless supply of low-gain, low-risk routes to securing a career in the field.
  2. By promoting hyper-specialisation, current expectations also constrain and stifle the appearance of big-picture, or out-of-the-box, paradigm-changing new ideas.

In other words, I do think that professional philosophy is indeed disproportionately geared towards promoting the study of Chmess. Luckily(!), since I earn my money elsewhere, I have the possibility of playing the naïvety game, which neatly side-steps what I perceive as the most common pitfalls of professional philosophy.

This is not to say that naïvety is a virtue; it comes with obvious drawbacks – in fairness, my approach only makes sense in light of how it differs from the mainstream – it is somewhat parasitic. In terms of drawbacks, I am certainly maximising my chances of wasting time by re-producing unoriginal ideas or mistakes. I can afford this risk: if I have fun in the process, no real waste is involved. A secondary risk is wasting the time of my tiny readership: on this, I am selfishly happy to let you take your chances!
Moreover, I make my mistakes in public and I visibly refuse to tick most or all of the official "seriousness" boxes. In fact, I am now advertising my refusal. By doing so, I am lowering my chances of being taken seriously. This may become a problem if I stumble on an idea that is both valuable and new. Given the low probability of such an eventuality, I guess I can afford to leave this problem unsolved, for now, even if I do harbour the worry that my current method is self-defeating, for sociological reasons.

Conclusion

I am lucky to a point that defies comprehension. Even in my attempt to use my luck in a productive way, I end up exploiting it even more. Naïve philosophastry certainly isn't the best or most parsimonious method around; however, it suits my situation, comes effortlessly to me and ensures I have fun along the way. At the same time, it allows me to sidestep some of the major flaws of the mainstream method (in my perception). Thus, I exploit my privilege by being a Naïve Philosophaster. What remains to be seen is whether I should acknowledge my self-serving biases, and do something radically different.
Instead of exploiting my privileges, shouldn’t I renounce them? It seems logical. My excuse for not doing so will come in the next post. Stay tuned, and please do feel free to have a go at me in the meantime.

Notes and Bibliography:

[1] Being about me, writing this post has been harder than usual. I want to thank some of my Twitter friends (Stuart Boardman, Paul Harland and Abeba Birhane) for their kind and useful pre-publication feedback, as well as for collectively providing the courage to click “Publish”.

[2] "Useful" is left underspecified on purpose. I don't need to know exactly how my ideas might be useful. They may make someone look at their own beliefs in a new light, spark some debate, help a student with some coursework, inspire a new train of thought, expose a mistake, etcetera. In all cases, even the ones I can't imagine, for an idea to be useful to anyone but me, it needs to be available outside my own head.

[3] Boredom is an involuntary heuristic system that (hopefully) signals diminishing returns. Once I stall and fail to detect the potential for significant progress, boredom automatically steps in, forcing me to move on. This is also where I think the process risks failing: I may and probably do fail to follow up; once my ideas feel settled and well informed, I might lose interest before writing down the result of the whole charade.

[4] I do know that some of my posts here are (or have been) used in university courses, moreover, my blogging experience is helping me at work. Thus, I do have some reasons to believe that what I’m doing is not entirely wrong.

Bibliography

Dennett, D. C. (2006). Higher-order truths about chmess. Topoi, 25(1), 39-41.

Posted in Ethics, Philosophy, Premises

Predictive Processing: the long road ahead.

In the previous posts in this series I've proposed an extreme synthesis of the Predictive Processing (PP) idea, as proposed by Andy Clark in "Surfing Uncertainty" – I concluded with a post that summarised why I think PP is the most promising idea currently on offer in the entire neuroscience field. In this post I will do the opposite: exciting and extremely powerful ideas should never go unchallenged. Thus, I will produce a short list of what I see as the main problems that PP either fails to solve or even generates of its own accord.

Audience: who is this post for?

If PP is true, why so many different neural structures? Image by Thomas Schultz. CC BY-SA 3.0

This post is significantly different from the previous ones in the series. Previously, I tried to summarise my understanding of the PP framework. First of all, I wanted to check if my understanding was good enough, at least according to my own standards(!): by trying to put together a decent summary I forced myself to see if the picture did fit and if it appeared to cover enough ground. Secondarily, I thought this exercise could be useful to newcomers. PP isn't exactly the most approachable framework. Thus, I was (/am) hoping that my effort could double up as a useful introduction to PP; at the very least, it could help in deciding whether and how PP is worth deeper scrutiny. Having done the above, however imperfectly, it's time to change gear and move on to criticism. Once again, this helps me understand what I should look out for: a neat list might direct my future readings, based on their potential to address what I think are the most important shortcomings and/or gaps in the PP story.

In terms of audience, this means that my ideal reader has changed. I would like to receive scrutiny and counter-criticism from people who are already invested in the PP framework. In return, my list might help PP specialists to see their topic from a fresh perspective, which may be useful to spot weak points (if I’m doing it right) and/or areas that require more accessible explanations (if I’m not!).

Method: what do I think I’m doing?

Given my high ambitions, it's worth also adding some warnings, in the form of an explicit admission of why what follows is necessarily far from ideal. I write here because I enjoy it, but I have a quite demanding day job, which has nothing to do with neuroscience and/or PP itself. Thus, I cannot, nor do I wish to, systematically read most or all of the literature on the subject. What I do is approach the topic with a flâneurish attitude: I do actively look for things to read, but only invest my limited spare time in reading what happens to attract my attention, for whatever reason.

As a consequence, I expect that many of the points I’ll address below have been raised before, and that many possible solutions have been proposed already. What I will mention is selected (out of a longer list) either because I think that a given issue really needs to be addressed as soon as possible (and in great detail) or because I think that there is no available consensus on the possible “solutions”. In both cases, I might be wrong, in which case I would greatly appreciate some feedback (specifically: pointers about what I should read next!).

Predictive Processing: a list of issues and upcoming challenges.

I will list most of my criticism in the shortest manner that I’m capable of. I will try to explain at least why I think a given question should count as a genuine problem. I do plan to expand on several or all points in follow-up posts. The following list is roughly organised from the more technical/specific to the more general/conceptual.

1. Does PP require filtering and partitioning?

If PP layers specialise in predicting certain kinds of features, does this require filtering incoming sensory streams and segregating the results of different filters along separate PP pathways? Starting from the periphery, PP states that there must exist a "Level-0" which receives "a prediction" from Level-1 and matches it with the raw sensory input. Let's imagine that Level-1 specialises in predicting the direction of edges/lines/segments in the visual field (to make my point understandable – in fairness, any kind of feature might be the specific concern). Even if the prediction is 100% accurate, a lot of the original signal will not be predicted, for Level-1 only deals with a limited set of features; hence, most of the original input to Level-0 will always travel up to Level-1. In PP terms, this would/should count as an Error Signal (ES). However, if the job of Level-1 is to deal with edges/lines/segments alone, the signal it receives from Level-0 will never account for errors alone. Level-1 will therefore need to be able to distinguish between residual sensory input that could not have been predicted (at this level) and error signal that results from wrong predictions.
This simple observation calls for an additional element: either at level 0 or level 1, some filtering mechanism, on top of vanilla PP, is required. This filtering could be used to distinguish the ES from the residual sensory signal. Alternatively, the filtering may happen upstream, ensuring each level receives only the kind of signal that fits its particular role. Moreover, what is filtered out at one level needs to be directed to some different PP unit.

Thus, we end with:

  1. At least one additional mechanism, dedicated to filtering. More importantly, different filters will apply at different levels and PP units. Thus, at each/most/many levels, different filters are likely to entail very different neural architectures.
  2. Re-routing of signals so as to apply different filters to the same raw signal, leading to parallel PP as well as instances of level skipping, where a particular filtered signal jumps one or multiple levels to serve as input at the appropriate layer.
  3. If I’m getting this right, it is then possible that an additional system is required to reconstruct the bigger picture, once different features have been segmented and predicted (or not).

In other words, I don’t think that the purely perceptual part of PP, as proposed/synthesised by Clark, is complete – some additional element is missing.
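To make the ambiguity concrete, here is a minimal numerical sketch (in Python, with entirely made-up signals; my own illustration, not anything Clark proposes): a level that only models "edge-like" content receives, from below, an upward signal that mixes genuine prediction error with residual content it never tried to predict, and nothing in that signal itself tells the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensory input: an "edge-like" component (the only thing Level-1
# tries to predict) plus a "texture" component that it never models.
edges = rng.normal(size=100)
texture = rng.normal(size=100)
sensory_input = edges + texture            # what Level-0 receives

# Even a perfect prediction of the edge component...
prediction_from_level_1 = edges.copy()

# ...leaves an upward signal that is not an "error" about edges at all:
upward_signal = sensory_input - prediction_from_level_1

# Here the upward signal is pure residual texture, yet nothing in the
# signal itself tells Level-1 whether it is residual content or a
# genuinely wrong edge prediction.
print(np.allclose(upward_signal, texture))   # True
```

In this toy case the prediction is perfect, so the whole upward signal is residual texture; with an imperfect prediction the two components would simply be summed together, which is exactly why some extra filtering or routing machinery seems to be needed.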

2. Attention.

The account of attention proposed by PP is remarkably elegant, extremely powerful, and strikingly comprehensive. Attention is very hard to pinpoint, due to its dual (or multifaceted) nature. To start with, attentional mechanisms can be triggered in both bottom-up (a sudden noise) and top-down (trying to decipher the handwriting of a medical practitioner) fashions. At first sight, precision weighting (PW) can account for this dichotomous triggering, which is one reason why PP looks so promising. However, I do not think that PW alone can account for all the observable phenomena – from my own (perhaps idiosyncratic) point of view, both the bottom-up and top-down stories seem incomplete, at best.

Bottom-up: a sudden and unpredicted loud bang is certainly able to generate bottom-up attention. Problem is: attention to what? Not the noise itself: being sudden and brief, by the time the attention mechanism has managed to be fully activated, it is likely that the noise has disappeared. In fact, what happens is that our attention is diverted towards the rough spatial location where we have estimated that the noise originated. This makes adaptive sense, but, as far as I can tell, nothing in the proposed PP mechanisms is able to explain how. Yes, a big ES was received (we failed to predict the sudden noise), but somehow, attention then becomes focused on multiple channels, directed to specific features of such channels, and perhaps won’t even involve the channel where the original strong ES was generated.
Top-down: similarly, if I'm trying to decipher my doctor's hasty scribbles, PP suggests that I would do so by enhancing some error signal (requiring more time and effort to be spent minimising it). Once again, the obvious question is: how does my brain decide which error signal should be amplified? In this particular case, it will involve a relatively high (conceptual) layer in the visual pathway, but most definitely not the whole visual processing hierarchy. For example, detecting the exact hue of the scribbles isn't very relevant to the task (it doesn't change much if they were made with a blue or black pen) and motion detectors (predictors) won't be very useful in this particular case. It follows that attention needs to be able to focus not only on specific areas of the receptive field (in this case, specific parts of the visual "scene") but also on particular (context-dependent) layers in the PP pathway. Simply enhancing the strength of the error signal along the visual pathway (which is what is possible according to the vanilla interpretation of PP and PW) is very obviously not enough. We need to be able to enhance a specific error signal, identified in a coordinate space which is at least three-dimensional. The architecture of the visual pathway may allow enhancement of only a particular area of the visual field; this would account for two dimensions, leaving the third (what kind of feature attention is supposed to focus on) unaccounted for.
Once again, it seems to me that some fundamental ingredient is missing. It is possible that this ingredient is identical or closely related to the one I’ve identified in the previous section.

3. Heterogeneity.

PP describes a general processing style which is supposedly able to account for most of our mental abilities: from perception (of internal and external states), to attention, sense-making, planning and action control. Very little does not fit in. This is one reason why PP is so attractive, but it comes at a cost. If the same processing style is deployed across more or less all brain functions, the variability of neural structures within the brain becomes an anomaly that requires a PP-specific explanation. As far as I can tell, this explanation is currently only sketched. If my first worry (above) is justified, I do suspect that whatever counts as a solution to filtering and partitioning might end up accounting for various specialised structures that are particularly optimised for filtering and/or predicting specific features of incoming signals.

4. Development.

Once again, the versatility of the PP story generates its own new questions. If PP circuits (whatever they turn out to be) are able to handle most brain functions, it follows that to enhance the abilities of a given brain, what is needed is simply more of the same. This is not what is observed during the development of brains (synaptic pruning, anyone?). There is a lot that needs to be explained in terms of how different structures develop in a PP-specific way. Once this is done, one also needs to explain related phenomena such as the repurposing of specialised areas (reading, for example) and proceed to figuring out how specialised areas change in size following training, exercise or disuse. Perhaps this is also where synaesthesia enters the picture.

5. Learning.

What specific systems allow the PP circuitry to adjust predictions in order to learn how to quash error signals? It seems to me that PP should be enriched with one or more hypotheses tackling how, given ErrorA (corresponding to PredictionA), the layer that received it will produce a new PredictionA1 which should better reduce the ES. This is an extremely complicated business. To start with, PredictionA and ErrorA might both contain clues on how PredictionA should be modified, but other clues could be present in virtually any other processing layer. Presumably, the brain has some system for fishing out the relevant information, but nothing in PP helped me get a glimpse of what such a mechanism might be. Timing issues also get in the way: by the time PredictionA1 is issued, new sensory input will have been generated, making it even harder to produce the right prediction for the new (still to-be-assessed) situation. If a prediction is badly wrong, how does the brain get back on track, instead of getting it progressively more wrong?
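For what it's worth, the simplest textbook-style move (my own illustration, not something PP itself commits to) is to treat the error as a gradient and nudge the prediction along it, as in the Python toy below; the open questions above are precisely about what, in real brains, plays the role of this update rule and how it copes with input that keeps changing while the update happens.

```python
import numpy as np

def update_prediction(prediction, sensory_input, learning_rate=0.1, steps=50):
    """Toy stand-in for whatever revises PredictionA into PredictionA1:
    repeatedly nudge the prediction in the direction of the error signal."""
    for _ in range(steps):
        error = sensory_input - prediction     # the ES for this level
        prediction = prediction + learning_rate * error
    return prediction

stimulus = np.array([1.0, -0.5, 2.0])          # made-up, fixed stimulus
initial_guess = np.zeros(3)                     # a badly wrong starting prediction
print(update_prediction(initial_guess, stimulus).round(3))   # converges towards the stimulus
```

Note that the toy only works because the stimulus stays fixed while the updates run; with a moving target, or a seriously mis-specified starting prediction, the question of how the system gets back on track remains wide open.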

6. Evolution.

Some elements of the PP story are well placed within a bigger evolutionary outlook. Perhaps too well! The perspective offered by Friston via the deployment of Markov Blankets is in fact able to extend the PP lens all the way back to unicellular organisms… Thus, more new questions emerge! If PP-like mechanisms are possible (or even necessary!) within single cells, what different function were neurons selected for? When/where exactly in the phylogenetic tree do neurons start to organise around error minimisation? Does that coincide with the point where error signals get relegated to signals between neurons? Speaking of which: why should error signals be transmitted exclusively between neurons? Are we sure they don’t involve other types of cells?
If PP circuitry is so versatile, what accounts for the vast difference in abilities across different species (and even different individuals  – see also points 1, 3, 4 and 5 above)? Looking at humans: what explains our special abilities (for example, the unprecedented specialisations which allow language and mental time travel)? If PP accounts for it, does it mean that PP circuitry is not present in organisms that show no trace of such abilities? If it does not, what additional ingredient enables some faculties especially in humans?

7. Plants, sleep and dreaming.

Sleep and perhaps dreaming seem to be conserved features, present (in variable forms) across most animals, probably even insects. PP proposes to be an architecture that solves the basic “persistence” problem of all living forms (via the Free Energy Principle – FEP, see conclusion): is PP present in some form also in plants? If not, why not? Assuming we can leave vegetables aside, can we expect PP to be present across most animals? If we can, should we conclude that sleep and dreaming correlate with PP? In such a case, what is the relation supposed to be? Does PP itself produce the requirement of sleeping and dreaming? How? If not, why not?
[In fairness, Clark does address some of the questions above in his book. As far as I’m concerned, I would guess that learning, sleep and dreaming will eventually be accounted for by one single “solution”, see my ancient thoughts, to learn why.]

8. Pain and pleasure.

If we are describing the signal exchanges to/from and within brains, I would expect any such description to somehow account for pain and pleasure signals. In PP, perhaps pain can be conceptualised as an error signal that refuses to be predicted away (thus being well placed to attract our conscious attention). This idea seems promising to me (it points to some structural hyper-prior, forcing pain "errors" never to be fully predicted away). If so, how does PP allow for such a "systematic" inability to predict something? Especially with chronic pain, such predictions should be quite easily produced! Even if this idea is on track, how do we explain pleasure? It can't simply be the opposite: a signal that always gets predicted away. That's because we are all quite good at giving attention to pleasurable sensations… In other words, I can't see how PP can directly account for the special qualities of painful and pleasant stimuli, or even start explaining what distinguishes one from the other.

9. Consciousness.

This issue follows from the point above. It doesn’t seem that PP itself is able to account for the phenomenal element of consciousness (the “what is it like” aspect, or phenomenal consciousness – PC). Once a brain has produced a good enough global prediction of the last train of inputs, what exactly controls why we can consciously perceive some features and not others? How does attention influence what we are conscious of? What constitutes the undesirability of painful sensations? What makes pleasure desirable? Are all PP-based systems conscious? If not, what additional system produces PC? What accounts for the loss of PC during dreamless sleep?
In short, PP appears to remain solidly outside the scope of Chalmers' Hard Problem of consciousness. This is perhaps the biggest problem that I see. If PP explains perception (but does it? If PP does not include an explanation of why we perceive some things and not others, does it account for perception at all?), attention, mental time travel, planning and action, but in no way accounts for PC, what function does PC fulfil? If accepting PP entails epiphenomenalism, then, as far as I'm concerned, it follows that PP must be nonsensical, pretty much like epiphenomenalism itself.

Conclusion.

The list above is incomplete. It took me a very long time to write this post, partly because I had to find a way to organise my thoughts and establish some reasonable criteria to decide what could be left out. The biggest omission concerns the Free Energy Principle. This is because criticising FEP requires a full book; it cannot be done in a few lines. Secondarily, such criticism might be aimed at too broad a target, and thus fail to be constructive. [For the gluttons: I've covered the brightest side of FEP here, while some hints of criticism are in this discussion.]

Overall, it seems pretty obvious that PP, as a theoretical framework (and/or, depending on your preferences, a scientific paradigm or a scientific programme), is far from complete. This is expected and entirely justified. As anyone with some familiarity with the history of science should know, new ideas require time to reach maturity; they necessarily start off incomplete, sometimes directly contradicted by some pre-existing evidence, and not necessarily self-consistent either. That's normal. Thus, this post is not intended to curb our enthusiasm; it is intended to focus it in (hopefully) useful ways. My quasi-arbitrary critique above might help focus our attention in interesting directions. Or at least, it might help me: I will appreciate all feedback, and in particular reading suggestions in response to any of the points raised here. Thank you!


Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press. DOI: 10.1093/acprof:oso/9780190217013.003.0011

Posted in Neuroscience, Philosophy

Machine Learning, the usual Bat and deflationary epistemology

What does it feel like to be a mechanical Batman?
Original image by Andrew Martin [CC0 1.0].

This is a quick, semi-serious follow-up to my first Twitter poll. In a rare moment of impulsivity, I recently posted a deliberately awkward question on Twitter. A few respondents did notice that something was amiss, and indeed an explanation is due, hence this post. The subject does demand a lengthier treatment, which is in my plans; for today, I'm hoping that what follows will not sound entirely ungrounded.

I rarely act impulsively, but maybe I should do it more often? Predictably, my poll did not collect many votes; however, I could not have hoped for better results: adding my own vote, we get a perfect 50-50 split. There appears to be no agreement on the matter, so perhaps the question was worth asking…

The Question itself

Here is the original tweet:

Why did I pose the question?

In extreme synthesis: I guessed the reactions would be thought-provoking, for me at least.

I wasn’t wrong. I was also hoping not to find too much agreement, as a split opinion in this case would give me a chance to propose some additional lucubrations.

My interest can be summarised as follows:

  1. To my eyes, the question can only make proper sense if one is aware of two distinct debates. In philosophy of mind, most of the discussions revolve around foundational questions such as: how does phenomenal experience get generated? Is it reducible to physical mechanisms?
    On the other hand, as real-life applications of Artificial Intelligence are becoming quasi-ubiquitous, other questions are becoming important and even urgent: there is a pressing demand to make machine-learning algorithms auditable, accountable and/or generally "explainable". Thus, I was curious to see what my Twitter bubble would make of my mix'n'match provocation. I think I didn't include the "huh?" option in order to force people to try harder and see if they could figure out what the connection might be. In hindsight, it wasn't a bad choice, perhaps.
  2. I was also being a bit mischievous, because by forcing people to double-check their reaction (by not allowing them to answer "huh?") I sort of forced some to make an incorrect choice. The only way I can see to make sense of the question is by recognising (at least at the level of intuition) that there is a connection. If someone saw no connection at all, then the "correct" answer would indeed have been "huh? question is malformed, can't figure why it's worth asking". Thus, knowing that within my Twitter reach there are plenty of very clever people, I was semi-consciously curious to see if anyone would call me out. At least two did, to my great satisfaction! (With my apologies.)
  3.  Both debates (point 1. above) are, IMVHO, informed by mistakes. I wanted to explore the intuition that these mistakes share a common root. Which then immediately becomes the reason why my answer is “No, it isn’t a coincidence“.

This leads me to the second part of this brief response: it's time to spill the beans and write down what I think.

My answer: no, it isn’t a coincidence.

My position has to do with what it means to know/understand something and how my own deflationary epistemology allows me to make sense of a good number of problems. I'm pointing at some sort of illusionism in terms of knowledge (as in: "knowledge isn't what we think it is"). I'm not planning to fully unpack the above here, but will use my question to explain a little.
[Note. I will do so from one angle only: a full exploration requires showing how the same manoeuvre works along many different paths and leads to more or less the same conclusions.]

The route I'll pick today is about the mistakes I mentioned above. In AI (or better: Machine Learning – ML), (informed) people are both rightly and mistakenly(!!!) asking to work towards producing ML systems that can be "explained". Specifically, because of the enormous importance that ML-based decision-making is acquiring in our society, (informed) people want the ML algorithms to be auditable. When a given machine makes a non-trivial choice, we want to be able to know "why did this system pick A and not B?". The reason to demand such "transparent" ML systems is obvious, important and entirely correct: after all, we *need* to be able to detect and correct mistakes.

However, I fear that it is impossible to fully satisfy this demand. This has to do with reduction and our epistemological limits. Starting with the latter, if the question is "why did this system pick A and not B?", the set of what could count as acceptable answers does not, by definition, contain the correct answers. ML systems are built to deal with an otherwise unmanageably high number of variables, each having the potential to contribute to the output, and usually the final result is indeed determined by small contributions from a very high number of input variables. Thus, saying "the machine picked A because…" requires listing the contributions of many factors, and explaining how they influenced the training phase as well as their relative contribution to the current choice. Unfortunately, no human can make sense of such an answer! What we'd like instead are answers like "…because the training set was biased towards A" or "…because most training data points to A". Trouble is, both kinds of answers are oversimplifications, to the point of being wrong and pointless.
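A toy sketch in Python may make the point tangible. The numbers below are entirely made up and the model is a bare linear scorer (far simpler than any real ML system), yet even here the honest "explanation" of a single choice is a list of thousands of tiny contributions, none of which accounts for the decision on its own.

```python
import numpy as np

rng = np.random.default_rng(1)

n_features = 10_000
weights = rng.normal(scale=0.01, size=n_features)   # the "trained" model (made up)
x = rng.normal(size=n_features)                      # one input case

contributions = weights * x                          # per-feature contribution to the score
score = contributions.sum()
decision = "A" if score > 0 else "B"

# The honest "explanation" is the full list of 10,000 tiny contributions.
top = np.argsort(np.abs(contributions))[::-1][:3]
print("decision:", decision, "score:", round(score, 3))
print("three largest contributions:", contributions[top].round(4))
print("share of total (absolute) contribution covered by the top 3:",
      round(np.abs(contributions[top]).sum() / np.abs(contributions).sum(), 4))
```

Real systems are non-linear on top of this, which makes the honest answer even less digestible.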

To put it another way: when we are applying ML to a domain that justifies the use of ML, the complexity of the domain in question guarantees that the easiest way for us to learn what the ML system will output is to let the system compute the response. If we had an alternative, "better" (simpler) way of doing it, we would use this simpler system directly and leave intractable ML systems alone, right?

Looking at the same scenario in terms of reduction, what we find is that ML is used precisely when reducing a problem to a handful of tractable variables simply doesn’t work (or we don’t know how to make it work). Thus, the interesting/useful results provided by ML are precisely those we are currently unable to reduce to simpler, more explainable, algorithms. QED: we can’t know why the machine picked “A” precisely because we asked the machine in the first place!

In terms of deflationary epistemology: we can only fully "understand" simple stuff. Most of us (including me) can hold fewer than ten variables in working memory; working out how they interact without the aid of external mind-extensions (pen and paper, calculator, spreadsheet, ML systems, etc.) is simply not possible. In other words, we can't understand ML-driven choices because we ask ML to operate on domains that we can't reduce to stuff we can consciously process.

This leads me to our bat – or better, a bit closer to our (mis)understanding of phenomenal consciousness. Image recognition is the typical domain where only ML systems can match our own abilities (we could not design any "simpler" way of doing it). [Coincidence? No!] Of course humans are, according to their own standards, quite good at image recognition. However, not a single one of us has a clear (and demonstrable) idea of how we do it. We do it all the time, but we do it unconsciously. Yes, we "recognise" red among other colours, which leads us to say that there is a specific "what it is like" to perceive redness. But how we recognise redness (or anything at all) is entirely obscure to us. Introspection can tell us exactly nothing about the mechanism that allows us to discern colours. Neuroscience is starting to produce some incomplete answers, but it is merely scratching the surface.
[Reminder: colour perception is spectacularly complex; do I need to mention "the dress"?]

Thus, we must conclude that humans (and probably mammals, if not most animal forms), just like ML systems, are able to make discriminations that rely on contributions made by a high number of variables. I hope that we can now agree that humans are unable to consciously explain exactly how equivalent tasks are performed, by machines and biological organisms alike. This inability is a function of the complexity of the task, not a function of what system performs it.

[Note: I am not talking about what counts as a "scientific explanation"; I am referring here to what we can grasp and feel without external aids.]

In the case of biological image recognition, we don't know how the mechanisms in question work, but we do know that even if we did (in scientific terms), we would not be able to produce explanations that are simple enough to be understood by most humans (not without having to study laboriously for many years). In the case of ML, we know everything about the mechanisms, but we still can't find the answers we're seeking. This is because we want "simple" answers, simple enough to be understood, at least. The simplicity of the desired answers is the common factor between the two "unknowns" mentioned in my poll.

Thus, we reach my conclusion. We can't (consciously) know how it feels to be a bat: even if we knew the mechanism (as per ML), we would not have the capacity to reason all the way up to forming the correct idea (such an idea, in order to be correct, includes too many variables, so we wouldn't be able to hold it in our limited conscious minds).
Thus, the answer to my question is (from my own perspective!) a definitive: “No, not a coincidence”. The common factor is how limited our conscious understanding can be.

Conclusion

My own hunch may well be wrong; however, the fact that the poll results are split (based on a tiny sample size!) is hopefully an indication that the question is not as absurd as it may appear at first sight. Please do feel free to add your own thoughts in the comments (or via Twitter, if you prefer). Thanks for reading and to all the poll responders!

Posted in Consciousness, Philosophy, Stupidity

Predictive Processing: one theory to rule them all…

After discussing some of the basic concepts behind the Predictive Processing (PP) framework, it's time to explore why I think it was worth the effort. In short, the explanatory power that PP seems to have is, as far as I can tell, unprecedented in neuroscience. No theory that I've been exposed to has ever managed to get close to the breadth and depth encompassed by the PP proposal. One way to see why is to concentrate on one key element and briefly mention some of the phenomena it might explain. My choice is precision weighting (PW), a mechanism that suggests many possible implications. In this post, I will explore the ones that I find most striking.

[Note: this post is part of a series inspired by Andy Clark’s “Surfing Uncertainty” book. For a general introduction to the Predictive Brain idea, see this post and the literature cited therein. The concept of precision weighting and how it fits in the PP framework is discussed in a previous post.]

Many illusions can be explained in terms of PP. Image adapted from Flickr by Robson# CC BY 2.0

A short recap: when a sensory stimulus is transduced (collected and transformed in a nervous signal), PP hypothesises that it will reach a sequence of neural layers, each busy producing predictions that try to match the signal arriving from the layer below. [In this convention, lower levels are those situated closer to sensory organs.] Each layer will issue a prediction to the layer below, and will concurrently match the prediction it receives from above with the incoming signal from below. The matching will result in a “difference” signal (or, better, an Error Signal – ES) which is presumed to be the main/only signal that a given layer will send upwards. The ES thus carries upwards only the information that could not be predicted, or, if you prefer, only the surprising and newsworthy elements of the original raw sensory stimuli. We have explored before two additional ingredients:

  1. For such a system to work, it is necessary that, whenever a signal passes from one layer to another, it carries some information about its expected precision/confidence. [We have also seen why it is reasonable to conflate precision and confidence into one single "measure".] PW allows a given layer to dynamically give more importance either to the error/sensory signal arriving from below or to the prediction issued from above. It is generally assumed that precision/confidence information is encoded as the gain (strength) of a given signal.
  2. Such an architecture is proposed to continue uninterrupted from layers that deal with sensory information all the way to layers that are concerned with action selection and control. In this latter case, the ES will (or might) also be used to control muscles/effectors. Reducing the confidence (gain) of motor-related prediction signals will thus allow actions to be "planned" without triggering actual movements. (A toy sketch of the overall loop follows this list.)
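To make the recap a little more concrete, here is a deliberately crude Python caricature of the layered exchange (my own illustration, not anything Clark specifies): each toy "layer" keeps a running expectation that stands in for the top-down prediction, subtracts it from the signal arriving from below, weights the residual by a gain (the stand-in for precision weighting), and passes only that weighted error upwards.

```python
import numpy as np

class Layer:
    """A caricature of one PP layer: it keeps a running expectation (standing
    in for the top-down prediction), receives a signal from below, and sends
    a gain-weighted error upwards."""
    def __init__(self, size, gain=1.0, lr=0.1):
        self.expectation = np.zeros(size)
        self.gain = gain    # precision weighting, encoded as gain on the error
        self.lr = lr

    def step(self, signal_from_below):
        error = signal_from_below - self.expectation    # what was not predicted
        weighted_error = self.gain * error               # PW as gain
        self.expectation += self.lr * weighted_error     # revise the prediction
        return weighted_error                            # the only signal sent up

rng = np.random.default_rng(2)
stimulus = rng.normal(size=5)            # a fixed, repeated stimulus
level_1, level_2 = Layer(5), Layer(5)

for _ in range(100):
    es_1 = level_1.step(stimulus)        # Level-1 tries to predict the raw input
    es_2 = level_2.step(es_1)            # Level-2 tries to predict Level-1's error

print(np.abs(es_1).mean(), np.abs(es_2).mean())   # both shrink over time
```

With a stable stimulus, both error signals shrink over the iterations: the newsworthy content gets "explained away", which is the basic PP story in miniature.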

We have also seen before that, at levels concerned with integrating information coming from different senses, PW becomes important to deal with possible conflicts. For example, when watching TV, sounds will not seem to come from the TV speakers, but from the images themselves, as visual stimuli come with much higher spatial precision than acoustic ones. Thus, PW proposes to explain how sensory stimuli can be integrated, as well as why and how a perfect matching isn’t required.
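The TV example is, at heart, precision-weighted averaging: each cue pulls the perceived location towards itself in proportion to its precision. Here is a minimal sketch with made-up numbers, assuming (purely for illustration) that spatial vision is some twenty times more precise than spatial hearing:

```python
import numpy as np

def fuse(estimates, precisions):
    """Precision-weighted combination of conflicting location cues: each cue
    counts in proportion to its precision (i.e. inverse variance)."""
    estimates, precisions = np.asarray(estimates), np.asarray(precisions)
    return float((precisions * estimates).sum() / precisions.sum())

# Made-up numbers for the TV example (arbitrary units across the living room).
visual_location = 0.0       # the talking face, on the screen
auditory_location = 1.0     # the loudspeaker, off to one side
visual_precision = 20.0     # vision pins locations down sharply
auditory_precision = 1.0    # hearing is spatially much vaguer

print(fuse([visual_location, auditory_location],
           [visual_precision, auditory_precision]))   # ~0.048: the sound "moves" onto the screen
```

Because vision dominates the weighting, the fused estimate lands almost exactly on the screen, which is why no perfect match between sight and sound is needed for them to be experienced as a single event.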

When trying to understand how a complex system/mechanism works, it is often very useful to explore anomalies, especially when one is proposing a strictly mechanistic explanation of the inner workings of such systems. This makes perfect sense: any given mechanism must be constrained, and therefore it is reasonable to expect that it will not work particularly well under unusual circumstances. Moreover, particular idiosyncrasies will be specific to given mechanisms (different implementations will be characterised by different anomalies). This means that studying where things "go wrong" allows us to match failures with hypothetical mechanisms: some mechanisms will be expected to fail in one way, some in another. Thus, a theory of perception that happens to easily accommodate known (and hard to explain) perceptual anomalies (such as what happens when watching TV) and/or neurological conditions will look more promising than one that doesn't. For us, this consideration means that it makes sense to look at how PP proposes to explain some such failings with the aid of PW.

One such anomaly is the rather spectacular rubber hand illusion.

In the words of Seth (2013):

[S]tatistical correlations among highly precision-weighted sensory signals (vision, touch) could overcome prediction errors in a different modality (proprioception)

In other words, proprioception isn't very precise under static conditions: it produces reliable signals mainly about movement and changes in force. Thus, in the unusual experimental conditions (participants are asked not to move their hidden hand), and given enough time, the relatively high-precision signals coming from sight and touch can take precedence, forcing the overall system to explain them (that is: to successfully predict them) by assuming the rubber hand is the real one.

Perhaps more interestingly, it's also possible to relate PW to more natural anomalous conditions. One way to describe this line of thought is to ask: what would happen if the delicate balance between the precision assigned to incoming signals and the confidence placed in predictions were systematically biased in one direction or the other?

On one extreme, we could imagine a situation where predictions tend to carry too much weight. The result would be an overall system that relies too little on the supervision of sensory input and is therefore more likely to make systematic mistakes. If the imbalance is strong enough, the whole system will occasionally get flooded with abnormal errors (whenever the predictions happen to be very wrong, but are issued with high confidence/gain), triggering an equally abnormal need to revise the predictions themselves, which could then set up a self-sustaining vicious cycle: more top-heavy, misinformed predictions get issued, producing more floods of error signals, requiring even more revisions of the predictions themselves. The result would be the establishment of ungrounded expectations, which would then have a visible impact both on perception (how the subject experiences the outside world) and on the overall understanding of the outside world itself (beliefs). Recall that, according to PP, prior expectations are intrinsically able to shape perception itself. Wrong perceptions, when they are indeed very wrong, are normally called hallucinations, while wrong beliefs can be seen as delusions. Sound familiar? Indeed, the combination of the two represents the "positive symptoms" of schizophrenia. In short, if PP is broadly correct, a systematic bias towards prediction confidence would produce a system that is unable to self-correct.

On the opposite extreme, what would happen if the issued predictions are not trusted enough? In such cases, prior knowledge would fail to help interpret incoming signals, making it harder and harder to 'explain away' a given stimulus, as even the right predictions might struggle to quash the incoming signals (which will then be interpreted, incorrectly, as a genuine ES). A subject afflicted with this condition will be able to react correctly to very familiar situations, where confidence in the prediction is highest and therefore strong enough to reduce the ES. On the other hand, in new and ambiguous situations, predictions will systematically struggle to perform their function even when correct, and will therefore force the subject to re-evaluate the current situation over and over. This would gradually increase confidence in the issued predictions, and thus restore the ability to react appropriately to the outside world, at the cost of an abnormally high investment of time and attention. It's easy to predict that such subjects will naturally tend to avoid unfamiliar circumstances, and that they will also find it hard to navigate the maze of ambiguities that we call natural language. In this case, an excess of error signal doesn't lead to hallucinations and delusions because the "supervision" of sensory information happens to be too strong (not too weak!), and thus only very precise predictions, i.e. those able to exactly match the stimuli, have a chance of reducing error signals to manageable levels. Once again, this kind of condition should sound familiar: it is tantalisingly similar to autism. It's worth noting that this approach is entirely compatible (indeed, I see it as a proposal of how the general principle might be implemented) with the well-established view that autistic perception can be described in terms of atypical (attenuated) Bayesian priors; for the details, see Pellicano and Burr (2012).
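
One way to visualise the two extremes is to take the toy weighting used above and systematically bias the confidence assigned to predictions. The sketch below is my own caricature of the argument, not a clinical model: the `prior_bias` parameter simply stands in for the hypothesised systematic imbalance, and all numbers are invented.

```python
def perceive(prior_mean, prior_prec, sensory_sample, sensory_prec, prior_bias=1.0):
    """Toy caricature of a systematic bias in precision weighting (my illustration,
    not a clinical model).

    prior_bias > 1 inflates confidence in the prediction: the percept sticks to
    the expectation even when the senses disagree (the 'top-heavy' extreme
    sketched above for the positive symptoms of schizophrenia).
    prior_bias < 1 deflates it: prior knowledge fails to explain the input away
    and the percept is dragged around by every fluctuation of the stimulus
    (the 'bottom-heavy' extreme sketched above for autism).
    """
    w_prior = prior_bias * prior_prec
    return (w_prior * prior_mean + sensory_prec * sensory_sample) / (w_prior + sensory_prec)

expectation = 0.0
noisy_samples = [9.0, 11.0, 10.0, 12.0]   # the world is actually around 10, plus noise

print([round(perceive(expectation, 1.0, s, 1.0, prior_bias=10.0), 1) for s in noisy_samples])
# [0.8, 1.0, 0.9, 1.1] -- percepts hug the (wrong) expectation: ungrounded beliefs persist
print([round(perceive(expectation, 1.0, s, 1.0, prior_bias=0.1), 1) for s in noisy_samples])
# [8.2, 10.0, 9.1, 10.9] -- percepts track every fluctuation: nothing gets explained away
```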

This leads me to the matter of attention. According to Friston and many of the PP proponents (see Feldman and Friston, 2010), attention is the perceivable result of highly weighted error signals. On the face of it, this makes perfect sense: what should we pay attention to? To whatever is news to us, and therefore to what we struggle to predict. Moreover, we should be able to direct attention according to our current task: this can be readily done by reducing the confidence in the predictions we are making. By doing so, we would amplify the residual error signals concerned with whatever we are paying attention to, so that only very precise predictions (precise in the sense of closely matching the incoming signal) can reduce the prediction error. This reinforces the view of autism sketched above: autistic individuals would thus be unable to command their attention, and would instead be forced to attend to any stimulus that isn't readily explained away.
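
To connect attention to the same toy machinery, here is one last sketch (mine, not Feldman and Friston's model): "paying attention" to a channel is simulated as raising the gain on that channel's error signal, so that identical prediction errors end up having very different impacts on what gets revised. A system unable to lower the gain on unattended errors would, on this caricature, be forced to attend to everything, which is the picture of autism sketched in the previous paragraph.

```python
def attend_and_update(estimates, errors, base_gain, attention):
    """Toy sketch: attention as an increased gain on selected error signals
    (my illustrative reading of the proposal, not the published model).

    Each channel's estimate is revised in proportion to its error, but the
    error of the attended channel is amplified, so it dominates the revision.
    """
    return [est + base_gain * att * err
            for est, err, att in zip(estimates, errors, attention)]

estimates = [0.0, 0.0, 0.0]   # current best guesses in three channels
errors    = [1.0, 1.0, 1.0]   # identical prediction errors everywhere
attention = [5.0, 1.0, 1.0]   # we are attending to the first channel only

print(attend_and_update(estimates, errors, base_gain=0.1, attention=attention))
# [0.5, 0.1, 0.1] -- only the attended error produces a substantial revision
```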

Conclusion

Predictive Processing, once enriched with the concept of Precision Weighting, is able to offer a preliminary sketch that includes reasonable explanations of how we manage to make sense of the world, learn from sensory information, plan, execute and control actions, pay attention to something, and/or get our attention diverted by sudden and unexpected stimuli. Moreover, our abilities to dream and daydream are easily accommodated (in ways I might explore in the future). If this weren't enough, it also aspires to explain why and how certain well-known pathologies work, and is generally able to accommodate many perceptual illusions and anomalies. In other words, one single theory is proposing to explain much of what the brain does. This, in a nutshell, is why I've dedicated so much of my spare time to this subject: for the first time I get the impression that we might have some hope of understanding how brains work – we now have a candidate theory that is potentially able to offer a unifying interpretative lens. Otherwise, without a set of general and encompassing principles, all our (increasing) understanding would be (and has been) condemned to remain local, applicable only within a given restricted frame of reference (how neurons communicate, how edges are detected in vision, and so forth).
Given my background in neuroscience, I expect that my excitement comes as no surprise. Fair enough: but is my enthusiasm justified? Perhaps. To answer this question, in the following posts I will look at what I find unconvincing or underdeveloped in the PP world. I might also use the occasion to err on the side of overconfidence(!) and propose some of my own ideas on how to tackle such difficulties.

Bibliography


 

Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press. DOI: 10.1093/acprof:oso/9780190217013.003.0011

Feldman, H., & Friston, K. J. (2010). Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience, 4.

Pellicano, E., & Burr, D. (2012). When the world becomes 'too real': a Bayesian explanation of autistic perception. Trends in Cognitive Sciences, 16(10), 504-510.

Seth, A. K. (2013). Interoceptive inference, emotion, and the embodied self. Trends in Cognitive Sciences, 17(11), 565-573. DOI: 10.1016/j.tics.2013.09.007

 

Posted in Neuroscience, Psychology

Not quite wrong enough

In my last posts on politics I made a few predictions. Wrong predictions! In this post, I want to acknowledge my errors, reflect on what they mean, and perhaps make a few more in the process. In a nutshell, the root of my mistakes is clear: the initial directions taken by both May's government and Trump's administration were openly fascistic and seemed to encounter little resistance, especially in May's case. This sent me down the path of the gloomiest predictions. Luckily, I was wrong (to my immense relief), but unfortunately, not quite wrong enough.

What I got wrong.

On the US side, keeping in mind that I only have second-hand knowledge of the situation, I had underestimated the strength of constitutional checks and balances (along with the volume of bottom-up dissent). I don't think I overestimated Trump's capacity to exploit the situation; if anything, I was expecting him to make more mistakes, driven by his massive ego. The specific prediction I got wrong was that Trump would exploit, if not facilitate, internal unrest, and use the consequent emergencies to suffocate the system of checks and balances that limits the executive powers of the presidency. I also predicted that this kind of scenario would unfold really quickly, and I can't emphasise enough how happy I am to realise that I was wrong. Happier every day. I honestly have no good explanation of why I was wrong, but I do fear that the reasons why unrest might explode at any time are still valid. I am also still convinced that riots, or any form of civil unrest that is widespread enough to disrupt productivity, could still be exploited by Trump's administration to undermine the democratic institutions of the country. Thus, I'm left in a state of fearful hope: what if I got only the timing wrong, while my worst fears are still valid? I can only hope I was entirely wrong!

On the UK side, my fear was that the authoritarian inclinations of Theresa May, and of a good proportion of her Tory supporters, were backed by a decent amount of competence, and that her fascistic aspirations could go unrecognised both by the mainstream media and by a sizeable proportion of the electorate. Luckily, a crucial assumption was entirely wrong: despite her reputation for high competence, May called a snap election without having a convincing reason to do so. She then ran the worst campaign I've ever witnessed, and in doing so demonstrated to the country and the whole world how utterly incompetent she is (along with her whole team, one would think). I feel that my mistake was entirely justified: yes, you can never overestimate human stupidity, but assuming that your adversaries are a bunch of witless morons is a very obvious act of self-harm.

From the seminal mistake above (it appears that May herself, as well as her strategists, genuinely didn't have a clue), a second mistake followed. I had also predicted that "Corbyn and McDonnell are sleepwalking into their own obliteration". Under the assumption that the Tories wouldn't shoot themselves in the foot on their own initiative (an assumption that one is forced to make when thinking about strategy), this could have been the case. However, I did underestimate two things: (1) Corbyn's ability to appear genuine, along with the renewed appeal of his sensible domestic policies. (2) How well the deliberate ambiguity over Brexit would work.

There isn't much to say about (1). Corbyn appears sincere, and he probably is, broadly speaking; I am 92% sure that he does mean well, although I can't be persuaded that he genuinely believes in the open approach to decision-making he advocates (I can't, because he never follows his own advice!). On point (2), there is much to be said, giving me the chance to make even more (hopefully wrong!) predictions.

Mistakes you need to make.

Along with problems that are good to have and problems that should not be solved, another mantra of mine is that some mistakes need to be made. The typical example is when there is a lesson to be learnt: sometimes making a mistake (preferably under controlled circumstances, where the consequences can be minimised) is the only effective way to permanently learn the lesson and reduce the likelihood of repeating the same mistake when the stakes are higher. [There is an interesting argument about the roles of parenting and education to be made here, perhaps something worth a separate discussion.] In the case of one of the wrong predictions I've made in the past 6-8 months, however, the mistake was one that should not be avoided, which is different, and interesting in itself (to me, at least). It's useful to learn to detect and react appropriately to this kind of counter-intuitive situation, so I'll write down my reasoning here: doing so solidifies it (useful for me) and might be thought-provoking for my occasional readers. It is also very relevant to the current political situation, so please bear with me.

Mistakes that should not be avoided are a specific case of mistaken predictions, which may arise when the act of issuing a prediction can influence the outcome. In my case, I'm living in a society that is showing numerous warning signs: there is a very visible drive towards authoritarianism/fascism. Making the prediction that other parts of society will counterbalance this drive automatically and inevitably weakens the defences in question: if you are confident there is no danger, you will not spend your energies resisting it. If everyone involved feels the same, they will not push in the other direction, leaving the original drive free to steer society in the wrong direction. Thus, anyone who recognises such an unusual feedback loop is faced with a choice. One option is to issue the prediction one would hope is right (or, more weakly, to remain silent because of it): people will recognise and reject fascism. This prediction automatically undermines itself, so in terms of predictable effects it helps bring about the undesired outcome. The other option is to sound the alarm, hoping to be wrong. Doing so makes it more likely that things will turn out well.
It is paradoxical: the act of expressing a prediction is bound to reduce the likelihood that the prediction is correct. I know very well that, in the case of my own prediction, its effect is far too tiny to be detectable. I don't care. If everyone chose to play it safe, fascism would encounter zero resistance; I am not going to be complicit.

Overall, the choice above is not really a choice, not if you care about the outcome more than about your own track record. The only reasonable thing to do is pick the second option and shout the alarm as loud as possible.

In short: I could not be happier to acknowledge that my specific prediction (there is an authoritarian drive in the UK and it is not being met by an appropriate backlash) was wrong. For now. The situation might change: for as long as the worrying signs are present I will continue to call for countermeasures.

There are self-fulfilling prophecies, but also self-undermining ones; one ought to recognise them and act accordingly.

Consequences

I've learned one lesson: I do not know enough about what is happening in the US. The situation still looks very alarming, and I still think a shitstorm might explode at any time, but I know there are many forces at play, most of them unknown to me. This makes all of my predictions moot, so I may as well avoid making them.

In the case of the UK, I’m happy to keep getting it wrong: here is my assessment of the current situation.

  1. The macroscopic and unprecedented mistakes made by the Tories are certainly due, at least in part, to their own hubris. They thought Corbyn was a lame duck and underestimated their own weaknesses (see above: they relied on a self-undermining prediction, ha!). Assuming they will repeat the same mistake again would be utterly foolish.
  2. The strongest rhetorical weapon of the Tories has been somewhat weakened, but it is not neutralised. It is self-evident that some Tories have been betting on the failure of the Brexit negotiations. In such a case, there is little doubt that the plan was to put all the blame onto the evil (undemocratic, unaccountable, etc.) European bureaucracy. To make this move effective, the Tories need to re-establish their own credibility, which isn’t easy, but I am not ready to bet that it’s impossible.
  3. Corbyn and McDonnell might still be sleepwalking into their own obliteration. If the Tories find a way to neutralise their own hubris, they will automatically expose the blind self-righteousness of Corbyn and the Labour left (see below). In other words, the outcome of the 2017 General Election makes it more likely that Labour will fall into the same hubristic trap that has almost destroyed the current Tory leadership. We must try to compensate for this, which requires actively pushing in the opposite direction.
  4. As far as Brexit goes, it would be a mistake to assume that it is now likely that Brexit will not happen. Once again, making this prediction inherently undermines it. Thus, the only reasonable strategy is to keep fighting against Brexit. The best way to do so hasn’t changed one inch (for some of my ideas, see this post and the preceding ones).

One entirely positive effect of the last election is that it is now visibly wrong to assume that the neoliberal overreach (links to an excellent article by Simon Wren-Lewis, see also this equally good one by Simon Tilford) is the only kind of rhetoric that chimes with the public. The importance of this change cannot be overestimated (by Dougald Hine) and is due to the relentless efforts of Corbyn and co. (as well as many concurrent causes, obviously). Yes, while acknowledging my own mistakes I also want to highlight what they did do well! Specifically, this historic change of mood is happening also because Corbyn and his team have forcefully ignored all advice intended to move them towards the so-called centre ground. I applaud their resilience, with all my heart. I also worry that the same resilience means they will keep favouring Brexit, and do so in a covert and oblique way (as they are doing now).

Taking an ambiguous stance while working towards a covert objective will inevitably backfire (the only question is when and how). Most of Corbyn's capital is in the form of personal credibility. He appears genuine and trustworthy, probably for good reasons. However, this capital can be destroyed in the blink of an eye: it will disappear instantly if the electorate concludes that Brexit was a bad idea and that Corbyn backed it all along. Moreover, sooner or later, Corbyn will have to abandon the current ambiguity: he will need to choose between an act of national self-harm (implicitly affirming that he doesn't care for the well-being of his electors, not if that means compromising on his ideals) and revising his world-view to accept that the EU is a problem that is worth having (see here and here). Depending on his previous actions, Corbyn might find himself already forced to pick the first option, which would be catastrophic.

Brexit is bad for the country and worse for the international scene. Backing it means backing the wrong forces of history. Anyone who cares about peace, international stability and development should be busy managing or fixing the many problems that afflict the EU. Choosing to help destroy the most effective peace-making project in the history of humanity is inexcusable and foolish.

For us, as single individuals, the course of action is therefore obvious.
We need to keep saying that Brexit is the worst decision the UK could take. We need to point out that it was taken on the basis of false information and that the public was systematically misled; we need to remind everyone that the choice of 37.47% of the electorate cannot be misrepresented as "the will of the people". We also need to keep asking Labour to stop backing Brexit. Brexit is self-destructive and contrary to all the values shared across the party (admittedly, it is not entirely incompatible with the values that distinguish Labour's left); but above all, it is morally indefensible.

Posted in Ethics, Politics, Stupidity

All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Please feel free to re-use and adapt. I would appreciate it if you let me know about any reuse; you can do so via Twitter or the comments section. Thanks!
