Changing my mind: the big list

How do we change our minds? I find this question fascinating and important. To start: only fools never change their mind. Second, it is surprisingly hard to remember what I actually changed my mind about, and equally difficult to pinpoint why. Finally, understanding how and why our opinions change is of fundamental importance in political discourse, because of the obvious link to the art of persuasion. [A delightful tangent also arises when considering the history and philosophy of science: apparently, it’s not all about logic and “facts”.] In this post, I’ll start building a list of things on which I have changed my mind. I encourage every reader to do the same: it’s a fascinating endeavour.

Image by AZQuotes (Quote Source).

Some time ago Adam Elkus tweeted a self-challenge: he would tweet one thing on which he changed his mind for each “like” he received. [Unfortunately, the impressively long thread that resulted appears to be deleted.] Shortly after, Artem Kaznatcheev directed my attention to it: we ended up agreeing that it’s a good idea to keep such records.

I am not aiming at producing a complete list; instead, I’ll try to stick to subjects I do care about. The “strong feelings” filter is important to me: on one hand, I’m convinced that our core beliefs are the least likely to change. On the other, it’s in our interest to change our views, if/when they happen to be wrong or harmful.

To try to impose some order on an otherwise messy list, I’ll group entries by broad topics, starting with what feel like the most important changes. I’ll conclude with a summary of the elements that appear most frequently across the disparate instances. In time, I hope to extend the present list with new entries, and perhaps with “feature-length” posts about one or another specific case.

Big, world-view changes:

I will start with the most remarkable flip-flops, where changing my mind required re-adjusting a significant proportion of linked beliefs. Unsurprisingly, it’s a short list. Moreover, only one change in this section started well after I reached adulthood.

What it means to be an adult: with effects on my understanding of competence.

As a child, I displayed a remarkable case of cognitive dissonance. I believed that:

A: Being an adult means “knowing what you are talking about”, and
B: Most adults are idiots, commanded by their emotions and unaware of this fact.

Growing up, I tried hard to achieve A, striving to learn what felt important and trying to develop reasonable ways to know when it was OK for me to express my thoughts (e.g., when I wasn’t risking being badly wrong). Growing older, I’ve realised that nobody knows what they are talking about – instead, some rare and noteworthy individuals are able to express opinions while acknowledging and accounting for their own ignorance. In the process, I’ve developed a keen interest in epistemology.
This change is interesting to me on multiple fronts. First and foremost, it’s a prime example of why the ability to concurrently hold incompatible beliefs is useful: cognitive dissonance is, at least sometimes, an asset – in my case, what I now consider a false belief (A) helped me to become what I am now (in what feels like a useful way). Second, this change consolidated my idea of competence: you have achieved competence in a given domain when you can attach reasonably reliable confidence intervals to your own predictions. It’s a view founded on the acceptance that what we don’t know always surpasses what we do know. Finally, it’s the reason why I maintain this blog: I write to test, clarify and improve my ideas – trying to be mindful of my own ignorance.

Overall, this change began during adolescence (if not before), as I became articulate enough to try expressing beliefs like (A) and (B). Since I grew more and more convinced that (B) is fundamentally correct (as a rule of thumb – we are all slaves to our emotions), (A) had to give way.

Apparently, this path is quite common; here is proof (recommended soundtrack for this post):

Politics – how to achieve change and the role of radical positions/rhetoric.

For as long as I can remember, I have always preferred building bridges to winning. I like to compete, but on the condition that the confrontation is seen by all participants as a mutual way to help each other become better at whatever it is that we’re competing on. I think this is a useful attitude to maintain, but in my case I can claim no credit for it: it’s not something I’ve learned with effort and dedication, it comes naturally to me – I’ve got it for free. As a result of this inclination of mine, I always had an instinctive dislike for radical and uncompromising political stances. In many cases, I still do. However, in recent years I’ve changed my stance by introducing a very important class of exceptions. Specifically, I’ve realised that when a given group of people is marginalised, deemed irrelevant, and/or otherwise oppressed, the power imbalance that sustains the situation makes it impossible to change the status quo by deploying only persuasion and bridge-building strategies. This change of mind may be subtle, but it has remarkable consequences. For example, as a young adult, I loathed radical feminism: I thought it was unquestionably counter-productive in that it facilitated a self-sustaining and fruitless confrontation. The same applied to probably most positions (see quote in the picture above: it’s really hard to remember what I thought before changing my mind) where the weak side in a power struggle advertised itself as combative and intransigent.

Right now, my view could not be more different. When one wants to eliminate a long-lasting power imbalance, if historical precedents are any indication, it seems to me that it is necessary to deploy a fair amount of intransigence. Specifically, it’s necessary to have a group of people who very clearly, and very publicly, won’t accept anything less than the complete elimination of such an imbalance (even if, or maybe especially when, a complete resolution is manifestly impossible). Only if such a group is very visible and stubbornly refuses to be silenced can a separate group of more conciliatory activists (those who will accept or even seek small improvements as a form of progress) become effective. I still believe that the first group will have the negative effect of fostering confrontation and entrenchment, and that in most cases it is the second group which can more effectively achieve desirable results. However, I now think that activists of the second kind can be effective only if, when and while the first kind is well established (crucially: when their common opposition believes that the radical activists will never cease and desist). Thus, even if my own predispositions force me to deploy and/or endorse the second strategy, I’m now fully convinced that the first approach is necessary and has to coexist with the second. (There is a parallel here with my views on cognitive dissonance, perhaps worth a future post.)

How did I make this change? This story is too long to fit in here, but I’ll tickle my readers’ curiosity by mentioning the two crucial elements that contributed: the new atheism movement (yes, I know!) and the patience of many women, especially Abeba Birhane.

Science and epistemology – objectivity.

I presume that when I started my BA I thought I was in the business of understanding how the world works in a fairly straightforward manner. However, I can’t really be sure (see pic on top): while I was changing my mind on competence, I now guess that another change was also happening. Right now, I don’t believe in objectivity as normally understood, and it feels as if I have always held this view. I doubt that’s the case. I do know that when I started blogging the big revolution had already happened, as is clear from reading this early post. I also know that my view has kept changing, but it seems that it’s merely becoming richer, not changing in a radical way. To spill my beans in full: right now I believe that any single method to understand the world around us must have limitations. It will be suited to picking out certain features of reality, but will also hide others. Thus, by necessity, no single assertion about the world out there should be considered “objective” to the point of being unquestionable. As a result, I’m developing a deep dislike for the canonical writing style in science and philosophy. To my eyes, it looks designed to hide the elements of subjectivity that inevitably inform any piece of research (or thinking). In other words, it requires authors to deceive, which feels counterproductive to me.

Coherence.

All these changes contributed to something that continues to puzzle me. I once believed that coherence was a genuine indicator of value – in practical terms, I still do. However, I’m also growing more and more convinced that cognitive dissonance is a necessary ability of well-formed human beings. Without the ability to hold incompatible beliefs, humans would find it very hard to thrive. This topic is huge and controversial: hopefully I’ll find the energy to write about it explicitly. For now, it’s amusing to notice that “changing one’s mind” implies a break in coherence across the time domain, which, I argue, is both necessary and generally a good thing.

Music.

Music is important to me: making and listening to it helps me a great deal in staying sane.

Pop.

Until my late twenties, I was convinced that light, apparently simple and mainstream pop songs had no value (not even those which resisted the test of time). I then started playing in a band: as it happens, most of us wanted to play the stuff I thought wasn’t interesting. Playing it made me change my mind. In short, I now realise that much of the value of music is that it makes people happy, at a negligible cost. Even the things I (still) don’t like make someone happy. Overall, I can’t imagine what could be considered more useful than making people happy. Thus, suddenly, the awe inspired by some performers of mainstream pop starts making sense. A better understanding of the technical skill, sophisticated sensibility and performance discipline involved helps solidify my current view.

Queen.

In my early teens I lumped them in with the “pop, pointless” lot. My best mate disagreed, and kept telling me I was wrong (for probably a couple of years, or perhaps less: time flows more slowly when you’re young). He succeeded in changing my mind, but only obliquely: he persuaded me to give them a chance. Listening to their music, especially their less famous tunes, did the rest. I am now convinced they were and still are underestimated. I mention this because it’s the only case I can recall where I was actively persuaded by someone else.

People and society:

As my professional career meandered in new directions, it forced me to change my mind in quite spectacular ways, twice. These are both special cases, because I can link these changes to specific situations.

“Bad” people are inevitably unhappy.

I used to believe that selfish, vindictive and mean people (Jerks, if you prefer) must be living in a never-ending nightmare. Despised or, at best, feared by most, they deprive themselves of what truly matters. I now think that I was (badly) wrong: genuine psychopaths don’t give a toot and can be (often are?) as happy as any human can be.

How did this change happen? At the start of my professional career, I was in close contact with a genuine, highly successful psychopath: very smart, extremely charming and completely amoral. When things worked for him, he was genuinely happy. He did care for his family, so he did have a source of human warmth, and apparently that was enough: for all the rest, personal success was all that mattered – on top of that, exercising power for the sake of it appeared to give him genuine pleasure. Having professional success and a reasonably stable family, he was as happy as any human can hope to be.

Selfish and ruthless people are those who inevitably end up with managerial roles.

In the first 10+ years of my professional life, this rule held true. I then started working at SSRU and changed my mind. Since moving to London, I’ve been managed by extremely smart, caring and well-rounded human beings. Within the limited reach of my current work environment, managers actually care for the people they manage: thus, I have to admit that my belief was wrong. I still can’t really figure out what makes this possible, though. With a consistent pattern involving multiple individuals and lasting 10+ years, I’m pretty sure it isn’t chance, but the necessary and sufficient conditions elude me: it still feels a little bit like a miracle.

Specific People:

Perhaps surprisingly, I usually change my mind about people in a fairly predictable way. Like most people, I instantly form an opinion about everyone I meet, in a super-rapid, “automatic” way. If this opinion is negative, I am usually aware that I might be wrong, so it happens fairly often that, by learning more about the person in question, I succeed in changing my (conscious/explicit) initial judgement. However, with surprising frequency, over time I switch back to a negative view. This is another pattern that might be worth considering separately.

Corbyn.

I’ll use Corbyn as the paradigmatic example of a process I’ve observed numerous times. I met him almost a decade ago: he gave a short speech at a rally I was co-organising. My immediate reaction was: this chap reasons by applying rigid ideological positions to everything – i.e., he does not think, he merely applies pre-existing rules to new situations. I didn’t like the man at all.
Sometime after his election to the leadership of the Labour Party I changed my mind. Finally, there was someone saying what needed to be said: he was the only politician I’ve known in my lifetime able to expose the failings of neoliberalism in a way that could actually reach the masses. I did like that, and I still do. So I concluded that there must have been more than met my eye in the first encounter. Since then, I’ve flipped back, with a vengeance: I do accept that his intellect is more elastic than I initially thought, but now I also think that he is far more self-interested than is generally appreciated. I can make sense of his political actions over the last three years only by accepting that he’s not as interested in the common good as he would like us to believe. He appears happy to act instead in ways that are designed to preserve his position in the Party.

Boris Becker.

As a kid, for one or two years, I hated him – I couldn’t watch him play. For reasons I’ve forgotten, I then forced myself to watch an entire match anyway. A couple of hours later I had flipped, and I have enjoyed watching his tennis ever since. I mention this here because it is perhaps the oldest instance of a change of mind (on something I cared about) that I can recall – it is also the only abrupt change that I can actually recollect.

Steven Pinker.

Given my original interest in cognitive science, even if I always disliked his way of presenting himself, I did have a bit of an intellectual crush on his views and work on cognition. I now think that he’s a self-righteous impostor, because he’s not at all interested in intellectual honesty. In his case, I know exactly what made me change my mind: it’s this discussion, in which he uses rhetorical tricks to obscure and deceive – the opposite of intellectual honesty. He then started working on his more popular books, and all my bad impressions were confirmed multiple times. This change did take some time: I was already contemplating it quite seriously when the discussion in question was published. In my mind, the specific trigger counts as the classic “last straw”.

Conclusion.

That’s it! This is my provisional list, limited to some of the things I actually do care about. The most common element in the big and small changes listed above is that, aside from one case, all of these changes took time. Years, to be precise; ten or more, for the bigger ones. This matters to me, because it informs my actions, especially online. It makes no sense to debate anyone with the aim of changing their mind. On the internet, by definition, it all happens too quickly. I do debate online, every now and then, but I try to do it when I have a genuine interest in understanding what justifies a position I disagree with. I do hope that (my) online/offline activism can change some people’s minds (for the better, hopefully!), but I’m also convinced that if it does work, I won’t be there to witness the change. Significant rethinks simply take too long.

Another common element is that many of the changes above have a simple (and somewhat reassuring) ’cause’: learning. As I learned more and more about a given subject, my views changed. Sometimes radically; more often, subtly. This is probably healthy (and is perhaps uninteresting), so I suppose I should be happy about it, even if it’s disappointingly predictable.

Posted in Ethics, Philosophy, Premises, Psychology

Corbyn, #Brexit and the “clever” long game that doesn’t exist.

The Brexit clock is ticking. As David Allen Green correctly states, Brexit will automatically happen unless “something exceptional” prevents it. It is still possible to #StopBrexit: many routes are still open and public opinion is probably shifting. But exceptional things take time to materialise, so it’s time to make a collective effort to bring them into being.

Image by See Li (CC BY 2.0) source.

The question is: how? There certainly are many ways, but in my opinion the best option we have is to “help” Labour stop being “clever” and listen to its base instead. The reasons are simple: it would be good for the country, good for the party, it would honour Corbyn’s commitments to make his party more democratic, and, above all, pure maths tells us that it could work. This brings about a second question: why on earth isn’t Labour trying to stop Brexit? The reason, I’m afraid, is that Corbyn and McDonnell are thoroughly Eurosceptic, as their behaviour amply demonstrates.

I have been on record arguing that the supposed “constructive ambiguity” displayed by Labour is a self-inflicted trap, and I’m glad to note that I’m not alone (see this piece by Simon Tilford). Here, I will make my argument explicit and suggest lines of action for all my fellow Remainers.

Corbyn’s position.

To understand what is going on, I am convinced that we need to discount words (everyone agrees that politicians lie) and look at actual deeds instead. All the evidence points in one direction. Moreover, a long time ago, I did briefly meet the man himself (he gave a short speech at a rally I helped organise), and my first-hand impression confirms what the facts say. Corbyn is a mainstream social democrat. What makes him slightly peculiar is that his broadly socialist positions have not changed significantly since he started his political career. This is important, because it tells us a few things:

  1. Corbyn is almost certainly sincere. Unlike most politicians, his opinions don’t follow the polls. He has principles and advocates for them.
  2. The other side of the same coin is that his principles are fixed. He apparently discovered the best ideology in his youth and will stick to it until he dies.

This is important, because his principles make him a natural Eurosceptic (if we forget tiny details like international cooperation and peacemaking…), and it is likely impossible to make him change his mind on purely theoretical grounds. What remains to be seen is whether an argument based on the consequences of the current situation can. If, as I believe, the choice is between stopping Brexit and experiencing catastrophic political obliteration, perhaps we can convince him to do the right thing. The evidence that we can comes from the referendum itself: he did reluctantly back #Remain, after all.

Corbyn’s opposition.

A legitimate question that needs answering is: why bother? If Corbyn is a staunch Eurosceptic, wouldn’t it be better to replace him? My answer is no, for two reasons.

First, replacing him soon enough is impossible. The PLP tried to remove him multiple times and failed consistently. The unexpected gains of Labour at the last general election sealed the deal. Only a catastrophic electoral result can currently trigger a successful leadership challenge, and that’s one catastrophe that is not desirable, as well as one that won’t happen in time.
Second, Corbyn is the only (semi-)successful politician who is openly opposing the failed, but ubiquitous, neoliberal ideology. We need him to continue doing so. Ten years of austerity have made his point, and he suddenly found himself able to make his (never-changing) case convincingly (or almost convincingly, considering that he did not win the last GE!). In the long run, if we want to stop out-of-control inequality and avoid widespread conflict or ecological suicide, we need to crush the prevailing neoliberal delusions. Right now, Corbyn is the only credible politician who tries to do so, and does it with reasonable success. The side effect is that whoever replaces him will almost inevitably offer something different, i.e., something more like New Labour. We need to promote social democracy, not neoliberalism-lite!

All things considered, we can’t hope to replace Corbyn, and, moreover, we should not wish to. Therefore, our best option is to “help” him to change course and stop Brexit. In my opinion, this is possible specifically because the current path leads to a very predictable disaster. To see why, we need to look ahead.

The grim prospect.

If Labour does not oppose Brexit, many different things may happen, but only two scenarios look likely.

Scenario 1: the EU gets all / most of the blame.

This is obviously what the current government hopes to achieve. It is possible that they will find a way to dilute the (now undeniable) economic downsides of Brexit, in all their forms, or that they will somehow manage to negotiate a soft-enough way out. If the ensuing hardships are somewhat limited or well hidden, it is possible that the Conservatives will manage to claim victory while assigning all the blame to the EU. What this does to international cooperation is ominous, but that’s a different story. In this scenario, Labour will (rightly) take the bigger hit, as most Remainers will see through the government’s lies and blame Labour for not doing anything about it. We’ll get more Tory governments, more austerity, fewer human rights, etcetera.

Scenario 2: an unmitigated disaster.

If Brexit goes ahead and is hard enough, discontent will shoot through the roof. The Labour membership and electorate will inevitably blame the current leadership, forcing and winning a leadership challenge. We’ll get a new Labour leader, chosen from those who did fight to remain, and most likely we’ll be offered a re-hashed, New Labour-like set of policies. In other words, both major UK parties will again espouse neoliberalism to a greater or lesser extent. Within Labour, Corbyn will be blamed and relegated to a well-deserved state of irrelevance.

Naturally, neither scenario represents what Corbyn hopes to achieve. What he is presumably wishing for is a third option:

Scenario 3: wishful thinking.

In this case, people will magically forget that Labour did not oppose the Brexit disaster and will instead turn against the Tory government. How this could even be possible, if Labour continues to vote with the Conservatives on all major Brexit-enabling matters, is a total mystery. It is a pure pie-in-the-sky, have-your-cake-and-eat-it, pink-unicorn kind of delusion. One could harbour a faint hope of achieving such a result by starting to oppose Brexit, but doing so only when it is already too late. But this can’t be Corbyn’s plan, can it? After all, he *is* a man of principle, and wouldn’t actively deceive the whole nation, right?

Be that as it may, I can’t see any other option (please do suggest more likely scenarios, if you see any!), which leads me to the expected conclusion.

We need to change Corbyn’s mind.

I’ve tried many times, but I am unable to make sense of Labour’s current stance on Brexit. A “jobs-first Brexit” is a chimera: it is now undeniable that the UK would be better off within the EU. Moreover, the supposedly clever long game does not exist: if Corbyn does have a plan, it is failing. Insisting on the current line of (in)action is the best strategy to commit political suicide. It has to stop now!

If Labour wants to retain any hope of winning a general election in the foreseeable future, and if Corbyn wants to be around when it happens, the only possible strategy is to oppose Brexit. This can be done by forcing the Government’s hand and making their shambolic handling of the negotiations untenable. The current government is very obviously the most incompetent and farcical one the UK has had in living memory. It doesn’t even have a Tory majority. How hard can it be to make them stumble one more, decisive time? Dealing them a fatal blow may not be the easiest thing to do, but it is not the hardest either. Enough Tory MPs would be happy to stop Brexit, so it’s possible that all that’s required is for Labour to stop voting with the government. It isn’t rocket science – it’s called “opposition”!
If a new general election can be triggered soon enough, and if Labour fights it on a solidly Remain platform, it is quite possible to win an outright majority this time. After all, we have to admit that the current government could not be helping much more, not even if they tried…

Alternatively, Labour could take the Solomonic escape route and start campaigning for a second referendum (take the best deal on offer OR remain: a choice between deal or no deal is neither meaningful nor sensible). Having hated the first referendum with all my heart, this isn’t my preferred option (referendums do promote the venomous “will of the people” rhetoric), but it is still immeasurably better than both likely scenarios.

What to do?

Luckily, if you found this article even vaguely convincing, you can help.
You could write to your local Labour MP (and MEP, candidate, and/or branch) expressing your wish to see a change in Labour’s official policy. All Labour members should get in touch with their local branch and inform them that they will rescind their membership unless Labour starts fighting against Brexit (I would recommend mentioning a cut-off date!). On social media, you can make your position known to Labour MPs, MEPs and Labour sections, as well as journalists, editors and opinion leaders. At the next local elections, you can visibly offer some financial support to any candidate who openly supports #Remain, and then vote for them (vote tactically and make your choice known)! You can (and should) sign the relevant petition, leverage the #FBPE crowd to do the same, etcetera. Once a clear objective is agreed (force Labour to fight against Brexit – properly, this time), there is no limit to what could be achieved.
As for Corbyn, I do hope he can be persuaded: after all, if he does want to make the UK a better place (as I believe), fighting against Brexit is the only promising strategy.

Please feel free to share and republish/repackage this article as widely as possible. My blog is licensed under Creative Commons for a reason!

Posted in Politics

Inequality and progress, privilege and barnacles

I’ve recently been to Cape Town, for work. It is impossible for someone like me to visit South Africa without asking “What am I doing with my privileges?” (see previous post). The scale of abject poverty would be overwhelming in itself, but in Cape Town it is paired with unapologetic displays of eye-watering, heavily-guarded wealth. To my eyes, Cape Town is the embodiment of everything that’s wrong with our current world, even if I’m told that Johannesburg is worse.

Darwin and the Barnacle: The Story of One Tiny Creature and History’s Most Spectacular Scientific Breakthrough. By Rebecca Stott.

Also recently, I’ve read a delightful book: Darwin and the Barnacle, by Rebecca Stott. This little gem tells the story of what Darwin was up to after conceiving the idea of evolution by natural selection but before writing the Origin. It turns out he dedicated about eight years of his life to the classification of barnacles (Cirripedia). Why? Because he was a clever chap and, in my own interpretation, he wanted to achieve two primary objectives. First, he needed to establish himself as a highly respected zoologist – he knew very well that a solid reputation was necessary to ensure his big idea would get a fair hearing. Second, he wanted to see how his idea on the origin of species would influence the meticulous work of classification.

It is impossible, for me, to read this book without noticing how much privilege Darwin enjoyed, but at the same time, one cannot possibly miss how well he used it.

The picture of Darwin that Stott paints is that of a wealthy individual who could spend all his time on something as unproductive as cataloguing barnacles. He earned a living by investing in land and stocks, after receiving an education at his family’s expense. Moreover, his work depended on a world-wide network of fellow zoologists, most of whom shared a similar lifestyle. Naturally, this network was possible only because of the postal system, which in turn was held together by the systematic exploitation on which colonial empires were built. If that wasn’t enough, the first chapters of the book also provide an insight into the cultural milieu in which Darwin conceived his idea. Besides famous predecessors like Lamarck, Stott mentions Darwin’s university teachers and peers, their own influences, and so forth. All white men (most likely), all wealthy enough to pursue intellectual endeavours which offered little or no obvious economic return, spanning generations. In other words, it is impossible to read Darwin and the Barnacle without gaining an insight into how much privilege and inequality were necessary to allow one of the most important scientific breakthroughs in the history of humanity. Take away Darwin’s privileges and the global system of inequality on which the British Empire (along with the other colonial powers) was built, and Darwin would not have been able to do all the groundwork which secured his standing and thus ensured his ground-breaking book could not pass unnoticed. Take away the similar privilege of the countless thinkers who influenced him, and you can feel safe betting that Darwin could not have conceived the idea of natural selection at all.

What this tells us (Stott’s writing is good enough to let you feel it in your bones) is that, in the world as it was organised back then, inequality enabled discovery and (scientific/technological) progress. I have little doubt the argument can stretch back for countless centuries, so I’ll leave it to my readers to look into their preferred breakthrough and see if privilege and inequality played an enabling part.

The result is as distasteful as it is obvious. Intellectual progress requires time to think. Time to collect and exchange ideas, time for debate, time spent studying, collecting or generating evidence, and so forth. Crucially, it requires time to develop gargantuan numbers of new ideas, most of which will turn out to be inadequate (we can remember Lamarck, to remain within the topic), but will nevertheless enable further improvements. Making mistakes (and lots of them!) is how we learn most of the important lessons. Ground-breaking thinking requires being able to make mistakes safely, without risking too much of one’s livelihood, at least. None of the above can be done in significant amounts if everyone has to invest most of their time actively earning a living. If we move to the present world, people like me can spend a lot of time doing just that, but it is undeniable that I enjoy an amount of privilege that could probably make Darwin himself green with envy. Most people in academia are paid to study the stuff they love: how can you beat that?

In short: if we want progress, we need some privilege and inequality. Ugh.

Not nice, but not groundbreaking either. We all knew it already, right? Perhaps, but I get the feeling that we are happy to ignore this shared knowledge. First of all, most of us feel that we’ve earned our current standing. I know I do. This feeling is not just a little wrong: it is positively delusional, as summarised in this comic strip by Toby Morris (this is required reading!).

Moreover, inequality is increasingly prominent in public discourse: scholars and commentators agree that it is growing in uncontrollable and dangerous ways. However, I fear that the situation is frequently discussed superficially. Reading the news, I get the impression that inequality is treated as somewhat inevitable. It is indeed inevitable, but its shape and effects aren’t. If we could start to collectively question the otherwise invisible neoliberal assumptions that underpin most of our society, I am pretty sure that we would find better ways to manage, or even exploit, the inevitable inequality that comes with complex social organisations. There are, in the public discourse, some voices who are able to challenge said assumptions, but again, most of them seem to forget a couple of details. On one hand, all of the public voices who are able to challenge the established order can do so because of the privileges that this very order confers on them. On the other, most of these voices tend to work on the assumption that inequality is bad and should be eliminated. Too bad that such a vision doesn’t work: it is indefensible. Take an idealised society where everyone is equal. How does it look? It doesn’t, that’s how. It is not possible. Not even hunter-gatherers have perfectly flat societies: elders have privileges, good looks confer unearned status, and so forth. If a society allows specialisation, and specialisation is required in every technology-based society, different specialisations will confer different opportunities. These differences will propagate along kinship and relational networks (if I know about computers, my kids will probably grow up familiar with computers, etc.). Thus, no social organisation can be based on absolute equality: organising entails unequal distribution of opportunities.

Once again, this is all obvious to the point of being boring, but it needs to be tackled head-on. Doing so paints the problem of inequality in a different light: our problem is not the existence of inequality. Our problem is twofold: excessive accumulation and misuse. On one side, there simply is too much inequality. On the other, if we don’t accept that privilege can be useful, we can easily fail to reap its potential benefits. As a result, too much privilege goes to waste. That, I’d argue, is the actual problem we face.

Accumulation:

According to Oxfam:

The 1,810 dollar billionaires on the 2016 Forbes list, 89% of whom are men, own $6.5 trillion – as much wealth as the bottom 70% of humanity.

I would guess that roughly 99% of the privileges that such wealth could confer are going to waste. It is quite fair to assume that most of these people spend their days trying to maximise their wealth. After all, in a capitalist society, this is what they are supposed to be doing. [The Forbes billionaires list, used to collect the figures above, makes it perfectly clear: in the link above, the list is preceded by a “Today’s Winners & Losers” section, emphasising our inclination to frame such matters in competitive terms.]
Some of these people do spend some of their time trying to facilitate progress, that much is true. But is it enough? Nah – it can’t be: billionaires have disproportionate amounts of power and very few constraints. At the very least, their concerns will be biased by their own position and cannot possibly be considered diverse enough to even approximately reflect the needs and desires of humanity as a whole.

Misuse:

If we don’t recognise that privilege is supposed to be useful (for society as a whole, not just the privileged!), we can (and do) encourage people to waste it. We return to Cape Town. What good are enormous mansions used a few weeks per year for the holidays of some billionaire? None, or almost none. What does society gain from enormous yachts that sail across the world to be used at the weekend (and be reached by private jet)? Close to nothing. Who needs the latest Ferrari? Nobody. What does society gain from the existence of a few individuals who have enough spending power to buy entire countries?

I think and hope that this state of affairs is not necessary or inevitable. What facilitates it is a collection of self-serving ideas: that privilege is earned, that the act of earning it guarantees something useful was done in the process, and that therefore privilege can be spent/used without second thoughts.

I disagree. Privilege is usually the result of luck (95% of it, or thereabouts). Moreover, as Darwin’s encounter with barnacles testifies, privilege should be used, not squandered. Thus, we get two streams of consequences, and an overarching question.

The question is: when is inequality too much? If some privilege is both inevitable and potentially useful, it follows that there probably exists a Goldilocks zone for inequality. Enough to allow an adequate number of people to take risks, write novels, play music, invest in apparently idle intellectual quests, etc., but not much more than that.

[Note: it would be wrong to assume that such a Goldilocks inequality zone is fixed. It is probably a function of how society is organised and of general availability of resources. It would be equally wrong to assume that such an ideal status always correlates with periods of high cultural or technological advancement. High inequality will generally allow some people to pursue intellectual endeavours; however, the results might be visible even when an unnecessarily high amount of privilege goes to waste.]

Consequences: global.

Forbes’ figures leave little room for doubt, and this article (by David Leonhardt) tells us that it’s getting worse. If there isn’t too much inequality right now (there is, but let’s pretend) and the trend isn’t inverted, there soon will be. Thus, we reach the predictable conclusion that we need to invert the trend. That’s OK; the only problem is that we don’t know how to do it. To be fair, we have plenty of reasons to worry that we can’t: current political developments suggest that we are actually busy electing politicians who are determined to facilitate the rise of inequality (even when they are not directly funded by those who want to protect existing privileges)… If something is to be done, I’d start by NOT electing such people.

Consequences: personal.

What is a little less predictable is what all of this means for you and me. If you are reading this, you have an internet connection and some time to read. Thus, almost without exception, every person reading this article probably has some spare time and some mental energy to invest – I clearly do… The question is: how are we using our spare resources? Are we allowing our privilege to be wasted? Probably.

Thus, we reconnect with my previous post. I am incredibly lucky. I am trying to use my luck: I have the presumption that I am not bad at thinking, therefore I try to do just that. I think (and write), in my own idiosyncratic way.
Is it enough? Probably not, but at least I’m trying.

Posted in Evolution, Philosophy, Politics, Science

Naïve philosophaster: on method and privilege

In which I start counting my privileges and reflect on how I am trying to use them.

That’s correct, I did write “use [my privileges]”. This expression is questionable and relies on important implicit assumptions. In fact, one purpose of this post is to allow me to write the follow-up, where I’ll unpack these assumptions and discuss them explicitly. My overarching aim is to explain why I think that privileges should be kept under control, but used, not eliminated. However, before tackling the core argument, I think I need to spend some time looking at my own circumstances.

The Cathedral of Learning. Intimidating and self-referential? Image by Thomas J. Grant (CC BY-SA 3.0)

A few years ago I wrote a self-describing blurb for my Twitter profile. The full text is:

Former (molecular) neurobiologist, now software developer. Science junkie, evidence seeker, naïve philosophaster, music lover.

With so little space at my disposal, it still surprises me how long this text has lasted, and how comfortable I still am with it. It is ageing well. Most of it is factual: it describes what I do, where I come from and what makes me tick. The odd element is in the title of this post: naïve philosophaster. This expression is my attempt to express the serendipitous method that I found myself following, which in turn screams of privilege. I find this circumstance interesting and problematic. I also think that it can form the basis of a wider set of ideas, giving me an excuse to pretend that what follows is not only about me, me, me[1].

My blessings.

I was born white, male, heterosexual, into a middle-class, intellectual and caring (if somewhat troubled) family. Financial worries were limited to the long term: food, clothes and shelter were not a pressing or recurring concern. I arrived in this world as a European, just in time to fit into the last generations for which it was normal to end up better off than their parents. I could go through university without needing to work, and could therefore complete my studies in reasonable time while also cultivating extra-curricular interests. As a result, I have never been unemployed, not even for a day.

How lucky is that? Considering luck alone, I must be among the top 0.0001% of individuals that have ever roamed the planet. That’s right: remember the rhetoric about the 1%? Forget it. I got it much, much better than that. The only thing that isn’t quite perfect about the situation I was born into comes from a good dose of family trouble in my childhood. I was not a happy child, but luckily(!) I was not scarred for life. What the early trouble left me with is some awareness of what trauma can do to people, no matter how privileged. I also learned that logic and rationality are never the main drivers of human actions.

Overall, on the global scale, considering the history of humanity, virtually no-one had it better than me.

I have an extremely satisfying job, which allows me to put the bulk of my mental energies into something that is very clearly contributing to the common good. I am not rich in the 1% sense, not if the 1% is limited to the Western bloc, but I don’t need to worry about pennies either. That’s one of the ways in which I can use my rare set of privileges: I could perhaps find a better-paying job, but I would have to compromise the perceived utility of the job itself, so I choose not to even look. But is it enough? No, I don’t think so. The additional bit is implied by what I mean by “naïve philosophaster”.

The value of naïvety.

My natural inclinations are quite intellectual. I’m at home amongst abstractions, and when I’m facing some difficulty, my instinctive reaction is to sit down and think. Not having kids, I get to enjoy having some extra time, and when I’m lucky(!), I may even have some spare brainpower. It took me almost 40 years to realise that my lucubrations might be (with some luck!) somewhat valuable. That realisation left me facing the question: OK, so how should I invest my spare mental energies? I wanted something that wouldn’t drain me (the real job should remain my priority) but would hopefully complement my professional persona. This blog is the result, and naïve philosophastry is how I do it.

Some of the “how” comes by deliberate choice, some of it from luck(!) or serendipity. If the question was “how should I spend the spare resources that I have?”, “build on your strengths” was my not entirely conscious answer. I’m good at grasping the big picture (AKA: I get quickly bored if I have to look at minute details!), I know something about biology, computers, neuro- and cognitive science, and I also have an interest in politics (in case you didn’t notice!). Given these “strengths”, it came naturally to me to try to use them by thinking and writing. The hard part was to accept that my thinking, no matter how imperfect, needs to be made public, if (and only if) the hope is that it may be useful[2].

The serendipitous part is the method I have apparently settled into. It goes like this: I find a problem that interests me. I allow and encourage myself to think and read about it in whichever way happens to be possible and relatively easy. Then I sit and write down my thoughts. This forces me to construct some kind of discourse which appears to be sufficiently coherent. If such coherence fails to materialise, I can supplement with some extra reading. However, at this stage I normally stop as soon as I manage to put some order into my thoughts. After doing so, I publish the result here and/or in whichever medium appears to be suitable (mostly here!).

This solidifies my starting position, and comes with the important side effect that it hopefully allows some originality to slip in. At this stage, I can and usually do notice weak spots in my reasoning; I might also naturally grow an interest in points of view that appear to challenge it – with luck(!) I might even receive valuable criticism and feedback. This is where (hopefully) my naïvety starts decreasing. More reading happens, directed by what I perceive as gaps or weaknesses in my own position. I can then iterate: starting from a little less ignorance, see what subset of my original ideas still seems to make sense and repeat the process until boredom supervenes[3].

Why am I inflicting this on my readers? Because what I am doing seems to work, at least in the sense that it is not guaranteed to be a waste of time[4]. Trouble is, once again, I can do what I do only because of luck. I’m a philosophaster because I don’t do philosophy professionally. However, this circumstance allows me to do philosophy a little differently, and thus comes with the hope of doing it, in some very limited respect, better (ugh!).

Professional philosophers, like most/all academics, don’t normally have the privilege of indulging in their own idiosyncratic thought processes. In order to make a living, they ought to start by securing their BA, which mostly consists in eliminating every visible naïvety. They then have to get a PhD, which requires building some original thoughts, but on the mandatory condition of demonstrating that such thoughts are based on a thorough understanding of the pre-existing ones. Publishing papers and books then follows the same pattern: there simply isn’t any room for naïvety (a side effect is, I fear, some promotion of intellectual dishonesty: admitting that one’s reading/understanding doesn’t cover every possible aspect of a given topic is academically unacceptable – but, alas, some ignorance is, IMO, inevitable). I know that the official approach does make sense: it prevents people from producing the same old ideas (or mistakes) over and over again. However, it also creates new problems. To earn the right to be taken seriously, modern-day philosophers need to immerse themselves in pre-existing frameworks. They have to accept and build upon one or another paradigm (in most cases). This inevitably has two undesirable consequences:

  1. It allows people to concentrate on underdeveloped corners of a given framework, providing a seemingly endless supply of low-gain, low-risk routes to securing a career in the field.
  2. By promoting hyper-specialisation, current expectations also constrain and stifle the appearance of big-picture, or out-of-the-box, paradigm-changing new ideas.

In other words, I do think that professional philosophy is indeed disproportionately geared towards promoting the study of Chmess. Luckily(!), since I earn my money elsewhere, I have the possibility of playing the naïvety game, which neatly side-steps what I perceive as the most common pitfalls of professional philosophy.

This is not to say that naïvety is a virtue; it comes with obvious drawbacks – in fairness, my approach only makes sense in light of how it differs from the mainstream: it is somewhat parasitic. In terms of drawbacks, I am certainly maximising my chances of wasting time by re-producing unoriginal ideas or mistakes. I can afford this risk: if I have fun in the process, no real waste is involved. A secondary risk is wasting the time of my tiny readership: on this, I am selfishly happy to let you take your chances!
Moreover, I make my mistakes in public and I visibly refuse to tick most or all of the official “seriousness” boxes. In fact, I am now advertising my refusal. By doing so, I am lowering my chances of being taken seriously. This may become a problem if I stumble on an idea that is both valuable and new. Given the low probability of such an eventuality, I guess I can afford to leave this problem unsolved, for now, even if I do harbour the worry that my current method is self-defeating, for sociological reasons.

Conclusion

I am lucky to a point that defies comprehension. Even in my attempt to use my luck in a productive way I end up exploiting it even more. Naïve philosophastry certainly isn’t the best or most parsimonious method around; however, it suits my situation, comes effortlessly to me and ensures I have fun along the way. At the same time, it allows me to sidestep some of the major flaws (in my perception) of the mainstream method. Thus, I exploit my privilege by being a Naïve Philosophaster. What remains to be seen is whether I should acknowledge my self-serving biases, and do something radically different.
Instead of exploiting my privileges, shouldn’t I renounce them? It seems logical. My excuse for not doing so will come in the next post. Stay tuned, and please do feel free to have a go at me in the meantime.

Notes and Bibliography:

[1] Being about me, writing this post has been harder than usual. I want to thank some of my Twitter friends (Stuart Boardman, Paul Harland and Abeba Birhane) for their kind and useful pre-publication feedback, as well as for collectively providing the courage to click “Publish”.

[2] “Useful” is left underspecified on purpose. I don’t need to know exactly how my ideas might be useful. They may make someone look at their own beliefs in a new light, spark some debate, help a student with some coursework, inspire a new train of thoughts, expose a mistake, etcetera. In all cases, even the ones I can’t imagine, for an idea to be useful to anyone but me, it needs to be available outside my own head.

[3] Boredom is an involuntary heuristic system that (hopefully) signals diminishing returns. Once I stall and fail to detect the potential for significant progress, boredom automatically steps in, forcing me to move on. This is also where I think the process risks failing: I may, and probably do, fail to follow up; once my ideas feel settled and well informed, I might lose interest before writing down the result of the whole charade.

[4] I do know that some of my posts here are (or have been) used in university courses; moreover, my blogging experience is helping me at work. Thus, I do have some reasons to believe that what I’m doing is not entirely wrong.

Bibliography

Dennett, D. C. (2006). Higher-order truths about chmess. Topoi, 25(1), 39-41.

Posted in Ethics, Philosophy, Premises

Predictive Processing: the long road ahead.

In the previous posts in this series I’ve proposed an extreme synthesis of the Predictive Processing (PP) idea, as proposed by Andy Clark in “Surfing Uncertainty” – I concluded with a post that summarised why I think PP is the most promising idea currently on offer in the entire neuroscience field. In this post I will do the opposite: exciting and extremely powerful ideas should never go unchallenged. Thus, I will produce a short list of what I see as the main problems that PP either fails to solve or even generates of its own accord.

Audience: who is this post for?

If PP is true, why so many different neural structures? Image by Thomas Schultz. CC BY-SA 3.0

This post is significantly different from the previous ones in the series. Previously, I tried to summarise my understanding of the PP framework. First of all, I wanted to check if my understanding was good enough, at least according to my own standards(!): by trying to put together a decent summary, I forced myself to see if the picture did fit together and if it appeared to cover enough ground. Second, I thought this exercise could be useful to newcomers. PP isn’t exactly the most approachable framework. Thus, I was (/am) hoping that my effort could double up as a useful introduction to PP; at the very least, it could help readers decide if and how PP is worth deeper scrutiny. Having done the above, however imperfectly, it’s time to change gear and move on to criticism. Once again, this helps me to understand what I should look out for: a neat list might direct my future readings, based on their potential to address what I think are the most important shortcomings and/or gaps in the PP story.

In terms of audience, this means that my ideal reader has changed. I would like to receive scrutiny and counter-criticism from people who are already invested in the PP framework. In return, my list might help PP specialists to see their topic from a fresh perspective, which may be useful for spotting weak points (if I’m doing it right) and/or areas that require more accessible explanations (if I’m not!).

Method: what do I think I’m doing?

Given my high ambitions, it’s worth adding some warnings, in the form of an explicit admission of why what follows is necessarily far from ideal. I write here because I enjoy it, but I have a quite demanding day job, which has nothing to do with neuroscience and/or PP itself. Thus, I cannot, nor do I wish to, systematically read most or all of the literature on the subject. What I do is approach the topic with a flâneurish attitude: I do actively look for things to read, but only invest my limited spare time in reading what happens to attract my attention, for whatever reason.

As a consequence, I expect that many of the points I’ll address below have been raised before, and that many possible solutions have been proposed already. What I will mention is selected (out of a longer list) either because I think that a given issue really needs to be addressed as soon as possible (and in great detail) or because I think that there is no available consensus on the possible “solutions”. In both cases, I might be wrong, in which case I would greatly appreciate some feedback (specifically: pointers about what I should read next!).

Predictive Processing: a list of issues and upcoming challenges.

I will list most of my criticisms in the shortest manner I’m capable of, but I will try to explain at least why I think a given question should count as a genuine problem. I do plan to expand on several or all of these points in follow-up posts. The following list is roughly organised from the more technical/specific to the more general/conceptual.

1. Does PP require filtering and partitioning?

If PP layers specialise in predicting certain kinds of features, does this require filtering incoming sensory streams and segregating the results of different filters along separate PP pathways? Starting from the periphery, PP states that there must exist a “Level-0” which receives a prediction from Level-1 and matches it against the raw sensory input. Let’s imagine that Level-1 specialises in predicting the direction of edges/lines/segments in the visual field (to make my point understandable – in fairness, any kind of feature might be the specific concern). Even if the prediction is 100% accurate, a lot of the original signal will not be predicted, for Level-1 only deals with a limited set of features; hence, much of the original input to Level-0 will always travel up to Level-1. In PP terms, this would/should count as an Error Signal (ES). However, if the job of Level-1 is to deal with edges/lines/segments alone, the signal it receives from Level-0 will never consist of errors alone. Level-1 will therefore need to be able to discern between residual sensory input that could not have been predicted (at this level) and an error signal that results from wrong predictions.
This simple observation calls for an additional element: either at Level-0 or Level-1, some filtering mechanism, on top of vanilla PP, is required. This filtering could be used to distinguish the ES from the residual sensory signal. Alternatively, the filtering may happen upstream, ensuring each level receives only the kind of signal that fits its particular role. Moreover, what is filtered out at one level needs to be directed to some different PP unit.

Thus, we end with:

  1. At least one additional mechanism, dedicated to filtering. More importantly, different filters will apply at different levels and PP units. Thus, at each/most/many levels, different filters are likely to entail very different neural architectures.
  2. Re-routing of signals so as to apply different filters to the same raw signal, leading to parallel PP pathways as well as instances of level skipping, where a particular filtered signal jumps one or more levels to serve as input at the appropriate layer.
  3. If I’m getting this right, it is then possible that an additional system is required to reconstruct the bigger picture, once different features have been segmented and predicted (or not).

In other words, I don’t think that the purely perceptual part of PP, as proposed/synthesised by Clark, is complete – some additional element is missing.
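To make the worry concrete, here is a minimal toy sketch in Python (entirely my own construction, not anything specified by Clark or the PP literature): a "vanilla" unit whose upward signal is just input minus prediction. Even when the edge prediction is perfect, the upward signal is dominated by residual input that Level-1 never attempted to predict, and nothing in the signal itself marks it as "residual" rather than "error".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensory signal made of two kinds of features.
edges = rng.normal(size=100)     # the feature Level-1 specialises in
texture = rng.normal(size=100)   # everything Level-1 was never meant to predict
sensory_input = edges + texture  # what Level-0 actually receives

# Level-1 issues a (here, perfectly accurate) prediction about edges only.
prediction = edges.copy()

# Vanilla PP: the upward signal is simply input minus prediction.
upward_signal = sensory_input - prediction

# Even with a 100% accurate prediction, the upward signal is not zero:
# it consists entirely of the residual (unpredicted) texture component.
print(np.allclose(upward_signal, texture))  # True

# Looking only at `upward_signal`, Level-1 cannot tell genuine prediction
# error apart from residual input it was never supposed to predict --
# hence the apparent need for some extra filtering/partitioning mechanism.
```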

2. Attention.

The account of attention proposed by PP is remarkably elegant, extremely powerful, and strikingly comprehensive. Attention is very hard to pinpoint, due to its duplicitous (or multifaceted) nature. To start with, attentional mechanisms can be triggered in both bottom-up (a sudden noise) and top-down (trying to decipher the handwriting of a medical practitioner) fashions. At first sight, precision weighting (PW) can account for this dichotomous triggering, which is one reason why PP looks so promising. However, I do not think that PW alone can account for all the observable phenomena – from my own (perhaps idiosyncratic) point of view, both the bottom-up and top-down stories seem incomplete, at best.

Bottom-up: a sudden and unpredicted loud bang is certainly able to generate bottom-up attention. Problem is: attention to what? Not the noise itself: being sudden and brief, by the time the attention mechanism has fully activated, the noise has probably already disappeared. What actually happens is that our attention is diverted towards the rough spatial location where we have estimated that the noise originated. This makes adaptive sense, but, as far as I can tell, nothing in the proposed PP mechanisms is able to explain how. Yes, a big ES was received (we failed to predict the sudden noise), but somehow, attention then becomes focused on multiple channels, directed to specific features of those channels, and perhaps won't even involve the channel where the original strong ES was generated.
Top-down: similarly, if I'm trying to decipher my doctor's hasty scribbles, PP suggests that I do so by enhancing some error signal (committing more time and effort to minimising it). Once again, the obvious question is: how does my brain decide which error signal should be amplified? In this particular case, it will involve a relatively high (conceptual) layer in the visual pathway, but most definitely not the whole visual processing hierarchy. For example, detecting the exact hue of the scribbles isn't very relevant to the task (it matters little whether they were made with a blue or a black pen), and motion detectors (predictors) won't be very useful in this particular case. It follows that attention needs to be able to focus not only on specific areas of the receptive field (in this case, specific parts of the visual "scene") but also on particular (context-dependent) layers in the PP pathway. Simply enhancing the strength of the error signal along the visual pathway (which is what is possible according to the vanilla interpretation of PP and PW) is very obviously not enough. We need to be able to enhance a specific error signal, identified in a coordinate space which is at least three-dimensional. The architecture of the visual pathway may allow enhancement of only a particular area of the visual field; this would account for two dimensions, leaving the third (what kind of feature attention is supposed to focus on) unaccounted for.
Once again, it seems to me that some fundamental ingredient is missing. It is possible that this ingredient is identical or closely related to the one I’ve identified in the previous section.
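A tiny numerical sketch of the worry (again my own toy construction, with made-up feature names; not a claim about how actual PP models implement PW): if precision weighting amounts to a gain map over the visual field, it can select where to attend, but it boosts every feature channel at that location equally, leaving the "which feature" dimension unaddressed.

```python
import numpy as np

# Toy error signal over a 4x4 "visual field", with 3 feature channels
# per location (say: shape, hue, motion -- purely illustrative labels).
errors = np.random.default_rng(1).normal(size=(4, 4, 3))

# Precision weighting as a purely spatial gain map: it can say *where*
# errors should be amplified...
spatial_gain = np.ones((4, 4))
spatial_gain[1, 2] = 5.0                      # attend to one location
weighted = errors * spatial_gain[:, :, None]  # broadcast over feature channels

# ...but it amplifies *all* feature channels at that location by the same
# amount: nothing here lets attention pick out "shape, not hue, not motion".
print(weighted[1, 2] / errors[1, 2])  # [5. 5. 5.] -- a uniform boost
```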

3. Heterogeneity.

PP describes a general processing style which is supposedly able to account for most of our mental abilities: from perception (of internal and external states), to attention, sense-making, planning and action control. Very little does not fit in. This is one reason why PP is so attractive, but it comes at a cost. If the same processing style is deployed across more or less all brain functions, the variability of neural structures within the brain becomes an anomaly that requires a PP-specific explanation. As far as I can tell, this explanation is currently only sketched. If my first worry (above) is justified, I suspect that whatever counts as a solution to the filtering and partitioning problem might end up accounting for various specialised structures that are particularly optimised for filtering and/or predicting specific features of incoming signals.

4. Development.

Once again, the versatility of the PP story generates its own new questions. If PP circuits (whatever they will turn out to be) are able to handle most brain functions, it follows that to enhance the abilities of a given brain, what is needed is simply more of the same. This is not what is observed during the development of brains (synaptic pruning, anyone?). There is a lot that needs to be explained in terms of how different structures develop in a PP-specific way. Once this is done, one also needs to explain related phenomena such as the repurposing of specialised areas (reading, for example) and proceed to figuring out how specialised areas change in size following training, exercise or disuse. Perhaps this is also where synaesthesia enters the picture.

5. Learning.

What specific systems allow the PP circuitry to adjust predictions in order to learn how to quash error signals? It seems to me that PP should be enriched with one or more hypotheses tackling how, given ErrorA (corresponding to PredictionA), the layer that received it will produce a new PredictionA1 which should better reduce the ES. This is an extremely complicated business. To start with, PredictionA and ErrorA might both contain clues on how PredictionA should be modified, but other clues could be present in virtually any other processing layer. Presumably, the brain has some system for fishing out the relevant information, but nothing in PP helped me get a glimpse of what such a mechanism might be. Timing issues also get in the way: by the time PredictionA1 is issued, new sensory input will have been generated, making it even harder to produce the right prediction for the new (still to-be-assessed) situation. If a prediction is badly wrong, how does the brain get back on track, instead of getting progressively more wrong?
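As a purely illustrative sketch of how deep the problem runs (my own toy update rule, not a mechanism proposed by PP theorists): even the simplest conceivable correction, "nudge PredictionA by a fraction of ErrorA", presupposes both that ErrorA alone carries the needed information and that the world stands still while the new prediction is computed.

```python
# Toy update rule (my own assumption, not anything specified by PP):
# produce PredictionA1 by nudging PredictionA towards reducing ErrorA.
def update_prediction(prediction_a, error_a, learning_rate=0.1):
    return prediction_a + learning_rate * error_a

prediction_a = 0.0
world_state = 1.0
for step in range(5):
    error_a = world_state - prediction_a                      # ErrorA for PredictionA
    prediction_a = update_prediction(prediction_a, error_a)   # PredictionA1
    world_state += 0.5                                        # but the world has moved on
    print(step, round(prediction_a, 3), round(world_state - prediction_a, 3))
# The residual error grows at every step: by the time PredictionA1 arrives,
# it targets a situation that no longer exists.
```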

6. Evolution.

Some elements of the PP story are well placed within a bigger evolutionary outlook. Perhaps too well! The perspective offered by Friston via the deployment of Markov Blankets is in fact able to extend the PP lens all the way back to unicellular organisms… Thus, more new questions emerge! If PP-like mechanisms are possible (or even necessary!) within single cells, what different function were neurons selected for? When/where exactly in the phylogenetic tree do neurons start to organise around error minimisation? Does that coincide with the point where error signals get relegated to signals between neurons? Speaking of which: why should error signals be transmitted exclusively between neurons? Are we sure they don’t involve other types of cells?
If PP circuitry is so versatile, what accounts for the vast difference in abilities across different species (and even different individuals  – see also points 1, 3, 4 and 5 above)? Looking at humans: what explains our special abilities (for example, the unprecedented specialisations which allow language and mental time travel)? If PP accounts for it, does it mean that PP circuitry is not present in organisms that show no trace of such abilities? If it does not, what additional ingredient enables some faculties especially in humans?

7. Plants, sleep and dreaming.

Sleep and perhaps dreaming seem to be conserved features, present (in variable forms) across most animals, probably even insects. PP purports to describe an architecture that solves the basic "persistence" problem of all living forms (via the Free Energy Principle – FEP, see conclusion): is PP present in some form in plants as well? If not, why not? Assuming we can leave vegetables aside, can we expect PP to be present across most animals? If we can, should we conclude that sleep and dreaming correlate with PP? In that case, what is the relation supposed to be? Does PP itself produce the requirement for sleeping and dreaming? How? If not, why not?
[In fairness, Clark does address some of the questions above in his book. As far as I’m concerned, I would guess that learning, sleep and dreaming will eventually be accounted for by one single “solution”, see my ancient thoughts, to learn why.]

8. Pain and pleasure.

If we are describing the signal exchanges to/from and within brains, I would expect any such account to somehow accommodate pain and pleasure signals. In PP, perhaps pain can be conceptualised as an error signal that refuses to be predicted away (thus being well placed to attract our conscious attention). This idea seems promising to me (it would call for some structural hyper-prior, forcing pain "errors" to never be fully predicted). If so, how does PP allow for such a "systematic" inability to predict something? Especially with chronic pain, such predictions should be quite easy to produce! Even if this idea is on track, how do we explain pleasure? It can't simply be the opposite: a signal that always gets predicted away. That's because we are all quite good at giving attention to pleasurable sensations… In other words, I can't see how PP can directly account for the special qualities of painful and pleasant stimuli, or even start explaining what distinguishes one from the other.

9. Consciousness.

This issue follows from the point above. It doesn’t seem that PP itself is able to account for the phenomenal element of consciousness (the “what is it like” aspect, or phenomenal consciousness – PC). Once a brain has produced a good enough global prediction of the last train of inputs, what exactly controls why we can consciously perceive some features and not others? How does attention influence what we are conscious of? What constitutes the undesirability of painful sensations? What makes pleasure desirable? Are all PP-based systems conscious? If not, what additional system produces PC? What accounts for the loss of PC during dreamless sleep?
In short, PP appears to remain solidly outside the scope of Chalmers' Hard Problem of consciousness. This is perhaps the biggest problem that I see. If PP explains perception (but does it? If PP does not include an explanation of why we perceive some things and not others, does it account for perception at all?), attention, mental time travel, planning and action, but in no way accounts for PC, what function does PC fulfil? If accepting PP entails epiphenomenalism, then, as far as I'm concerned, PP must be nonsensical, pretty much like epiphenomenalism itself.

Conclusion.

The list above is incomplete. It took me a very long time to write this post, partly because I had to find a way to organise my thoughts and establish some reasonable criteria to decide what could be left out. The biggest omission concerns the Free Energy Principle: criticising FEP requires a full book and cannot be done in a few lines. Secondarily, such criticism might be aimed at too broad a target, and thus fail to be constructive. [For the gluttons: I've covered the brightest side of FEP here, while some hints of criticism are in this discussion.]

Overall, it seems pretty obvious that PP, as a theoretical framework (and/or, depending on your preferences, a scientific paradigm or a scientific programme) is far from complete. This is expected and entirely justified. As anyone with some familiarity with the history of science should know, new ideas require time to reach maturity: they necessarily start off incomplete, sometimes directly contradicted by pre-existing evidence, and not necessarily self-consistent either. That's normal. Thus, this post is not intended to curb our enthusiasm; it is intended to focus it in (hopefully) useful ways. My quasi-arbitrary critique above might help focus our attention in interesting directions. Or at least, it might help me: I will appreciate all feedback, and in particular reading suggestions in response to any of the points raised here. Thank you!


Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press. DOI: 10.1093/acprof:oso/9780190217013.003.0011

Posted in Neuroscience, Philosophy

Machine Learning, the usual Bat and deflationary epistemology

What does it feel like to be a mechanical Batman?
Original image by Andrew Martin [CC0 1.0].

This is a quick, semi-serious follow-up to my first Twitter poll. In a rare moment of impulsivity, I recently posted a deliberately awkward question on Twitter. A few respondents did notice that something was amiss, and indeed, an explanation is due, hence this post. The subject demands a lengthier treatment, which is in my plans; for today, I'm hoping that what follows will not sound entirely ungrounded.

I rarely act impulsively, but maybe I should do it more often? Predictably, my poll did not collect many votes; however, I could not have hoped for better results: adding my own vote, we get a perfect 50-50 split. There appears to be no agreement on the matter, so perhaps the question was worth asking…

The Question itself

Here is the original tweet:

Why did I pose the question?

In extreme synthesis: I guessed the reactions would be thought-provoking, for me at least.

I wasn’t wrong. I was also hoping not to find too much agreement, as a split opinion in this case would give me a chance to propose some additional lucubrations.

My interest can be summarised as follows:

  1. To my eyes, the question can only make proper sense if one is aware of two distinct debates. In philosophy of mind, most of the discussions revolve around foundational questions such as: how does phenomenal experience get generated? Is it reducible to physical mechanisms?
    On the other hand, as real-life applications of Artificial Intelligence are becoming quasi-ubiquitous, other questions are becoming important and even urgent: there is a pressing demand to make machine-learning algorithms auditable, accountable and/or generally "explainable". Thus, I was curious to see what my Twitter bubble would make of my mix'n'match provocation. I think I didn't include the "huh?" option in order to force people to try harder and see if they could figure out what the connection might be. In hindsight, it wasn't a bad choice, perhaps.
  2. I was also being a bit mischievous, because by forcing people to double-check their reaction (by not allowing them to answer "huh?") I sort of forced some to make an incorrect choice. The only way I can see to make sense of the question is by recognising (at least at the level of intuition) that there is a connection. If someone saw no connection at all, then the "correct" answer would indeed have been "huh? the question is malformed, I can't figure out why it's worth asking". Thus, knowing that within my Twitter reach there are plenty of very clever people, I was semi-consciously curious to see if anyone would call me out. At least two did, to my great satisfaction! (With my apologies.)
  3. Both debates (point 1 above) are, IMVHO, informed by mistakes. I wanted to explore the intuition that these mistakes share a common root. Which then immediately becomes the reason why my answer is "No, it isn't a coincidence".

This leads me to the second part of this brief response: it's time to spill my beans and write down what I think.

My answer: no, it isn’t a coincidence.

My position has to do with what it means to know/understand something and how my own deflationary epistemology allows me to make sense of a good number of problems. I'm pointing at some sort of illusionism about knowledge (as in: "knowledge isn't what we think it is"). I'm not planning to fully unpack the above here, but I will use my question to explain a little.
[Note. I will do so from one angle only: a full exploration requires showing how the same manoeuvre works along many different paths and leads to more or less the same conclusions.]

The route I'll pick today is about the mistakes I mentioned above. In AI (or better: Machine Learning – ML), (informed) people are both rightly and mistakenly(!!!) asking to work towards producing ML systems that can be "explained". Specifically, because of the enormous importance that ML-based decision-making is acquiring in our society, (informed) people want the ML algorithms to be auditable. When a given machine makes a non-trivial choice, we want to be able to know "why did this system pick A and not B?". The reason to demand such "transparent" ML systems is obvious, important and entirely correct: after all, we *need* to be able to detect and correct mistakes.

However, I fear that it is impossible to fully satisfy this demand. This has to do with reduction and with our epistemological limits. Starting with the latter: if the question is "why did this system pick A and not B?", the set of what could count as acceptable answers does not, by definition, contain the correct answers. ML systems are built to deal with an otherwise unmanageably high number of variables, each having the potential to contribute to the output, and usually the final result is indeed determined by small contributions from a very high number of input variables. Thus, saying "the machine picked A because…" requires listing the contributions of many factors, and explaining how they influenced the training phase as well as their relative weight in the current choice. Unfortunately, no human can make sense of such an answer! What we'd like instead are answers like "…because the training set was biased towards A" or "…because most training data points to A". Trouble is, both kinds of answers are oversimplifications, to the point of being wrong and pointless.
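To illustrate the scale of the problem, here is a deliberately over-simple sketch (my own toy construction, far tamer than any real ML system): even a bare linear scorer over 10,000 inputs makes its choice through thousands of tiny signed contributions, and the honest answer to "why A and not B?" is that whole list.

```python
import numpy as np

rng = np.random.default_rng(2)

# A bare linear scorer over 10,000 input variables -- far simpler than any
# real ML system, yet already beyond human-sized explanations.
n_features = 10_000
weights = rng.normal(scale=0.01, size=n_features)  # many tiny learned weights
x = rng.normal(size=n_features)                    # one input case

score = weights @ x
decision = "A" if score > 0 else "B"

# The honest answer to "why did it pick A and not B?" is this list of
# 10,000 signed contributions, none of which dominates on its own.
contributions = weights * x
print(decision, contributions[:3], "... (9,997 more terms)")
print("largest single share of the total evidence:",
      round(np.abs(contributions).max() / np.abs(contributions).sum(), 4))
```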

To put it another way: when we are applying ML to a domain that justifies the use of ML, the complexity of the domain in question guarantees that the easiest way for us to learn what the ML system will output is to let the system compute the response. If we had an alternative, "better" (simpler) way of doing it, we would use that simpler system directly and leave intractable ML systems alone, right?

Looking at the same scenario in terms of reduction, what we find is that ML is used precisely when reducing a problem to a handful of tractable variables simply doesn’t work (or we don’t know how to make it work). Thus, the interesting/useful results provided by ML are precisely those we are currently unable to reduce to simpler, more explainable, algorithms. QED: we can’t know why the machine picked “A” precisely because we asked the machine in the first place!

In terms of deflationary epistemology: we can only fully "understand" simple stuff. Most of us (including me) can hold fewer than ten variables in working memory; working out how they interact without the aid of external mind-extensions (pen and paper, calculator, spreadsheet, ML systems, etc.) is simply not possible. In other words, we can't understand ML-driven choices because we ask ML to operate on domains that we can't reduce to stuff we can consciously process.

This leads me to our bat – or better, a bit closer to our (mis)understanding of phenomenal consciousness. Image recognition is a typical domain where only ML systems can match our own abilities (we could not design any "simpler" way of doing it). [Coincidence? No!] Of course humans are, according to their own standards, quite good at image recognition. However, not a single one of us has a clear (and demonstrable) idea of how we do it. We do it all the time, but we do it unconsciously. Yes, we "recognise" red among other colours, which leads us to say that there is a specific "what it is like" to perceive redness. But how we recognise redness (or anything at all) is entirely obscure to us. Introspection can tell us exactly nothing about the mechanism that allows us to discern colours. Neuroscience is starting to produce some incomplete answers, but it is merely scratching the surface.
[Reminder: colour perception is spectacularly complex, do I need to mention “the dress“?]

Thus, we must conclude that humans (and probably mammals, if not most animal forms), just like ML systems, are able to make discriminations that rely on contributions from a high number of variables. I hope we can now agree that humans are unable to consciously explain exactly how equivalent tasks are performed, whether by machines or by biological organisms. This inability is a function of the complexity of the task, not of which system performs it.

[Note: I am not talking of what counts as “scientific explanations”, I am referring here to what we can grasp and feel without external aids.]

In the case of biological image recognition, we don't know how the mechanisms in question work, but we do know that even if we did (in scientific terms), we would not be able to produce explanations that are simple enough to be understood by most humans (not without laboriously studying for many years); in the case of ML, we know everything about the mechanisms, but we still can't find the answers we're seeking. This is because we want "simple" answers, simple enough to be understood, at least. The simplicity of the desired answers is the common factor between the two "unknowns" mentioned in my poll.

Thus, we reach my conclusion. We can't (consciously) know how it feels to be a bat: even if we knew the mechanism (as per ML), we would not have the capacity to reason all the way up to forming the correct idea (such an idea, in order to be correct, includes too many variables, so we wouldn't be able to hold it in our limited conscious minds).
Thus, the answer to my question is (from my own perspective!) a definitive "No, not a coincidence". The common factor is how limited our conscious understanding can be.

Conclusion

My own hunch may well be wrong; however, the fact that the poll results are split (based on a tiny sample size!) is hopefully an indication that the question is not as absurd as it may appear at first sight. Please do feel free to add your own thoughts in the comments (or via Twitter, if you prefer). Thanks for reading, and thanks to all the poll responders!

Posted in Consciousness, Philosophy, Stupidity