Sources of error: the illusory illusions of reductionism

While my discussions elsewhere are settling down, it’s time to finally tackle one basic issue that underlies all reductionist endeavours; I also need to write this down to justify many of my claims on matters of brains and minds. In these pages I’ve frequently hinted at the notion that any given epistemological approach necessarily comes with its own idiosyncratic problems; or, if you prefer, with some inherent blind spots. In this post I’ll discuss the obvious trouble that comes with reductionism*. It’s a bit sad that this seems necessary, but after lengthy discussions I’ve concluded that spelling the following out in clear language might be useful: the implications I will describe seem to be missed or overlooked by far too many people.
Imagine a very young me: I used to have a blue alarm clock, very much like the one shown here. One day I got hold of a set of tiny screwdrivers and carefully dismantled it to find out how it worked. After several hours, having reconstructed and deconstructed the whole thing many times, I had a very good understanding of how it worked. Satisfied with my new knowledge, I went to see dad, waving the clock and proudly announcing:

Dad, I’ve discovered that this isn’t an alarm clock: in reality it is just a bunch of smaller objects arranged in such a way that they look like a single object, but it really isn’t one.

I was an inquisitive and philosophically inclined child! This little story is invented, but it does sound like me at a young age. A sensible reaction from dad would have been to explain to me that this is how most “things” work: they are made of smaller parts, arranged in such a way that they produce coherent behaviours. This applies to man-made mechanisms, but also to living organisms: in the case of biology the smaller parts are much, much smaller.

The hypothetical episode accounts for the “obvious” quality of my whole point: we all know the imaginary child-me reached a naïve and unhelpful conclusion. However, we do use reductionist approaches to study much more complex natural phenomena, and when we do, we frequently forget about the inevitable implications, repeating the same error over and over again.

To proceed in order, the first thing to observe is that when dismantling the clock, imaginary-me was actually applying standard reductionist methodology. The aim was to “understand how the clock works”, and to do this, I dismantled it, carefully studied how its internal parts interact, and was therefore able to more or less grasp how the overall behaviour is produced by the interactions of its smaller parts. Was I wrong in saying that it isn’t an alarm clock? Clearly, but the underlying assumption, “it is not a single entity, it is actually made of many single objects”, was correct. If I had concluded “it isn’t a single object” it would have been equally naïve and unhelpful, but somewhat less wrong. We can all see why: as long as it works, in everyday life it makes sense to treat and consider the alarm clock as a single object, even if we are perfectly aware that we can dismantle it into smaller components.

However, this whole thought experiment uncovers a general pattern, which we do tend to forget in sufficiently complex domains: reductionism relies on the assumption that we can dismantle our object of study into smaller parts. Thus, it requires us to treat “our object of study” as something that only appears to be unitary, but can in fact be subdivided. When it comes to alarm clocks, nobody is surprised, but when it comes to the study of minds, almost everyone is ready to jump to unwarranted conclusions.

Before diving into philosophy of mind, there is one more thing that our alarm clock example uncovers. Another conclusion I could have reached (if I was even more philosophically inclined!) is:

Dad, I’ve discovered that the trilling sound this clock makes is an emergent quality. None of the components of the clock can make that sound on their own, but when you piece them together in exactly this way, the result is able to make this remarkable sound. It’s fascinating!

The above would have been entirely correct, but probably none of us is likely to be equally fascinated by such emergence. That’s just how things work, right? Indeed, but again, when it comes to philosophy of mind (and many other equally challenging subjects) people regularly spend plenty of effort trying to uncover the mysterious laws of emergence, reflecting on what emergence really is, or other related, ultimately self-explanatory riddles. I’ve got news (and I’m sad to realise it is news to some): emergence is an artefact of reductionism. Whenever you try to understand a phenomenon by identifying the smaller parts that produce it, and by describing how such smaller parts interact, you are assuming that the actual phenomenon is an emergent property of the underlying system. Emergence is implicit in the hypotheses that make reductionism possible (and tremendously useful). There is nothing mysterious about emergence: it is an entirely (and inadvertently) invented phenomenon, the result of the most useful strategy we have for understanding how reality unfolds. Emergence is epistemic: it’s the flip side of the explanatory power of reductionism. Just like any other epistemic approach, reductionism has its own blind spots, and emergence is one of them.

That’s not to say that reductionism is flawed: I’m a fanatic defender of reductionism, but my fanaticism requires me to spend some time and effort understanding its limits and blind spots, so as to compensate for my own biases. So here is one: you can’t explain what emergence really is in reductionist terms, because reductionism implies emergence. Reductionism works because regular interactions between smaller parts can frequently generate coherent behaviours on a larger scale. That’s it. Can we please stop wasting rivers of ink/bytes on such a non-problem?

No, we clearly can’t. Why can’t we? I don’t really know, but we still can’t.

To show that the errors/limitations I mention above are really worth discussing and clarifying, I will use the example of philosophy of mind (because I know something about it); other people might want to try the same exercise with quantum or theoretical physics, if they are so inclined.

If Free Will is an illusion, also my alarm clock is illusory.

If you have the slightest interest in minds you have almost certainly heard of the Libet experiments on free will: according to the prevalent narrative, the experiments show that free will does not exist, that it’s an illusion. If the straightforward interpretation of the results is correct, we must conclude that we become conscious of our own “free” decisions only after having made them. Therefore the idea that we consciously make decisions has to be wrong.

The Libet experiment is clearly reductionist: it tries to isolate the different parts of the decision-making process, and see how they interact (in a very coarse fashion!). By proceeding in this way, it assumes that there are different parts in the process, and that they interact. If they interact, it goes without saying that some things will happen before others. Thus, what the results tell us is that the decision-making process does indeed seem to be made of smaller, distinguishable mechanisms. Or: it tells us that reduction can indeed help us understand how we make decisions. We can thus infer that one day it might be possible to explain human decision-making in mechanistic terms. What would have been surprising is very different: if it turned out that we became aware of having decided something before any associated activity happened in the brain, that would have left open the possibility that something non-material was generating the decision, leaving various forms of dualism somewhat viable. So where does the “Libet demonstrates that free will is an illusion” claim come from? There are two interpretations of this claim, naturally: the weak interpretation claims that “free will and/or our sense of agency aren’t what they seem on first inspection”. To us it naturally seems that we consciously make decisions, but if we admit that some underlying brain mechanisms must make our “conscious decisions” possible, and therefore expect reductionism to be useful, we are already accepting that:

a. these underlying mechanisms are not consciously perceived (otherwise we would have tried to dissect them, not our entire sense of agency).
b. if these mechanisms do exist, they won’t all happen at the same time, and there would need to be a temporal distance between settling for one choice and perceiving the result of the choice.

Negating b. would require adding some magical extra-temporal element, which is not a typical ingredient of scientific and reductionist efforts. Thus, I don’t have a problem with this interpretation, except to note that Libet’s (and many others’) experiments merely confirm what we already assumed (points a. and b.): they confirm that our agency depends on brain mechanisms, which is what we wanted to verify from the very start.

The second interpretation is that “free will is an illusion, in the sense that it doesn’t exist”. This is the claim that gets on my nerves…

The differences with the clock example help us understand why we tend to jump to the wrong conclusion in one case but not the other.
In the clock case, we perceive it as a unitary object, but we know from experience that mechanical objects are made of smaller moving parts. Each one of us has seen, touched and experienced the presence of such (or analogous) smaller parts, and witnessed how mechanisms interact to produce coherent behaviours. On the other hand, when it comes to our own decision-making processes, the only experience we have of them comes from introspection: from that point of view, they (almost) always look unitary. However we might try, we never get the chance to observe the inner workings of our (or someone else’s) decision-making mechanisms. Thus, we only ever take them at face value, and get genuinely surprised if a scientist uncovers some tiny detail of their inner workings. The sad fact is that we (assuming my readers are naturalistically inclined and don’t believe in supernatural souls or similar fantasies) should be surprised if scientists were unable to discover any of these inner workings, not the other way round.
Once again: if we study a system in reductionist terms, we start with the assumption that it can be subdivided into smaller parts. Finding out that this assumption does hold grants us exactly the same right to declare that “free will is an illusion” as it does to declare that “the alarm clock doesn’t exist in itself, it’s just a bunch of carefully assembled smaller objects”. Both conclusions are somewhat correct (in the most charitable reading), but the only thing they tell us is that there is hope of finding interesting things by applying reductionism to the study of clocks and free will. Few people/scientists are surprised by the latter conclusion, but for some reason, most don’t see why it is equivalent to the “illusion” statement (in the weak form). For the clock, we would all react as my dad would have, and say something like “don’t be silly, of course it’s made of smaller parts, that’s why it works”. In the case of free will, because we are introspectively blind to its inner workings, discovering that it’s made of smaller, interacting parts tends to make us conclude “it’s an illusion” (as in: it doesn’t exist). This isn’t formally wrong, but I completely fail to understand what makes the conclusion useful. Do we make decisions? Yes. Do we have some perception associated with making decisions? Yes, it feels like exercising free will. That’s it.

Thanks to Libet and many after him, we now have many reasons to believe that our ability to make decisions is the result of some mechanisms inside our brains. Thus, our free will is an emergent quality of our brains, in the same way as the trilling sound is an emergent quality of the alarm clock. Neither the clock, nor the trill, nor the brain, nor the fact that we make decisions is non-existent. They might be different from what they look like at first sight. But they still exist.

The same kind of reasoning applies to “the self”**. If we study our brains/minds in reductionist terms, we start with the assumption that someday/somehow we’ll be able to isolate the inner workings that produce our sense of self. We should be surprised if this attempt proves impossible, not if we find ways to proceed. But above all, we should not conclude that “the self” is an illusion only because reductionism is working as expected. In the same way, being surprised that the self starts to look like an emergent quality of brains/minds is equally naïve: it’s why we thought reductionism would work in the first place!

This isn’t to say that free will, the self, or consciousness are not somewhat illusory. I’m the first to admit that they are not what they look like at first sight (that’s why philosophy of mind is so much fun!). However, if you use the successful results of reductionist science to declare that any of the above is an illusion as in “it doesn’t really exist”, you are actually demonstrating that you don’t understand how reductionism works, and nothing else.

Notes and References:

*As far as I know, and to my complete surprise, the only argument that comes close to what I discuss here also comes from philosophy of mind, but applies to a slightly different issue, namely the claim that consciousness (or qualia, or phenomenal experience) is epiphenomenal. Jaegwon Kim (1993) uses the example of pumps instead of an alarm clock, but the underlying logic seems the same to me. It shows that causality chains change according to how we (semi-arbitrarily) pick what counts as the “underlying mechanisms” (or the smaller objects that make up the clock); thus, showing that behaviour is caused by mental mechanisms does not imply that consciousness (or qualia, or phenomenal experience) can’t have any causal effect.

**Thereby hollowing out the standard incompatibilist view of free will: if our decisions are the result of some mechanism, they can’t be our decisions – they just happen, whether we want them or not. Our will is therefore irrelevant/epiphenomenal. Hello? Of course my decisions are the result of some mechanism; the small detail is that these mechanisms (along with lots of others) are precisely the best possible definition of what makes me me. I am the system that hosts these mechanisms; they are part of me. So yes, I am making my own decisions, because I am (also) made of the mechanisms that make them possible (or necessary), duh.

Kim, J. (1993). Supervenience and mind: Selected philosophical essays. Cambridge University Press.
Libet, B. (1993). Unconscious cerebral initiative and the role of conscious will in voluntary action. In Neurophysiology of consciousness (pp. 269-306). Birkhäuser Boston.
Posted in Neuroscience, Psychology, Science, Stupidity

ICYMI: my contributions elsewhere

Just a short note, in case someone missed the fact that I’ve been spilling my beans elsewhere. I consider myself extremely fortunate: for reasons that I can only speculate about, the owners of two of my favourite corners of the internet have both decided to publish a post from yours truly.

In chronological order: back in March I wrote a piece on how and why evolution can sometimes systematically favour the formation of misbeliefs. My post, entitled “Misbeliefs, evolution and games: a positive case“, was published on TheEGG (Theory, Evolution and Games Group), whose main curator and contributor is Artem Kaznatcheev.
Artem has been very kind to me over a number of years; you will find plenty of short debates between us, both here and on TheEGG. TheEGG covers a wide range of topics, from evolutionary game theory, to computer science and more general reflections on epistemology and science. If you like maths, you’ll find plenty; if you don’t, you’ll get hints at what you’re missing ;-) (and much, much more).

This week I’ve had the pleasure of seeing my own lucubrations (on why it is reasonable to study the mind in computational terms) published on Conscious Entities (CE); the title is “Sergio’s Computational Functionalism“. CE is a blog entirely dedicated to the study of consciousness, with special attention to the philosophy of mind angle, but it does cover all sides of the subject. Peter Hankins, the author and owner of CE, generally produces dry, witty and no-nonsense commentary on essays and/or journal articles that have caught his attention – CE is a recommended “reality check” for everyone interested in consciousness and philosophy of mind. The community of contributors is informed, friendly, argumentative and very diverse. Unlike your typical “bottom half of the internet” (h/t to Dave Gorman), it is always worth checking the comments on CE (as well as on TheEGG). After 11+ years of blogging, Peter has published an entire book, which I’ve reviewed here.
For my own contribution, Peter even went to the trouble of drawing the “feature image”: it depicts my own avatar with the background of (or thinking about) Babbage’s Difference Engine. The image alone delights me far too much. I hope to make the best of the discussion in the comments section.

My short note here wants to serve as a public “thank you” to both Artem and Peter. I don’t know what to say: sometimes generosity leaves me speechless.

Posted in Uncategorized

It’s politics, stupid!

It’s election time in the UK: time to write down a handy little heuristic rule that I apply to guide my voting habits.
The following will draw on Britain-centric examples, because that’s the milieu I’m immersed in, but the conclusions are general, and may be applied to any political election. The aim is to provide a first, heuristic, one-stop test to avoid voting against our own legitimate interests. In Cipolla’s terms, the following is a short guide on how to avoid casting a stupid vote.

[If you are offended by what follows, that’s a good thing. I’m writing this post especially for you.]

The starting point is our operational definition of a stupid action. If you decide to perform an action in order to obtain a given effect, your choice is considered stupid when it has the opposite effect: in the real world, it makes the desired effect less likely to happen. When voting, everyone can be expected to have a complex mix of “desired effects” in mind: things they wish the next government would make happen. These can be straightforwardly selfish (reduce my taxes, improve the nearby school, etc.) or more holistic (make our society fairer, happier), but it’s important to start by realising that the two classes of effects can’t be untangled in practice: a fairer society implies that everyone has access to decent schools, while lower taxes may (or may not: the causal links are notoriously difficult to pinpoint) have negative effects on many public services, and thus a negative overall impact on our own quality of life (even if we would have more money to spend). Thus, one can quickly conclude that figuring out which party will bring the best outcome (in selfish terms) is anything but straightforward. For this reason, a solid initial heuristic test has to be useful, at the very least to identify which parties will definitely not protect our own interests. (If you are in a hurry, feel free to read just the three examples below.)

To propose such a test, I need to reverse the perspective and look at the kind of problems that politicians are expected to solve. In an ideal world, they should all be working in the interest of the common good, and may propose different and incompatible strategies to achieve the same overall result (or may have radically different views of what the common good is!). The test I propose is intended to verify whether this is the case: it should help figure out whether a proposed policy is indeed intended to improve the overall well-being of the citizenry and/or of the particular group it’s supposed to protect. This is because we all agree that politicians lie, but if they lie about their intentions, and we allow their (undetected) lies to guide our choices, it’s almost guaranteed that our vote will be a stupid one.

In terms of general well-being, there is one macroscopic policy problem that all Western governments need to address. This problem first surfaced with the industrial revolution, and is now becoming unavoidably central: technological unemployment. The trend is clear: mechanical machines first, and computers later on, are increasingly able to perform tasks that used to be performed by people. The result is alarming: less qualified and less talented people will find it more and more difficult to get a stable and satisfying job. As a result, a dangerous and destabilising rift is widening in most/all Western societies. On one side there is a privileged elite that has access to the intellectual resources (knowledge, education, time to think) necessary to discern this and related patterns; on the other side are those that don’t. To use the unfortunate shorthand that is prominent in British culture, you could say that most of the middle and upper classes belong to the former pool, while working-class people generally fall into the latter (with as many exceptions as you like!).

A politician in a democratic country needs to attract enough consensus, and would thus be inclined to see the same rift in another light: in terms of public interest, it is clear that reducing the problem and moving as many people as possible into the privileged pool should be the first and all-encompassing priority. However, doing this is exceedingly difficult. On one hand, the technological drive pushes in exactly the opposite direction; the market and capitalist metaphors don’t help either: the correlations between the “wealth-producers” and “intellectual elites” on one side and “normal people” as “consumers” on the other are glaringly obvious. Furthermore, not everyone has an uncontrollable interest in intellectual matters (like I do): lots of people just want to lead a trouble-free life and mind their own business. Thus, a well-intentioned politician has a legitimate and very difficult problem to solve: the aims should be to reduce the rift (1), make people want to improve their knowledge and understanding (2), and keep those that remain on the less privileged side happy nevertheless (3). Aim (1) is problematic in itself: the more knowledge is accumulated on the fortunate side, the more important it becomes to make it spread to the other side; but obviously, the inherent forces towards inequality created by the capitalist system, the market and technological progress all conspire against this result. Furthermore, aims (2) and (3) are somewhat contradictory: the higher the living standards of the working class, the fewer reasons there are to try to move to the other pool.

This allows us to draw one interim conclusion: a well-intentioned politician, one that really wants to improve the overall well-being, needs to constantly negotiate with a macroscopic, unavoidable and very difficult problem.
Such a politician needs to find a reasonable equilibrium between contrasting forces that can’t be eliminated, and is condemned to an everlasting need to find decent compromises. I have no doubt that it is difficult, consuming and very stressful. But wait: just a small change of attitude can make the life of our trusted politician immensely easier. What happens if he or she suddenly decides to protect his/her own interests first, and to consider the general well-being only a second priority? Oh, suddenly there is no need to walk dangerous tightropes; many conflicts magically disappear and the ones that replace them are so much easier to handle. The key consideration is this: there is quite a big number of electors who lack the knowledge and resources to figure out what really is in their own best interest. If you are genuinely well-intentioned, this poses a huge problem; it’s something that requires constant attention just to keep it manageable. If, on the contrary, you don’t care too much for the well-being of your electors, suddenly the problem becomes a resource. You can exploit the lack of knowledge, and transform your electors into useful idiots.

Second interim conclusion: all politicians face a conflict of interest. On one side, they are expected to care for the general well-being; if they do, they condemn themselves to a life of stress and uncertainty. On the other, they can turn their source of trouble into a resource, make their own life much easier, and ruthlessly exploit those that are stupid enough not to notice. Thus, for the elector, the first priority should be assessing how likely your candidate is to fall for the dark side and happily try to deceive you. In what follows I will show that we, the electors, can easily spot policies and rhetorical strategies that are designed to fool us, and can therefore use this approach to identify with reasonable confidence who should not receive our vote.
Before doing so, however, I need to address the obvious nihilist objection. What if you are utterly disillusioned, like Russell Brand, and think that all politicians are self-interested and invariably follow only the second, easy and dishonest path? You may be entirely right, but the test remains valid: you can still apply it and find out whose allegiance to the dark side is taking a less harmful form. The key consideration is that the general population is always mixed: it will always contain some electors who explicitly or implicitly try to promote well-being through their vote. Thus, the dark side has a self-limiting element: if all politicians are deceiving scoundrels, suddenly the honest route stops being a problem and becomes a strategic advantage. In the political arena, the pressure to actually promote general well-being has to be positively correlated with the average dishonesty: in the end, this is probably the main reason why democratic systems are never optimal but remain very robust (antifragile?) and in general “good enough”.

There is an additional problem, though: deception can be sophisticated, and we are all biased and error-prone, so a dishonest politician only needs to hide his evil plans well enough, and we’ll still end up casting a stupid vote. It’s an arms race, and the politicians have the upper hand: if they are dishonest, their main preoccupation will be “how to fool the electors”, while our main preoccupations are, by definition, how to land a good job, raise our kids, etcetera. One may again fall into nihilism and conclude “there is no hope, we’ll be fooled anyway”. This may be the case, but it still doesn’t invalidate the test: for dishonest politicians, the ultimate deception is making their (self-serving) policies look so convincingly good for the general public that no one can spot the self-serving element. The perfect self-serving policy would be indistinguishable from the genuinely well-being-serving ones, and what is wrong with that? Thus: actively trying to spot deceptive policies and exploitative rhetoric automatically generates pressure towards better policies. This is why I’m writing this: if all electors got better at this game, general well-being would rise, benefiting me, the ones I love and everyone else.

That’s all well and good, but what if the rule I’ve been invoking is difficult, requires vast knowledge, and can be applied only by those who are already intellectually privileged? Unfortunately, I only have a weak answer. I do think that the test can be recursively applied at increasing levels of sophistication, but I also believe that the biggest gains are made by applying the strategy in straightforward ways which require only basic knowledge and little effort.

To show why, the easiest route is to discuss a few examples, and derive general, handy, and easy-to-apply “evaluation guidelines”. The general aim is to spot contradictions between declared aims and the resulting policy.

First example: UKIP in the European Parliament.

The main aim of UKIP is to make the UK leave the European Union. This stance implies that the EU is inherently a bad idea and that being members will always be harmful overall. In this context, the self-interest of a member of UKIP elected as an MEP is to do harm. Promoting good policies will work against the declared overall good: if you can make the EU useful, then UKIP has no reason to exist. Thus, for a voter, it is guaranteed that electing a UKIP member to the European Parliament will produce no good. If the elected MEP is honest (i.e. does believe that the EU is inherently bad), s/he should oppose all good policies and promote everything that demonstrates that the EU is a bad thing – in this way s/he will make it more likely that the UK will decide to leave the EU, and thus remain true to the declared aim (while decreasing well-being in the process!). If the same MEP is dishonest, and is actively deceiving us, the same behaviour would still be required: s/he would still want to shoot down any good policies, as they would reduce the likelihood of being re-elected. Thus: electing a UKIP candidate to the European Parliament is inherently stupid. The only situation where it may have a positive effect is when the candidate is also consistently stupid: in such circumstances, the only good candidate is one who actively tries to reduce our well-being and unwittingly promotes it.

It’s astonishing how frequently similar patterns appear in politics. I’ll leave it to the reader to check whether the same considerations apply to SNP candidates for the UK parliament.

Second example: privatisations as a money-saving measure.

This one is only a variation on the quasi-ubiquitous UKIP-EU contradiction. If a service is provided on a for-profit basis, some of the cost of the service will have to cover the profit. All things being equal, if exactly the same service is provided on a non-profit basis, it will cost less: it’s primary-school maths. Therefore, when a politician responsible for the delivery of a given service advocates privatising it, what s/he is actually saying is the following: the service delivery is sub-optimal because it lacks the motivation that comes with profit, and this degradation is necessarily more significant than the increase in price that has to happen upon privatisation. S/he is also implying that this situation is unavoidable and that s/he can’t significantly improve it: thus, the best choice is privatisation. Once again, the conflict should be glaringly obvious: once the aim is stated, it doesn’t matter whether the politician is honest; actually trying to degrade the public service has to be the (undeclared) aim, as anything else would be counter-productive (in self-serving terms in the dishonest case, but also honestly, if the politician actually believes his/her own claims). Don’t get me wrong, I do believe that in many cases private enterprise works best, but the fact remains: for a politician or public servant, committing oneself to the “private is better” position, especially a priori, is always against the public interest (it means you are paying them not to do their job).
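The “primary-school maths” above can be spelled out with a toy calculation. The function name and all figures here are mine and purely illustrative (not real data); the sketch only makes explicit the two claims in the argument: with identical operating costs the for-profit version must cost more, so privatisation can only save money if the efficiency gain outweighs the profit margin.

```python
# Toy comparison: the same service delivered with and without a profit margin.
# All names and figures are illustrative, not real data.

def price_to_taxpayer(operating_cost: float, profit_margin: float = 0.0) -> float:
    """Price charged for the service: operating cost plus any profit mark-up."""
    return operating_cost * (1.0 + profit_margin)

public_price = price_to_taxpayer(100.0)           # non-profit delivery
private_price = price_to_taxpayer(100.0, 0.10)    # same cost, 10% profit margin

# All else being equal, the for-profit version must cost more:
assert private_price > public_price

# Privatisation only saves money if the efficiency gain exceeds the margin,
# e.g. a 15% reduction in operating cost against a 10% margin:
assert price_to_taxpayer(85.0, 0.10) < public_price
```

The second assertion is exactly the implicit claim the privatising politician must be making: that the service is currently so degraded that the efficiency gain will more than pay for the profit.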

Third example: public finances should be managed like your household budget.

This case is marginally trickier, but still plainly obvious, if one cares to understand just one fact: states produce money, households exchange it. That’s it. There are good reasons for not wanting the state to produce too much money, and even better reasons for not wanting the state to directly control too large a slice of the overall economy, but the household analogy does not hold and does not even hint at the good reasons to limit the influence of the state. The state/household analogy is just flat-out non-existent; it’s like pretending that corn seeds have the same function for me (eating popcorn) and for a farmer (using them to produce more corn, selling most of the result and making a living). Therefore, whenever a politician uses this analogy, only two explanations are possible: s/he is actively trying to fool the electorate, and/or is irredeemably incompetent. In both cases, it won’t be in my interest to elect such a person.

Conclusions

The key aspects of the three examples I’ve chosen should be obvious:
1. Spotting these deceptive elements was easy. One just needs to look for them.
2. Despite this obviousness, these deceptions work. There is no doubt that the current UK election (and who knows how many others) will be decided by what proportion of the electorate will fail to spot them.

Therefore, the test I’m proposing is both easy to apply and useful. All one needs to do is look for glaringly obvious contradictions between the declared aims and the corresponding policies. If they don’t match, the political side that is proposing them is trying to turn you into a useful idiot. You should vote for someone else. In the real world, all parties will make some contradictory claims, so the rule becomes: avoid voting for those that show the biggest, most obvious contradictions.

Meta conclusions:
a. Cipolla is right. Idiocy doesn’t have upper bounds. If this weren’t the case, politicians wouldn’t be trying to exploit such easy-to-spot lies and self-defeating aims.
b. British media (and the BBC in particular) aren’t doing their job. They should be out there, actively looking for these contradictions, uncovering them and making them easy to grasp, thus fulfilling their job description. Quite clearly, they are playing some other self-serving game.
c. This analysis also makes it perfectly clear why the uneducated classes are drifting towards right-wing parties (against their straightforward interests). Many right-wing parties are becoming really good at the “fooling electorates” game. Berlusconi showed everyone how well it can work. It also seems that parties on the left are losing this race (but they certainly do participate, to some extent).

Posted in Politics, Stupidity

Partisan review: the Shadow of Consciousness

Peter Hankins recently published a book: The Shadow of Consciousness (A Little Less Wrong). In case you don’t know, Hankins has been blogging about consciousness for more than ten years: his Conscious Entities blog covers both scientific and philosophical approaches and provides regular, consistent, no-nonsense commentary on recent papers, essays and/or major theoretical efforts. Hankins’ approach is mostly philosophical, but in a refreshing manner: he obstinately refuses to follow any line of reasoning without questioning its key turning points, always in very pragmatic ways. By releasing a book, Hankins has (finally!) allowed his readers to learn about his own views, generating a good amount of excitement and debate: this is an important result in itself, because Conscious Entities is one of those rare places where scientists, philosophers and interested people (like me) can debate consciousness openly and amicably; it is a place where otherwise insular approaches meet and get a chance to cross-fertilise.

[Spoiler warning: this article may contain a couple of minor spoilers]

The following is a short Partisan Review of Hankins’ book. Why Partisan? There are several reasons: first of all, I’ve been studying the subject of Consciousness (both the scientific and philosophical streams) for some years, and I’ve been doing so because I have my own very strong opinions about consciousness. Furthermore, because of my interest, I got in touch with Peter, and I now consider him a friend. Finally, I’m lazy: if I decide to write about something, it is always because I have my own point to make. This post is no exception: reviewing Hankins’ book allows me to explore an issue that interests me. Thus, I can’t and do not wish to claim that what follows is an unbiased review. I am hoping that my own bias is what can make the review interesting.

Short Summary.

The Shadow of Consciousness is divided into three parts. The first section goes under the heading of “Selective History”. Hankins provides a brief historical overview, where the selective element is explicitly stated: he will build a coherent narrative that is instrumental in developing his own contribution. Namely, he will use the historical frame to propose a new outlook on, and a new classification of, what makes the study of consciousness particularly difficult. This in turn will allow Hankins to propose his own (partial) solutions.

The second part is about the (supposedly) Easy Problems. To me, this was the most interesting part of the book, probably because his discrimination between the Hard and Easy problems of consciousness doesn’t coincide with my own. In particular, he puts Meaning and (philosophical) Intentionality inside the class of Easy problems. I wouldn’t, but I also see that it worked very well for Hankins: his approach is philosophical, and he does not ignore science, not at all, but he uses scientific results to inform (and sometimes constrain) his reasoning. My own inclination is quite the opposite: I would start with scientifically defined problems, and search the (existing) philosophy to see if it can provide some promising way to tackle the problems that act as roadblocks to scientific progress. Anyway, the classification is once more instrumental: it allows Hankins to isolate a class of problems, lumping them together under the “Inexhaustibility” banner. This sort of problem “[deals] with lists of alternatives which are indefinitely long or even undefinable”. I find this approach very promising, but disagree with some of the treatment it gets. I will not discuss the disagreements in detail, this is not the place to do it; if you want the details, you should read the book and then dive into the discussion here.

The third part deals with the Hard Problem itself, and is therefore the most philosophical side of the book. I was expecting to strongly disagree with whatever I would find in there, but instead, to my unmitigated surprise, I found myself with nothing at all to complain about. Hankins does not even try to provide final answers; instead, he defines the problem in his own terms, and uses the resulting point of view to expose some of the more common errors. For example, he points out that:

Talk of the laws of nature, or the laws of physics, is metaphorical. When a string stretches, no calculation takes place inside it to determine how far. Stuff just exists, stuff just happens; the laws are our description of the observable regularities of stuff.

The quote above exemplifies the best and rarest quality of the book (and of Hankins’ writing in general). He is impermeable to nonsense (can I say bullshit?). His ability to spot, isolate and describe errors in reasoning makes him stand out from the crowd of consciousness scholars. More than that, he is also a very good writer: his points are always expressed in clear and succinct language. Envy is my own reaction.

Should you read this book?

Yes! Seriously. If you’ve managed to read this far, you will find The Shadow of Consciousness delightful. Hankins’ prose is witty and direct, his scholarship is eclectic and thorough, his thinking straight. Unlike most scholarly books, this one will never drag you into avoidable rabbit holes, nor make you endure long defensive arguments that serve the sole purpose of meeting academic standards. Reading this book is both informative (even if you are already reasonably well-informed) and enjoyable. Where else would you read expressions like “effing the effable”, grin, and simultaneously recognise that the underlying point is indeed valid and useful?

Having said this, I do also need to point out that The Shadow of Consciousness is not a textbook meant to inform students. If you are just approaching philosophy of mind, you will still need to read long and tedious monographs. As far as I can tell, no single book is (or will ever be) able to cover all the different approaches to Consciousness studies. However, the book does provide fresh insight, and has other merits as well, so I would strongly recommend it to anyone with even the slightest interest in philosophy of mind (especially the professionals). On the other hand, if you are a neuroscientist, and in particular a cognitive neuroscientist, you really should read this book. Why? Because it is an enjoyable read, and neuroscience needs to be informed by philosophical discourse. Collecting more data will not in itself provide generalisable answers: one needs to develop meaningful and reasonable theories that allow one to interpret scientific results in a well-grounded way. You should feel free to disagree with the outlook provided by The Shadow of Consciousness, but at the same time, this book scores two important meta-points:

  1. Philosophy doesn’t necessarily equate with self-serving sophistry. Some philosophers are well-grounded; their work can and should be useful.
  2. Studying Consciousness scientifically, without a decent understanding of the philosophical conundrums, can only produce “small”, very localised answers.

My own axe to grind

If the above wasn’t partisan enough, there is one more thing I need to say. This book was self-published: as far as I know it was not peer-reviewed, and therefore we should (according to the orthodox scholarly stance) look at it with the highest level of scepticism. Give me a break! We should read everything with the highest possible level of scepticism. We should also feel free to publicly criticise whatever published argument we find to be criticism-worthy. This is what is happening on Hankins’ blog right now, literally while I am writing this, and it is a good thing. On the other hand, I frown at how the book would have been transformed if Hankins had had the unhealthy ambition of seeking the badge of approval of proper academia (e.g. publishing a peer-reviewed book through a specialised academic publisher). I am sure that the result would have been deprived of all its freshness and would have been a boring and effortful book to read. This would be OK if the added pain were compensated by some additional value. Unfortunately I don’t see where this added value would come from: all the scenarios I can envisage simply require revisions that make one’s claims more defensible. But there is very little value in this. Philosophical arguments always require charitable reading: initially, you need to actively attempt to see the strengths underneath any philosophical argument. Only after recognising these strengths, and having translated them into your own terms, should you poke the result and try to find weaknesses. Failing to do so almost always results in attacking straw men, and, importantly, gives you no chance at all to see the straw in your own counter-arguments.

Thus, writing a philosophical argument in a defensive way does not (in general) add value: it makes the argument less direct and unnecessarily longer, and possibly makes it harder to identify the key turning points. If I can indulge in an IT metaphor, it is equivalent to “security through obscurity”, which is known to be a questionable approach.

In other words, I see reasons to believe that The Shadow of Consciousness is a good book (also) because it was not peer-reviewed.

The scientific community, and in particular the mind and brain sciences (see also here), is currently engaged in long-term and arduous discussions on the effectiveness of current practices. The debate also revolves around the supposedly indispensable added value of peer review. I’ve touched on this topic already here (comments) and here, but the discussion is happening everywhere, and very few people claim to have solutions to propose (with exceptions). In this context, Hankins’ book testifies in favour of solutions that rely on Post Publication Peer Review. It doesn’t allow us to conclude anything definitive, but I can say that it is enough to make me question my own approach. I have been (and currently am) trying to get my own views on Consciousness published in one peer-reviewed scientific journal or another. So far, this has led me to do some useful legwork in writing the first drafts. How about the actual peer review? The only effects I can identify are negative: I’ve become defensive, my prose is even less fluid than usual, and I can’t be completely sure that I believe every single little detail of my argument. As one would expect, some details have been inserted because of the suggestions of peer reviewers, and although I have tried hard to make sure I wasn’t adding anything that I would not want to defend, the result feels muddy to me: I can’t separate my own points from the “external contributions”, and consequently I can’t answer the all-important question “do I really wish to make this particular claim?”.

Apologies for the self-serving rant: in short, this is a call for help. If you have opinions about the dangers of peer review, the opportunities of self-publishing, or anything in between, I would be glad to hear your thoughts.

Bibliography:

Ioannidis, J. P. (2014). How to make more published research true. PLoS medicine, 11(10), e1001747.

Posted in Neuroscience, Philosophy, Review, Science

Offence!

Many people already strongly defend the view that no one has the right not to be offended*; I agree completely. In this post I’ll try to stretch the argument even further, and propose that offensive material can, in many circumstances, be actively useful. If I’m right, we should promote its production, not sanitise all content and make our intellectual environment safe and dull.

Jesus and Mo make my point twice.

Offence is closely related to, but perhaps distinct from, righteous indignation; if I had to suggest a distinction, I would propose that offence is about intangible damage: when the injury applies to something tangible, we may call our response indignation; when the injured object is conceptual (our beliefs, what we regard as sacred, our own sensibility, etcetera) we get offended. I am not saying that this definition is the ultimate, perfect way to identify the emotional state of being offended; however, it does make some sense to me, and importantly, it also suits my current aim (suspiciously) well. For the purpose of this post, I will use the definition above, with the explicit caveat that it may be a little too convenient. If you are unhappy with the definition, I would encourage you to propose some alternatives in the comments.

My regular readers will know that I am obsessed with our own cognitive limitations, and in particular with how they relate to the mistakes we make. For everyone else, here is a schematic recap of why.

Definition: a mistake is an action (or thought) that is enacted to produce a certain effect, but in fact it doesn’t produce the desired results, and sometimes hinders them.

The argument:
1. We all make mistakes, but of course, when we detect one we’ll try to fix it.
2. If we don’t detect a particular class of mistakes, we won’t know how to avoid making the same mistake again.
3. It follows that we may all be making some mistakes over and over again: we keep repeating the mistakes we don’t detect.
4. Our own cognitive abilities are limited, that’s why we make mistakes. Being cognitively limited, it’s guaranteed that we can systematically fail to detect certain types of mistakes.

Therefore:
C1. It’s likely that most of the mistakes we make go undetected (the argument above describes a closed feedback loop).
C2. When engaged in pure thought, the aim is to generate new ideas that better reflect reality, but since we are, by definition, exploring at the edges of our subjective knowledge, mistakes can easily go undetected.
C3. Philosophical explorations are riddled by undetected mistakes, and therefore it’s a good idea to obsessively try to detect them (and uncover the biases that generated them).
C4. Other people may think differently, and may thus be able to clearly see some of the mistakes we make. As a consequence, we should always welcome criticism, take it seriously, and provide it to others.
C5. From C4, we can also conclude that differences of opinions should be seen as a resource, not as a problem. This doesn’t mean that we shouldn’t try to resolve them: we should, because that’s how they become useful.

My last point already links to offence: an offensive proposition is likely to entail a radical difference of opinions, right? Therefore it might bear important information: the reason why you are offended might be directly linked to something you get systematically wrong.

Seems like a pretty bold leap, so let’s see if we can find some supporting evidence.
In fact, I’m writing this post because the supporting evidence is unfortunately haunting all of us: it is present whenever censorship (or worse) is justified by offence. The first case that convinced me I had to write this post was the premature closure of Exhibit B at London’s Barbican. The show was quite obviously designed to be controversial, shocking and even deliberately offensive. I was not planning to visit it, but that’s because of cowardice: I simply didn’t like the idea of feeling deeply disturbed, and I was sure I would have been. The exhibition turned racism and objectification into a show, producing disgust in the viewers and uneasiness in the performers. There is no doubt about it: the people who did see it were shocked, and as a result lively debate thrived. In my view, that’s a good thing: offence made people discuss the issues deliberately exposed by the show. Objectification, racism and the necessary connection between the two got discussed in the public arena, but issues about commercialisation and exploitation for financial gain surfaced as well. All of these connect with the history of Western “civilisations” and with the unspoken assumptions that underlie current Western societies. What a surprise: offence does work in this way, it exposes our biases, assumptions and blind-spots. The real problem is that it worked too well: because the show was offensive, it was shut down. Some people (including me) got offended by the censorship, and more debate was sparked, this time about the right to be protected from offensive material. This case, ugly as it is, fully supports my hunch: offence can carry important information; whenever you feel offended, there might be something to learn.

Unfortunately, this episode was shortly followed by the horror of Charlie Hebdo. The common element of offence taken is glaringly obvious, I believe. Comparing the two events feels entirely wrong (my cowardice lurks again: I feel uneasy even talking about it, let alone discussing a marginal aspect of an unjustified tragedy in order to make a point), but it does make my case much clearer: the murderers felt justified by the offence caused by the Charlie Hebdo authors. Why is that? Because ridiculing any deeply felt belief is inherently offensive: it injures something insubstantial, but hurts just the same.

In my own understanding, the mechanism goes like this: because of cognitive attraction, we all have deeply held beliefs that justify our own self-image, influence pretty much all of our (conscious and unconscious) choices, and are very hard to modify. Questioning such beliefs is threatening: it may lead to concluding that your whole life was a big mistake, and nobody likes that. But this is precisely the reason why offence is useful. My own beliefs may feel very solid and well substantiated to me, but they certainly are perfectible: there is always room for improvement. Challenging them is therefore a good thing: it may help me identify a source of my own errors.

This leads me to humour and one cognitive role I suspect it may have. The link between some of the things that make us laugh and their offensive potential is obvious to me, so I will take this notion for granted (feel free to question me in the comments). In our context, this link suggests a conjecture that I find fascinating in terms of cognitive science, and illuminating in terms of useful principles to guide our thoughts and actions. The idea is: amongst its other uses (see Sophie Scott), it seems to me that laughter (an amusing experience) also happens when our deeply felt beliefs are “safely challenged”. I’m using the scare quotes because the notion of a safe challenge is questionable, but if you start analysing comedy, you will find that it frequently relies on the build-up of some form of uneasy tension, which is then released in a non-threatening and unexpected direction, making us laugh about it. Is it a coincidence that we call the releasing element the punch line? The conjecture therefore is: this mechanism exists because laughter allows us to question our own assumptions in a way that is informative (the unexpected element) and pleasant, or at least, not always repulsive. If I’m right, it would be a cognitive mechanism that counteracts the undesirable (and inevitable) consequences of cognitive attraction. Laughter allows other people to occasionally put our own biases in question, without making us feel threatened and without offending every single time.

Thus, we have found another possible justification of my main point: offensive material is frequently able to teach us something important because it is offensive. If you are utterly unconvinced, perhaps the king of comedy-by-offence might help you see my point: Frankie Boyle has written an excellent piece on Offence and Free Speech. Did I say excellent? Indeed. People should learn it by heart, it is that good. It’s hard to pick the best soundbite, so I’ll quote three (my emphasis):

Offence is often simply an attempt to deny reality. Avant-garde film makers get attacked for saying things that are avant-garde; comedians get attacked for making jokes and footballers get attacked for being stupid.

and

I tried to do a routine about why I thought we should be worried about Britain’s “rape culture” on Live at The Apollo recently (and I do feel we’re reaching a crisis point where some people view rape as mere bad sexual etiquette, like patting your cock dry on a tea towel or paying in loose change) only to be told that while the sentiments of the routine were acceptable I just couldn’t say the word rape. If you’re any kind of writer these days the culture seems to be saying “Please challenge and provoke me, redefine how I see the world, while I scream my head off every time I hear something I don’t like.”

His conclusion is:

We don’t live in a shared reality, we each live in a reality of our own, and causing upset is often the price of trying to reach each other. It’s always easier to dismiss other people than to go through the awkward and time consuming process of understanding them. We have given taking offence a social status it doesn’t deserve: it’s not much more than a way of avoiding difficult conversations.

So there you have it, Boyle makes a living out of the art of mixing laughter and offence, and he sees it in the same way: challenging how we see the world is always potentially offensive.

Time to reach some conclusions (in “Writing my own user manual” style) and finish off with a pair of important caveats.

Conclusions:

At the society level: offensive material should be protected because it causes offence. Instead, what is happening around us is that society is capitulating to our stupidest biases by protecting people from offensive material. By doing so, it favours all sorts of unwelcome consequences: for starters, it makes us more easily offended, as we are less frequently challenged. This makes offence less informative. It also favours homogeneity of thought, a phenomenon that is dangerous in its own right.

At the personal level: whenever I feel offended, I should pay attention and try to find out why. Doing so might help me uncover my own biases.

Caveats:

  1. I am NOT proposing that offensive material is by definition always useful and should be unquestioningly accepted. I am saying that it can be useful if and because it points to something questionable. Offensive material should be questioned: that’s how it may become useful. The fact is that whoever produced something offensive is as human as I am, so all I’ve said works both ways: whatever we find offensive may be challenging deeply held beliefs that are also justified and useful. In such cases, questioning would expose the bias behind the offensive material, but of course it’s a tricky exercise: it’s always easy to fail to see what valid points your opposition is making.
  2. It’s also important to remember that we should not feed the Moloch. Highly polarised discussions display their own typical and rather fruitless dynamics. If we are to learn something from debates that were ignited by offence, it is imperative to avoid simplistic polarisations.

Notes and further reading:

*The list of people who oppose the “right to not be offended” naturally starts with Salman Rushdie. For more recent arguments, see: Samira Ahmed at Index on Censorship or Melanie McDonagh on the Spectator. The humanist and atheist crowd is naturally very vocal on the subject, see for example The Oxford Declaration on Freedom of Thought and Expression and this piece by Jerry Coyne (exemplifies the same dangers I discuss here). Finally, there are of course numerous counterarguments, I find most of them both predictable and weak; one worthy exception comes from Alan Haworth, it is also useful to reinforce my first caveat. [back]

Posted in Ethics, Politics, Stupidity

Complexity is in the eye of the beholder: that’s why it matters

1GZX Haemoglobin

Complexity is all around us, right? Our electronic gadgets are complex, as well as cars, laws and social relations. The thing between your ears, the human brain, is frequently described as ‘probably the most complex object on earth’, and there are billions of them. But hey, what is complexity? The straightforward dictionary definitions are not really useful, while Wikipedia provides an adequate starting point:

Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways.

One may conclude that simple systems have “few” interacting parts that interact in limited ways, while complex systems have many parts which interact in multiple and separate ways. Fine. Now name a simple system, please. Remember that at the most common level of abstraction atoms are made of electrons, protons and neutrons, each describable by their own complex properties, and each interacting with the other subatomic particles in more than one way. Hence, if you take a hard-core objective stance, nothing around us is simple and everything is complex. If that’s the case, why would we bother using a word/concept pair that can never truly apply to anything? Because, as always, it’s a useful concept. Saying that something is simple doesn’t really refer to what that thing really is, it means something much more subtle: a simple object, phenomenon or relation is something that you can expect to easily understand. A simple system is predictable: it is unlikely to behave in surprising ways. Therefore, the simple/complex dichotomy is one that refers to our own ability to comprehend and predict, it does not really apply to reality out there. Complexity is all in our heads, in the eye of the beholder.
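
To get a feel for why adding parts defeats our predictive abilities so quickly, here is a small illustrative sketch (the numbers are my own, not taken from any particular system): even counting only the possible pairwise interactions between parts, the count grows much faster than the number of parts themselves, and real systems also allow higher-order interactions, so this is an undercount.

```python
# Illustrative sketch: how fast the space of possible pairwise
# interactions grows with the number of parts. Each unordered pair
# of parts is one potential interaction: C(n, 2) = n * (n - 1) / 2.
from math import comb

for parts in (3, 10, 30, 100):
    pairs = comb(parts, 2)  # number of distinct unordered pairs
    print(f"{parts:>4} parts -> {pairs:>5} possible pairwise interactions")
```

A system with 100 parts already has nearly 5,000 pairwise channels through which surprises can arise, which is one way of cashing out the intuition that “simple” just means “few enough interactions for us to keep track of”.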

Once again, a hard-core objective individual might at this point be tempted to utter: “the distinction between complex and simple is arbitrary, therefore it is only an illusion”. Guess what? Nine times out of ten, when someone concludes that a common concept is an illusion, I feel my blood pressure rising. Concepts aren’t real in any direct sense; evaluating them in terms of how real they are is short-sighted: concepts have a degree of usefulness, which strictly depends on the applied context or domain of enquiry.

For example: the simple versus complex distinction is a very useful way to describe something that the listener knows little about. The zoology exam is simple/easy, biochemistry is complex/difficult. Knowing this, back then, I could plan to prepare for zoology in about three weeks, and for biochemistry in two months or more. Simples!
I didn’t pick this example at random: zoology is about animals, entire animals, and they function because of the biochemical reactions that happen in their bodies and cells. Hence, in a naïve view, zoology should be more complex than biochemistry. But hey, it wasn’t. Why? Levels of abstraction. To understand biochemical reactions, you need to describe and keep in mind plenty of stuff, reagents, energetic profiles, enzymes, affinity coefficients and more, but crucially, all of them play important roles, usually described by (long) mathematical equations, and no one so far has been able to make the subject simpler. On the other hand, to describe the features of different zoological phyla, one needs to remember stuff, plenty of single notions, but very little in terms of interactions. The subject that I was asked to master was static, not very dissimilar to a list that needs to be remembered. In other words, zoology (as it was presented to me, I’m not making a sweeping statement about zoology in general!) was concerned with a level of abstraction that made it easy to handle: it lacked the multiple interactions that, in our minds, make something complex.

Interim conclusion: everything is complex, but some subjects or domains of knowledge can be mastered easily, some others can’t. The difference must depend on our mental abilities: committing a list of notions to memory is hard, but still easier than learning how to describe multiple and interdependent relations. Why is this important? Because science, philosophy, knowledge and even this blog, are all about understanding the world. Thus, understanding complexity, what it is and where it comes from, is a founding requirement that cuts across pretty much every conceivable domain of enquiry. Understanding, in other words, is the process of taming complexity: it’s about finding suitable levels of abstraction, with the aim of minimising the things that one needs to consider while retaining the descriptive/predictive power of the concepts employed. In the case of biochemistry, to retain enough explanatory power, it is unfortunately necessary to accept a good deal of complexity. I am writing this blog post because this otherwise lame conclusion has had an enormous influence in shaping my thoughts: studying biochemistry I had an epiphany that I wish to share. I wish to share it because it is a simple one, and has remarkable explanatory power in itself.

Enter Haemoglobin, the protein that carries oxygen from the lungs to the rest of the body. All it does is bind O2 molecules while passing through the capillaries in the lungs, and then release them where they are needed: muscles, organs and the brain. But how does it manage this? It isn’t straightforward: due to its structure, Haemoglobin has a high affinity for oxygen, which in plain English means that oxygen tends to stick to it. Put Haemoglobin and oxygen together and O2 molecules will attach themselves to the four Heme groups present in a fully formed Haemoglobin complex. Fine: Haemoglobin is contained in red cells, which stream through blood vessels, reaching every part of the body that needs oxygen. Problem: how does Haemoglobin know when it is time to let go of the bound oxygen? This isn’t trivial: if two molecules have high affinity, they stick together, but to be useful Haemoglobin needs to release its oxygen at the right time, not just carry it around.

You know where this is going: the mechanism that dynamically regulates the affinity between Haemoglobin and oxygen is complex, beautifully so, and largely understood. A good explanation is here, in a video by the Wellcome Trust.

In a nutshell, when there is a lot of oxygen around, it is directly more likely that a first O2 molecule will bind to an unoccupied Heme group; when this happens, the shape of Haemoglobin changes in such a way that the affinity of the remaining three groups increases, making it more likely that they will get their own O2 molecule attached. Thus in the lungs, where there is plenty of oxygen, the overall affinity tends to increase, specifically because there is more oxygen. When Haemoglobin reaches the muscles, there is less free oxygen around, so it is already more likely that some oxygen will detach and actually reach its destination; when this happens, the affinity change reverses, enhancing the “release” effect. But that’s not all: oxygen is used in some sort of controlled combustion, the result of which is CO2, carbon dioxide, which when present lowers the pH (makes the solution a touch more acidic) – therefore, where O2 is needed, the pH also tends to decrease, specifically because O2 is being used. You guessed right: a lower pH changes the shape of Haemoglobin in such a way that its affinity for O2 decreases, further increasing the oxygen release. All this happens via fairly well understood changes to the 3D shape of the Haemoglobin molecule (shown in a very simplified form on top), and has the overall effect of allowing Haemoglobin to act as if it were a little agent that knew when to bind and when to let go. But in fact, it’s all about (complex) molecular interactions: the appropriate and detailed explanation of what happens is limited to physical mechanisms. This is where the epiphany happened: Haemoglobin is a very important biological molecule, but just one of a huge number of them. It is also one that performs its function in a relatively simple environment: of course, it needs to operate within the circulatory system, but how the structure of blood vessels influences the behaviour of Haemoglobin is relatively straightforward.
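
For readers who like the cooperativity quantified: the standard empirical summary of this mechanism is the Hill equation for oxygen saturation. The sketch below uses textbook approximations (cooperativity coefficient n ≈ 2.8, half-saturation pressure P50 ≈ 26 mmHg, and round-number partial pressures for lungs and exercising muscle), not figures from this post, so treat it as an illustration of the shape of the behaviour rather than precise physiology.

```python
# Hill equation sketch of cooperative oxygen binding.
# Saturation Y = pO2^n / (P50^n + pO2^n): the fraction of Heme
# groups occupied at a given oxygen partial pressure (in mmHg).
# n > 1 captures the "binding one O2 raises the affinity of the
# remaining groups" effect described above.

def hill_saturation(pO2, p50=26.0, n=2.8):
    """Fraction of Heme groups bound to O2 at partial pressure pO2 (mmHg)."""
    return pO2**n / (p50**n + pO2**n)

lungs = hill_saturation(100)   # arterial pO2 ~ 100 mmHg
muscle = hill_saturation(20)   # exercising muscle pO2 ~ 20 mmHg
print(f"saturation in lungs:  {lungs:.0%}")    # ~98%
print(f"saturation in muscle: {muscle:.0%}")   # ~32%
print(f"oxygen delivered:     {lungs - muscle:.0%} of capacity")

# Without cooperativity (n = 1) the same trip delivers far less:
no_coop = hill_saturation(100, n=1) - hill_saturation(20, n=1)
print(f"delivered if n were 1: {no_coop:.0%} of capacity")
```

The point of the comparison is in the last two lines: the sigmoid (cooperative) curve loads up almost fully in the lungs and unloads steeply at low pO2, delivering roughly twice what a non-cooperative binder with the same P50 would manage.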
The first take-home message for me was: wow, think about the variety of different proteins that sustain life; probably each one of them is characterised by similarly fine-tuned mechanisms that regulate its activity in its own complex ways. Biochemistry then added the importance of regulatory networks, where certain proteins dynamically (and frequently reversibly) regulate the function of others, while some proteins influence how many copies of a given protein will be made or destroyed. Add cellular biology and histology to the mix and you’ll find that different organs are made of different cells, each with their own specialised structure, each containing different proteins, and all able to self-assemble, using mechanisms that are in all respects analogous to the ones that allow Haemoglobin to function effectively. Awe is what I felt: the amount of complexity implied is nothing short of annihilating. In its own humbling way, however, this epiphany was also illuminating.

Take the brain: like all other organs, it is made of gazillions of such interacting little dumb robots. At the molecular level, each protein and component reacts in mechanical ways with a multitude of others; in each synapse we will have thousands of these, all contributing to the overall effect of a single synaptic event. The number, composition and chemical-physical properties of each building block are themselves regulated in similar ways, so a single synapse may show a huge variety of different behaviours. But this isn’t enough: synapses are created and removed all the time, and one single neuron will typically form (much) more than a thousand synapses. A human brain is formed by something less than 100 billion neurons, which form and constantly rearrange something like 10^14–10^15 synapses, connecting neurons in highly intricate ways. These numbers alone are beyond comprehension, but, if I’ve managed to convey my message, the important consequence need not be negative. Think about it: each of the elements above (neurons, synapses, regulatory networks) is made of tiny superspecialised robots that are finely tuned to mechanically carry out their own molecular job. Our minds do not have a chance to comprehend all of this in one single sweep: not only is the complexity of the system beyond direct comprehension, the scale of such complexity is itself beyond the reach of our direct intuition. This is a negative conclusion, but it has plenty of positive consequences:

  1. Neurobiology is more than a century old, and still we don’t know how our brains work. For example, we don’t know what constitutes a memory, or how memories are stored, encoded and used. We don’t even know the precise role that synapses play in memory formation. Given the amount of complexity that neurobiology and all mind-related sciences are attempting to tame, this is not a surprise. The amount of ground that needs to be covered is enormous, and yet, bit by bit, we are making progress.
  2. In terms of understanding what consciousness is, given 1., it should not be surprising that the distinction between the supposedly easy and hard problems makes intuitive sense (to some). The “easy problem” (which concerns the physical description of what the brain does) should, after taking my considerations above into account, be recognised for what it is: far more difficult than we can imagine. We can’t even grasp in one single thought how difficult “solving” the easy problem is, so we should admit that we have no idea of the explanatory and predictive powers that will come with sufficient understanding. Grasping even a limited and merely intuitive idea of how complex our brains are should help us admit that solving the easy problem may indeed make the hard problem evaporate and/or look trivial – but in truth, we just don’t know.
  3. In terms of the mind-body problem, our intuitions usually push towards one or another form of dualism. Some will think that brains host a non-physical soul; others will draw a line between physical substrates and information processing; more or less every single person who has thought about the mind-body problem will have produced a new and unique theory. This fact alone suggests that nobody has the slightest idea of how to distinguish right from wrong in this subject – or, to put it negatively, the claim that “everyone is probably wrong” (including me) must be very accurate. Be that as it may, these “mistakes” are all legitimate attempts to tame the underlying physical complexity. Their “wrongness” should be measured in terms of how much (or how little) they are able to explain and predict. However, my considerations above provide another way to look at the issue: our dualistic tendencies are, and have to be, the result of the need for simplicity. Given the complexity that needs to be tamed, concentrating on the level of “mind”, as described in folk psychology (and thus introducing the seed of dualism), is entirely understandable: this approach does have some predictive and explanatory power, while strictly physical explanations currently don’t (not outside their limited scope), precisely because they are still utterly incomplete.
  4. On the other hand, all this suggests why a certain family of approaches doesn’t look promising. For example, concluding that strictly physical explanations will never surpass the explanatory power of dualist solutions is possible only if one ignores the unimaginable level of complexity that a complete physical explanation would have to tame. In the same way, concluding that the hard problem of consciousness requires the introduction of an entirely new kind of solution also looks misguided: it makes intuitive sense because the scale of what needs to be explained in traditional ways is in itself so vast that it escapes comprehension.
  5. In terms of a subject that I have discussed here, the predictive brain, all of the above explains why I’m inclined to dismiss objections such as “yes, but you need much more to explain X (psychology, or consciousness, or intentionality, etc.)”. To me it is much too easy to suspect that such objections stem from a failure to grasp the scale of the problem. Yes, all physical explanations of how brains support minds are far from convincing – not one even claims to be complete: this has to be the case, given the enormity of the task.
  6. The key to concrete advancement will be theoretical. What is needed is the identification of powerful concepts, located at the most useful (and still unknown) levels of abstraction (a full description will need to span more than one level); these will allow us to isolate complexity in separate compartments. For example, they might eliminate the need to describe in full the inner workings of a single synapse, or we might find repeated circuits of several neurons that serve a single general-purpose function (our new powerful concept) and thus allow us to grasp what happens at a higher level of abstraction. These two examples also suggest that we have ideas of where to start looking, but in all honesty, we don’t know whether these are the right or best ideas.
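To make the scale quoted earlier a little more tangible, here is some back-of-the-envelope arithmetic of my own, using the round figures from above (~10^11 neurons, 10^14–10^15 synapses); the one-byte-per-synapse assumption is purely illustrative, and is if anything a vast underestimate of the state inside a real synapse.

```python
neurons = 1e11                        # ~100 billion neurons
synapses_low, synapses_high = 1e14, 1e15

# Average synapses per neuron: between a thousand and ten thousand,
# matching "(much) more than a thousand synapses" per neuron.
per_neuron_low = synapses_low / neurons    # → 1000.0
per_neuron_high = synapses_high / neurons  # → 10000.0

# Even if each synapse held a single byte of state, the low estimate
# alone would amount to ~100 terabytes of raw description,
# before counting any of the molecular machinery within each synapse.
terabytes_low = synapses_low / 1e12        # → 100.0
```

The point is not the exact figures (which are rough estimates anyway), but that even the most naïve bookkeeping overwhelms anything we can hold in a single thought.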

Before concluding, a self-referential side note is due. The epiphany that I’m trying to transmit here is also the reason why I write the way I do. Understanding requires simplification, hence my main effort is aimed at reducing complexity: I try to do this via conceptual clarity, giving a lot of thought to how to express ideas in simple ways. This is why I normally limit name-dropping, vast bibliographies, formulas and, whenever possible, specialised jargon. The result probably makes most academics cringe, but it ain’t my problem, right?

Posted in Neuroscience, Philosophy, Premises, Psychology, Science