Complexity is all around us, right? Our electronic gadgets are complex, as are our cars, laws and social relations. The thing between your ears, the human brain, is frequently described as ‘probably the most complex object on earth’, and there are billions of them. But hey, what is complexity? The straightforward dictionary definitions are not really useful, but Wikipedia provides an adequate starting point:
Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways.
One may conclude that simple systems have “few” interacting parts that interact in limited ways, while complex systems have many parts which interact in multiple and separate ways. Fine. Now name a simple system, please. Remember that at the most common level of abstraction atoms are made of electrons, protons and neutrons, each describable by their own complex properties, and each interacting with the other subatomic particles in more than one way. Hence, if you take a hard-core objective stance, nothing around us is simple and everything is complex. If that’s the case, why would we bother using a word/concept pair that can never truly apply to anything? Because, as always, it’s a useful concept. Saying that something is simple doesn’t really refer to what that thing is; it means something much more subtle: a simple object, phenomenon or relation is something that you can expect to easily understand. A simple system is predictable: it is unlikely to behave in surprising ways. Therefore, the simple/complex dichotomy refers to our own ability to comprehend and predict; it does not really apply to reality out there. Complexity is all in our heads, in the eye of the beholder.
Once again, a hard-core objectivist might at this point be tempted to utter: “the distinction between complex and simple is arbitrary, therefore it is only an illusion”. Guess what? Nine times out of ten, when someone concludes that a common concept is an illusion, I start feeling my blood pressure rising. Concepts aren’t real in any direct sense, so evaluating them in terms of how real they are is short-sighted: concepts have a degree of usefulness, which strictly depends on the context or domain of enquiry in which they are applied.
For example: the simple versus complex distinction is a very useful way to describe something that the listener knows little about. The zoology exam is simple/easy, biochemistry is complex/difficult. Knowing this, back then, I could plan to prepare for zoology in about three weeks, and for biochemistry in two months or more. Simples!
I didn’t pick this example at random: zoology is about animals, entire animals, and they function because of the biochemical reactions that happen in their bodies and cells. Hence, in a naïve view, zoology should be more complex than biochemistry. But hey, it wasn’t. Why? Levels of abstraction. To understand biochemical reactions, you need to describe and keep in mind plenty of stuff: reagents, energetic profiles, enzymes, affinity coefficients and more. Crucially, all of them play important roles, usually described by (long) mathematical equations, and no one so far has been able to make the subject simpler. On the other hand, to describe the features of different zoological phyla, one needs to remember stuff, plenty of single notions, but very little in terms of interactions. The subject that I was asked to master was static, not very dissimilar to a list that needs to be remembered. In other words, zoology (as it was presented to me; I’m not making a sweeping statement about zoology in general!) was concerned with a level of abstraction that made it easy to handle: it lacked the multiple interactions that, in our minds, make something complex.
Interim conclusion: everything is complex, but some subjects or domains of knowledge can be mastered easily, while others can’t. The difference must depend on our mental abilities: committing a list of notions to memory is hard, but still easier than learning how to describe multiple and interdependent relations. Why is this important? Because science, philosophy, knowledge and even this blog are all about understanding the world. Thus, understanding complexity, what it is and where it comes from, is a founding requirement that cuts across pretty much every conceivable domain of enquiry. Understanding, in other words, is the process of taming complexity: it’s about finding suitable levels of abstraction, with the aim of minimising the things that one needs to consider while retaining the descriptive/predictive power of the concepts employed. In the case of biochemistry, to retain enough explanatory power, it is unfortunately necessary to accept a good deal of complexity. I am writing this blog post because this otherwise lame conclusion has had an enormous influence in shaping my thoughts: while studying biochemistry I had an epiphany that I wish to share. I wish to share it because it is a simple one, and it has remarkable explanatory power in itself.
Enter Haemoglobin, the protein that carries oxygen from the lungs to the rest of the body. All it does is bind O2 molecules while passing through the capillaries in the lungs, and then release them where they’re needed: muscles, organs and the brain. But how does it manage? This isn’t straightforward: due to its structure, Haemoglobin has a high affinity for oxygen; in plain English, oxygen tends to stick to it. Put Haemoglobin and oxygen together and O2 molecules will attach themselves to the four Haem groups present in a fully formed Haemoglobin complex. Fine: Haemoglobin is contained in red blood cells, which stream through blood vessels, reaching every part of the body that needs oxygen. Problem: how does Haemoglobin know when it is time to let go of the bound oxygen? This isn’t trivial: if two molecules have high affinity, they stick together, but to be useful Haemoglobin needs to release its oxygen at the right time, not just carry it around.
You know where this is going: the mechanism that dynamically regulates the affinity between Haemoglobin and oxygen is complex, beautifully so, and largely understood. A good explanation can be found in this video by the Wellcome Trust.
In a nutshell, when there is a lot of oxygen around, this directly makes it more likely that a first O2 molecule will bind to an unoccupied Haem group; when this happens, the shape of Haemoglobin changes in such a way that the affinity of the remaining three groups increases, making it more likely that they will get their own O2 molecule attached. Thus in the lungs, where there is plenty of oxygen, the overall affinity tends to increase, specifically because there is more oxygen. When Haemoglobin reaches the muscles, there is less free oxygen around, so it is already more likely that some oxygen will detach and actually reach its destination; when this happens, the affinity change will reverse, enhancing the “release” effect. But that’s not all: oxygen is used in a sort of controlled combustion, the result of which is CO2, carbon dioxide, which lowers the pH (makes the solution a touch more acidic). Therefore, where O2 is needed, the pH also tends to decrease, specifically because O2 is being used. You guessed right: a lower pH changes the shape of Haemoglobin in such a way that its affinity for O2 decreases, further enhancing oxygen release. All this happens via fairly well understood changes to the 3D shape of the Haemoglobin molecule (shown in a very simplified form on top), and has the overall effect of allowing Haemoglobin to act as if it were a little agent that knew when to bind and when to let go. But in fact, it’s all about (complex) molecular interactions: the appropriate and detailed explanation of what happens is limited to physical mechanisms. This is where the epiphany happened: Haemoglobin is a very important biological molecule, but just one of a huge number of them. It is also one that exerts its function in a relatively simple environment: of course, it needs to operate within the circulatory system, but how the structure of blood vessels influences the behaviour of Haemoglobin is relatively straightforward.
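The cooperative loading and unloading described above is classically summarised by the Hill equation, which gives the fraction of occupied Haem sites as a function of oxygen partial pressure. Here is a minimal sketch; the Hill coefficient (n ≈ 2.8; n = 1 would mean no cooperativity) and the values for p50 (~26 mmHg normally, higher at lower pH via the Bohr effect) and for lung/tissue pO2 are rounded textbook approximations, not precise physiological figures.

```python
def hill_saturation(pO2, p50=26.0, n=2.8):
    """Fraction of Haem sites bound to O2 (Hill equation).

    p50: partial pressure of O2 (mmHg) at which half the sites are occupied.
    n:   Hill coefficient; n > 1 signals cooperative binding.
    """
    return pO2 ** n / (p50 ** n + pO2 ** n)

# In the lungs (pO2 ~100 mmHg) Haemoglobin loads up almost completely...
sat_lungs = hill_saturation(100.0)
# ...in resting tissue (pO2 ~40 mmHg) a sizeable fraction is released...
sat_tissue = hill_saturation(40.0)
# ...and a lower pH (assumed here to raise p50 to ~30 mmHg) releases even more.
sat_tissue_acidic = hill_saturation(40.0, p50=30.0)

print(f"lungs: {sat_lungs:.2f}, tissue: {sat_tissue:.2f}, "
      f"acidic tissue: {sat_tissue_acidic:.2f}")
```

The steepness that n ≈ 2.8 produces is exactly what the shape-change mechanism buys: with n = 1 (no cooperativity) the same pressure drop between lungs and tissue would release far less oxygen.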
The first take-home message for me is: wow, think about the variety of different proteins that sustain life; probably each one of them is characterised by similarly fine-tuned mechanisms that regulate its activity in their own complex ways. Biochemistry then added the importance of regulatory networks, where certain proteins dynamically (and frequently reversibly) regulate the function of others, while some proteins influence how many copies of a given protein will be made or destroyed. Add to the mix cellular biology and histology and you’ll find that different organs are made of different cells, each with their own specialised structure, each containing different proteins, and all able to self-assemble, using mechanisms that are in every respect analogous to the ones that allow Haemoglobin to function effectively. Awe is what I felt: the amount of complexity implied is nothing short of annihilating. In its own humbling way, however, this epiphany was also illuminating.
Take the brain: like all other organs, it is made of gazillions of such interacting little dumb robots. At the molecular level, each protein and component reacts in mechanical ways with a multitude of others; in each synapse we will have thousands of these, all contributing to the overall effect of a single synaptic event. The number, composition and chemical-physical properties of each building block are themselves regulated in similar ways, thus a single synapse may show a huge variety of different behaviours. But this isn’t enough: synapses are created and removed all the time, and one single neuron will typically form (much) more than a thousand synapses. A human brain is formed by somewhat fewer than 100 billion neurons, which form and constantly rearrange something like 10^14–10^15 synapses, connecting neurons in highly intricate ways. These numbers alone are beyond comprehension, but, if I’ve managed to convey my message, the important consequence does not need to be negative. Think about it: each of the elements above (neurons, synapses, regulatory networks) is made of tiny superspecialised robots that are finely tuned to mechanically carry out their own molecular job. Our minds do not have a chance to comprehend all of this in one single sweep: not only is the complexity of the system beyond direct comprehension, the scale of such complexity is itself beyond the reach of our direct intuition. This is a negative conclusion, but it has plenty of positive consequences:
- Neurobiology is more than a century old, and still we don’t know how our brains work. For example, we don’t know what forms our memories, how they are stored, encoded or used. We don’t even know the role that synapses have in memory formation. Given the amount of complexity that neurobiology and all mind-related sciences are attempting to tame, this is not a surprise. The amount of ground that needs to be covered is enormous, and yet, bit by bit, we are making progress.
- In terms of understanding what consciousness is, given the previous point, it should not be surprising that the distinction between the supposedly easy and hard problems makes intuitive sense (to some). The “easy problem” (which concerns the physical description of what the brain does), after taking into account my considerations above, should be recognised for what it is: far more difficult than imaginable. We can’t even grasp in one single thought how difficult “solving” the easy problem is, therefore we should admit that we have no idea of the explanatory and predictive powers that will come with sufficient understanding. Grasping a limited and merely intuitive idea of how complex our brains are should help us admit that solving the easy problem may indeed make the hard problem evaporate and/or look trivial – but in truth, we just don’t know.
- In terms of the mind-body problem, our intuitions usually push towards one or another form of dualism. Some will think that brains host a non-physical soul, others will draw a line between physical substrates and information processing; more or less every single person who has thought about the mind-body problem will have produced a new and unique theory. This fact alone suggests that nobody has the slightest idea of how to distinguish right from wrong in this subject – or, to put it negatively, saying that “everyone is probably wrong” (including me) must be very accurate. Be that as it may, these “mistakes” are all legitimate attempts to tame the underlying physical complexity. Their “wrongness” should be measured in terms of how much (or how little) they are able to explain and predict. However, my considerations above provide another way to look at the issue: our dualistic tendencies are, and have to be, the result of the need for simplicity. Given the complexity that needs to be tamed, concentrating on the level of “mind”, as described in folk psychology (and thus introducing the seed of dualism), is entirely understandable: this approach does have some predictive and explanatory power, while strictly physical explanations currently don’t (not outside their limited scope), precisely because they are still utterly incomplete.
- On the other hand, all this suggests why a certain family of approaches doesn’t look promising. For example, concluding that strictly physical explanations will never surpass the explanatory power of dualist solutions is possible only if one ignores the unimaginable level of complexity that a physical explanation would have tamed. In the same way, concluding that the hard problem of consciousness requires the introduction of an entirely new kind of solution also looks misguided: it makes intuitive sense only because the scale of what needs to be explained in traditional ways is in itself so vast that it escapes comprehension.
- In terms of a subject that I have discussed here, the predictive brain, all of the above explains why I’m inclined to dismiss objections such as “yes, but you need much more to explain X (psychology, or consciousness, or intentionality, etc.)”. To me, it is all too easy to suspect that such objections come from a failure to appreciate the scale of the problem. Yes, all physical explanations of how brains support minds are far from convincing – not one claims to be complete: this has to be the case, given the enormity of the task.
- The key to concrete advancements will be theoretical. What is needed is the identification of powerful concepts, located at the most useful (and still unknown) levels of abstraction (a full description will need to span more than one level of abstraction); these will allow us to isolate complexity in separate compartments. For example, eliminating the need to describe in full the inner workings of a single synapse. Or finding repeated circuits of several neurons that serve a single general-purpose function (our new powerful concept) and thus allow us to grasp what happens at a higher level of abstraction. These two examples also suggest that we have ideas of where to start looking, but in all honesty, we don’t know whether these are the right or best ideas.
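As a closing aside on scale, the synapse count quoted earlier is easy to sanity-check with back-of-envelope arithmetic. The figures below are the rough, commonly cited assumptions (not precise measurements): on the order of 86 billion neurons, each forming thousands of synapses.

```python
neurons = 86e9                     # ~86 billion neurons (common rough estimate)
synapses_per_neuron = (1e3, 1e4)   # assumed range: thousands per neuron

low = neurons * synapses_per_neuron[0]
high = neurons * synapses_per_neuron[1]

# Lands in the 10^14 to 10^15 ballpark quoted above.
print(f"estimated synapses: {low:.1e} to {high:.1e}")
```

Nothing deep here; the point is that multiplying two numbers we can state is the only way our minds get any grip at all on a quantity of that magnitude.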
Before concluding, a self-referential side note is due. The epiphany that I’m trying to transmit here is also the reason why I write the way I do. Understanding requires simplification, hence my main effort is aimed at reducing complexity: I try to do this via conceptual clarity, giving a lot of thought to how to express ideas in simple ways. This is why I normally limit name-dropping, vast bibliographies, formulas and, whenever possible, specialised jargon. The result probably makes most academics cringe, but hey, it ain’t my problem, right?