Complexity is in the eye of the beholder: that’s why it matters

[Figure: 1GZX, the structure of Haemoglobin]

Complexity is all around us, right? Our electronic gadgets are complex, and so are cars, laws and social relations. The thing between your ears, the human brain, is frequently described as ‘probably the most complex object on earth’, and there are billions of them. But hey, what is complexity? The straightforward dictionary definitions are not really useful, while Wikipedia provides an adequate starting point:

Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways.

One may conclude that simple systems have “few” interacting parts that interact in limited ways, while complex systems have many parts which interact in multiple and separate ways. Fine. Now name a simple system, please. Remember that at the most common level of abstraction atoms are made of electrons, protons and neutrons, each describable by their own complex properties, and each interacting with the other subatomic particles in more than one way. Hence, if you take a hard-core objective stance, nothing around us is simple and everything is complex. If that’s the case, why would we bother using a word/concept pair that can never truly apply to anything? Because, as always, it’s a useful concept. Saying that something is simple doesn’t really refer to what that thing is; it means something much more subtle: a simple object, phenomenon or relation is something that you can expect to understand easily. A simple system is predictable: it is unlikely to behave in surprising ways. Therefore, the simple/complex dichotomy is one that refers to our own ability to comprehend and predict, it does not really apply to reality out there. Complexity is all in our heads, in the eye of the beholder.

Once again, a hard-core objective individual might at this point be tempted to utter: “the distinction between complex and simple is arbitrary, therefore it is only an illusion”. Guess what? Nine times out of ten, when someone concludes that a common concept is an illusion, I start feeling my blood pressure rising. Concepts aren’t real in any direct sense, so evaluating them in terms of how real they are is short-sighted: concepts have a degree of usefulness, which strictly depends on the applied context or domain of enquiry.

For example: the simple versus complex distinction is a very useful way to describe something that the listener knows little about. The zoology exam is simple/easy, biochemistry is complex/difficult. Knowing this, back then, I could plan to prepare for zoology in about three weeks, and for biochemistry in two months or more. Simples!
I didn’t pick this example at random: zoology is about animals, entire animals, and they function because of the biochemical reactions that happen in their bodies and cells. Hence, on a naïve view, zoology should be more complex than biochemistry. But hey, it wasn’t. Why? Levels of abstraction. To understand biochemical reactions, you need to describe and keep in mind plenty of stuff: reagents, energetic profiles, enzymes, affinity coefficients and more. Crucially, all of them play important roles, usually described by (long) mathematical equations, and no one has so far been able to make the subject simpler. On the other hand, to describe the features of different zoological phyla, one needs to remember stuff, plenty of single notions, but very little in terms of interactions. The subject that I was asked to master was static, not very dissimilar to a list that needs to be remembered. In other words, zoology (as it was presented to me, I’m not making a sweeping statement about zoology in general!) was concerned with a level of abstraction that made it easy to handle: it lacked the multiple interactions that, in our minds, make something complex.

Interim conclusion: everything is complex, but some subjects or domains of knowledge can be mastered easily, some others can’t. The difference must depend on our mental abilities: committing a list of notions to memory is hard, but still easier than learning how to describe multiple and interdependent relations. Why is this important? Because science, philosophy, knowledge and even this blog, are all about understanding the world. Thus, understanding complexity, what it is and where it comes from, is a founding requirement that cuts across pretty much every conceivable domain of enquiry. Understanding, in other words, is the process of taming complexity: it’s about finding suitable levels of abstraction, with the aim of minimising the things that one needs to consider while retaining the descriptive/predictive power of the concepts employed. In the case of biochemistry, to retain enough explanatory power, it is unfortunately necessary to accept a good deal of complexity. I am writing this blog post because this otherwise lame conclusion has had an enormous influence in shaping my thoughts: studying biochemistry I had an epiphany that I wish to share. I wish to share it because it is a simple one, and has remarkable explanatory power in itself.

Enter Haemoglobin, the protein that carries oxygen from the lungs to the rest of the body. All it does is bind O2 molecules while passing through the capillaries in the lungs, and then release them where they’re needed: muscles, organs and the brain. But how does it manage? This isn’t straightforward: due to its structure, Haemoglobin has a high affinity for oxygen; in plain English, this means that oxygen tends to stick to it. Put Haemoglobin and oxygen together and O2 molecules will attach themselves to the four Heme groups present in a fully formed Haemoglobin complex. Fine: Haemoglobin is contained in red blood cells, which stream through blood vessels, reaching every part of the body that needs oxygen. Problem: how does Haemoglobin know when it is time to let go of the bound oxygen? This isn’t trivial: if two molecules have high affinity, they stick together, but to be useful Haemoglobin needs to release its oxygen at the right time, not just carry it around.

You know where this is going: the mechanism that dynamically regulates the affinity between Haemoglobin and oxygen is complex, beautifully so, and largely understood. A good explanation is given in a video by the Wellcome Trust.

In a nutshell: when there is a lot of oxygen around, a first O2 molecule is directly more likely to bind to an unoccupied Heme group; when this happens, the shape of Haemoglobin changes in such a way that the affinity of the remaining three groups increases, making it more likely that they will get their own O2 molecule attached. Thus in the lungs, where there is plenty of oxygen, the overall affinity tends to increase, specifically because there is more oxygen. When Haemoglobin reaches the muscles, there is less free oxygen around, so some oxygen is already more likely to detach and actually reach its destination; when this happens, the affinity change reverses, enhancing the “release” effect.

But that’s not all: oxygen is used in a sort of controlled combustion, whose product is CO2, carbon dioxide, which when present has the effect of lowering the pH (it makes the solution a touch more acidic). Therefore, where O2 is needed, the pH also tends to decrease, specifically because O2 is being used. You guessed right: a lower pH changes the shape of Haemoglobin in such a way that its affinity for O2 decreases, further increasing the oxygen release.

All this happens via fairly well understood changes to the 3D shape of the Haemoglobin molecule (shown in a very simplified form on top), and has the overall effect of allowing Haemoglobin to act as if it were a little agent that knew when to bind and when to let go. But in fact, it’s all about (complex) molecular interactions: the appropriate and detailed explanation of what happens is limited to physical mechanisms. This is where the epiphany happened: Haemoglobin is a very important biological molecule, but just one of a huge number of them. It is also one that exerts its function in a relatively simple environment: of course, it needs to operate within the circulatory system, but how the structure of blood vessels influences the behaviour of Haemoglobin is relatively straightforward.
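The cooperative binding described above is often summarised with the Hill equation, a standard textbook simplification (it deliberately ignores the pH effect and the detailed shape changes, so it is a sketch of the phenomenon, not of the full mechanism). A minimal numerical illustration, using typical textbook parameter values:

```python
# Fractional O2 saturation via the Hill equation: Y = p^n / (p50^n + p^n).
# n ~ 2.8 for haemoglobin (cooperative binding); n = 1 means no cooperativity.

def saturation(p_o2, p50, n):
    """Fraction of binding sites occupied at O2 partial pressure p_o2 (mmHg)."""
    return p_o2**n / (p50**n + p_o2**n)

P50 = 26.0  # mmHg at half-saturation, a typical textbook value

# Compare loading in the lungs (~100 mmHg) with unloading in muscle (~30 mmHg)
for n, label in [(1.0, "non-cooperative"), (2.8, "cooperative")]:
    lungs = saturation(100.0, P50, n)
    muscle = saturation(30.0, P50, n)
    delivered = lungs - muscle  # fraction of capacity actually delivered
    print(f"{label}: lungs {lungs:.2f}, muscle {muscle:.2f}, delivered {delivered:.2f}")
```

With n = 1 the binding curve is hyperbolic; with n ≈ 2.8 it is sigmoidal, which is exactly what lets Haemoglobin load up almost fully in the lungs and still release a much larger fraction of its oxygen in the tissues.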
The first take-home message for me is: wow, think about the variety of different proteins that sustain life; probably each one of them is characterised by similarly fine-tuned mechanisms that regulate its activity in its own complex way. Biochemistry then added the importance of regulatory networks, where certain proteins dynamically (and frequently reversibly) regulate the function of others, while some proteins influence how many copies of a given protein will be made or destroyed. Add to the mix cellular biology and histology and you’ll find that different organs are made of different cells, each with their own specialised structure, each containing different proteins, and all able to self-assemble, using mechanisms that are in all aspects analogous to the ones that allow Haemoglobin to function effectively. Awe is what I felt: the amount of complexity implied is nothing short of annihilating. In its own humbling way, however, this epiphany was also illuminating.

Take the brain: like all other organs, it is made of gazillions of such interacting little dumb robots. At the molecular level, each protein and component reacts in mechanical ways with a multitude of others; in each synapse we will have thousands of these, all contributing to the overall effect of a single synaptic event. The number, composition and chemical-physical properties of each building block are themselves regulated in similar ways, so a single synapse may show a huge range of different behaviours. But this isn’t enough: synapses are created and removed all the time, and one single neuron will typically form (much) more than a thousand synapses. A human brain is formed by somewhat less than 100 billion neurons, which form and constantly rearrange something like 10^14–10^15 synapses, connecting neurons in highly intricate ways. These numbers alone are beyond comprehension but, if I’ve managed to convey my message, the important consequence does not need to be negative. Think about it: each of the elements above (neurons, synapses, regulatory networks) is made of tiny superspecialised robots that are finely tuned to mechanically carry out their own molecular job. Our minds do not have a chance of comprehending all of this in one single sweep: not only is the complexity of the system beyond direct comprehension, the scale of such complexity is itself beyond the reach of our direct intuition. This is a negative conclusion, but it has plenty of positive consequences:

  1. Neurobiology is more than a century old, and still we don’t know how our brains work. For example, we don’t know what forms our memories, how they are stored, encoded or used. We don’t even know the role that synapses have in memory formation. Given the amount of complexity that neurobiology and all mind-related sciences are attempting to tame, this is not a surprise. The amount of ground that needs to be covered is enormous, and yet, bit by bit, we are making progress.
  2. In terms of understanding what consciousness is, given 1., it should not be surprising that the distinction between the supposedly easy and hard problems makes intuitive sense (to some). The “easy problem” (which concerns the physical description of what the brain does) should, after taking into account my considerations above, be recognised for what it is: far more difficult than imaginable. We can’t even grasp in one single thought how difficult “solving” the easy problem is; therefore we should admit that we have no idea of the explanatory and predictive powers that will come with sufficient understanding. Grasping a limited and merely intuitive idea of how complex our brains are should help us admit that solving the easy problem may indeed make the hard problem evaporate and/or look trivial – but in truth, we just don’t know.
  3. In terms of the mind-body problem, our intuitions usually push towards one or another form of dualism. Some will think that brains host a non-physical soul, some others will draw a line between physical substrates and information processing; more or less every single person that has thought about the mind-body problem will have produced a new and unique theory. This fact alone suggests that nobody has the slightest idea of how to distinguish right from wrong in this subject – or, to say it in a negative way, the claim that “everyone is probably wrong” (including me) must be very accurate. Be that as it may, these “mistakes” are all legitimate attempts to tame the underlying physical complexity. Their “wrongness” should be measured in terms of how much (or how little) they are able to explain and predict. However, my considerations above provide another way to look at the issue: our dualistic tendencies are, and have to be, the result of the need for simplicity. Given the complexity that needs to be tamed, concentrating at the level of “mind”, as described in folk psychology (and thus introducing the seed of dualism), is entirely understandable: this approach does have some predictive and explanatory power, while strictly physical explanations currently don’t (not outside their limited scope), precisely because they are still utterly incomplete.
  4. On the other hand, all this suggests why a certain family of approaches doesn’t look promising. For example, concluding that strictly physical explanations will never surpass the explanatory power of dualist solutions is possible only if one ignores the unimaginable level of complexity that a physical explanation would have tamed. In the same way, concluding that the hard problem of consciousness requires the introduction of an entirely new kind of solution also looks misguided: it makes intuitive sense because the scale of what needs to be explained in traditional ways is in itself so vast that it escapes comprehension.
  5. In terms of a subject that I have discussed here, the predictive brain, all the above explains why I’m inclined to dismiss objections such as “yes, but you need much more to explain X (psychology, or consciousness, or intentionality, etc.)”. To me it is much too easy to think that such objections come from the lack of understanding of the scale of the problem. Yes, all physical explanations of how brains support minds are far from convincing – not one claims to be complete: this has to be the case, given the enormity of the task.
  6. The key to concrete advancements will be theoretical. What is needed is the identification of powerful concepts, located at the most useful (and still unknown) levels of abstraction (a full description will need to span more than one level of abstraction); these will allow us to isolate complexity in separate compartments. For example, by eliminating the need to describe in full the inner workings of a single synapse. Or by finding repeated circuitry of several neurons that serves a single general-purpose function (our new powerful concept) and thus allows us to grasp what happens at a higher level of abstraction. These two examples also suggest that we have ideas of where to start looking, but in all honesty, we don’t know if these are the right or best ideas.

Before concluding, a self-referential side note is due. The epiphany that I’m trying to transmit here is also the reason why I write in the way I do. Understanding requires simplification, hence my main effort is aimed at reducing complexity: I try to do this via conceptual clarity, giving a lot of thought to how to express ideas in simple ways. This is why I normally limit name-dropping, vast bibliographies, formulas and, whenever possible, specialised jargon. The result probably makes most academics cringe, but it ain’t my problem, right?

Posted in Neuroscience, Philosophy, Premises, Psychology, Science
10 comments on “Complexity is in the eye of the beholder: that’s why it matters”
  1. ihtio says:

    Predictability of simple and complex systems: You seem to be suggesting that complex systems are unpredictable. It is not so, in general. There are many complex systems that are deterministic and we can predict their behaviors quite well.

    The terms “complex” and “simple” are used mainly to describe systems, that is, models of phenomena that we encounter in the world. A large set of 100 linear equations is simple, as there are no interactions between the equations and each equation is well behaved (a proportional change to the arguments leads to a proportional change in the effect). A set of 10 or even 3 differential equations (look up the Lorenz system / equations) is much smaller, but the effects of changes in arguments are disproportionate.
    This shows that elements of a simple system work in a much more independent fashion. Elements in a complex system work in an intertwined fashion – each affects some other elements.
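The Lorenz system mentioned above makes this concrete: just three coupled equations, nothing resembling “many parts”, yet a perturbation of one part in a billion in the starting conditions soon swamps any prediction. A rough numerical sketch (plain Euler integration with the classic parameters, purely illustrative):

```python
# Sensitivity to initial conditions in the Lorenz system
# (sigma=10, rho=28, beta=8/3), integrated with simple Euler steps.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # perturbed by one part in a billion

for _ in range(30000):  # ~30 time units
    a, b = lorenz_step(a), lorenz_step(b)

diff = max(abs(p - q) for p, q in zip(a, b))
print(diff)  # far larger than the initial 1e-9 perturbation
```

Three “parts”, fully deterministic, and yet long-term prediction is hopeless: a good reminder that complexity in this technical sense is not the same thing as having many components.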

    The distinctive features of complex systems are only correlated with difficulty of understanding them. The mere difficulty of understanding is not what makes a system complex :).

    You say “Complexity is all in our heads, in the eye of the beholder”. Now I’m asking: what is not in our heads? Is an apple a “real” thing? We all know that apples are made of atoms (which are made of elementary particles, which may be made of strings, which may be…), so by your line of reasoning apples are in our heads, because they can be decomposed into smaller parts. In fact, if you look closer at most of our scientific concepts, you will be able to say that they are only approximations, models, or things made of other things. This is the way we think.

    I will provide a nice example of an ecosystem: a forest. If you see it as a simple system, then you can easily eradicate one species from it (say, foxes) and you would expect that the changes to the forest will be minimal – one species less. However, if you consider a forest as a complex system, you will notice that there are tight interrelations between different species – if you eliminate foxes, then rabbits will propagate beyond imagination, eating all grass, leading to death of many fungi that won’t be able to decompose dead animals and plants, etc. The whole forest will be “sick”.

    In the cognitive sciences, the computational theory of the mind (-> mind is a computer) sees the mind as a simple system, where one module (or a processor) does something with “information”, passes it somewhere else, etc., and we have action as an output. (I write about this in one of my previous posts.) However, careful examination of brain function showed us that the brain works very differently: for any task, networks of different brain areas are engaged and their activity is far from a simple “process and pass the result further up”. These task-specific networks seem to communicate with each other, modulating each other’s activities. That is why we now start to see the brain as a complex system.

  2. Sergio Graziosi says:

    ihtio, late and disorganised reply…

    First of all, I struggle to understand what you are disagreeing with. The weak “disagreement candidates” that I can put my finger on don’t seem to undermine my main point, and your last paragraph confirms this second hunch in an almost direct way.
    To try and put some order, I’ll start with my main point: the brain is very complex, so much so that our own intuition cannot even grasp the amount of complexity it entails. This applies to a mechanistic description of brain function: when looking at brains from this perspective we should recognise that our own intuitions will invariably fail to grasp how complex the entire system is.
    I was making this point also because of some of your previous comments, which seem to point in a direction that I dislike: that mechanistic/reductive methodologies will never be able to fully explain the brain (let alone the mind). What I’m trying to say (I must add that I’m not satisfied with the current post, there is something about it that I don’t like, but I don’t know what – perhaps I’m just trying to cover too much ground), is that our usually shared intuition that “minds cannot possibly arise from mechanistic brains” can be explained by observing that there is no way we can grasp the scale of a mechanistic description of the brain (we fail twice: in understanding the probable complexity, and in understanding how “big” this complexity is). Therefore our intuition is bound to point us in the wrong direction, and I believe it does. You point out that brains are complex, we clearly agree on this, so if you have one main point to make I am failing to see it.

    On the other hand, in previous comments on the predictive brain posts you wrote that FEP and family fail to explain psychology. You have also written a whole post on this (thanks for the link-back!), and I do think your position is due to the imaginative gap that I’m laboriously trying to illuminate. FEP and other “predictive brain” approaches are nothing but proposed abstractions that might or might not prove useful in trying to make the complexity of the brain a little more tractable. If they prove useful, they might help bridge the gap (we are talking of the huge distance between mechanistic explanations and psychology, a gap that has the hard problem of consciousness at its core). The fact that they currently do not bridge the gap is the direct consequence of the sheer amount of complexity that needs to be understood (described, modelled, synthesised, etc.), and we are not even sure any theory has gone beyond square one in this race.

    On the other hand, you seem to be picking at minor details that are indeed debatable, especially if you equate “complexity” with “what complexity and/or systems theory classifies as complex”. I’ve tried to avoid hinting at one or the other theory, and used “complexity” strictly as a colloquial term throughout the post (because this post is about the limits of our intuitions!), so once again, I’m not even sure we disagree on anything at all. Of course you can say that everything is in our heads, and that our heads identify useful levels of abstraction as part of what we refer to as “understanding the world”: that is how a bunch of atoms becomes (is recognised as) an apple.

    Does any of the above actually answer your concerns? I do hope so, but you are the judge of that!

    • ihtio says:

      Sergio, I finally have the time to write an answer that your post and your reply to my comment deserve. So I re-read your post to make sure I don’t misunderstand or omit anything of importance.

      “Complexity” is all around us, that’s for sure. However what researchers in the complexity theory / complex systems domains call “complex” has specific meaning and — even more importantly — very specific connotations and connections to tools, experimental and simulation methods, mathematics, etc. See the nice map on Wikipedia:
      It is therefore important to consider “complex” as it is understood in these fields.

      Naturally, when we consider a fragment of the world, we restrict ourselves to a certain level of abstraction/description. When we talk about financial markets we aren’t interested in the atoms of which people are made. When we talk about a flying bird we may easily see it as a simple system.

      The thing that I have to draw attention to is that most things can be seen as simple or complex. What makes a difference is how we model fragments of reality. Therefore we often talk about complex models or simple models. A ship on the sea may be modeled as a simple system. The Moon revolving around the Earth can be modeled as a simple system. The mind can also be modeled in such a way.
      Sometimes simple models are not enough. Even though they are easy to understand and we would like to use them, the gaps are so severe that we have to generate complex models.
      When we consider a 3-body problem in physics, or the interrelations between species in a forest, we see the limitations of simple models. Then the introduction of complexity is necessitated. To understand the system we have to work with the complexity, not around it.

      When you write “the simple/complex dichotomy is one that refers to our own ability to comprehend and predict, it does not really apply to reality out there. Complexity is all in our heads, in the eye of the beholder.” you are missing the point. It seems that you conflate “complex” with “difficult”. Of course complex models are usually difficult to understand and simple models are easy to understand. But “complexity” points to specific properties (self-organization, pattern formation, emergence, many interrelations between components of the system, nonlinearity, adaptivity, networks of influences, complex dynamics) of phenomena or models, such as those mentioned on the map above and described in more detail in many places, for example here:

      I will now proceed to your reply.

      I’m not sure why you bring the topic of our intuitions being fallible, as that’s something that is indisputable in science in general.

      I see that you don’t like the direction of anti-reductionism. Indeed I am of the opinion that we won’t be able to get a satisfactory understanding of the brain or many other natural system only by reductive principles. We are already using various other approaches to integrate the knowledge and build bridges between levels of description of the brain.
      Studying stuff (neurons, neurotransmitters, etc.) in isolation will only get us this far. There is a need to produce ideas on how these pieces fit the puzzle. The point I am making is that there is no way around complexity when it comes to studying the brain. Computationalism is a very simple idea (input -> processing by modules -> output), but we already see its bounds or limitations.
      The main point is therefore to embrace complexity of the brain and try to build models with complexity in mind at the start, and not as an afterthought.

      Consciousness? Currently I see no theory, or even a direction of inquiry – be it scientific or philosophical – that I would call “good enough” or prospective. I don’t know how complex systems science could fit into this picture and I don’t know how “plain old” neurobiology and neuroscience could push us forward. That’s why I try not to make bold statements about consciousness and physical/experiential dualism.

  3. Sergio Graziosi says:

    I must be doing a very lousy job at communicating my point, even worse than I thought. We are making very slow progress, if any.

    We might at least have identified one solid reference point, namely that you:
    “[are] of the opinion that we won’t be able to get a satisfactory understanding of the brain or many other natural system only by reductive principles” and that I disagree.

    However, I am also inclined to think that the issue is semantic, not substantial.
    The reason is that you say:
    “We are already using various other approaches to integrate the knowledge and build bridges between levels of description of the brain. Studying stuff (neurons, neurotransmitters, etc.) in isolation will only get us this far. There is a need to produce ideas on how these pieces fit the puzzle.”

    I am 100% in agreement with the above, but in my own mind, the following equivalence applies:
    A convincing and comprehensive explanation of how the pieces (networks, neurons, synapses, spikes, neurotransmission, modulation, gene regulation, etc) “fit the puzzle” is precisely what a convincing reductionist explanation would be. Thus, I tend to think that in your mind (a dangerous, unwarranted but also unavoidable exercise, do correct me if I’m wrong) “reductionism” equates to “over-simplistic reductionism”.
    I think we can agree with the following: scientists sometimes err on the side of unjustified over-reductionism, and we (both you and I) are happy to point out that that’s a mistake.
    What we may disagree with is: “reductionism can’t work because it necessarily brushes aside complexity (this time in the proper complexity-theory sense)”. If you think that this last statement is true, then I disagree completely.

    Take your statement:
    “Computationalism is a very simple idea (input -> processing by modules -> output)”
    and compare it with:
    “Computationalism tries to segregate complexity: input -> an unimaginable amount of vastly complex computations -> output”. (another way to synthesise one of my points)
    Which one does justice to Computationalism? Your statement suggests that indeed you think “reductionism = over-reductionism” and if you do, I am unable to find a way to give you the benefit of doubt.

    And we approach the conclusion: you are not sure why I bring the topic of our intuitions being fallible. Well, I bring it in because it is central to my point:
    a) on many different topics, and in particular the mind-body problem, our intuitions are bound to fail systematically.
    b) when making guesses on where to look for solutions to notoriously hard, and yet unsolved problems, we can only take educated guesses.
    c) thus, we have a strong incentive to employ our intuitions (we can’t use hard science to solve hard and seemingly intractable problems, if we could we would have already solved or made them tractable).
    Conclusion: using our (default) intuitions, especially on mind-problems, introduces a systematic bias, leading to the facile (not trustworthy) assumption that standard scientific approaches will always fall short, and that new unorthodox solutions are necessary.

    The predictable fallibility of our intuitions therefore tells us that we should not trust intuition when approaching the mind-body problem. Thus, I find it very difficult to accept statements such as “we won’t be able to get a satisfactory understanding of the brain only by reductive principles”. Whenever I read something like that I get stuck in the interpretation that “this person has no idea of how complex the reductive understanding would be” or “this person thinks that reductionism entails simplistic explanations”.
    Do you see the reason why I had to stay away from complexity theory? I am not talking (or implying anything) about it, I am talking about the folk understanding of “complex”, precisely because it best applies to our (faulty) intuitions.

    “The main point is therefore to embrace complexity of the brain and try to build models with complexity in mind at the start, and not as an afterthought.”
    We agree on this! But I fail to see how such models will not be reductionist: by definition, they will be made of smaller components, each defined by their own well understood behaviours.

    • ihtio says:

      We have so many ideas in this discussion: (anti-)reductionism, intuitions and their fallibility, and more.

      The problem of reductionism

      OK, so we have established that I “am of the opinion that we won’t be able to get a satisfactory understanding of the brain or many other natural system only by reductive principles” and that you disagree. Please, allow me to knock on this issue a bit as it seems to be of great importance for the whole discussion.

      This is the moment when we have to distinguish between ontological reduction and explanatory reduction. No one doubts that “stuff of biology” is made of chemical molecules and those in turn are made of even smaller entities studied by physicists. Therefore we are not – and we should not – be arguing whether animals are ontologically reducible to chemistry, whatever that would mean. We are now talking about science, so we’re interested in the question, whether it is possible to explain / account for higher level entities by lower level entities, where “entities” are scientific objects (concepts).
      This is important with regard to the last paragraph of the post I’m replying to, that is, your statement that you “fail to see how such models will not be reductionist: by definition, they will be made of smaller components, each defined by their own well understood behaviours.” The fact that such models make use of many small components does not in itself constitute an argument for reductionism. If you have time, take a look at the models used by researchers in complexity-related domains: in most of them, entities interact using simple rules, and through these interactions a global order arises. A clear-cut example is an ant colony optimization algorithm, where we see how an entire colony finds the shortest path from one point to another. This model is complexity-based, not reduction-driven.
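      To make the ant-colony example concrete, here is a minimal toy simulation (my own sketch, not taken from the material under discussion; all names and parameters are made up): two paths of different length connect nest and food, each ant picks a path with probability proportional to its pheromone level, ants deposit pheromone in inverse proportion to path length, and pheromone evaporates each round. No individual ant follows anything but a simple local rule, yet the colony as a whole settles on the shorter path.

```python
import random

def run_colony(rounds=200, ants=50, evaporation=0.5, seed=42):
    """Toy two-path ant colony: returns final pheromone levels per path."""
    random.seed(seed)
    lengths = {"short": 1.0, "long": 2.0}     # path lengths
    pheromone = {"short": 1.0, "long": 1.0}   # start unbiased
    for _ in range(rounds):
        deposits = {"short": 0.0, "long": 0.0}
        total = pheromone["short"] + pheromone["long"]
        for _ in range(ants):
            # Local rule: choose a path with probability proportional to pheromone.
            path = "short" if random.random() < pheromone["short"] / total else "long"
            # Shorter paths are traversed faster, so they receive more pheromone.
            deposits[path] += 1.0 / lengths[path]
        for p in pheromone:
            pheromone[p] = (1 - evaporation) * pheromone[p] + deposits[p]
    return pheromone

p = run_colony()
print(p["short"] > p["long"])  # the colony converges on the short path
```

      The “global order” (preference for the shortest path) is nowhere encoded in any single rule, which is the point being made about complexity-based models.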

      I shall present two lines of argumentation against excessive reductionism: one based on the lack of proof of the relation of reduction in the established sciences, and one based on the nature of scientific theories in general.

      The first is the claim that there is no proof that any explanation in a higher level theory can be reduced to an explanation in the theory a level down. Of course, we can set level 0 to be, for example, quantum mechanics, or whatever theory suits our purposes.

      When I say “can we reduce X to Y?” what I mean is “can we explain X by Y?”. Therefore let us ask the following questions:
      1. Can we reduce chemistry to physics?
      2. Can we reduce biology to chemistry?
      3. Can we reduce psychology to biology?

      Asserting that higher level theories are reducible to lower level theories (psychology to biology to chemistry to physics) would require some non-trivial proof, I imagine. As of now “the jury is still out”: we don’t know if it is indeed the case that these levels are reducible to lower ones. Why assume they are without any solid proof?

      You write “A convincing and comprehensive explanation of how the pieces (networks, neurons, synapses, spikes, neurotransmission, modulation, gene regulation, etc) “fit the puzzle” is precisely what a convincing reductionist explanation would be.” However, I don’t think that this is what a convincing reductionist explanation would look like. I see it as a task that would require complexity-based models and theories. In a reduction, any high level entities and rules could be defined/explained in terms of entities and rules of the lower level. The complexity-based view points to the need to treat higher level entities as first class citizens, which can, for example, interact with lower level entities. Think of whirls in a river – explaining them in terms of single water molecules is not enough, as whirls have their own distinct dynamics.
      Similarly, take the relation between genetics and the theory of evolution. The theory of evolution is a higher level theory than genetics, but it is not reducible to it. There is just no way we could account for all evolutionary processes just by looking at genes. We need something more, and this something more is provided by the theory of evolution (e.g. struggle for life, kin selection, etc.).

      I have indeed written about links between levels of explanation. What I have in mind is something like the example above with the theory of evolution and genetics: we find how those two levels relate to each other, how they interact, etc. This task (finding the links between the two) is what I understand as finding out how they “fit the puzzle”. What emerges is a high level picture that is not explainable by mere examination of the paint dots / pixels on each puzzle piece, which is what reduction seems to entail.

      The second argument I wish to make is that scientific theories are, by their nature, inherently irreducible. In practically any scientific theory[, if it was expressed in first order logic,] we can find statements which are undecidable on the grounds of the theory in question. That is, there are statements expressed in the language of the theory that say something about the entities the theory is about, but that can be neither proved nor refuted using the theory itself. To do this we have to construct a “stronger” theory (which will in turn have some other undecidable statements). This was proved by Kurt Gödel some time ago.
      What does that have to do with anything? Assume we have a high level theory T2 with some statements about the world. When we try to reduce it to a lower level theory T1, we can often find statements that had a proof in T2 but – after translation to T1 – can no longer be proved. The laws of logic dictate that.

      These two points were clearly and straightforwardly explained on the Scientia Salon blog.

      The problem of intuition

      You write: “using our (default) intuitions, especially on mind-problems, introduces a systematic bias, leading to the facile (not trustworthy) assumption that standard scientific approaches will always fall short, and that new unorthodox solutions are necessary.” I see this as a wicked stance.

      I would wish to pursue two lines of reasoning on the matter: the first, that intuitions are a valuable source of ideas in science; the second, that the thought that (default) intuitions lead to a lack of trust in science is flawed.

      Firstly, if we read the biographies of great inventors and scientists, we find that they indeed used intuition as a resource of great potential for formulating new ideas, which were then worked out in a more “down to earth” way. Who hasn’t heard the story about Newton and the apple that fell on his head? Scientists from all domains often use intuition as a starting point, then develop a rigorous theory or a mathematical model. Intuition is crucial to the scientific process.

      Secondly, science also progresses by scrutinizing our intuitions and replacing them with better understandings. I don’t see where you got the idea that using our intuitions leads to thinking of the scientific method as lacking in power. Of course biases, dogmatism, gaps in reasoning, errors, etc. mean that none of us are perfectly rational, fool-proof machines. Daniel Kahneman’s work on behavioral economics makes this clear as day.
      I don’t see how any of this is brought as an argument against my position.

      Intuitions are good drivers of creativity and of new ideas. Intuitions should be transformed into rigorous theories.

      You write “Do you see the reason why I had to stay away from complexity theory? I am not talking (or implying anything) about it, I am talking about the folk understanding of “complex”, precisely because it best applies to our (faulty) intuitions.” I don’t see why you cling so much to the folk understanding of “complex”. In science we don’t really put much weight on folk understandings. Many terms have folk definitions and are used with different, rigorous meanings in science and mathematics. Take “space”: in folk understanding it is “something between things”; in mathematics it is a set with added structure and perhaps some operations (e.g. R^5); in computer science we find things like “search space”; and in physics “space” can also be rigorously defined. The same applies to “complex”: the scientific term has a different meaning and connotation than its folk counterpart. It is important not to confuse folk concepts with scientific ones.

      You have pointed out that there may be a misunderstanding as to reduction itself, namely that I may not appreciate how complex reductionist explanations can be, or that I may think that reduction amounts to over-simplification of the studied phenomena.
      Reduction, as I see it, is the process of dividing a phenomenon into smaller, more manageable problems which can be studied with more ease.

      Questions for clarification:
      What do you understand by reduction?
      What is the difference between reduction and over-simplistic reduction?

  4. Sergio Graziosi says:

    I’m glad we’re both stubborn enough to persist, despite the difficulty.

    This is a short(?) answer to your direct questions. A longer reply will follow as soon as I find the time and clarity required.
    My definition of reductive method: subdivide the object of inquiry into smaller components, define how each part works, how they interact, and use this understanding to create a model of the whole.
    The relevant bit for our discussion, I think, is “[describe] how they interact”. If/when complexity theory is able to fully capture the interaction dynamics (and can do so better than other approaches), then complexity theory should be used. Why on earth using complexity theory would make the effort above any less reductionist absolutely escapes my understanding.

    I may be the ignorant one here: it’s possible that the overwhelming consensus is that, whenever an explanatory theory includes the typical elements of complex dynamics, (almost) no one will call such a theory reductionist. But in strict formal terms, I can’t avoid seeing this as a labelling mistake. (Just semantics, and the reason why we are finding it so difficult to isolate where we disagree – I’m pretty sure we only disagree on labels, where my labels are much more fluid/inclusive than yours.)
    Also: the definition above allows for recursive nesting. Once you have “defined how a given part works” you can subdivide it further, and start all over again at a “smaller” level. You can also go in the other direction, take the system that you’ve modelled, and use it as one of the basic building blocks of another system that works on a larger scale.
    In such a way you are actually building the “links” between different levels of abstraction. I find the kind of discussion in the ScientiaSalon posts you link to a bit irritating; to me it smells of unnecessary over-complication. Don’t get me wrong, the whole topic interests me deeply, but (or because) I also think it is based on a wrong premise: namely, that reductionism and emergence are opposing ideas. They are not: emergence is an artefact of our cognitive abilities (or limitations), as is the idea of causality itself. Perhaps one day I’ll find the courage to write down why in a full-length post.

    How to distinguish between thorough reduction and over-simplistic reduction? This problem is interestingly similar (symmetric?) to the demarcation problem between science and pseudoscience. It has similar properties:
    – it is surprisingly difficult (I would say impossible) to distinguish between borderline cases a priori (there is no universal rule to make the distinction),
    – at the same time, there is no shortage of very easy to classify examples.

    Astrology is clearly pseudoscience, and behaviourism is very emphatically over-reductive. Unfortunately a lot of science (especially in what I call volatile and fluid disciplines) constantly flirts with over-reductionism, presumably for reasons that have to do with the need to stay away from the pseudoscience edge: flirting with the opposite error is a far more acceptable mistake (and perhaps it shouldn’t be?).

    Finally, a hint: all of our discussion is coloured by some underlying differences in our general approaches to epistemology (at least). My position is most certainly not mainstream and, despite the fact that I would self-identify as guilty of “scientism” (and then deny it’s a guilt), it is also very far removed from any sort of positivism. If you are really looking for punishment, you may want to start from my just another story series of posts, and then jump straight to essentialism fallacy. You may find the first bearable, but I’d be very surprised if you were able to accept and digest the latter without wanting to kick some sense into me.

  5. Sergio Graziosi says:

    second answer, this one mostly on intuitions. I’ll build and expand on my previous assertion:

    reductionism and emergence are [thought to be] opposing ideas. They are not: emergence is the artefact of our cognitive abilities (or limitations)

    If you had the time to read through some of my old posts, you may have an idea of what I’m about to re-state…

    The starting point is that reasoning operates on models of reality, where the accuracy of the models is ultimately tested via empirical verification. This applies to everyday experience as well as loftier philosophical and scientific endeavours.
    I may have a concept of “tap”, and know that I can use taps to get some water. This “knowledge” is easily confirmed via straightforward verification, allowing me to function in the current world: if I find myself in need of water in an unknown building, I know that I can find a tap in the toilets and get some water without having to bother anyone. What is happening there? I have a concept of tap, with some properties attached (what it does, where taps are usually located, as well as more detailed information about where you buy them, how they work, etc.). In strict physical terms, a tap isn’t even a single object: taps are usually made of separable components, so one could argue that the distinction tap/non-tap is entirely arbitrary: there is no a-priori, objective and pre-existing rule to determine what is a tap, or where the tap starts and ends. However, the concept is a useful one, so I keep using it (and most people needn’t worry about its apparent arbitrariness).
    That’s the general mechanism of all cognition: it slices reality into categories that have known properties, and uses them to predict how the world behaves. This process is heuristic and imprecise, never error-free (all models are by definition approximations), but it allows us to function in the physical world because it cuts down the number of variables we need to track. A concept’s usefulness is proportional to how much it allows us to keep our understanding simple: the fewer independent variables we need to consider, the easier it is to make predictions. Of course, the variables that need to be considered include the limits of the model I’m using: a tap in a doll-house is unlikely to deliver water.

    I’m writing all this because the process above generalises in different directions:
    1. we are born with some built-in “concepts” (or inclinations that favour the formation of some concepts), such as the distinction between up and down, which is very useful on earth, but completely meaningless in absolute terms. It’s an invention of evolution, installed in our brains; it exists because it’s useful, not because it’s objectively true.
    2. when we create scientific theories, what we really are doing is trying to identify new concepts that have new predictive powers and thus allow us to navigate some problem spaces that we couldn’t handle before.

    My point 1. above accounts for the vast majority of our “intuitions”; it is of course frequently useful to use them as the starting point of 2. However, like every cognitive mechanism, the scientific effort is bound to systematically fail in some circumstances. Hence, improving our cognitive strategy requires us to:
    a. Identify when the above process is going to fail.
    b. Devise new strategies to employ in such cases.

    At this point you are probably thinking: “why should I bother with all this mental masturbation?”
    A legitimate question, but in our case the answer is indeed relevant: because we are exploring the edges of knowledge, the problem spaces where our everyday intuitions start to fail systematically (a.) and thus we need something better (b.).
    The cost is that our new “concepts” are necessarily more complicated, the value (if any) is that they should provide new solutions. Hence the question is: does the above actually help to solve known outstanding problems?
    I would answer “Yes”, for science, philosophy and for the relation between the two.

    A few examples:
    Epistemology: the above requires us to ditch the concept of “absolute truth” and replace it with “bounded usefulness” or “limited applicability”. Suddenly, understanding why the scientific method works becomes trivial (all knowledge implies reduction and empirical verification), and the history of “scientific revolutions” becomes straightforward to follow. Incidentally, even quantum weirdness becomes bearable, as we are required to accept that even the concept of causality is ‘just’ a very reliable heuristic, nevertheless inapplicable in some contexts.

    Morality: this one should be a system of rules that have some usefulness in a given problem space. Because we all seem to develop a sense of right and wrong (even toddlers have one), it must apply to a problem-space that is relevant to our evolutionary history. At the same time, the question of whether absolute moral truths exist becomes obsolete: there aren’t any absolute truths, only useful heuristics with limited applicability. Whoops, there goes a never-resolved problem down the drain: deontology, consequentialism, utilitarianism, categorical imperatives all become alternative, partially overlapping models of the space of “moral problems”. Now we “know” what sort of space this is, and can evaluate alternative strategies based on how easily applicable they are and how big their area of applicability is.
    Instead of alternative and incompatible approaches to a given (very vague) problem, we end up having a set of tools, each with pros and cons, and a general theory to start investigating the relative strengths and weaknesses of each.

    It goes without saying, I’m convinced that the mind-body problem, consciousness and all the related questions suddenly become more tractable as well: our intuitive dualism is exposed for what it is, a useful simplification (the same applies to plenty of other explananda that are ultimately justified by our own, built-in default intuitions). Conversely, the most promising method to tackle these problems becomes self evident: we need to look for new abstractions, and judge them on their explanatory and predictive powers. Yes, the new concepts will be judged as useful if they can explain our intuitions, but they should not find their justification in our intuitions (intuitions become the final test, not the basis on which to formulate hypotheses). Explaining why our intuitions exist, and where they fail should be the aim, finding a justification for our intuition (e.g. explaining why they are right) is, by definition, the wrong way to proceed.
    More trivially: very smart and dedicated people make the mistake above; they try to build models that explain (support) their deeply felt intuitions. As a result, you get all sorts of dualisms, including oxymorons such as Materialist Dualism, or the opposite mistake, with claims that consciousness/free-will/identity and whatnot are illusions. At the same time, scientists try to build reductive models from the ground up, but lack a conceptual guide on how to identify the most useful levels of abstraction, as well as the most promising concepts therein (they tend to rely on one or another default intuition instead). This is a contingent problem, as it merely slows down progress, but it would still be nice to alleviate it.

    And we get to emergence: what is this concept? It’s the idea that certain phenomena/properties appear as the irreducible result of complex interactions between other distinct elements. However, within the epistemological approach that I’m trying to explain, emergence is the direct consequence of how we decided to slice up reality: it’s not something that exists out there, it’s a phenomenon that appears to remain unexplained after we have chosen how to identify the distinct elements of our model. Only, we have (arbitrarily) chosen what the more fundamental elements are, on the basis of the explanatory/predictive power that a given distinction allows. We already know that slicing reality in one way necessarily produces explanations in one direction while hiding something else. Therefore, for each internally coherent model of reality, there needs to be some unaccounted-for emergent phenomenon. Hence, some emergence is implied by the reductive approach, and the challenge is to pick our concepts in such a way as to minimise what is left out. In this way I account for why reductionism and emergence seem to be opposing ideas, while showing that they are opposite consequences of a given approach (modelling reality, where all models are, by definition, reductions).

    In conclusion: why do I keep obsessing about the limits of our intuitions? Because doing so allows me to tackle and simplify a wealth of distinct and long-standing problems. It is also the very foundation of my own way of thinking, making me, by definition, blind to its shortcomings. As a result, I’m busy exploring two directions:
    First, I am trying to become more aware of said limits, a process that requires other people to challenge my thoughts (thanks!).
    Second, I try to explore and explain why and where this approach might be useful.

    Does any of the above make sense to you?

    • ihtio says:

      Your definition of the reductive method is a very broad one. I wonder if there is anything (in the realm of science) that wouldn’t fall under it. With such an understanding, it seems that most things are done according to this “reductive method”, which is a very general term and approach. You seem to have extremely strong convictions and intuitions that the “reductive method” is the way to go, while at the same time bashing at intuitions because they are inherently fallible.
      I’m not sure why you go with the intuition that “emergence is the artefact of our cognitive abilities (or limitations)” and – somehow – “reductive method” is not. At least that’s how I understood you.

      This is the feeling I get from reading your last two comments. For example, you write “within the epistemological approach that I’m trying to explain, emergence is the direct consequence of how we decided to slice-up reality”, suggesting that reduction is perhaps not a consequence of how we decided to slice up reality (?).

      “Does any of the above make sense to you?” Yes, Sergio, you have explained yourself very clearly. The problem for me was that I got the impression that you downplay complexity theory in favor of the reductive method. From your last comment I conclude, even though you haven’t explicitly stated so, that “reduction is [also] in the eye of the beholder”, as it is one of many modes of thinking about the world.

  6. Sergio Graziosi says:

    On assuming a broad definition: this is a criticism I expect, and one that applies to my take on Science in general as well. So why do I nevertheless embrace such broad definitions? I’ll answer on reductionism, leaving the parallel (and analogous) explanation on Science aside.

    The first thing to note is that finding a more stringent definition doesn’t look feasible to me. For example, on what grounds could we assume that an approach is reductive if and only if it postulates that the system of interest can be modelled using only pairwise interactions (i.e. that all dynamics can be reduced to a set of formulas, each with a maximum of 2 independent variables), or that it assumes that all interactions have linear effects, or any other such a-priori limitation? Doing so is equivalent to saying “reductionism is wrong”, as such assumptions would limit the applicability of the approach to a crippling extreme.

    On the other hand, assuming a broad definition allows us to concentrate on the distinction between reductionism and its attached risks, usually over-reductionism. This makes the broad definition immediately useful, because it allows us to focus on the key worry that any reductionist should always keep in mind. On one side, one wants to create models that minimise their internal complexity while maximising their explanatory/predictive powers. The danger is always that of creating an overly simplified model, by brushing aside phenomena that are difficult to account for, and/or by simply defining too narrow a scope, so as to conveniently leave aside significant, but difficult to model, variables. The symmetric danger also exists, namely over-fitting.

    To me, assuming any narrower definition automatically leads to the conclusion that reductionism only works for a narrow set of well behaved (simple, easy to understand!) systems. It implies that you have already condemned the approach as outdated, and you’ve done so by picking an arbitrarily limited definition, suggesting that you were biased from the start.
    On the other hand, accepting the wide definition comes with the risk of providing no information at all: if a proposition has probability 1 (all scientific efforts are reductionist) then it contains no information. This is almost true, in the sense that almost all scientific efforts that I can think of have at least some reductionist elements (for obvious ‘cognitive’ reasons, see below), but:
    a. Realising that one particular discipline isn’t reductionist provides very significant information (because the probability is so low).
    b. As I’ve said above, we can now concentrate on asking:
    b.1 Is this particular effort “over reductionist” and
    b.2 Is it over-fitting?
    Thus, from one broad but not arbitrary definition we get to define 4 useful sub-classes:
    1) Theories that claim to be scientific, but also claim not to include a reductive side. We have reasons to look at them with the greatest amount of scepticism.
    Within reductive efforts, we may find:
    2) Over-reducing efforts. All the theories that neglect important aspects of the system they describe.
    3) Goldilocks efforts. Those that appear to be just about right, they account for all the phenomena we are aware of, without ad-hoc, case-by-case explanations and exceptions.
    4) Over-fitting theories. Theories that keep adding more special cases as more evidence emerges. Theories that assume that the number of independent variables is roughly the same as the behaviours they try to quantify (ultimately, these sort of theories aren’t reducing anything).
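    Case 4) can be made concrete with a toy curve-fitting sketch (my own illustration, not part of the original exchange): give a model as many free parameters as there are observations and it will “account for” every data point perfectly while reducing nothing, and it will typically generalise worse than a genuinely reductive model.

```python
import numpy as np

# Six noisy observations generated from a simple underlying law: y = 2x + 1.
rng = np.random.default_rng(0)
x = np.arange(6.0)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=6)

line = np.polyfit(x, y, 1)    # 2 parameters: a genuine reduction
wiggle = np.polyfit(x, y, 5)  # 6 parameters for 6 points: reduces nothing

# The 6-parameter fit reproduces the observations essentially exactly...
assert np.allclose(np.polyval(wiggle, x), y)

# ...but when asked about an unseen point it does far worse than the line.
x_new = 8.0
true_value = 2.0 * x_new + 1.0
err_line = abs(np.polyval(line, x_new) - true_value)
err_wiggle = abs(np.polyval(wiggle, x_new) - true_value)
print(err_line < err_wiggle)
```

    The degree-5 polynomial is the “theory” whose number of independent variables matches the number of behaviours it quantifies: it fits everything and explains nothing.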

    Thus, I conclude that my definition is useful, and, in accordance with my own theory, accept it as valid.

    On [having] strong convictions and intuitions that “reductive method” is the way to go, while at the same time bashing at intuitions because they are inherently fallible. First of all: implying that I must be relying on “strong intuitions” is a bit uncharitable, but OK, not entirely unjustified. However, my strong conviction is backed by the greatest amount of evidence: science has produced reliably useful models, it has done so by using the reductive method (most of the time), and by self-correcting so as to converge towards the Goldilocks zone (on average, and with plenty of wrong turns).
    Thus, my strong conviction may well be a priori, but it is backed by the strongest evidence, so abandoning it wouldn’t be justified.

    On reduction [may] not [be] the consequence of how we decided to slice-up reality (?). You got this one wrong: reductionism is the act of slicing up reality, so as to make some sense of it ;-). Also: I’m not sure why you go with the intuition that “emergence is the artefact of our cognitive abilities (or limitations)” and – somehow – “reductive method” is not. The reductive method is a major component of our cognitive abilities; you could say that emergence is an unavoidable artefact of our need to slice up reality in one way or another.

    That’s because “some reduction” is not just in the eye of the beholder, and not one of many modes of thinking of the world. It is an unavoidable element of “thinking” (see my “essentialism fallacy” post). Whether we like it or not, all conscious efforts to understand something start with reductive assumptions (just by isolating this something). Explicit, declarative knowledge implies reduction. Being well aware of this, and of the inherent limitations entailed, is the obvious way to try limiting the risks.

    Finally: language is tricky. When I’ve pointed out that I wasn’t making any claim about complexity theory (CT), I meant it – literally. I wasn’t downplaying CT, nor promoting it: I was not talking about it at all.

  7. […] and happen to occur inside bodies (would count as indirect?). In other words, it declares the reductionist approach a dead-end a priori. Trouble is, nobody does this: we do study how sensory signals travel along […]

All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Please feel free to re-use and adapt. I would appreciate if you'll let me know about any reuse, you may do so via twitter or the comments section. Thanks!
