Evolutionary Theory of Consciousness: why, how and what

After pre-publishing the Evolutionary Theory of Consciousness (ETC) paper, it’s time to provide a little context, along with a tl;dr version (below). The full paper is long and dense, and may be a tad obscure if one isn’t familiar with both the neuroscience and philosophical sides of consciousness studies. In this post I’ll report how the fundamental ideas appeared in my mind, how I’ve tried to develop them and what the core argument is.

Why consciousness?

The seed was planted during my undergrad studies, more or less when Crick and Koch were picking up the subject, making consciousness finally acceptable as a subject of neuroscientific enquiry. Once the seed was planted, it kept simmering in my mind in slow-burning mode. One day I was reading the replies to Edge’s annual question “What *should* we be worried about?” and happened to read the answers from Robert Sapolsky and Timo Hannay one after the other. Sapolsky worried about the fact that, despite his best efforts, he keeps feeling as if he has (libertarian) free will. Hannay provided a good summary (please read it, and then continue to this excellent article by Oliver Burkeman to get the full picture) of why consciousness is so hard to tackle, both philosophically and scientifically: the worry being that we have no idea of what is conscious and what isn’t. My brain produced a link between the two worries: Sapolsky can’t truly convince himself that there is no free will because his brain makes him experience it (as a specific feeling you get when you actively take a decision), and this happens because it is useful to remember when and how a decision was reached (more in the “how” section below).

Thus, the seed started germinating, and I soon realised how important understanding consciousness was for my own long-term plans:

  1. A lot of nonsense can grow on fertile grounds because we “officially” have no idea of what consciousness is, thus you get:
    1. Religious people who claim that science is irrelevant because it can’t explain what really counts. Hence, God must exist (to give us the unexplainable consciousness) and of course God is exactly how described in the sacred text of choice. I don’t need to explain why this annoys me, right?
    2. All sorts of villains are able to spout pseudo-scientific nonsense to boost their own revenues. The godfather of them all is of course Deepak Chopra, but there is no shortage of others. This annoys me also because it puts science in a bad light: since such claims are self-declared as ‘scientific’ they may drag the perception of science down along with them.
    3. Considering 1.2. it doesn’t help that our ignorance allows some nonsense to also get the official badge of scientific approval, see this paper for an example. When what I perceive as pure nonsense appears in peer-reviewed publications, my blood starts boiling.
  2. In the big picture, I can see a pattern emerging by the action of natural selection: information (as structures that make a difference) keeps accumulating. It does so by aggregating in clusters of ever-growing superstructures: bacteria -> eukaryotes -> colonies -> multicellular organisms -> collaborating families -> collaborating groups -> villages -> cities -> societies -> nations -> multinational companies, etcetera. In the latter superstructures, humans are mere expendable cogs (think of armies), and I really don’t like it. In this context, it is possible that consciousness, and thus the ability to make deliberate decisions on the basis of vast and complex conceptual understandings, might (just might) introduce an anomaly. If it does, it could be very important (especially considering the existential risk that this pattern is generating in terms of ecological deterioration and excessive consumption), but we can’t hope to reach useful conclusions without understanding consciousness first.
  3. Stupidity: why do conscious creatures so frequently and deliberately pick the choice that brings them harm? You can see that this question can become: why does consciousness seem to be dysfunctional with alarming regularity? Again, if we don’t understand what consciousness does, we can’t even approach this question.

Thus, for me, the question of consciousness became pretty central.

Knowing that so many others have failed, one secondary question became crucial:

How to approach the study of Consciousness?

In my case, I had a basic intuition, one that looked promising, so my very first step was to develop it autonomously, and see where it led. Given my background, it should not be a surprise that the whole approach revolved around evolutionary questions: what useful trait requires consciousness? How could such a trait plausibly evolve from other abilities? The answers I’ve found can be summarised in simple terms: you need experience to learn from experience, duh. At this stage, in order to solidify the idea I wrote things down, followed some threads, and then turned to study.

Updating myself on the advances of neuroscientific knowledge about consciousness was a grim affair. The more I read, the more depressing it got. We have a massive amount of data, and precious little to explain it. We have tiny proto-theories that try to explain small details, and a few macro-theories that claim to explain the whole but look irredeemably simplistic to my eyes (I will tackle this another day). The philosophical side of the argument, meanwhile, looked pugnacious and manifestly confused. Whatever position you can think of, there’s a philosopher who violently insists it must be the right one, and unfortunately this applies also to hypotheses that you might intend as jokes. Not nice. Furthermore, it soon became clear that there was a turf war going on: short-sighted scientists on one side and wishy-washy philosophers on the other, flinging mud in all directions, with little respect for rigour and dialogue. Numerous exceptions do apply, in both fields, and it soon became clear that it was worth paying attention to what people like David Chalmers, Daniel Dennett, John Searle, Ned Block, David Rosenthal, Thomas Metzinger, Antti Revonsuo, V.S. Ramachandran, Stanislas Dehaene, Giulio Tononi and Andy Clark have to say (and others, apologies for all the omissions!). They all disagree with one another, but that’s not the point: each one of them has valuable insights to offer, and their disagreements are excellent tools to highlight the central issues that need to be solved.

This work eventually shaped ETC very deeply: the way ETC is formulated directly reflects my efforts of pinning down the questions that underlie the major disagreements.

What does ETC say?

Let’s recall the basic intuition: it’s hard to definitively eliminate the idea of having free will, because there is something it is like to make a decision, and people can reflect on this something afterwards. Thus, the questions are: why? How and when is this ability adaptive?

From there, the derived intuition is that in order to selectively learn what is useful to us, we need to:

  1. Evaluate what our senses are telling us, so as to retain the information that we (heuristically) classify as relevant to us. Because we are limited in our overall abilities, we can’t simply record all our sensory input and then use all this data to inform all our behaviour. This is physically and computationally impossible, so shortcuts have to be taken.
  2. We also need to check the validity of such heuristic guesses, both in hindsight and right away. This is useful because what happens in 1. is by necessity heuristic, and thus it is immediately useful to double-check and, where needed, refine the process.

Thus, in light of (1.), ETC proposes that our brain contains a computational subsystem with one single function: to guess what sensory input might be important, and decide (at least) whether it should be attended to immediately, flagged as worth remembering, or ignored right away. I call this the Evaluation Module (EM): it’s the system that answers the “Am I bothered?” question.

You may be guessing that “attending immediately” might, depending on the stimulus itself, require recollecting something you already know, and/or taking another look. In turn, this kind of second-order activity is immediately useful for producing a better answer to the original question: should I (still) care about this particular stimulus, or is it OK to just ignore it? Thus, we get my point (2.) above: for such a system to work, it is useful to allow the results of the first evaluation to be enriched with information collected subsequently, and then to re-ask the same question: should I bother about this thing? The results of the first evaluation can sometimes be evaluated again.
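To make the two-stage loop concrete, here is a toy sketch of the evaluate/re-evaluate cycle described above. It is purely illustrative: ETC deliberately does not commit to any implementation, so every name, threshold and heuristic in this snippet is an assumption made for the sketch, not part of the theory.

```python
from collections import deque

ATTEND, REMEMBER, IGNORE = "attend", "remember", "ignore"

class EvaluationModule:
    """Toy sketch of the 'Am I bothered?' filter (all details assumed)."""

    def __init__(self, attend_threshold=0.8, remember_threshold=0.4):
        self.attend_threshold = attend_threshold
        self.remember_threshold = remember_threshold
        # Limited storage: we can't record everything, so a filter is needed.
        self.memory = deque(maxlen=100)

    def relevance(self, stimulus, context=()):
        # Placeholder heuristic: a real brain would compute this guess with
        # whatever machinery actually implements the EM.
        score = stimulus.get("salience", 0.0)
        score += 0.2 * sum(1 for c in context if c in stimulus.get("features", ()))
        return min(score, 1.0)

    def evaluate(self, stimulus, context=()):
        # First evaluation: attend immediately, flag as worth remembering,
        # or ignore right away.
        score = self.relevance(stimulus, context)
        if score >= self.attend_threshold:
            return ATTEND
        if score >= self.remember_threshold:
            self.memory.append(stimulus)
            return REMEMBER
        return IGNORE

    def re_evaluate(self, stimulus, extra_features):
        # Point (2.): enrich the stimulus with recollected or newly inspected
        # information, then ask the same question again.
        enriched = dict(stimulus)
        enriched["features"] = tuple(stimulus.get("features", ())) + tuple(extra_features)
        return self.evaluate(enriched, context=extra_features)
```

For instance, a low-salience stimulus that gets ignored at first pass can be promoted to “remember” once it is re-evaluated together with recollected context; that promotion is exactly the recursion of point (2.).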

Furthermore, looking at these mechanisms from the evolutionary point of view, and assuming that what happens inside our brains can be described in algorithmic/computational terms, it turned out that proposing a hypothetical evolutionary history is genuinely straightforward. This side of ETC is quite abstract, though, so I will not cover it here.

Cross-referencing these ideas with scientific and philosophical theories then became truly exciting. In particular, it seems to me that they allow us to move past a (suspiciously large) number of disputes, and at the same time explain why promising scientific approaches seem deadlocked and unable to produce the much-awaited breakthroughs (as well as why proper general Artificial Intelligence is nowhere to be found). The EM generates what is usually called phenomenal experience, because it enriches sensory information with what counts [Note: I provide some justification of this claim in the comments] to the experiencing subjects and makes the result available to introspection (re-evaluation). In sketchy terms, the EM provides the first spark of subjectivity. Furthermore, the recursion proposed by (2.) allows this first spark of subjectivity to become self-aware: as many scientists and philosophers have pointed out, we only become aware of the “something it is like” when we actually ask ourselves “how does it feel?” or “what was that?”. ETC also explains why the Hard Problem of consciousness exists, but falls short when it comes to solving it.

As this is supposed to be a tl;dr version, I will stop here, but I do wish to point out that the above is brutally sketchy. At best, it may whet your appetite and give you some reason to read the full paper (a tall order, I know). I always welcome criticism, but if you are to dismiss ETC, please make sure you’ll refer to the full paper, not the short excerpt above.

What’s next?

For me, it’s time to confront the hard reality. My main hope was to give a little help to my first scientific love: neuroscience. As I hint above, I think the field is plagued by a chronic deficiency of solid theories. Fortunately, I am not the only one who sees it this way; for example, Gary Marcus has been banging on about this point for many years now. There are exceptions, of course; some very interesting theories do exist: for example, to remain close to home, the Bayesian Brain hypothesis, and, when it comes to consciousness, Tononi’s Integrated Information Theory (IIT) stands out as a very promising and exquisitely theoretical approach.
However, my direct experience is not encouraging: very few journals that cover neuroscience subjects are open to purely theoretical efforts, and word-limits inevitably limit the scope of such efforts. Furthermore, there are good reasons to give more credit to theories that are built on mathematical grounds, while ETC is entirely speculative.

Thus, I’ll proceed as follows: my plan is to write a series of “ETC and …” posts, where I’ll take a closer look at various theories, discuss ETC in their light and vice versa. I’ll do this because I need to: my head is too full of ideas that won’t leave me alone until I express them. The subjects I plan to cover are: IIT, the Bayesian Brain and Predictive Coding, but also the ones I’ve deliberately ignored in the paper, for example the explicitly non-representational theories that cluster around the embodied cognition approach (ETC seems completely incompatible with this approach, so exploring the disagreements is particularly interesting). I also hope to find the time and clarity to address some philosophical conundrums, starting from Scott Bakker’s ideas on the Blind Brain Theory (which looks 100% compatible with ETC to me, although Scott does think otherwise), and hopefully many others.

A personal note: I can recognise in myself a pattern that is very visible in others. One approaches a subject with (what feels like) a brilliant new idea, and in time stubbornly refuses to see why the idea isn’t that new, why it doesn’t convince anyone, and why, even when it does, it leaves many questions unanswered while opening up new ones. That’s the harsh reality: the study of consciousness is so complicated that it’s almost impossible to be sure that your point is coming across, and/or that you are understanding the points made by others. In the end, no one, including and starting with myself, should hold the presumption of being right. I hope it’s still OK to nurture the ambition of adding something useful: it would be gratifying enough.

The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.

[Leo Tolstoy (source – h/t to KStreetHipster, via Bashir3000)]
The point is: I’m trying not to play the part of the one who is convinced he “knows already”, but I’m pretty sure I’m failing.

To conclude, a reminder: if you’ll ever happen to read the ETC paper, please do fill in the short survey. I still need to understand if ETC has some value in the real world (as opposed to “just in my head”).

Posted in Consciousness, Evolutionary Psychology, Neuroscience, Philosophy, Psychology
16 comments on “Evolutionary Theory of Consciousness: why, how and what”
  1. ihtio says:

    What a lovely read, Sergio!

    I leave the full text for my upcoming plane flights, car rides and train journeys. Right now I have some questions about specific fragments. I believe that the answers will be of great aid to my reading of the full text.

    In the big picture, I can see a pattern emerging by the action of natural selection: information (as structures that make a difference) keeps accumulating. It does so by aggregating in clusters of ever-growing superstructures: bacteria -> eukaryotes -> colonies -> multicellular organisms -> collaborating families -> collaborating groups -> villages (…)

    You forgot about most of the universe, which forms superstructures by actions other than natural selection: quarks -> electrons and protons -> atoms -> molecules -> … -> composites of atoms -> … -> planets, stars -> star systems -> galaxies…
    Structures certainly do aggregate and become more (yes, just “more”). It seems that: 1) natural selection is just a tiny addition, and 2) consciousness doesn’t seem to be relevant here.
    What is your opinion about these two types of superstructure emergence (by natural selection and the second type in which simple, “dead” matter aggregates)? In particular, I’m asking about the commonalities/differences, and why we are focusing so much on one of them.

    you need experience to learn from experience

    – Why can’t animals learn from non-conscious experience? We can imagine a scenario in which some (probably very simple) animals are not conscious, but do learn new things.

    ETC proposes that our brain contains a computational subsystem

    Could you please write someday more about what is a “computational subsystem” in a brain?

    our brain contains a computational subsystem, which has one single function: to guess what sensory input might be important, and decide (at least) whether it should be attended immediately, flagged as worth remembering, or ignored right away.

    So a brain contains a subsystem that attends (without consciousness, I presume) to various features, objects, etc. so that the rest of the brain can then attend (consciously, I guess) to those events that are important?

    The EM generates what is usually called phenomenal experience, because it enriches sensory information with what counts to the experiencing subjects and makes the result available to introspection (re-evaluation). In sketchy terms, EM provides the first spark of subjectivity. Furthermore, the recursion proposed by (2.) allows this first spark of subjectivity to become self-aware

    Dude! Far out!
    What counts?
    If I read you correctly, you claim that EM sits close to sensorium. What about our thoughts and dreams that don’t come from the world but are generated in other ways (but are available to introspection)?

    the explicitly non-representational theories that cluster around the embodied cognition approach (ETC seems completely incompatible with this approach. Therefore exploring the disagreements is particularly interesting).

    But you realize that most embodied cognition theories are in fact representational? They just don’t see cognition as a manifestation of a locked-in syndrome. The non-representational theories are found in the so-called radical embodied cognition framework.

    And just some general questions about ETC:

    What are some interesting predictions that ETC makes?

    What are some testable hypotheses that you make on ETC’s grounds?

  2. Sergio Graziosi says:

    ihtio,
    Thanks so much, I was counting on your keen eye to come out with some interesting questions, and you provide plenty! I do hope the paper does answer most of them, but I’ll summarise my thoughts here, as it’s always good to double-check.

    What is your opinion about these two types of superstructure emergence (by natural selection and the second type in which simple, “dead” matter aggregates)? In particular, I’m asking about the commonalities/differences, and why we are focusing so much on one of them.

    This is the first time I have thought about “the other kind”, to be honest. I am focussing on the one driven by natural selection because I’m interested in life, not so much in dead matter. Also, without giving too much thought to the subject, the interesting difference is that natural selection operates on how matter is assembled, without really promoting the persistence of the same matter. Say, atoms aggregate in a planet, and mostly remain there, while the atoms that form me change all the time, but the structure persists. Also, the superstructures produced by natural selection span the domains of time and space, creating what looks to me like an ever-growing nesting of structures. For example, you can see sexual reproduction as a structure, and a very conserved one, but to see it you need to look across generations and across the barriers that separate species. Perhaps we can find similar patterns in cosmology, but it would feel like a stretch to me if I were to propose that they “definitely exist”. I would tend to believe that the way “dead matter” aggregates is merely the lowest level of organisation, which in at least one case kick-started the nesting of superstructures that we see here on earth (we usually call it life). You may want to check the discussion that is (or may be) starting here, as it looks relevant to me.

    Why can’t animals learn from non-conscious experience? We can imagine a scenario in which some (probably very simple) animals are not conscious, but do learn new things

    In this post I’m deliberately cutting many corners, trying to whet people’s appetite without any attempt to carefully explain my reasoning. Many of your questions stem from this choice I’ve made; do you think it was a bad move?

    To answer your question: yes, you can learn things without awareness, but you can learn more things if you are able to connect what is happening with previous experiences, and thus actively link together things that happened at distant times and places. To do so, you need to record past experiences; but we can’t record everything in perfect detail, so a filter is needed (hence the EM). Once you have the filter, you can recycle it to probe for connections between the here and now and past events (point 2. in the main post above). This generates metacognition, the possibility of re-checking the results of the EM; but when you do that, you inevitably become aware of your own experience. The above is again just a sketch: I hope the paper does a better job at explaining this central point.

    Could you please write someday more about what is a “computational subsystem” in a brain?

    Ah! I understand why and how the idea that the brain is “essentially” modular is being questioned and revised; however, I do think it’s conceptually useful to start with the expectation of finding that some functions rely on particular brain structures. From there, you can expect that structures get recycled and also participate in similar, but different, functions, and that the same functions are also used to participate in different tasks, and so on. Thus, I’m not afraid of the fact that we are figuring out that the idea of “1 function, 1 structure” is wrong; it was always going to be. At the same time, though, I do think that the idea of brain modules can and should survive, therefore I’m exploiting it without too much explanation. I suppose I just hoped I could get away with it, and save precious words, but I can’t get past you, can I?

    So a brain contains a subsystem that attends (without consciousness, I presume) to various features, objects, etc. so that the rest of the brain can then attend (consciously, I guess) to those events that are important?

    Almost. We know from phenomena such as inattentional blindness that a lot of processing and data extraction happens well below the consciousness threshold. So plenty of pre-digestion happens before things get the chance to be conscious. However, ETC proposes a sharp distinction between phenomenal experience (PE), the “what it is like”, and awareness of PE. In the context of Higher Order Thought theories there is this thing called the Transitivity Principle (TP): a sensation is experienced when the subject is appropriately conscious of it. In ETC, this kind of recursion is important, but I claim that PE happens for each evaluation made by the EM. However, you become aware of it only when you re-evaluate the result of the first evaluation.
    ETC also proposes that a lot of the processing that follows keeps going on in the dark: only what has clear ecological reasons to be sometimes worth remembering may potentially work as input to the EM, and thus be perceived. So, overall, the conscious part is a small layer between perception and executive functions; there is no “unconscious bottom” and “conscious top”. (I like this! I never described it in this way before)

    Dude! Far out! What counts? [refers to “The EM generates what is usually called phenomenal experience, because it enriches sensory information with what counts to the experiencing subjects”]

    I agree, far out indeed. Once again, I hope the full paper does a better job, and I should have put more thought into the sentence that made you jump. The EM can easily be described as a pattern-matching engine: some perception comes in, you search memory for the best match, and probably also get a measure of surprise. If you frame the task in Bayesian terms it becomes spectacularly straightforward; in the paper I don’t, however. My aim was to be explicitly agnostic on how to algorithmically implement the thing, and avoid placing a big bet on one and only one lower-level theory, for two reasons:
    1. We don’t know enough of how the actual neural structures do such things. In fact, it is even possible that non-neurons play a crucial role, so it’s too early to pick a winner.
    2. In terms of synthetic implementations, you can implement the EM in many different ways, and if I’m right, they would all work.
    If I were forced to place a bet, I would have no doubt: I would bet on a Bayesian solution. But I’m not forced, so I remain officially agnostic ;-). [ETC would also extend the Bayesian framework in interesting ways, but that’s the subject for another post and paper.]
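    To illustrate (not to commit to!) the Bayesian framing of the pattern-matching engine: below is a minimal sketch where an incoming observation is matched against stored prototypes and a surprise score falls out of the model evidence. Every category, prior and the Gaussian likelihood are assumptions invented for this toy example; nothing here is specified by ETC itself.

```python
import math

def bayesian_match(observation, prototypes, priors, noise=1.0):
    """Return (best_category, surprise) for a 1-D observation.

    prototypes: {category: expected_value}; priors: {category: P(category)}.
    The Gaussian likelihood around each prototype is itself an assumption.
    """
    # Unnormalised Gaussian likelihood of the observation under each category.
    likelihoods = {
        c: math.exp(-((observation - mu) ** 2) / (2 * noise ** 2))
        for c, mu in prototypes.items()
    }
    # Model evidence: how well the whole memory accounts for the observation.
    evidence = sum(priors[c] * likelihoods[c] for c in prototypes)
    # Bayes' rule gives the posterior over categories; pick the best match.
    posteriors = {c: priors[c] * likelihoods[c] / evidence for c in prototypes}
    best = max(posteriors, key=posteriors.get)
    # Relative surprise: -log of the (unnormalised) evidence. Low for
    # familiar observations, high for observations far from every prototype.
    surprise = -math.log(evidence)
    return best, surprise
```

    A familiar observation yields a confident match and low surprise; an observation far from every prototype yields high surprise, which is the kind of signal an EM could use to answer “Am I bothered?”.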

    If I read you correctly, you claim that EM sits close to sensorium. What about our thoughts and dreams that don’t come from the world but are generated in other ways (but are available to introspection)?

    EM sits close to the sensorium, yes, but its primary job is about selecting what looks worth remembering. Thus, EM can receive thoughts, as well as other internally generated stuff as input. On dreams, well that’s another looong story and even more wild speculation. I’m already very worried about how much speculation is behind ETC, so I’d rather not unpack the question on ETC and dreams, not now, at least.

    […]The non-representational theories are found in the so called radical embodied cognition framework.

    Yup. I was trying to write something that didn’t rely on specialised jargon, but would still tickle the curiosity of those in the know (it worked for you!). If you prefer: if a theory isn’t radical, I don’t even register it as a theory ;-). Anyway, you got it right, I find radical embodiment very interesting, but I also (still) can’t see how the claim of being non-representational stands: either I’m failing to understand it or it is wrong (it’s not about the absence of representations, but about having representations that are radically poor, just a pointer on how to fetch information, and nothing more; still, the pointer has to exist). Hence, it is well worth taking more time to study it and then write about it.

    What are some interesting predictions that ETC makes?
    What are some testable hypotheses that you make on ETC’s grounds?

    [this also answers your other question in the “Consciousness at last” post.]

    Primarily, on its own grounds, and without enriching ETC in the light of one lower-level theory or another, ETC makes exactly one testable prediction. One and only one, as a precise “make it or break it” design decision. The prediction is: you can’t learn new motor tasks if the information necessary to understand the task doesn’t reach full awareness. In the paper I claim that you could use masked stimuli to test this. The idea being a task like “press the blue button when you see a blue letter and the green one when a green letter appears”, but without telling the subject the rules. You tell the subject: “you’ll see stuff, and you need to figure out when to press the right button; you can make as many mistakes as you want”.

    It seems an obvious prediction to make, but it isn’t, not after realising how much goes on in the dark:
    – Consider actions you take before even knowing that something happened, such as the quick, life-saving steering adjustments I make when I hit some gravel on a corner with my motorbike. I “know how” to avoid falling, and do it regularly before I can register that something happened.
    – The already mentioned inattentional blindness requires very sophisticated processing to dynamically change what is allowed to reach the conscious level. The same sophisticated processing could then be available to learn new stuff. ETC specifically predicts that this isn’t the case.
    – In fact, we are starting to believe (but the jury is still out) that priming can be a real thing. If it is, the “same sophisticated processing” does happen and muddies the picture. Whether it falsifies ETC or just makes the picture more complicated I don’t know. We are reaching the limits of my brainpower at this point.

    Secondarily (assuming the make-it-or-break-it test is passed!), once you do pick a secondary theory, gazillions of predictions are made possible by ETC: take the Dynamic-Core cum Global Workspace (DC & GW) theory (to remain within the boundaries of what I do discuss in the paper): once we have a fairly decent picture of the connectome of this particular system, you can run simulations and see if you can implement the kind of algorithm I describe as the function of the EM. That’s one reason why I don’t commit to any particular implementation: you need a secondary hypothesis (e.g. the EM is implemented in the Dynamic Core, while the input may come from all over the place (GW)) to specify the details. People like Dehaene and Changeux could have many years of fun working in this direction (they do already, to be fair).

    Otherwise, if we pick Predictive Coding as our Bayesian framework of choice, you will end up making hypotheses about what kind of network organisation supports an EM-like functionality. In this case, any PC network would do, but the additional requirements would be about connectivity and re-entrance (we already believe that the DC fulfils these two requirements, so we already have three hypotheses to verify!). Thus, you would end up producing specific hypotheses all the way down to the neuronal level. How does ETC help? It allows linking the hypotheses at the computational and neuronal levels all the way up to cognition (what you experience); as far as I know, it’s the only theory that is ambitious enough to make this claim. Thus, if ETC does resist the all-important empirical verification, and if it does make sense conceptually, the potential it has is hard to overstate. I’ll stop here, otherwise I become megalomaniac.
    In short: pick a physiologically plausible theory, plug ETC on top of it, and see what predictions come out. I can’t provide a full list because I don’t even know the full list of all the physiologically plausible theories. But I know that ETC is deliberately agnostic of the implementation details specifically because in this way it makes it possible to “interpret” many existing theories in new testable ways.

    I hope this makes some sense…

    • ihtio says:

      Sergio,

      I’m afraid I didn’t fully understand the paper. What eludes me is something that would help integrate completely disparate and incongruent elements: on the one hand we have the Computational Theory of Mind, which states that what is needed for cognition (perception, learning, decision making, memory, etc.) is a machine operating on syntactical structures (representations, meaningless symbols), and on the other hand we have consciousness, whose function we don’t really know.
      You claim that consciousness allows learning from experience. However you also build on computationalism. But if you already have computationalism that should account for many of known psychological / neurological phenomena, such as learning, categorization, etc., then why exactly do we need any qualia? Attention has been studied for over half a century in cognitivist paradigm, and qualia weren’t needed for that.
      Maybe phenomenal consciousness is needed for learning complex behaviors, but what then do computations do in the brain (that is: why aren’t computations powerful enough to enable learning without consciousness)?

      The second thing: What would ETC say about robots such as RobotCub / iCub (http://www.icub.org/)? It certainly does learn new behaviors. It focuses attention. Is it conscious?

      The third thing: What animals are conscious and what animals are not conscious (if any)? Are slugs or insects conscious?

      And the last thing: my phone constantly monitors its environment (GPS, wifi, illumination) and itself (battery, gyroscope). There are apps (and more can be made) that process some information (from the camera and other sensors), focus on specific elements, and according to some rules based on the state of the phone (battery, cpu usage, cpu temperature, etc.) create a representation of the whole phone, which can then be accessed by said application to modify the phone’s parameters (e.g. to extend battery life, to change the brightness of the screen, to put wifi to sleep). New behaviors could then be learned by the phone – to change parameters according to time of day, location, internal state, and the like.
      Is my phone conscious according to ETC? It seems to have simple EM, memory, representational apparatus, ability to attend to itself (/representation of itself), does learn (change behavior over time).
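      The self-monitoring loop described above can be sketched as a toy program (purely illustrative: the class, rules, and numbers are hypothetical, and no real phone API is involved). It builds a representation of the device’s own state from simulated “sensors”, applies a rule to that representation, and so does “change behavior over time”, without anything that obviously resembles phenomenal experience:

```python
# Toy model of the self-monitoring phone (all names and rules hypothetical).
class SelfMonitoringPhone:
    def __init__(self):
        self.battery = 100.0      # simulated battery level
        self.brightness = 0.8     # a parameter the device can adjust
        self.history = []         # remembered (state, action) pairs

    def sense(self):
        # Build a representation of the whole device from its "sensors".
        return {"battery": self.battery, "brightness": self.brightness}

    def act(self, state):
        # Rule: when the battery runs low, try dimming the screen.
        if state["battery"] < 50 and self.brightness > 0.2:
            self.brightness = round(self.brightness - 0.2, 2)
            return "dim"
        return "none"

    def step(self):
        state = self.sense()
        action = self.act(state)
        # A brighter screen drains the simulated battery faster.
        self.battery -= 1.0 + 2.0 * self.brightness
        self.history.append((state, action))

phone = SelfMonitoringPhone()
for _ in range(40):
    phone.step()
# The device has monitored itself and changed its behavior over time.
```

      Whether such a loop satisfies ETC’s requirements, or merely mimics their surface, is exactly the question being asked here.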

  3. gvdr says:

    And check, follow here as well.

  4. gvdr says:

    Dear Sergio,

    I finally found a bit of time to read your paper. I have to say that I’m not an expert in cognitive science, so much of what you claim is outside my comfort zone.

    I have one main question (I leave all the details aside for the moment). It is my opinion (but I know I am in good company there: similar ideas are expressed by Tuttle in “Apes and Human Evolution” about certain “human” traits, and by Bouchard in “The Nature and Origin of Language”) that a sound “evolutionary theory of X” should be able to provide convincing arguments about “why X is found in clade y and not in clades z”. To be more explicit, in your case it is not enough to state that “X can be evolved from the ancestral set of traits found in species A”, nor that “X gives an evolutionary advantage over not-X”. It is also necessary to say in which ecological, environmental (and social) conditions, unique to clade y, X is selected for (here “selected for” is used in the technical sense; see Kaplan & Pigliucci, Making Sense of Evolution, or Sober, The Nature of Selection).

    So, in the light of your theory, can you sketch an explanation of the distribution (or, vice versa, try to delineate such distribution) of consciousness?

    Best,

    Giulio

  5. Sergio Graziosi says:

    ihtio and gvdr(Giulio)
    thank you so very much!
    I’ll invest my weekend in digesting and hopefully replying in detail to both of you (here and in the “Consciousness at last” post). For now I can say that I think I understand your questions and remarks, and find the vast majority very reasonable. I already worry about how to make use of all of them, but that’s probably a little premature.
    I take courage from the fact that you didn’t suggest something like “drop it, it just doesn’t make sense”, but I do think that you might have tried to write this in between the lines… Please don’t sugar-coat your pills, it just makes it easier for me to self-deceive and gloss over the subtleties.

    • ihtio says:

      I assure you that I had no intention of putting such obscenities between the lines. On the contrary, I think that your ideas, and especially your attempt at publishing an article, are commendable.
      Some of my points are related to the “form” of the article (such as inadequate – in my opinion – division of space across sections), some are about the core of the topic, and some just stem from lack of good understanding of the theory. A friendly discussion this is :).

    • Sergio Graziosi says:

      Thanks again: my first aim is to figure out if I’m wasting my time and energy… I don’t think I am, but I have no chance to reliably detect self-deception: therefore I need people to tell me.
      In other words, I’ve suggested the possibility (if true, it wouldn’t be obscene!) because I know very well that the discussion is friendly.
      So, I reckon you don’t think I’m wasting my time on something that has no chance of being useful, and therefore it is worth trying to make it work.

  6. Sergio Graziosi says:

    ihtio and gvdr(Giulio),
    I’ve drafted a long reply to both of you. Given its length, I plan to let it settle for a day or two (to see if I agree with myself after a good night’s sleep) and then publish it as a separate post. I hope you don’t mind.
    In the meantime, I can add two remarks covering what I don’t address in the long reply, as well as what I don’t plan to address later on. [ihtio, your comments on strategy and form are all noted, but I think I should first discuss the ETC contents.]

    1. ihtio: I wasn’t aware of RobotCub / iCub; it looks very interesting! I will have to find out how it works, and that takes time, I’m afraid.
    2. gvdr: I don’t need to be convinced (despite my provocations in our other discussion), your question is 100% pertinent and I’m trying to address it in the long reply.

    Thanks again! Your comments alone made my attempt worthwhile and fun at the same time. I couldn’t ask for more. (Actually, I do! More comments are welcome 🙂 )

  7. […] Two of my readers have kindly provided some very interesting comments and questions on my Evolutionary Theory of Consciousness paper (ETC). Because their comments and questions are very relevant and thought-provoking, I am publishing my reply as a separate post. This allows me to cover some (highly speculative!) aspects of ETC, without having to cut too many corners. To understand what follows, it’s probably best to read the full paper first. Alternatively, one could try with the (extremely) short version available here. […]

  8. ihtio says:

    One more thing: What interesting things does ETC prohibit?
    The question phrased more intelligibly: Assuming that ETC is ok, what are some of the phenomena that we should not come across in our daily experiences or in serious experiments?

    I’m asking this because a good theory not only says what should happen, but – maybe even more importantly – what shouldn’t happen.
    For example, given Newton’s laws of motion, an object in motion on which no forces act will not change direction or speed.
    So a good theory provides good constraints on what we can observe. What are the bounds of observations that ETC sketches?

  9. I still haven’t read the paper, but here are some passing thoughts on this post. These are written hastily, as they come to mind while I read. I apologize if I repeat something from the comments above me, since I have not read them. These are also not meant as significant critiques, just quick points that popped to mind.

    In the big picture, I can see a pattern emerging by the action of natural selection

    I am uncomfortable with this constant rehashing of the Aristotelian ladder of life in different disguises. I think that this is a contentious point for people studying evolution, and it should be approached very carefully and with nuance. Especially if evolutionary grounding is a big part of your theory.

    it is possible that consciousness, and thus the ability of making deliberate decisions on the basis of vast and complex conceptual understandings, might (just might) introduce an anomaly.

    In other words, they are the modern-day soul or vitalism? Institutions make decisions based on vast and complex conceptual understandings; will you explain why they are not conscious? And if they are conscious, then humans are just another step on your ladder of complexity (or I guess, now that we are not the top, we might as well switch to the religious terminology completely and talk about the great chain of being).

    why do conscious creatures so frequently and deliberately pick the choice that brings them harm? You can see that this question can become: why does consciousness seem to be dysfunctional with alarming regularity? Again, if we don’t understand what consciousness does, we can’t even approach this question.

    What does this have to do with consciousness? This is just decision making, which in most accounts of consciousness is independent of it. You seem to tie them together by dragging in free-will, and saying that our conscious experience is essential for explaining free will and thus for decision making. Is that a correct understanding?

    I think the last sentence of the above quote is false. Plenty of psychology (and now behavioral economics) has approached the question, in some domains very successfully, without having a theory of consciousness. In fact, they have done it in animals that many don’t — or might not, or have not — grant consciousness to, like mice, fruit flies, or C. elegans. This brings me to the first sentence: consciousness is in no way necessary for making bad decisions or bringing self-harm (at least if we quotient out pedantic points about free will, but maybe that is unjustified?). Even in humans, social psychology reminds us how much we harm (or benefit) ourselves unconsciously or unknowingly.

    you need experience to learn from experience

    I suspect that this is a very difficult sentence to justify. I can write plenty of ML algorithms that learn from their experience, but I would hardly consider them conscious (even though I might consider other more complicated algorithms as conscious). I definitely don’t see how you would defend needing qualia in order to learn. Later on, you seem to go the route of saying that qualia directs attention, but even there it is not clear to me how you could turn that into a justification for them being necessary to direct attention. Plenty of things that many would not consider conscious seem to direct attention fine.

    Updating myself on the advances of neuroscientific knowledge about consciousness was a grim affair. The more I read, the more depressing it got. We have a massive amount of data, and precious little to explain it. We have tiny proto-theories that try to explain small details, and a few macro-theories that claim to explain the whole, but look irredeemably simplistic to my eyes (will tackle this another day). By contrast, the philosophical side of the argument looked pugnacious and manifestly confused.

    Do you summarize and review this extensive literature in your paper? Such a systematic review on its own can be extremely useful, either as a series of blog posts or as a review paper. Making such a review concise but sufficiently broad could be a great way to get your foot in the publishing door.

    From there, the derived intuition is that in order to selectively learn what is useful to us, we need to:

    and then you go on to list two points. I can see how this can become a theory of learning, and plenty of theories of learning already exist; some of them quite good. But again, I don’t see how this is talking about consciousness as opposed to just learning and decision making, or adaptive behavior more generally. I guess that I will need to read your paper for that part.

    ETC proposes that our brain contains a computational subsystem .. that answers the “Am I bothered?” question.

    Is there one such system? Many? As far as I remember, the biggest arguments against classic modular functionalist theories of the sort you seem to be proposing here (and that were fashionable a few decades ago) are that we don’t see such modules as natural single units, either psychologically or neurologically. If there are many such systems — say one for each of the senses — then you are not much closer to resolving consciousness, since the question becomes: why do we often seem to have a unified experience over all our senses, even if we have separate modules for attention in each sense?

    As with learning, you will need to address unconscious attention as well, like athletes bracing themselves for impacts that they don’t consciously register until after the impact versus getting knocked out by impacts that they didn’t sense coming at all.

    it seems to me that they allow to surpass a (suspiciously large) amount of disputes, and at the same time explain why promising scientific approaches seem deadlocked and unable to produce the much awaited-for breakthroughs

    It might be worthwhile to focus on these solutions, instead of presenting your overall theory. Can you select a single (or a small locus of) dispute that you think ETC is particularly good at resolving? Can you then fairly frame that dispute as a review of the existing literature, and then present just the minimal amount of your thought needed to surpass the dispute? If you do this with a couple of disputes, then people will become interested in your theory, and you can then write an exposition of the whole theory.

  10. Sergio Graziosi says:

    Artem,
    thanks for reviving this discussion, I am still starving for feedback!
    I guess providing passing thoughts is entirely appropriate here, since this post is meant to be informal, approachable, and, at best, whet the reader’s appetite. This is also a general disclaimer: given the aim and colloquial tone of the post, I’ve intentionally sacrificed rigour, in this case.

    On to your comments: I think I know exactly where your aversion to ladder-like depictions of evolution comes from (nice post, BTW!), and I can agree without worries. People should be careful, especially because encouraging teleonomic views of evolution does foster the wrong kind of over-simplifications. In this particular case, I am certainly guilty of taking this kind of slippery shortcut. I do have a few mitigating reflections to add, though.
    First, I am openly and vociferously anti-essentialist, in particular for biological phenomena: the side effect is that I think I can allow myself a little slack. More explicitly, if I say that “I see a pattern emerging”, I necessarily mean just that: the pattern could be inside my seeing, more a function of how my cognitive structures work than of how the world is. Furthermore, assuming my cognitive structures do capture something “real” (with scare quotes), saying that I see a pattern emerging isn’t negating that other patterns may be developing in other directions. The main observation however remains: information does accumulate and aggregate around bigger and bigger clusters. It’s happening, we can see it, and we can even estimate it quantitatively (I guess). It’s the main reason why I have an ongoing interest in Antifragility, despite Taleb’s attempts to make it dislikeable. So overall, I am convinced that evolution is directionless, but this doesn’t mean we are not allowed to slice up its products and isolate interesting patterns. One such pattern is about accumulating information, and sure enough, it is very interesting to me.
    Does this mitigate the crime?

    In other words, they are the modern-day soul or vitalism?

    You’ve lost me here!

    Institutions make decisions based on vast and complex conceptual understandings, will you explain why they are not conscious?

    Hmm, not really, but maybe by extension. Re your question, there are two interesting contributions to highlight: the first is the infamous “Chinese brain” thought experiment (I like Brian D. Earp’s answer to it). The same “Chinese brain” approach can (and, IMHO, should) then be extended to ask ourselves about the consciousness of aggregates, as Eric Schwitzgebel famously does. I’m certainly not deluded enough to think I can offer the final word, but I do think that ETC (if it does make sense!) offers a way to answer this sort of question. Since it proposes a very high-level description of what kind of functions are needed for human-like consciousness, accepting ETC would require us to accept that Mr. Li is conscious, while the USA isn’t.

    What does this have to do with consciousness? This is just decision making, which in most accounts of consciousness is independent of it. You seem to tie them together by dragging in free-will, and saying that our conscious experience is essential for explaining free will and thus for decision making. Is that a correct understanding?

    Roughly, yes, it’s correct. But do note the initial disclaimer, this post wasn’t written to offer a rigorous analysis of my position and we’re commenting on the section that describes my initial motivations, so we are really very far from trying to figure out whether the end result is of any use.

    I think the last sentence of the above quote is false. Plenty of psychology (and now behavioral economics) has approached the question, in some domains very successfully, without having a theory of consciousness.

    Yes and no. Certainly decision making doesn’t require consciousness, and thus one can unconsciously make wrong decisions. On another view, however, you can restrict “decision making” to “conscious decisions” only, and call the other stuff “mechanisms” or “reflexes”. I hope you get my gist: the whole mystery of consciousness can be seen to hinge on the apparent distinction between “conscious decisions” and automatic, mechanistic and necessary reactions. Why do we feel like there must be a great divide between the two? It’s not hard to link this way of describing the mystery to the original Hard Problem, I think. [More below]

    you need experience to learn from experience

    I suspect that this is a very difficult sentence to justify. I can write plenty of ML algorithms that learn from their experience

    This time the discursive tone has caught you off-guard ;-). I’m not saying “you need experience to learn”, I’m saying “you need experience to learn from experience”, which is intentionally tautological. If the big mystery of consciousness is why we have “phenomenal consciousness” (Block’s term) or, as I prefer to call it, “phenomenal experience” (PE), then the fact that we all intuitively accept that we learn from experience provided me with the first spark of intuition, which I then tried to develop into something more coherent (exploring the consequences of a tautology can be useful! Sounds familiar?). Thus, (once again) if we accept ETC, we would get a distinction between different ways of learning, which is something I am currently agonising over, so I most definitely can’t claim to have well-defined answers.

    Do you summarize and review this extensive literature in your paper? Such a systematic review on its own can be extremely useful, either as a series of blog posts or as a review paper. Making such a review concise but sufficiently broad could be a great way to get your foot in the publishing door.

    Indeed!
    However, a proper systematic review of the literature is positively impossible. There just is too much material and too much ongoing production. In the paper I cherry-pick across both philosophy and science, trying to make sure that I do tackle the mainstream offerings from both camps. If one wanted to include fringe or lesser-known theories as well, a lifetime wouldn’t suffice, as everyone and his dog has one to offer (one good reason to ask: why should I offer mine as well? I am currently very close to concluding that I shouldn’t…).

    I don’t see how this is talking about consciousness as opposed to just learning and decision making or adaptive behavior more generally. I guess that I will need to read your paper for that part.

    Yes! In ETC there is a very definite link between human-like consciousness and a certain kind of learning. The “certain kind” part is what I’m agonising over: I can’t seem to pinpoint exactly what kind, which is indeed worrying.

    Is there one such system? Many? As far as I remember, the biggest arguments against classic modular functionalist theories of the sort you seem to be proposing here (and that were fashionable a few decades ago) are that we don’t see such modules as natural single units, either psychologically or neurologically.

    Good point. ETC is indeed uncompromisingly modular, and I do think it’s a weakness. But the theory is already worryingly fuzzy; ditching modularity would inevitably reduce ETC to vague hand-waving. For example, the main falsifiable claim that ETC makes revolves around the question you’re posing: if indeed there are many such “mini-modules” but no central one, then ETC is just wrong. There may be many such “mini-modules” acting as upstream filters, but there must be a central one for ETC to hold.

    It might be worthwhile to focus on these solutions, instead of presenting your overall theory. Can you select a single (or a small locus of) dispute that you think ETC is particularly good at resolving? […]

    Oh, I wish I could! I’m not saying it would be impossible, I’m saying that I don’t know how: I’ve tried and failed. The ETC paper is already too long, and one of its many weaknesses is how many corners it cuts. Because it spans both philosophical and scientific approaches, keeping the length down to “long paper” dimensions was very hard and painful already. The version I’ve uploaded includes an intro which is also a pitch to Cognitive Scientists; I suppose that’s something that could be omitted (depending on where it is published / the intended audience…).
    Oh well: really appreciate the chance you gave me to re-think about all this.

    • Sorry for the delayed response, I didn’t get a notification that you replied. Also, I think there is a stray blockquote-tag somewhere in your reply.

      I am convinced that evolution is directionless, but this doesn’t mean we are not allowed to slice up its products and isolate interesting patterns. One is about accumulating information, and sure enough, it is very interesting to me.

      Fair point. As long as you are conscious of the fact that your views of consciousness are a reflection of your consciousness and not a completely external pattern. But then, this also takes a lot of ‘special-ness’ away from it. If it is the metric by which others are measured, it is not surprising that it measures well under that metric.

      accepting ETC would require us to accept that Mr.Li is conscious, while the USA isn’t.

      From your description of ETC so far, it is not obvious that this follows. As such, you should make sure that you have a clear explanation of how, if I accept ETC I can go on to show that Mr. Li is conscious (and in what sense of the word) but China is not.

      In another view however, you can restrict “decision making” to only “conscious decisions”, and call the other stuff “mechanisms” or “reflexes”. I hope you get my gist: the whole mystery of consciousness can be seen to hinge between the apparent distinction between “conscious decisions” and automatic, mechanistic and necessary reactions.

      This makes more sense. I was not seeing what you meant by the connection between consciousness, integrated information, and learning. What really helped me see it — and the reason I came back to this post and saw that you responded to my comment — was listening to the recent interview of Colin Allen on Rationally Speaking (episode 159). Not sure if you listen to this podcast, but you might enjoy this particular episode. He also talks about the “distinction between different ways of learning” that you are agonizing over. Have you reviewed his work on your blog before? I can’t find anything with a quick search.

      I look forward to your continued thought on this.

  11. Sergio Graziosi says:

    Artem,
    don’t apologise for late replies. My replies are chronically slow, if anyone follows suit, that’s a bonus for me. [Fixed the blockquote mess, thanks!]

    Fair point. As long as you are conscious of the fact that your views of consciousness are a reflection of your consciousness and not a completely external pattern. But then, this also takes a lot of ‘special-ness’ away from it. If it is the metric by which others are measured, it is not surprising that it measures well under that metric.

    I’m not sure I follow. In my view, cognition is about picking up patterns, slicing up reality on this basis, and getting predictive powers in return. Can we ever be sure that any pattern we detect in the “real world” (with scare quotes, as it should be the perceived world) is “completely external”? I find that answering “no” to this question allows one to unravel a great deal of philosophical conundrums. If you think we should answer “yes”, please do elaborate!
    Going back on topic, my views on anything are necessarily a reflection of my consciousness, because that’s what allows me to even have a view, this makes consciousness very special, in my view. How’s that for a Möbius strip of an argument?

    you should make sure that you have a clear explanation of how, if I accept ETC I can go on to show that Mr. Li is conscious (and in what sense of the word) but China is not.

    Noted and agreed. I think the China brain thought experiment is the hardest hurdle of them all, the strongest signal I’ve found that would suggest I am barking up the wrong tree, or even that I shouldn’t be barking up any tree at all. As such, I hesitate at the idea of tackling it head on, but perhaps I should have the opposite reaction. Maybe aiming at it without safety nets is what I need to finally decide whether I really am onto something and whether I should keep pushing. Hmmm…
    So far I’ve just been willing to bite the bullet. :-/

    What really helped me see [the connection between consciousness, integrated information, and learning] […] was listening to the recent interview of Colin Allen on Rationally Speaking (episode 159). Not sure if you listen to this podcast, but you might enjoy this particular episode.

    [I need strong incentives to listen to podcasts, they are not “random access” like written text, and thus require the sort of commitment I usually can’t afford…]
    Anyway, this podcast was well worth my time, and reminded me that I’m a fool, for I’ve almost forgotten my first scientific/intellectual crush: ethology. In all my studying about consciousness I’ve neglected it too much (not entirely ignored, but I certainly haven’t given it enough time).
    What Allen says sounds reasonable to me, word by word, which is more unique than unusual, when someone touches the consciousness topic.
    In particular, the idea that deferred learning is crucial to explain the function of consciousness is precisely the heart of my argument (as originally conceived) and isn’t new at all (see this gem of a paper, from 1902!).
    What I’m agonising over is how to operationalise the idea in order to guide the design of empirical studies (ha ha, as if anybody had any reason to take notice): the boundary between what can be learned via direct association and what can’t is inevitably fuzzy. It rests on working memory: following the kind of experimental paradigms discussed in the podcast (they are good, but don’t satisfy me in full), whatever (conditioning) stimulus remains in (any sort of) working memory for long enough (assuming there is only one kind of “working memory” seems premature to me!) can potentially be used as a conditioning cue without having to reach consciousness. By this point I’ve probably lost you, as I’m positively running away with the ball you gave me…
    Thanks for the ball, though: you made me think in directions that may well prove to be useful!
    Hope some of the above makes some sense to you.


All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Please feel free to re-use and adapt. I would appreciate it if you let me know about any re-use; you may do so via Twitter or the comments section. Thanks!