Two of my readers have kindly provided some very interesting comments and questions on my Evolutionary Theory of Consciousness paper (ETC). Because their comments and questions are very relevant and thought-provoking, I am publishing my reply as a separate post. This allows me to cover some (highly speculative!) aspects of ETC without having to cut too many corners. To understand what follows, it’s probably best to read the full paper first. Alternatively, one could try the (extremely) short version available here.
The following is a partial but cumulative reply, addressing only two comments, from ihtio and gvdr. The points I’ll try to cover address both comments and questions at the same time: I think the separate answers complement one another, and therefore I’ll try to address them in one (almost) organic essay. Here, I’m addressing the questions about content and will ignore the ones about form; my hope is that clarifying the conceptual side is a prerequisite for discussing how to better express it.
From ihtio’s comment, I get the impression that one important message didn’t come across: I was hoping that ETC clearly proposed a way to describe the relation between computational theories of mind and consciousness. Furthermore, both comments, for different reasons, ask for something that is currently hard to say: which animals are conscious? If validated, ETC would provide a clue, so I’ll expand on this. Similarly, I will briefly discuss the question of what makes artefacts conscious, if at all.
To address these points, I’ll rephrase and expand the relevant ETC section (the first part of the “conclusion”, lines 738-766).
If ETC is correct, human-like consciousness requires a number of mechanisms:
Sensory data must be used to produce a coherent representation of the world, which must include the self (level 3). Different elements on level 3 will have different relative weights, used to prioritise what happens next. Based on these weights, elements are used as input for the Evaluation Module (EM); the task of this module is to direct the received information in different directions. Options include:
– do nothing, just ignore the input and fetch new data from level 3.
– give more attention to the current element, which usually requires new motor input to acquire more data, but is also related to retrieving more information from memory (I do not cover this in the paper, as it requires picking more detailed theories to define how it might be done).
– flag it as potentially interesting, worth remembering. Again, I don’t discuss what happens next, but I have a strong feeling that it involves sleep.
– take immediate action. This could be dodging a projectile, hiding, reaching and grabbing, etcetera.
Aside from the first possible outcome, all the others need to be able to influence the representations on level 3. This means that the work done by the EM is inherently susceptible to recursive evaluation. In turn, this recursion is probably the most important element in ETC: how the EM is implemented, and thus what (unknown and hypothetical) influences the EM has on level 3, ultimately defines what metacognitive abilities a system has.
For example (following the paper), if the EM can create a level 3 element which marks the condition of “focussing attention on (perception) X”, the system will inevitably be able to “know” that X was perceived.
ETC claims that this fact alone explains why our perceptions take the puzzling form of what we call qualia (in the philosophical sense). They are perceived as immediately available, self-explanatory, ineffable, etc. because of how the recursion between the EM and level 3 works: the information that participates in this loop is the information that provides evolutionary advantages; all the rest (the vast majority) gets removed at each evaluation. For example, level 3 doesn’t even contain a representation of the EM.
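Since neither the paper nor this post commits to a specific implementation, here is a deliberately minimal sketch of the loop described above. Everything in it – the `Element` and `Level3` classes, the numeric thresholds, the dispatch rule – is invented purely for illustration; only the four options and the write-back into level 3 (the recursion) come from ETC.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A single item in the level-3 world/self model (illustrative)."""
    content: str
    weight: float  # relative priority used to decide what happens next

@dataclass
class Level3:
    elements: list = field(default_factory=list)

    def highest_priority(self):
        return max(self.elements, key=lambda e: e.weight, default=None)

def evaluation_module(level3, threshold_attend=0.5, threshold_act=0.9):
    """One EM pass: dispatch the top element to one of the four options.

    The thresholds and return labels are made up; ETC does not commit
    to any particular dispatch rule.
    """
    elem = level3.highest_priority()
    if elem is None or elem.weight < 0.1:
        return "ignore"                      # option 1: fetch new data next cycle
    if elem.weight >= threshold_act:
        return "act"                         # option 4: immediate action
    if elem.weight >= threshold_attend:
        # Option 2: attending writes a new element back into level 3 --
        # this write-back is the recursion discussed above, and the kind
        # of element that marks "focussing attention on (perception) X".
        level3.elements.append(
            Element(f"attending to ({elem.content})", elem.weight * 0.8))
        return "attend"
    return "flag"                            # option 3: mark for retention
```

Note that the "attend" branch changes the very input the next EM pass will see, which is the minimal sense in which the EM’s output feeds back into its own evaluation.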
We can thus define the computational elements that are required:
1. A model of the world+self (including all the sensory structures, and a way to coherently bind all the sensory input together in level 3).
2. Relative weights for all represented elements.
3. An EM that redirects elements to other subroutines.
4. The possibility of flagging elements for long-term storage, and of using what is already stored to inform the activity of the EM.
5. The ability of the EM to influence what is represented in level 3, and thus to change what is available as input for the EM (recursion).
ETC claims that each of these five elements is required for full, human-like consciousness: if one of them is absent, or not fully implemented (for example, if the EM does not flag for retention, or doesn’t influence what’s on level 3), then the whole phenomenon either disappears or changes radically.
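The all-or-nothing shape of this claim can be restated as a simple checklist. The field names below are my own shorthand, not terminology from the paper, and the dictionary-of-booleans representation is only a convenient stand-in for the much harder question of whether a real system implements each mechanism:

```python
# The five ETC requirements, as a checklist (shorthand names are mine).
REQUIREMENTS = (
    "world_self_model",      # 1. coherent world + self model (level 3)
    "weighted_elements",     # 2. relative weights on represented elements
    "evaluation_module",     # 3. EM redirecting elements to subroutines
    "long_term_flagging",    # 4. flag for storage / use stored info in the EM
    "em_recursion",          # 5. EM can rewrite level 3 (recursion)
)

def predicts_full_consciousness(system: dict) -> bool:
    """ETC's claim restated: all five mechanisms must be present,
    or the phenomenon disappears or changes radically."""
    return all(system.get(req, False) for req in REQUIREMENTS)
```

On this restatement, a smartphone scores `True` on something like `long_term_flagging` (storage) but not on the full set, which is why the straight answer below is “No”.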
I have not tried, not even privately, to work out a range of tinkering possibilities and (speculatively) explore the space of different results. I have the feeling/expectation/hope that doing so would be very interesting in terms of interpreting different abilities of different species, but it is both hard work and heavily hypothetical, so I have two reasons not to pursue this line of thought at this stage.
We now reach the “Computational Theory of Mind” (CTM). To the best of my knowledge, this is a family of theories characterised by the expectation that either our behaviour, or both our behaviour and our “inner life”, will some day be fully described in terms of data-processing. However, to the best of my knowledge (again), no theory claims to have done this, and in particular, no one can claim to have convincingly bridged the explanatory gap (solved the hard problem). To do so, one needs a theoretical explanation (T) of how mere mechanisms can generate a mind, and then a complete (empirically verified) description of the mechanisms as they exist in the real world (M). At best, CTM may claim that once we have (M) we would have a chance of retrospectively producing (T). Some say that (T) will become unnecessary, and I must say that, despite my grandiose aims, I find this expectation quite credible.
In this context, ETC aims at being a significant contribution to (T), showing why a given computational architecture (1-5 above) logically generates the key and puzzling aspects of the consciousness phenomenon.
Similarly, ETC focusses on a particular kind of attention. Attention has been studied for a long time, yielding many fascinating fragments of the puzzle, but we are currently unable to piece them all together in a coherent form. In the paper I cite a particular work (Vecera 2014) which aims at unifying two separate accounts of attention, exemplifying my point: as for consciousness in general, there is no consensus about what attention *is*. [I must add that ETC doesn’t try to be the final word on attention; it uses one account of attention to make a larger point, that’s all.]
In this way we can then (speculatively) propose to classify particular physical structures as conscious, differently conscious, or not conscious, provided that we know and agree about what algorithms they implement and what input they take from both the world and themselves. Thus, we reach ihtio’s smartphone: is it conscious? The straight answer is “No”, because it does not fully implement mechanisms 1-5 as described by ETC (I can discuss the details, if required). However, because it is programmable, has long-term storage, and collects plenty of information from both the environment and itself, ETC (hypothetically) implies that, when equipped with the right software, a smartphone could become conscious. That’s why I felt it necessary to add some (hasty) warnings at the end of the paper (I will not discuss them today).
Next up: which organisms are conscious? Answering this question is important for many reasons, not least because it would allow us to approach the problem proposed by gvdr: what conditions promote the selection of consciousness, and what conditions don’t? This is a question I would love to answer, but currently I can’t. At present, there is no consensus on what consciousness is, and thus we don’t know how to anchor the question to empirical data. There are different approaches, but the one that looks most solid to me, and (naturally – grin) is compatible with ETC, is here (Edelman et al. 2005). In a nutshell, the idea is that a conscious organism is one whose cognitive capacities are qualitatively comparable with those of humans, excluding verbal abilities. This approach seems very sensible, but rests entirely on our reasonable intuitions: it is not, and can’t be, a formal system to settle the matter. Thus, unsurprisingly, if you move into the philosophy camp, you will find a large number of people who claim that verbal abilities are a prerequisite for consciousness. I think this is nonsense, but I do note that we are far from reaching an agreement and have no empirical way to pick a winner.
In this context, ETC allows us to approach the question in a new way, and provides some hope of developing empirical tests to establish whether a given organism is conscious. However, at this stage, using ETC in this way would be premature:
(a) I believe in ETC, but this doesn’t mean that ETC is logically correct. Without serious and widespread discussion, I can’t know whether ETC is internally consistent and does indeed propose a logical explanation of consciousness. We currently don’t know if ETC is theoretically solid, not even on pre-empirical grounds.
(b) Assuming that (a) is sorted, ETC needs to be empirically validated. It’s entirely possible that ETC is logically consistent, but that consciousness as it actually exists works in some other way. The first step towards empirically validating ETC is suggested in the paper; if that fails, ETC should be forgotten.
This long diversion was necessary to frame the problem posed by gvdr: given known ecological settings, can we predict whether consciousness would be advantageous? The short answer is “No”. Why not?
To start with, I think the following (please keep in mind that I’m in wild-speculation territory; I would not agree to write what follows in the ETC paper, not at this stage):
[I] The behavioural plasticity that, according to ETC, is made possible by mechanisms 1-5 is always advantageous. If it could be obtained at no cost, it would emerge each and every time.
[II] However, the costs of mechanisms 1-5 are very high, and show up at different levels. The first and most straightforward is energetic: we know that maintaining big brains has a very high energetic cost. This isn’t surprising: a high energetic requirement is certainly predicted by ETC, in the form of the neural architecture necessary to implement 1-5, plus what is needed to store “learned information”, which I suspect is very relevant overall. The second is behavioural: for the advantage to materialise, one needs to be able to learn from failed attempts, so one needs to survive multiple failures – not a small feat! Thus, you get a new way of looking at parental care, play, and other phenomena, but you are still unable to put a number on this cost. [There are more costs, but you get my gist, I’m sure.]
Furthermore, [I] depends on the variability of ecological conditions: in an environment that is 100% static, [I] disappears. However, the simple presence of conscious organisms guarantees that ecological conditions will vary, as individuals will learn new ways to secure (and consume) resources, sending us straight into the realm of differential equations and more complex maths, effectively cutting me out of the discussion :-/. Moreover, even the presence of any biological organism is in itself a good reason to expect that some variability is always present: natural selection would be at work, producing unpredictable changes sooner or later.
Therefore the prediction depends on the balance of [I] against [II], and since we don’t know how to quantify either, we can only make a few speculative and vague predictions. One is that the presence of conscious organisms will increase [I], making the costs of consciousness less and less relevant. Another is that consciousness should be likely to emerge in volatile environments, and would not normally evolve independently in very stable ones. For example, I’m pretty sure that micro-niches such as enclosed caves are not favourable to the evolution of consciousness. However, if a conscious species gets to colonise such a place, its arrival will in itself act on [I], possibly tipping the balance.
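Just to make the shape of the argument explicit (and nothing more): the balance above can be written as a toy inequality. Every functional form and parameter below is invented for illustration; as stressed above, neither side can currently be quantified, so this only encodes the qualitative claims that [I] vanishes at zero variability and that [II] is a fixed hurdle:

```python
def consciousness_favoured(variability, energy_cost, survival_cost,
                           benefit_scale=1.0):
    """Toy restatement of the [I]-vs-[II] balance (illustrative only).

    [I]: the benefit of behavioural plasticity grows with environmental
         variability, and disappears when variability is zero.
    [II]: the energetic and behavioural costs are treated as a fixed sum.
    """
    benefit = benefit_scale * variability   # [I]: zero in a 100% static world
    cost = energy_cost + survival_cost      # [II]: big brains + surviving failures
    return benefit > cost
```

On this caricature, a perfectly static micro-niche (variability of zero) never favours consciousness, while a sufficiently volatile environment does; and the feedback mentioned above would correspond to conscious colonisers raising `variability` themselves.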
Finally, as I’ve already hinted above, if ETC is right and consciousness is the typical evolutionary route to maximising adaptive behavioural flexibility, then the prediction is that consciousness should be able to evolve independently more than once. We would expect similar patterns of behavioural plasticity to appear along multiple evolutionary lines, even some that diverged very early. If we accept the intuitively very reasonable arguments of Edelman et al. (2005), we can then reach and validate the conclusion that (some?) birds (Butler 2008), some Cephalopoda (Mather 2008), and maybe even fruit flies (!! – Verzijden 2015 – NOTE: I’ve just read the abstract and should get hold of the full text shortly) are indeed conscious.
ETC, however, especially in the absence of (a) and (b), isn’t really helping much: at best, it reinforces the intuition at the basis of the efforts cited above, and thus indirectly validates their pre-existing conclusions. The only addition provided by ETC is the expectation that the evolution of consciousness can only happen in organisms that have already evolved something similar to what I call level 3. According to ETC, organisms that don’t model their surroundings and don’t monitor the state and position of their body parts should have very little chance of evolving what is needed to generate human-like consciousness (hardly a surprising prediction).
The questions I’m trying to answer above are perfectly pertinent (and very interesting!), but I can’t avoid thinking that my answers are worryingly speculative. On one hand, the fact that I can attempt an answer is encouraging; on the other, I don’t think it would be appropriate to add most of the above to the paper itself. If I did, I would expect the peer reviewers to object on the basis that I’m pushing speculation far beyond what’s acceptable in a scientific paper (even one proposing a hypothetical theory: a theory that still needs to be validated).
Naturally: more comments, questions, criticism and rebuttals are very welcome!
Butler, A. (2008). Evolution of brains, cognition, and consciousness. Brain Research Bulletin, 75(2-4), 442-449. DOI: 10.1016/j.brainresbull.2007.10.017
Edelman, D. B., Baars, B. J., & Seth, A. K. (2005). Identifying hallmarks of consciousness in non-mammalian species. Consciousness and Cognition, 14(1), 169-187. PMID: 15766896
Mather, J. (2008). Cephalopod consciousness: behavioural evidence. Consciousness and Cognition, 17(1), 37-48. DOI: 10.1016/j.concog.2006.11.006
Verzijden, M., Abbott, J. K., von Philipsborn, A., & Loeschcke, V. (2015). Male Drosophila melanogaster learn to prefer an arbitrary trait associated with female mating status. Current Zoology, 61(6).