Evolutionary Theory of Consciousness: first reply to comments.

Two of my readers have kindly provided some very interesting comments and questions on my Evolutionary Theory of Consciousness (ETC) paper. Because their comments and questions are very relevant and thought-provoking, I am publishing my reply as a separate post. This allows me to cover some (highly speculative!) aspects of ETC without having to cut too many corners. To understand what follows, it's probably best to read the full paper first. Alternatively, one could start with the (extremely) short version available here.

The following is a partial but cumulative reply, addressing only two comments, from ihtio and gvdr. The points I'll try to cover address both comments and questions at the same time: I think the separate answers complement one another, so I'll try to address them in one (almost) organic essay. Here I'm addressing the questions about content and will ignore the ones about form; my hope is that clarifying the conceptual side is a prerequisite for discussing how to better express it.
From ihtio's comment, I get the impression that one important message didn't come across: I was hoping that ETC clearly proposed a way to describe the relation between computational theories of mind and consciousness. Furthermore, both comments, for different reasons, ask for something that is currently hard to say: which animals are conscious? If validated, ETC would provide a clue, so I'll expand on this. Similarly, I will briefly discuss the question of what makes artefacts conscious, if anything does.

To address these points, I’ll rephrase and expand the relevant ETC section (the first part of the “conclusion”, lines 738-766).
If ETC is correct, human-like consciousness requires a number of mechanisms:
Sensory data must be used to produce a coherent representation of the world, which must include the self (level 3). Different elements on level 3 will have different relative weights, used to prioritise what happens next. Based on these weights, elements are used as input for the Evaluation Module (EM), whose task is to direct the received information in different directions. Options include:
– do nothing: just ignore the input and fetch new data from level 3.
– give more attention to the current element, which usually requires new motor activity to acquire more data, but is also related to retrieving more information from memory (I do not cover this in the paper, as it requires picking more detailed theories to define how it might be done).
– flag it as potentially interesting, worth remembering. Again, I don't discuss what happens next, but I strongly suspect it involves sleep.
– take immediate action. This could be dodging a projectile, hiding, reaching and grabbing, etcetera.

Aside from the first option, all the others need to be able to influence the representations on level 3. This means that the work done by the EM is inherently susceptible to recursive evaluation. In turn, this recursion is probably the most important element in ETC: how the EM is implemented, and thus what (unknown and hypothetical) influences the EM has on level 3, ultimately defines what metacognitive abilities a system has.
For example (following the paper), if the EM can create a level 3 element which marks the condition of “focussing attention on (perception)X“, the system will inevitably be able to “know” that X was perceived.
ETC claims that this fact alone explains why our perceptions take the puzzling form of what we call qualia (in the philosophical sense). They are perceived as immediately available, self-explanatory, ineffable, etc. because of how recursion between EM and Level 3 works: the information that participates in this loop is the information that provides evolutionary advantages, all the rest (the vast majority) gets removed at each evaluation. For example, level 3 doesn’t even contain a representation of EM.

We can thus define the computational elements that are required:

  1. A model of the world+self (including all the sensory structures, and a way to coherently bind all the sensory input together in level 3).
  2. Relative weights of all elements represented.
  3. An EM that redirects elements to other subroutines.
  4. The possibility of flagging elements for long-term storage, and of using what is already stored to inform the activity of EM.
  5. The ability of EM to influence what is represented in level 3, and thus to change what is available as input for EM (recursion).
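To make the dependencies between 1-5 a little more concrete, here is a minimal toy sketch in Python. Everything in it (class names, thresholds, the weight arithmetic) is my own illustrative invention, not part of ETC itself; the only point it tries to show is how the EM writing a marker back into level 3 (element 5) makes "attending to X" available as future EM input, which is the recursion discussed above.

```python
class Level3:
    """Toy world+self model: weighted elements bound from sensory input (1, 2)."""
    def __init__(self):
        self.elements = {}  # label -> relative weight

    def update(self, label, weight):
        self.elements[label] = weight

    def highest_weight(self):
        # Select the element with the largest relative weight (2).
        return max(self.elements, key=self.elements.get, default=None)


class EvaluationModule:
    """Toy EM: redirects the selected element to other subroutines (3)."""
    def __init__(self, level3, memory):
        self.level3 = level3
        self.memory = memory  # long-term store (4)

    def step(self):
        label = self.level3.highest_weight()
        if label is None:
            return "idle"
        weight = self.level3.elements[label]
        if weight < 0.2:
            return "ignore"                      # do nothing, fetch new data
        if weight > 0.9:
            return f"act on {label}"             # take immediate action
        if weight > 0.6:
            self.memory.append(label)            # flag for retention (4)
            return f"remember {label}"
        # Give more attention: write a marker back into level 3 (5), so that
        # "attending to X" itself becomes available as future EM input.
        self.level3.update(f"attending to {label}", weight + 0.1)
        return f"attend to {label}"


# Tiny demo: a moderately weighted element triggers the attention branch,
# and the write-back leaves a trace of the attending itself in level 3.
l3 = Level3()
em = EvaluationModule(l3, memory=[])
l3.update("calf", 0.5)
print(em.step())  # "attend to calf"; level 3 now also contains "attending to calf"
```

Under this (purely illustrative) arrangement, the system can later "know" that it was attending to the calf simply because that fact is now an ordinary level 3 element, available for evaluation like any perception.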

ETC claims that each of these elements (1-5) is required for full, human-like consciousness: if one of them isn't present, or isn't fully implemented (for example, if the EM does not flag for retention, or doesn't influence what's on level 3), then the whole phenomenon either disappears or changes radically.
I have not tried, not even privately, to work out a range of tinkering possibilities and (speculatively) explore the space of different results. I have the feeling/expectation/hope that doing so would be very interesting in terms of interpreting different abilities of different species, but it is both hard work and heavily hypothetical, so I have two reasons not to pursue this line of thought at this stage.

We now reach the "Computational Theory of Mind" (CTM). To the best of my knowledge, this is a family of theories characterised by the expectation that either our behaviour, or both our behaviour and our "inner life", will some day be fully described in terms of data-processing. However, to the best of my knowledge (again), no theory claims to have done this, and in particular, no one can claim to have convincingly bridged the explanatory gap (solved the hard problem). To do so, one needs a theoretical explanation (T) of how mere mechanisms can generate a mind, and then a complete (empirically verified) description of the mechanisms as they exist in the real world (M). At best, CTM may claim that once we have (M) we would have a chance of retrospectively producing (T). Some say that (T) will become unnecessary, and I must say that, despite my grandiose aims, I find this expectation quite credible.

In this context, ETC aims at being a significant contribution to (T), showing why a given computational architecture (1-5 above) logically generates the key and puzzling aspects of the consciousness phenomenon.

Similarly, ETC focusses on a particular kind of attention. Attention has been studied for a long time, allowing researchers to uncover many fascinating fragments of the puzzle, but we are currently unable to piece them all together in a coherent form. In the paper I cite a particular work (Vecera 2014) which aims at unifying two separate accounts of attention, exemplifying my point: as for consciousness in general, there is no consensus about what attention *is*. [I must add that ETC doesn't try to be the final word on attention; it uses one account of attention to make a larger point, that's all.]

In this way we can then (speculatively) propose to classify particular physical structures as conscious, differently conscious, or not conscious, provided that we know and agree about what algorithms they implement and what input they take from both the world and themselves. Thus, we reach ihtio's smartphone: is it conscious? The straight answer is "No", because it does not fully implement the 1-5 mechanisms described by ETC (I can discuss the details, if required). However, because it is programmable, has long-term storage, and collects plenty of information from both the environment and itself, ETC (hypothetically) implies that, when equipped with the right software, a smartphone could become conscious. That's why I felt it necessary to add some (hasty) warnings at the end of the paper (I will not discuss this today).

Next up: which organisms are conscious? Answering this question is important for many reasons, not least because it would allow us to approach the problem proposed by gvdr: what conditions promote the selection of consciousness, and which don't? Now, this is a question I would love to answer, but currently I can't. At the present state of the art, there is no consensus on what consciousness is, and thus we don't know how to anchor the question to empirical data. There are different approaches, but the one that looks most solid to me, and (naturally – grin) is compatible with ETC, is here (Edelman 2005). In a nutshell, the idea is that a conscious organism is one whose cognitive capacities are qualitatively comparable with humans', excluding verbal abilities. This approach seems very sensible, but rests entirely on our reasonable intuitions: it is not, and can't be, a formal system to settle the matter. Thus, unsurprisingly, if you move into the philosophy camp, you will find a large number of people who claim that verbal abilities are a prerequisite for consciousness. I think this is nonsense, but I do note that we are far from reaching an agreement and have no empirical way to pick a winner.

In this context, ETC allows us to approach the question in a new way, and provides some hope of developing empirical tests to establish whether a given organism is conscious. However, at this stage, using ETC in this way would be premature:
(a) I believe in ETC, but this doesn't mean that ETC is logically correct. Without serious and widespread discussion, I can't know whether ETC is internally consistent and does indeed propose a logical explanation of consciousness. We currently don't know if ETC is theoretically solid, not even on pre-empirical grounds.
(b) Assuming that (a) is sorted, ETC needs to be empirically validated. It's entirely possible that ETC is logically consistent, but that real consciousness actually works in some other way. The first step towards empirically validating ETC is suggested in the paper; if that fails, ETC should be forgotten.

This long diversion was necessary to frame the problem posed by gvdr: given known ecological settings, can we predict whether consciousness would be advantageous? The short answer is “No”. Why not?
To start with, I think the following (please keep in mind I'm in wild-speculation territory; I would not agree to write what follows in the ETC paper, not at this stage):

[I] The behavioural plasticity that ETC claims is made possible by mechanisms 1-5 is always advantageous. If it could be obtained at no cost, it would emerge each and every time.

[II] However, the costs of mechanisms 1-5 are very high, and show up at different levels. The first and most straightforward is energetic: we know that maintaining big brains has a very high energetic cost. This isn't surprising; a high energetic requirement is certainly predicted by ETC, in the form of the neural architecture necessary to implement 1-5, plus what is needed to store "learned information", which I suspect is very significant overall. The second is behavioural: for the advantage to be present, one needs to be able to learn from failed attempts, so one needs to survive multiple failures, not a small feat! Thus, you get a new way of looking at parental care, play, and other phenomena, but you are still unable to put a number on this cost. [There are more costs, but you get my gist, I'm sure.]

Furthermore, [I] depends on the variability of ecological conditions: in an environment that is 100% static, [I] disappears. However, the mere presence of conscious organisms guarantees that ecological conditions will vary, as individuals will learn new ways to secure (and consume) resources, sending us straight into the realm of differential equations and more complex maths, effectively cutting me out of the discussion :-/. Moreover, the presence of any biological organism is in itself a good reason to expect that some variability is always present: natural selection would be at work, producing unpredictable changes sooner or later.

Therefore the prediction depends on the balance of [I] against [II], and since we don't know how to quantify either, we can only make a few speculative and vague predictions. One is that the presence of conscious organisms will increase [I], making the costs of consciousness less and less relevant. Another is that consciousness should be likely to emerge in volatile environments, and that it would not normally evolve independently in very stable ones. For example, I'm pretty sure that micro-niches such as enclosed caves are not favourable to the evolution of consciousness. However, if a conscious species gets to colonise such a place, its arrival will in itself act on [I], possibly tipping the balance.
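Sticking firmly to wild-speculation territory, the [I] versus [II] balance can be written as a one-line toy model (every number below is invented purely for illustration): the benefit of behavioural plasticity scales with environmental variability, while the costs of mechanisms 1-5 are roughly fixed, so consciousness is favoured only above some variability threshold.

```python
# Toy model of the [I] vs [II] balance. All parameters are made-up
# illustrations, not empirical estimates of any kind.

def net_advantage(variability, benefit_per_unit=1.0, fixed_cost=0.5):
    """Net selective advantage of consciousness:
    benefit scaling with environmental variability [I],
    minus the fixed energetic/behavioural cost [II]."""
    return benefit_per_unit * variability - fixed_cost

# A near-static micro-niche (e.g. an enclosed cave): the cost dominates.
print(net_advantage(0.1))   # negative: consciousness not favoured
# A volatile environment tips the balance the other way.
print(net_advantage(0.9))   # positive: consciousness favoured
```

Note that this sketch leaves out precisely what makes the real question hard: the arrival of conscious organisms feeds back into the variability term itself, which is why the honest formulation would need differential equations rather than a static threshold.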
Finally, as I’ve already hinted above, if ETC is right and consciousness is the typical evolutionary route to maximise adaptive behavioural flexibility, then the prediction is that it should be possible for consciousness to evolve independently more than once. We would expect to find similar patterns of behavioural plasticity to appear along multiple evolutionary lines, even some that diverged very early. If we accept the intuitively very reasonable arguments of Edelman at al. (2005), we can then reach and validate the conclusion that (some?) birds (Butler 2008), some Cephalopoda (Mather 2008), and maybe even fruit files (!! – Verjiden 2015 – NOTE: I’ve just read the abstract and should get hold of the FT shortly) are indeed conscious.
ETC, however, especially in the absence of (a) and (b), isn't really helping much: at best, it reinforces the intuition at the basis of the efforts cited above, and thus indirectly validates the pre-existing conclusions. The one addition provided by ETC is the expectation that the evolution of consciousness can only happen in organisms that have already evolved something similar to what I call level 3. According to ETC, organisms that don't model their surroundings and don't monitor the state and position of their body parts should have very little chance of evolving what is needed to generate human-like consciousness (hardly a surprising prediction).

Conclusion:

The questions I’m trying to answer above are perfectly pertinent (and very interesting!), but I can’t avoid thinking that my answers are worryingly speculative. On one side, the fact that I can try to answer is encouraging, on the other, I don’t think that it would be appropriate to add most of the above to the paper itself. If I did, I would expect the peer reviewers to object on the basis that I’m pushing speculations far beyond what’s acceptable in a scientific paper (even if it is a hypothetical theory: a proposed theory that still needs to be validated).
Naturally: more comments, questions, criticism and rebuttals are very welcome!

References:

Butler, A. (2008). Evolution of brains, cognition, and consciousness. Brain Research Bulletin, 75(2-4), 442-449. DOI: 10.1016/j.brainresbull.2007.10.017

Edelman, D. B., Baars, B. J., & Seth, A. K. (2005). Identifying hallmarks of consciousness in non-mammalian species. Consciousness and Cognition, 14(1), 169-187. PMID: 15766896

Mather, J. (2008). Cephalopod consciousness: behavioural evidence. Consciousness and Cognition, 17(1), 37-48. DOI: 10.1016/j.concog.2006.11.006

Verzijden, M., Abbott, J. K., von Philipsborn, A., & Loeschcke, V. (2015). Male Drosophila melanogaster learn to prefer an arbitrary trait associated with female mating status. Current Zoology, 61(6).

12 comments on “Evolutionary Theory of Consciousness: first reply to comments.”
  1. Jochen says:

    Hey Sergio, I had a quick look at your paper, and have an empirical question: how does it sit with the evidence for implicit learning, such as artificial grammar learning? This seems to me a case of ‘learning where the relevant information does not reach full, transitive consciousness’: the classic experiment presented subjects with strings of symbols to memorize; that was the entirety of their task. However, these strings were not random, but in fact, organized according to an artificial grammar; over time, the subjects became better at memorizing those strings well-formed according to the grammar.

    There, the relevant information is the rules of the grammar, i.e. rules about what symbols may follow others, etc. But that information was never consciously apprehended—the conscious information was only the strings themselves. But the subjects nevertheless demonstrated some mastery of the grammar’s rules, by their ability to better recall well-formed strings.

    More broadly, I’m curious as to how infant learning works on your theory. Basically, there seems to be strong support for infants not having ‘full, transitive consciousness’ prior to some developmental phase; nevertheless, they demonstrate learning capabilities. Isn’t this somewhat in tension with your theory?

    • Sergio Graziosi says:

      Jochen,
      thanks! Looking forward to more feedback. (Really nice to meet you here 🙂 )
      I have one post in the planning stage (all sitting in my mind at the moment) whose main topic is the issues you’re raising. It’s a crucial part, and it is overlooked in the ETC paper itself. There is a tension indeed, and the implicit grammar-learning paradigm does need to be tackled, as it links to the discrimination promise of ETC: I need to explore in more detail the difference between the kind of learning that doesn’t require the EM, the kind that does but doesn’t require recursion, and so forth.
      For now, you’ll have to settle for my promise of writing more.
      The only short thing I can add is that the EM picks which perceptions might be worth remembering; other processes/mechanisms will then be responsible for digesting the information further, so as to reduce the amount of information to be remembered (minimising the long-term cost), and others still will actually store it (and here the mechanisms blur from a neat “module” conceptualisation down to the cellular and molecular levels, of which we know something, but not really a lot). The important thing for ETC is to note that these additional modules/mechanisms don’t necessarily need to be represented in level 3, or to always send their output back to level 3, so they may well be unconscious (always or on occasion), as they are in the famous case you point to. Thus, implicit learning of patterns within an explicitly attended stimulus (a kind of stimulus that is pretty much guaranteed to pass through the EM) is not only possible under ETC: it is positively predicted (that’s why I’m proposing to try falsifying ETC using masked stimuli instead).
      The developmental stage, along with the assessment of consciousness in non-human organisms, requires more work and careful interpretation of the available evidence (a lot of evidence!), so I can’t hope to cover it all with a single blog post: my aim will be to organise my preliminary thoughts on the subject. Nothing more than that, I’m afraid.

      PS, All: It seems that my idea of asking readers to fill in the short survey is failing miserably. It would be grand if someone could break the ice and complete it; it shouldn’t take long at all.

      PPS, Jochen: does ETC qualify as proposing that “precisely this kind of complexity” is needed, instead of “opaque complexity”? This is the most pressing question I have for you in particular.

    • Jochen says:

      I did fill out the survey, so if it didn’t register, it seems something’s not working… Re opaque complexity, well, you do have a concrete proposal, so that’s a good thing; but whether it does the work needed is quite another. More later!

  2. Sergio Graziosi says:

    Indeed! Your survey data is there, but I’ll try to resist the temptation to look at it because I’ll know it’s from you (feels like cheating).

    you do have a concrete proposal, so that’s a good thing; but whether it does the work needed is quite another.

    Agreed. That’s one reason I’m collecting feedback. Thanks again.

    • Jochen says:

      Well, if you do end up succumbing to the temptation—and I wouldn’t blame you—please keep in mind that I’ve written my answers from the point of view of a less-than-sympathetic reviewer, because, unless you get very lucky, that’s what your paper must be capable of withstanding; and I don’t think it’s quite there yet.

      There are, however, a couple of quite intriguing ideas there, so I definitely think that you’ll get it into publishable form. I was particularly taken with the notion of the two selves, the representational and acting self, so to speak. In fact, I think that there’s some chance to avoid some paradoxical notions with such a concept, since you can use it to avoid self-referential (or should that be self-self-referential?) problems; a bit like the distinction between object- and metalanguage in logic.

      In fact, if you could flesh that out a bit more, it might even stand on its own as a short, slightly less ambitious paper; if you can find some concrete problem to address with this concept, then maybe you could publish it in advance (I think the chances may well be higher than if you just come right out with the full, all-encompassing shebang), sort of like putting out a single before the full album. Then, with a foot in the door, you can just casually saunter in saying, ‘and by the way, here’s a theory in which the two-selves concept is naturally realized, and it also has all those neat properties!’

      Also, I wonder if you’re familiar with David Mumford’s ‘active blackboards’ theory; if you aren’t, I’d suggest checking it out, you might find a few things that are quite germane to your views (although the work is a bit older now, and I’m not really aware of any followup, so I’m not entirely sure it’s still viable). Basically, he considers thalamo-cortical and cortico-cortical resonance, and proposes that, say, raw vision data arrives in the lateral geniculate nucleus, which is ‘looked at’ by Brodmann’s area 17 (I think? Somewhere in the visual cortex, I’m too lazy to look it up, you’ll know this better than me anyway). Cortical processing then ‘cleans up’ the data, and—here’s the crucial part—backpropagates it to the LGN, thus engendering a recursive influence like that between your levels 3 and 4. It might be advantageous to you if you could point to a proposed instantiation of the sorts of things you posit in the brain; plus, Mumford’s a mathematician by trade, and the work is in terms of explicit algorithms, and hence, quite sharply defined.

    • Sergio Graziosi says:

      Jochen,
      so far I’m resisting the temptation: knowing more or less what I’ll find helps, in the end. Of course, the “less-than-sympathetic reviewer” POV is the most informative.
      I thought long and hard about how it could be possible to take one element out of ETC, make it into a standalone paper and publish that first. In this attempt, the tale of two selves never even entered my radar. It’s an excellent idea and it may indeed work. I could shoot myself in the knee for not thinking of it myself; I suppose the idea looks so spectacularly obvious to me (it’s a necessity, once you take a computational stance) that spelling it out felt like treating my readers as uninterested schoolchildren who can’t be bothered to think for themselves.
      I will let the idea (of a smaller paper on the two selves) sink in and see what comes out; I really do think your suggestion makes a lot of sense.

      I have never studied the ‘active blackboards’ theory, but there is certainly a lot of current computational neuroscience that gives plenty of attention to the re-entrant circuitry in the thalamo-cortical system. Fleshing out how one could exploit ETC to push forward such efforts is something that makes my mouth water with anticipation, but it’s a long journey, so it has to be put on hold while ETC is only a wild dream of mine.

      On finding some experimental psychologists to try out some experiments on implicit learning with visual masking: that was certainly in my plans. I wasn’t planning to use my old connections, though (it would mean exploiting old friendships, which never feels quite right), and least of all to make it happen before ETC got published, but I guess you’re right again: it could be seen as an interesting thing to explore on its own…

      Hmmm, I’m asking for food for thought and I already got more than I can chew.
      Thanks! (Will continue our other discussion shortly, I promise!)

  3. Jochen says:

    Oh, and the experiment you’re proposing—implicit learning with visual masking—sounds also like it could be interesting on its own. If you happen to know any experimental psychologists, maybe it would be something worth suggesting? Perhaps even as a master’s thesis project for an enterprising young student?

  4. ihtio says:

    Just some thoughts in a disarray:

    We are trying hard to kill the idea of a “self” in the mind sciences, and putting two “selves” really doesn’t help… It is as bad as introducing two souls. “Self” is maybe good for some old-style psychology or philosophy, but current trends in neuroscience paint a picture of the human mind as a system of various processes and memories, with no core “self” to be found. A “self” is a bad, fuzzy, hard-to-define concept. It is smoke and mirrors that makes experimentation and scientific development harder, not easier.

    if the EM can create a level 3 element which marks the condition of “focussing attention on (perception)X“, the system will inevitably be able to “know” that X was perceived.

    I really don’t see why a mind would have to have “focusing attention on (perception)X” representation instead of just representation of “X” active for the system to be able to function correctly with regard to X (what you describe as “know”). Wouldn’t the mind have to have a “focusing attention on focusing attention on (perception)X” to “know” that it is focusing attention on X? There’s a lot of recurrent representations, and I don’t know which inception-level is enough for the system to behave appropriately in the context of X. ETC doesn’t say anything about it either.

    I had a feeling that the “consciousness” and “qualia” could be replaced by terms pertaining to “attention”, “focus”, etc. The organism would be able to learn, and no qualia would be necessary.

  5. Sergio Graziosi says:

    Ihtio, I’m hoping I’m failing to understand your questions, otherwise I’ll have to ask: are you trying to trick me?

    putting two “selves” really doesn’t help… It is as bad as introducing two souls […] [Self] is smoke and mirrors that makes experimentation and scientific development harder, not easier.

    The idea of two selves is either right or wrong; if it’s right, introducing it will “help”, because simpler accounts, where no self is admitted, will only be able to explain/predict less and/or will require many more elements/calculations to reach equivalent conclusions.
    Also, “mind” sciences try to explain our inner life (what we experience as our mental life) in physicalist terms: if there is no self, who is having the inner life that we are trying to explain?
    Sure, there is no unique core, but jumping to the conclusion and trying to eliminate the whole concept is equivalent to giving up. You would deny the existence of what you are trying to explain.
    Of course the philosophical notion of self is fuzzy, obscure and slippery. A good reason to look for more precise definitions, which is what I’m trying to do. ETC proposes operational definitions of “self”, two of them, linked with discrete functions. Thus, I’m trying to reduce fuzziness and obscurity.
    All in all, we have already discussed this, remember the “Illusory illusions of reductionism”? We are trying to subdivide the (sense of) self into its smaller mechanisms, are we not? ETC starts by dividing it in two; not the end of the road, but it’s the expected start.

    I really don’t see why a mind would have to have “focusing attention on (perception)X” representation instead of just representation of “X” active

    Hmm, how about: “that bull almost killed me because I was focussing too much on the calf”?
    Attention has its costs, and therefore is a phenomenon that is worth tracking. Having the ability to remember when attention was focussed, and on what, is ecologically meaningful. This is all ETC is saying: what might be worth remembering needs to be represented in level 3.

    Same for “deliberately taking a decision”: it’s costly, and comes with pros and cons. To learn when to react on impulse and when to deliberately weigh our options, one needs to remember when one or the other route was chosen, and whether it worked. Thus, the activity of “taking a deliberate decision” needs to be represented.

    On the other hand, recognising anger/joy/pain in someone’s face is something that is always (or almost always) useful. Therefore the recognition activity is always on, not optional, and not represented in level 3, even if it’s certainly costly (computationally). We believe it’s effortless because the state “evaluating the mood of this face” never appears in level 3; we only get the results. [Yes, sometimes we do get the “I’m not sure, is she angry?” feeling, and thus we deliberately give a second look, something that ETC has no problem accommodating.]

    I had a feeling that the “consciousness” and “qualia” could be replaced by terms pertaining to “attention”, “focus”, etc. The organism would be able to learn, and no qualia would be necessary.

    Sorry, I don’t understand this bit!

    • ihtio says:

      Sergio, you’ve just made me laugh with this tricking part 🙂

      The concept of a “self” is like Newtonian dynamics – it was great for a long time, but we already know it was too crude an approximation. We know that, and we want something better, even if it is more complicated and requires more “calculations”. “Self” is such a loaded term that postulating two operationalized selves would be as bad as postulating two souls or ghosts in a theory.
      Just my opinion.

      The short paragraph about attention was to point to the possibility that instead of talking about consciousness and qualia as aids to learning, one could just as well talk about a much less controversial but much more researched phenomenon – attention. One could build a computational theory that uses attention mechanisms to select relevant stimuli that need to be taken care of / remembered / etc. Why consciousness?

  6. Thomas says:

    Dear Sergio.

    I stumbled across your blog via your essay rebuttal to Robert Epstein’s essay on IP (which was a link promoted via Quartz Daily Brief).

    My knowledge in this field is rudimentary at best.

    I am coming from a slightly different angle.

    I am currently training as a therapist, and because of this I am exposed and directed to a vast array of information on Meditation, hypnotherapy (a relatively passive process requiring a hypnotherapist having ‘access’ to our subconscious mind), self-talk (an active process by the individual), relaxation techniques, neuro-linguistic programming etc. – and many other similar fields that have a lot to offer, but are unfortunately rife with pseudo-science.

    The main issue I have is that, in essence, they purport that the subconscious mind accepts everything around it, no matter what it is; you are not really in control of that. It does not know right from wrong, so if you are exposed to / told / experience that you are ‘no good at music’, for example, then that becomes your programming, and influences your subsequent conscious behaviours and beliefs.

    The underlying premise of these fields being that we want to ‘bring peace and quiet’ to the conscious mind, reduce the ‘conscious processes’ in order to allow the ‘sub-conscious’ mind to be accessed or “come to the surface” and to “re-programme” the subconscious mind in order to have lasting effects on our behavior – with a goal to change habits, improve self-esteem, recover from trauma and so on.

    You can see why your essays interest me. As you say, there really is no generally accepted definition of what consciousness is.

    Have you come across any credible articles that may in fact offer a sensible discussion of the conscious vs. subconscious debate?

    I can bear witness to the powerful effects of meditation. I have been “hypnotized” (whatever that really means – maybe I was just ultra-relaxed). However, what is actually going on when these things happen is much more interesting to me.

    I look forward to your, and your readers’, positive input on this rather large tangent of the discussion.

    warm wishes

    Thomas

  7. Sergio Graziosi says:

    Thomas,
    as I’ve said in the comments of the latest post, the fact that I’m receiving input from people coming from very separate backgrounds is truly gratifying, thanks for stopping by!
    Your comment sent my thoughts in a variety of directions that are relatively unexplored (for me), so I don’t know if I’ll be able to put together an organised reply; what follows is probably going to be a series of disjointed observations…

    First, your question:

    Have you come across any credible articles that may in fact offer a sensible discussion of the conscious vs. subconscious debate?

    The short answer is “No”, I’m afraid. The type of brain/mind science I follow is typically very sceptical of the whole concept of the “subconscious”, and of psychoanalysis in general: the “pseudoscience” label gets applied, almost as an automatic reflex, to anything which mentions the subconscious. There are good reasons for this, but at the same time I do think it is somewhat unfortunate. Before digressing, I’ll add two pointers, with little hope that they will be of much use to you.
    The first one is very obvious, and I would imagine isn’t new to you at all. Have you read Kahneman’s Thinking, Fast and Slow? (I’d be surprised if you hadn’t!) It’s thoroughly enjoyable, even if it did get on my nerves on a few occasions (the “rationality fetish” side of it did annoy me). It doesn’t directly address the conscious/subconscious divide, not with this kind of language and the associated cultural baggage, but it gets as close as mainstream psychology is currently allowed to get.
    The second pointer, not likely to be useful (because it’s nothing more than a little heuristic), is to one of my oldest posts here: cognitive attraction.

    I’m mentioning these because together they allow me to make a relevant (I hope!) point. The starting point is that, per Kahneman, there are a lot of cognitive mechanisms which happen virtually instantaneously and entirely outside of conscious control. The second point (mine) is that how we behave can systematically influence how we experience the world (the yapping small dog), and therefore becoming aware of our automatic reactions, and thinking about the systematic effects they may have on our experience, can have life-changing effects (a marginally more rigorous attempt of mine to explore a similar direction is here).

    This leads me to the observation that, despite all the pseudoscientific babble, approaches that focus on our unconscious mechanisms (aka system 1 automatic reactions) carry a lot of potential, and therefore I hope you’ll never find me dismissing them by mounting the high horse of proper science.

    [Big digression: I’m not even sure why I’ve written it, but I’ll leave it in, just in case it contains some useful reflections]
    For a long stretch of my life I practised Karate very seriously (in what feels now like a previous life!). The amount of mumbo-jumbo that comes with the training is, as you would expect, virtually unbounded. You get to think about the famous Chi (vital energy, or something like that) as well as Zen-related concepts about states of mind, acting without thought, and the like. With many years of training, I’ve learned an important lesson: all these concepts, despite their complete lack of scientific grounding, and for the limited purpose of the practice itself, work astonishingly well. Visualising your Chi flowing from your hara/tanden is very efficacious in helping you learn how to deliver powerful blows. Conversely, the Zen philosophical side helps you recognise what states of mind work best, and thus leads to more effective sparring, etc. The lesson I’ve learned in this way is that all this stuff, which sounds very mystic and supernatural, is in fact very much empirical/practical. These concepts are used, taught and explored because they seem to work, above all. You end up developing abilities that looked supernatural on your first day of training, and, if, like me, you paid attention to the process of developing them, you will have a very concrete idea of how you got there. You will also have absolutely no idea of exactly how you manage to do such things, so the mysteriousness of the concepts which you used along the way will also immediately feel perfectly sensible.
    I’m allowing myself to go off on this particular tangent because the parallel with the spectrum of practices inspired by the idea of the Subconscious is, to me, glaringly obvious: here as well, a lot of concepts are used, even in the absence of solid scientific grounding, but they are used because once you buy into them, they shape your experience efficaciously.
    I suppose I’m vaguely waving in the direction of the effects that buying into a particular world-view can have in self-reinforcing the world-view itself. This is, in my view, the power source of these sorts of disciplines, and, pseudoscientific or not, I find it very interesting! However, trying to find the scientifically verifiable mechanisms that underlie the “power of meditation” and similar practices is extremely hard. There is a lot of respectable work on the effects of mindfulness (et al.) on brain structures and brain activity. However, I don’t think that current research will be groundbreaking, because both the technical and theoretical tools that we have at the moment seem too blunt to me. I guess it will take a long time before it becomes possible to reconcile the personal experience of pseudo-scientific mental practices with proper brain science. That, in a nutshell, explains why, despite my interest, I don’t invest too much time in finding convincing scientific accounts: I don’t expect to find them, not at this time. (Another self-fulfilling, self-reinforcing prophecy?)

    Right, enough “stream of unconsciousness” ;-) – maybe some of the above will make some sense to you. Most likely not, so I’ll stop making a fool of myself and zip it.


All original content published on this blog is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Please feel free to re-use and adapt. I would appreciate it if you let me know about any reuse; you may do so via Twitter or the comments section. Thanks!
