Sources of Error: Epiphenomenalism (part 2)

[Image: thinking zombie. Image credit: Pete Linforth]

Epiphenomena haunt me: the very idea that we can explain any phenomenon with the aid of the concept is thoroughly alien to me. This means that I don’t understand why people do rely on the concept, which in turn suggests that I’m probably missing something important. This post is the second part of a series: in part one I explained the concept in general, and explored my understanding of why it seems absent from standard scientific discourse. I also noted that science does occasionally invoke a weak version of Epiphenomenalism, with which I have no issue. The interim conclusion is that science by definition has no use for strong epiphenomena, as they have no effect whatsoever. Whether they exist or not therefore makes no difference, and since science values parsimony, one can safely treat epiphenomena as non-existent.

In what follows I will explore how the concept enters the philosophy of mind, and I will claim that there it is normally used in the most unhelpful of ways. It’s worth noting that many people before me have tried to show why Epiphenomenalism is logically impossible (see for example Bailey 2006), but for reasons that I can’t even begin to comprehend, the debate seems destined to continue (e.g. Robinson 2012). As always, my own contribution is an attempt to simplify and/or make the issue at hand intelligible to people (simpletons?) like me: it is also a direct consequence of the fact that arguments à la Robinson look like hopeless sophistry to me. I assume my impression is wrong, and that’s why it’s worth trying to put my own personal approach to the problem in writing. Thus, I will claim (as strongly as I can) not only that Epiphenomenalism is manifestly incoherent, but also that it is actively harmful: if we want to explain minds, we need to reject the idea in the most uncompromising way. If you think I’m wrong, please do let me know!

Mind epiphenomena

To the best of my understanding, epiphenomena enter the philosophy of mind discourse because we can’t figure out how minds relate to matter. We explain the physical world by means of mechanistic (or at least probabilistic) models, but so far there is no accepted idea of why on earth any kind of mechanism would generate a mind. Moreover, it is generally assumed that one day we’ll be able to mechanistically explain how brains work. It’s worth pausing a little while and making explicit what such an explanation entails: we expect that some day we’ll be able to make predictions about the mechanisms within whole brains, which is a slightly convoluted way of saying that we’ll be able to predict what a brain will do. To some extent, we already do: for example, we can predict with decent accuracy how certain outputs of a brain will react to changes in hormonal concentrations and so forth. However, the expectation is that someday our understanding will improve dramatically, and we’ll be able to mechanistically predict what actual people would do in a given situation by simulating the molecular/neural mechanisms that we suppose would happen in their brains.

From a philosophical point of view, this expectation is deeply problematic because the expected “explanation” doesn’t seem to include minds. It would be something that describes molecules that move about, dynamically generating electric potentials and who knows what else. Thus, it seems reasonable to expect that such an explanation will not include anything about the self (if it’s just atoms/molecules moving about, there is no individual to be modelled), let alone the feeling of being that self. From here, many cave in to the temptation of epiphenomenalism and propose something like:

The mind, the sense of self, the what-it-is-like to be something or someone, the raw sensory experiences, pain and pleasure, are all epiphenomena (for brevity, I’ll call this the Problematic Bunch – PB). Or at least, some PB members are epiphenomena.

Superficially, this stance seems very solid, because otherwise we must conclude that our expectation of being able to mechanistically explain brains is misguided: if any PB member has causal powers, then it will have to be included in the mechanistic explanation. But we have no clue why a mere mechanism might somehow generate or acquire a feeling, therefore we can’t even start producing a mechanistic explanation of minds, the self, the what-it-is-likeness and so forth. It seems that the only reasonable hypothesis is that a mechanistic explanation of brains will exclude such things. If that is true, then the mind, the sense of self, the what-it-is-like to be something, the raw sensory experiences, pain and pleasure are either epiphenomena or illusions in the “they don’t exist” sense.

Uh oh. According to my previous post, For All Imaginable Purposes (FAIP) we should regard epiphenomena as non-existent (if something has no effect, why should we include it in our models?). Thus, we are left with two options (epiphenomena or illusions), and both lead to the same conclusion: members of the PB, For All Imaginable Purposes, don’t exist.

What did I say?

Hang on a second: we have just reached the conclusion that minds, the sense of self, the what-it-is-like to be something, the raw sensory experiences, pain and pleasure don’t exist. You will excuse me if I take this conclusion as a clear indication that there must be an error in our reasoning somewhere. Sure enough, all of the above does explain very well why the mind-body problem is really tricky (or, if you wish, the question of how mere mechanisms get to feel like something), but the reasoning can’t be right! In other words, either my claim that we can safely ignore (as equivalent to non-existing) everything we consider to be a strong epiphenomenon is wrong, or we have to assume that no PB member is an epiphenomenon.

Let’s see if I can convince you: if something is an epiphenomenon, it has no consequences; therefore, if pain is an epiphenomenon, then when you say “ouch, that hurts”, pain isn’t what caused you to say it. Fine: it’s not the pain that made you yell, it’s the mechanism that causes the pain. But saying this is equivalent to saying that your utterance “pain” refers to the mechanism, because otherwise our neat epiphenomenal way out crumbles: if your utterance refers to “pain” the epiphenomenon, then that pain is not an epiphenomenon, for it caused you to refer to it. It all boils down to a general principle: if something is supposed to have no causal powers at all, positing its existence has zero explanatory power. Whether it exists or not makes precisely zero difference to our explanation. Thus, if we want to explain minds, or the sense of self, or the what-it-is-like to be something, or raw sensory experiences, or pain and pleasure, we are implicitly assuming they are not epiphenomenal, at least not in the strong, uncompromising sense: at the very least, we would owe an explanation of why explaining something that makes no difference is not a waste of time.
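
To make the “zero explanatory power” point concrete, here is a minimal toy sketch (every name in it is invented for illustration): a model in which a “pain quale” variable is computed but never read by anything downstream. Deleting it changes no prediction whatsoever.

```python
# Toy model (all names hypothetical): behaviour depends only on the mechanism;
# `pain_quale` is written but never read, i.e. it is a strong epiphenomenon.

def brain_step(tissue_damage: float, include_epiphenomenon: bool) -> str:
    nociceptor_signal = tissue_damage * 2.0  # the mechanism
    if include_epiphenomenon:
        pain_quale = f"hurting at level {nociceptor_signal}"  # no downstream reads
    # The utterance is caused by the signal alone:
    return "ouch, that hurts!" if nociceptor_signal > 1.0 else "all fine"

# Every prediction is identical whether the epiphenomenon is "there" or not:
for damage in (0.2, 0.9, 1.7):
    assert brain_step(damage, True) == brain_step(damage, False)
```

Since nothing can ever depend on `pain_quale`, a parsimonious modeller simply deletes that line: the FAIP point in miniature.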

Now, this line of reasoning would be useful (according to me!), but unfortunately I seem to be strolling very far from the beaten track. On philosophical matters, the role played by epiphenomenalism seems to be the exact opposite. To me, epiphenomenalism suggests that our reasoning must be wrong, and that therefore it’s likely that mechanistic explanations can and have to include the problematic bunch. To a good portion of the rest of the world, it suggests the opposite, by way of the infamous Zombie thought experiment.

Guess what? Zombies are harmful.

If you’re reading this, you are almost certainly aware of the idea of the Philosophical Zombie (PZ); however, I will summarise my own understanding of it, in order to help my readers spot if or where I’m getting it wrong. I will start by acknowledging that PZs come in many different forms and variations; however, in this case, I’m concerned only with PZs that entail epiphenomenalism. In particular, from now on, a PZ will be assumed to comply with the following:

A Philosophical Zombie is an exact replica of an actual human being, down to every conceivable physical and behavioural detail, but lacking any inner feeling (what is usually referred to as Phenomenal Experience – PE, the fact that there is something it is like to be that being).

Since I will maintain that such PE doesn’t exist (FAIP), I will refer to the specific PE that zombies don’t have as PPE (Philosophical PE). The argument goes like this, in Bailey’s words:

1) If theory X were true, conscious states would be type-identical with, or at least logically supervenient on X-ish states. […] From this it would follow that – if we hold constant the relevant background facts – it is logically impossible for X-states to occur in the absence of conscious states.
2) Creatures possessing X-states but lacking consciousness – X-Zombies – are conceivable, and whatever is conceivable is logically possible.
3) Thus it is logically possible for X-states to come apart from conscious states and so theory X is refuted.

In more colloquial terms, we start by assuming we have a theory X which claims to explain why conscious states necessarily happen (1). The zombist then proposes to consider X-Zombies, which follow the mechanisms proposed by X but are in fact not conscious. If we can conceive of such creatures, then they are logically possible (2). Thus, (1) is refuted, because logic alone doesn’t show that conscious states necessarily happen (3).
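
For compactness, here is one way to render the schema in modal notation; this is my own gloss, not Bailey’s formalism. Read X as “the relevant X-states obtain” and C as “conscious states obtain”.

```latex
\begin{align*}
&(1)\quad \Box\,(X \rightarrow C)
  && \text{theory X: X-states logically suffice for consciousness}\\
&(2)\quad \Diamond\,(X \wedge \neg C)
  && \text{X-Zombies are conceivable, hence logically possible}\\
&(3)\quad \Diamond\,(X \wedge \neg C) \equiv \neg\Box\,(X \rightarrow C)
  && \text{so (2) directly negates (1): theory X is refuted}
\end{align*}
```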

I see two problems with this formulation, even though it still looks like the most compelling of the ones I’m aware of. The first problem is something that Bailey doesn’t even mention:

Simply put, we don’t currently have any theory X, only hopes of producing one. Thus, we don’t know what these X-states are, and if we don’t, we can’t properly assess whether X-Zombies are conceivable. Without the full description of the X-mechanisms, we can’t say whether one can build a picture of X that doesn’t entail consciousness (2). In this context, one may be tempted to say: “No, you’re cheating. Physicalism is the theory that consciousness can be explained in mechanistic terms, and we certainly can imagine mechanisms which lack inner feelings, granting us the validity of step 2“. This is the main reason why PZs are so attractive, but it is mistaken: physicalism is currently a hope, it doesn’t specify what mechanisms are necessary to produce PE, so it would be wiser not to assume that, whatever X-mechanisms are proposed, it will always be possible to conceive the corresponding X-Zombies.

However, the second problem seems stronger to me, and is something that underlies most of Bailey’s critique. PZ arguments, when used to support epiphenomenalism, necessarily conform to my definition above (exact physical replica), and therefore imply that PPE is an epiphenomenon. From this, the direct, unavoidable consequence is that PPE makes no difference whatsoever. In other words, including PPE in our explanatory model or leaving it out will leave the model untouched: ex hypothesi, PPE causes nothing. Now, if our model is explanatory, it makes some phenomena somewhat predictable. So, in the case of PE, if our model does claim to explain it (for example, by allowing us to predict which mechanisms produce PE and which don’t), it follows that the PE it explains is not PPE, because PPE is irrelevant, undetectable and completely transparent to all conceivable explanations.

Consider a Zombie Twin World (ZTW), in which each of us is replaced by our own PZ twin. In this world, because my PZ is an exact replica of myself, it would be writing this same essay. My twin would act as if its overall aim were to contribute to an explanation of PE, but this PE can’t be of the PPE sort, because PPE is completely absent in the ZTW. One could at this point quibble over the fact that I have an aim while my twin doesn’t (as meaning belongs to the PB): it just acts “as if”. Therefore I should not ascribe aims to Zombie twins, and in fact my phrase “contribute to an explanation of PE, but this PE can’t be of the PPE sort” is meaningless, because in the ZTW there is no meaning. Fine, but then how can we be sure we are not already inside a ZTW? Well, according to some, we can because we know we are conscious and we all experience PE. In other words, we (not the zombies!) produce thoughts such as “seeing this red rose feels of something”, and these thoughts are what tell us that we have the true PE. Furthermore, according to epiphenomenalists, these thoughts are caused by physical events; but the same physical events happen in exactly the same way in the ZTW, so zombies will report having the same thoughts as us. So once again: where is the difference?

Therefore, we must acknowledge that our hypothesis assumes that between our world and a ZTW there is a difference that doesn’t make a difference. Such an assumption seems pointless to me, but even if granted, it means that For All Imaginable Purposes our world and the ZTW differ in exactly nothing. This entails that PPE is an empty set: there is no discernible difference between me and my PZ twin, ex hypothesi. In other words, I’ve been writing about nothing from the start. For example, we can try imagining what would happen if a mechanistic explanation of consciousness were produced: in our world, we expect that it won’t explain PPE, allowing anti-physicalists to ask for more explanations. In the ZTW, ex hypothesi, the same needs to happen: the Zombie replicas of our anti-physicalists will physically mirror what happens here, so they will complain just the same. From this, it follows that a convincing mechanistic explanation of consciousness needs to explain what is common between our world and the ZTW: to be convincing, it has to explain what makes people look for epiphenomena, which would grant it explanatory power (or its zombie equivalent) in both worlds.

On the other hand, trying to explain PPE (Phenomenal Experience under the assumption that it is an epiphenomenon) would, ex hypothesi, always fail: there is no way to test the prediction “this mechanism produces PPE”, because whether it does or doesn’t can’t cause any difference. At best, one could claim “this system behaves as if it had PPE”, to which the epiphenomenalist can always answer “Congratulations! You’ve produced a convincing zombie”. Thus, the epiphenomenalist, by convincing most specialists that we need to explain something which (FAIP) doesn’t exist, has placed herself in an unbeatable stronghold. No explanation will ever satisfy the epiphenomenalist criteria. That’s because Epiphenomenalism is internally incoherent (it requires us to explain something that is FAIP non-existent).

Apart from the clear waste of effort which results from accepting impossible-to-meet criteria (something that would normally be denounced in standard scientific discourse), I also wish to claim that the notions of PZs, and thus PPE, are actively harmful. Passively, they waste our time and brainpower, which would be bad enough; but actively, they make finding some answers positively more difficult. Why is this so? Because, if one wants to create a theory of consciousness which is able to command some consensus, it needs to be convincing to scientists and philosophers in equal measure. This generates an unsolvable deadlock.

On one hand, to satisfy PZ-friendly philosophers, a theory will need to include epiphenomena, and give them quite an important role. Thus, to a trained scientist, such a theory will immediately look suspicious: why would anyone include elements that make no difference? Theories that try to rise to the epiphenomenalist challenge look thoroughly unscientific to the trained eye, and therefore are (rightly) frowned upon by scientists. This is harmful, because it deepens the divide between brain scientists and mind-philosophers: it would be a useful consequence only if philosophy of mind contained exclusively bad ideas, but this seems a manifestly preposterous assumption to make.

On the other hand, as explained above, such a theory is not even possible, on philosophical grounds: any theory that can be produced will either leave out PPE, or fail to necessarily entail PPE, leaving PZ-friendly philosophers able to declare “told you, it’s impossible!”. But this is nonsense: the moment PE is declared an epiphenomenon is the moment when we should stop and admit that explaining it isn’t logically possible.

Whatever it is that we want to explain, it can’t be an epiphenomenon, because we want to explain it!

Thus, epiphenomenalism in the theory of mind is inherently nonsensical, at least to my eyes. If something causes nothing, it doesn’t flipping exist, not for any imaginable purpose. However, a great deal of brainpower has been spent in obscuring the internal inconsistency of the whole idea, letting philosophy of mind more or less collectively accept PPE as a legitimate explanandum, which makes each and every proposed explanation unable to gain any philosophical traction. But it’s worse than that: on the science side, scientists at some level detect the absurdity of the situation, shake their heads, and (more or less collectively) turn their backs on philosophy of mind. This is a catastrophe: philosophy of mind is necessary to brain sciences*, precisely because it is able to look ahead and spot a vast number of conceptual obstacles that science will need to tackle. Ignoring those does no one any intellectual good: sure enough, it allows people to propose “scientific”** theories of consciousness that are manifestly unable to explain why minds exist; it even allows such theories to be taken seriously!

In other words, the idea of epiphenomenalism inherently makes its explananda unexplainable, and by doing so allows time, brainpower, money and resources to be wasted in virtually any direction. For this reason (I’m out!) the idea should be put aside once and for all.

Notes and Bibliography

*Simply put: you can’t tackle the most interesting questions about human brains without considering philosophical questions. For example, this excellent essay by Kevin Mitchell (a bona fide, no-nonsense neuroscientist – not a wannabe like yours truly) makes it perfectly clear that if we want to link brain mechanisms to the choices we make, it is necessary to adopt one or another theory of causation. Moreover, one inevitably gets thrown into the deep waters of the free-will debate: does proposing a mechanistic explanation negate the freedom of our choices? There is no way to scientifically explore these topics without encountering one or another philosophical conundrum. When it comes to cognitive neuroscience, ignoring philosophical debates can be considered useful if and only if we assume that all philosophy of mind is worthless. Considering how many of the brightest people have tried their luck in these waters, I find this assumption absurd and a clear manifestation of hubris and wilful ignorance.

**The scare quotes here stand for: theories which claim to be scientific and justify this claim by proudly ignoring the relevant philosophical debates.

Bailey, A. (2006). Zombies, epiphenomenalism, and physicalist theories of consciousness. Canadian Journal of Philosophy, 36(4), 481–509. DOI: 10.1353/cjp.2007.0000

Robinson, W. (2012). Phenomenal realist physicalism implies coherency of epiphenomenalist meaning. Journal of Consciousness Studies, 19(3–4), 145–163.

23 comments on “Sources of Error: Epiphenomenalism (part 2)”
  1. Jochen says:

    Hey Sergio!
    Onward to part 2…

    First of all, a bit of a pet peeve of mine: parsimony in science is not a good in itself; it is necessary to achieve predictivity. For any theory, there are infinitely many theories that account for the evidence just as well, but whose behavior differs in empirically unaccessed regions. Thus, all of these theories make wildly different predictions. In order to settle for one, it thus stands to reason to take the simplest, and expose its predictions to empirical testing; if it fails, one takes the next one, and so on. Without parsimony, there would be no way to decide between these theories, and Newtonian gravitation would be just as good a theory as one in which Newtonian gravitation holds until next Tuesday, where all the planets suddenly reverse their orbits.
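
    To put the same point in toy form (every number below is invented for illustration): two “theories” can agree on every observation gathered so far and still diverge wildly where nobody has looked yet.

```python
# Toy illustration (all numbers invented): two theories that fit all data
# gathered so far, but make wildly different predictions after "next Tuesday".

NEXT_TUESDAY = 100.0  # a so-far-unobserved future time

def newton(t: float) -> float:
    return 1.0 * t  # stand-in for an orbital prediction

def newton_until_tuesday(t: float) -> float:
    # Identical to `newton` before NEXT_TUESDAY; afterwards the orbits "reverse".
    return newton(t) if t < NEXT_TUESDAY else -newton(t)

observed_times = [0.0, 1.5, 12.0, 99.9]  # all evidence collected to date
assert all(newton(t) == newton_until_tuesday(t) for t in observed_times)
# Both fit the evidence equally well; parsimony picks `newton`, and only a
# future observation (t >= 100.0) could separate them empirically.
```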

    However, with respect to in-principle unobservable entities such as epiphenomena, this criterion has no teeth: as long as there is no empirical difference between a theory with, and one without epiphenomena (which follows from the definition of epiphenomena), parsimony does not yield grounds on which to favor one over the other.

    I also broadly disagree with your sketch of how the idea of epiphenomenalism comes about. Certainly, many people expect a mechanistic explanation of mental phenomena that doesn’t mention some irreducibly qualitative aspect thereof; but from there, it doesn’t follow that there’s no causal power to mental phenomena. Take, for instance, my left arm: in a mechanistic explanation for my movement on a molecular scale, nothing such as ‘my left arm’ will figure. However, my arm has clear causal powers; indeed, I’m exerting them right now in typing these words.

    Point being, there is room, even in the conception you sketch, for the emergence of causally efficacious mental states; certainly, everything my arm does is accounted for by molecules acting on molecules, but that doesn’t mean that my arm doesn’t exist.

    The epiphenomenalist hunch comes from the idea that one could have that particular arrangement of molecules without there being anything like ‘my left arm’—viz. the same set of physical facts, without attendant mental experience. It’s only in such a case that the idea arises that, if ‘my left arm’ nevertheless exists, it must not ‘make a difference’ in the world.

    Moving on, you argue that nothing that has no causal efficacy makes any difference to our explanations. This is surely right, but the thing that is purportedly causally powerless is not part of our explanations, but instead, the thing to be explained. Thus, following this argumentation means a ‘cheap win’: causally powerless things can be left out of our explanations, hence, there’s nothing left to be explained. But the argument that we can leave out things that are causally inert only applies if they’re not the very things to be explained.

    This comes to bite you in your discussion of zombies and the zombie-world, where you imply that PPE makes no difference. But it does (or so the epiphenomenalist would hold): without it, there would be nothing it is like to be you—just like it is for the zombie. PPE makes all the difference!

    Perhaps consider an analogy: a computer executes a certain computation, whether or not there is a screen plugged in to look in on that computation. Thus, the representations on the screen are epiphenomena with respect to the computation: they are not causally efficacious in computation-space. In the same sense, phenomenal experience, to an epiphenomenalist, will not be efficacious in physical space. But your conclusion, that we’re all computations running without plugged-in screens, because there’s no difference made by the screens, is false: whether there is a screen or not does make a difference to the user (of course, this metaphor should be treated with caution, in order not to fall victim to homunculi).

  2. ihtio says:

    “Epiphenomenalism is internally incoherent (it requires us to explain something that is FAIP non-existent)”

    Try to replace “epiphenomenalism” with “atheism” and you will see that both are not internally incoherent. They seem to be saying that if you cannot scientifically observe and test X, then X [most likely] doesn’t exist.

    You wrote that there are two possibilities for saying that experiences don’t exist: epiphenomena or illusion. For the second one, I think Blind Brain Theory would be a good candidate to provide some explanation of why such illusions occur.

    I would like to note that in science, especially medicine, people don’t really treat phenomenal experiences as non-existent. Much work went into the production of pain relievers and anesthetics, which don’t work by blocking the language mechanisms that would produce sentences like “it’s so painful!”, but work in such a way as to block the mechanisms that produce actual pain.
    Whether we are fighting cancer, migraine or multiple sclerosis, we want patients to feel as little pain as possible.

    • Sergio Graziosi says:

      Ihtio, thanks for stopping by!
      Quick question, to make sure we’re on the same page: are you under the impression that I’m an eliminativist of any sort? I ask because I think I’m not.

    • ihtio says:

      I subscribed to your blog so whenever you post something, I read it 🙂

      To your question I give you a quick answer: No, I’m not under any such impressions 🙂

  3. Sergio Graziosi says:

    Ihtio,
    thanks again!

    To clarify, a schema of why I claim epiphenomenalism is incoherent:
    Explanandum: some X of choice.
    Explanatory Method: mechanistic. I.e. specify causal relations.
    Proposed “explanation”: X is a strong epiphenomenon.

    The explanandum implies the existence of X. The proposed solution, unwittingly(?), suggests the opposite. But mind-epiphenomenalists of the classic kind maintain that PB members exist and need to be explained, which I find incompatible with the proposed “solution”.

    By contrast, when thinking about God, typical “New Atheists” will start with the question “does God exist?” which is very different from “explain God”. They find no evidence and answer their question with “probably not”. Because this time the question is different, and doesn’t smuggle existence as an unstated (not-to-be-tested) assumption, there is no incoherence in this second case.

    In other words, I am saying that epiphenomenalism suggests (entails?) non-existence, and on these grounds I reject the hypothesis. By contrast, my understanding of the typical epiphenomenalist stance is that existence is taken for granted, and the argument is used to eventually claim that mechanistic explanations will always fall short of the mark (a three-card trick, in trivial terms).
    Does this help?

    [Pain is so annoyingly real, isn’t it?]

  4. David Duffy says:

    Only just saw this. My simple-minded model is where the true state of nature is that A causes B and also causes C. So, B and C are correlated: is B a cause of C or just an epiphenomenon? Is serum urate a cause of coronary artery disease, or is it a marker of evil living? In epidemiology, the term risk factor refers both to causes and usefully indicative epiphenomena. So, in the Libet experiment (we will pretend this is actually correct), the reportable experience of making a conscious decision is epiphenomenal to the actual preconscious decision that “two or three seconds from now will be the right time to move”. The reportable conscious experience conveys some valid information about the actual cognitive process, and is “real”, but caused.
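
    In toy simulation form (all numbers made up): A causes both B and C, so B and C end up correlated, yet forcing B to any value leaves C untouched – the signature of a marker rather than a cause.

```python
# A quick simulation of the common-cause structure (all parameters invented):
# A -> B and A -> C; B correlates with C, but intervening on B does nothing to C.
import random

def world(b_override=None):
    a = random.gauss(0.0, 1.0)            # common cause A
    b = 0.8 * a + random.gauss(0.0, 0.3)  # A -> B (e.g. serum urate)
    if b_override is not None:
        b = b_override                    # an intervention on B
    c = 0.8 * a + random.gauss(0.0, 0.3)  # A -> C (e.g. coronary disease)
    return b, c

random.seed(0)
samples = [world() for _ in range(10_000)]
mean_b = sum(b for b, _ in samples) / len(samples)
mean_c = sum(c for _, c in samples) / len(samples)
cov = sum((b - mean_b) * (c - mean_c) for b, c in samples) / len(samples)
print(f"cov(B, C) = {cov:.2f}")  # clearly non-zero: B is 'predictive' of C

forced_c = [world(b_override=10.0)[1] for _ in range(10_000)]
print(f"mean(C | do(B=10)) = {sum(forced_c)/len(forced_c):.2f}")  # still ~0
```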

  5. Sergio Graziosi says:

    Thanks David,
    I fear I’m missing your point, as I could twist your last sentence in ways that could either support or reject my point:
    [reject]: the reportable conscious experience is real (it’s reportable, conveys some valid information), therefore [my] claim that epiphenomena don’t exist is false.
    [support]: the reportable conscious experience is real because, being reportable it’s detectable. But for the same reason, being at least a co-cause of the reports, it’s not epiphenomenal.

    Some confusion may be due to my (pretty startling, I’ve come to realise) claim that epiphenomenalism suggests (entails?) non-existence. Unfortunately, to make this claim work I need to trawl through a long and hard to accept epistemological detour. If you have time to waste, you may want to check the comments under the previous post.

    It’s also possible that some confusion is coming from how the idea of epiphenomena is used in the life sciences and how different that is from strong epiphenomena: as far as I can tell, in bio/med contexts, the label is used to indicate stuff which is detectable (so it can cause something measurable – you can detect urate salts in serum) but doesn’t play an active role in the process/phenomenon that’s being studied. These are weak epiphenomena, and I have zero problems with them. If mind-epiphenomenalism didn’t rest on strictly strong epiphenomena, I’d have no trouble with it either, but it does, and therefore I just can’t see how to accept it as a possible explanation.

    In my discussion with Jochen under Part 1, parallel universes take a central role because they are the only case we could find where science proposes the existence of strong epiphenomena. We’re far from reaching the bottom of that discussion, but I’m slowly learning that the idea of “strong epiphenomena” is so outlandish that it is quite hard to grasp. You can’t interact with, detect or measure strong epiphenomena at all! So, like parallel universes, they may be all around us, but we just wouldn’t know.

    • David Duffy says:

      I guess where I’m coming from, a strong epiphenomenalism is either incoherent (what kind of physical basis would it have here?) or is a model that is completely equivalent to other ways of looking at things. A weak epiphenomenalism such as the one Freudianism brought upon us, where consciousness spends much of its time rationalizing what you’ve already decided, is quite troubling enough.

      If you can produce a verbal description that you feel partly represents your mental state a second ago (as you experienced it), then you have presented data to me about some kind of internal informational structure. I can decide whether your description chimes with my own experience. Whether subjective experience is strongly correlated with that which really causes all your behaviours or is one cause along with other unconscious processes (I don’t really have any idea where these words I am typing are coming from, though I might go back and delete some) is not currently decidable – coming back to mentioning “risk factor”.

  6. Sergio Graziosi says:

    David,
    Yes, I think I follow, and if I do, then we’re on the same page. Do you think it’s a coincidence that we share a bio or bio-med background? The more I ruminate about it, the more I realise how much my background has shaped my larger metaphysical commitments: I find it surprising and predictable, both at the same time. Odd!

    I’m intrigued by your “epiphenomenalism [as] a model that is completely equivalent to other ways of looking at things” snippet. Do you have anything specific in mind?
    I get the “risk factor” side (I think), in fact in the comments for the second post I have a reply to Jochen (in preparation, it’s not out yet!) which ties into undecidability issues.

    • David Duffy says:

      “completely equivalent models”: these come up in structural equation modelling and related approaches – the diagrams that Robinson uses in his SEP review of epiphenomenalism could easily be from that literature. A correlation can often replace a causative path between unobserved (“latent”) variables in a large system, giving identical outcomes at the level of the manifest outputs. The only (possibly not very good) example I can think of just now is pre-established harmony versus supervenience as mechanisms for the mind-body relationship (or the equivalence between various QM ontologies). I’m not sure if strong epiphenomenalism is equivalent to strong supervenience – I don’t think so.

      I really like the discussion by Rapaport [1997] on the differences between implementation, reduction, instantiation, and supervenience – which seem kind of relevant to these questions:

      Click to access implementation.pdf

  7. Sergio Graziosi says:

    Thanks David,

    A correlation can often replace a causative path between unobserved (“latent”) variables in a large system giving identical outcomes at the level of the manifest outputs.

    now I’m annoyed that I had to ask: makes perfect sense.
    I don’t think strong epiphenomenalism is equivalent to strong supervenience either (not at all, to be honest).

    Thanks for the pointer! The link you’ve added doesn’t seem to work for me, is the paper you’re referring to the same as the one below?

    Click to access implementation.pdf

    Looks interesting, although my “anti-essentialist” circuits started tripping already on page 4 (which isn’t bad, and I should probably ignore them, as essentialism is mostly legitimate when talking about computers, even for me).

    Nice link to equivalence classes, if I’m reading it right (mind you, brain started to call for mercy at p14): what I’m not sure I understand is the overall aim/scope. Explicitly, the scope is computer science, and only that, but then the topics touched range from linguistics and cognition all the way to pure maths, so it leaves me unsure: for example, is the paper claiming to explore what should be understood as an “abstraction”? Anyway, I think I mostly agree.

  8. David Duffy says:

    “overall aim/scope”…I liked it just as an accessible introduction to the possibilities for a computationalist to deal with mind, intentionality etc.

    Click to access RAPSSC.pdf

    The Exclusion Argument states, AIUI, that if mental states are supervenient on neural mechanisms, then the mental is epiphenomenal. So it is useful for me to consider Rapaport’s analogies, e.g. whether a musical composition is supervenient on its performance etc.

  9. Sergio Graziosi says:

    David,
    this latter article is both very exciting and confusing/disappointing to me. The exciting part comes from the fact that I essentially agree with the aim (original emphasis omitted in all following quotes):

    computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable

    The confusion with disappointment comes from the method and unstated(?) assumptions. For me, computations are abstract, they are interpretations we use to make sense of mechanisms. Thus, when I reach passages such as:

    That cognition is computable is a necessary—but not sufficient—condition for it to be computation. The crucial difference between cognition as being computable rather than as being computation is that, on the weaker view, the implementational implication holds even if humans don’t implement cognition computationally.

    I get lost, but at the same time I lose interest. From my view, nothing is computation. Lots of real world processes can be modelled as computations, so it’s legitimate to ask if something is computational, but to ask if something is computation is like asking if water is music: the question entails a category error. Similarly, asking if humans “implement cognition computationally” makes no sense to me, the question is malformed to my eyes. A good question is “can we model cognition computationally?”, or “if we model cognition computationally, how much significant detail will necessarily be left out?”. If the answer to the latter is “a great deal”, then the quest of classical cognitive neuroscience would be doomed to fail.
    I’ve tried to overcome this problem by accepting the implied ontological claim (computations exist as independent phenomena, they are not merely our interpretations of independent phenomena), but then again, by page 11 I get kicked out once more:

    In neither case [seeing and imagining a pink ice cube] is there anything internal that is pink or cubic; in both cases, there is something that represents pinkness or cubicness.

    Problem is, pinkness is most definitely an observer-dependent quality: if our photo-receptors had different light-absorption curves, pink would be something entirely different (I say this as a mildly colour-blind person, so I have some sort of almost direct experience of the difference!). Once again, I find myself reading a passage that I cannot parse: if in the brain there is something that represents pinkness but not pink itself, and there is no brain-independent pinkness in the world, what would pinkness refer to?
    The end result is frustration: I want to agree, but I can’t.

    On the Exclusion Argument and epiphenomenalism: once we accept that no single (explanatory) model can encompass all of reality, I think that the exclusion argument evaporates. I see no internal flaw in the argument itself, but as you pointed out already, one can model the same system in different ways: in one approach something may end up doing no work (a weak epiphenomenon – as we still assume we can detect this something), in the other it may be accounted for as a causal step. This doesn’t entail that one model is wrong and the other (might be) right. Both are wrong (as in: they approximate the truth and thus contain some error): the differences simply spring from the different assumptions used to build the models.
    The conclusion that I get from this view is that at the present time, our mechanistic/physicalist ways of describing the world seem unable to describe the mental: thus, as Chalmers would probably agree, it would be nice to extend our mechanistic/physicalist ways of describing the world so as to make descriptions of the physical and the mental fit in one single framework. We don’t even know if it’s possible, but it’s certainly worth a try (I think).
    Going back to the topic, this means that we need to avoid declaring the mental a strong epiphenomenon (in our final model), because if it is, in our model it will be completely superfluous: in that framework it would even be undetectable (see the main post: even if our model were so good as to allow producing what seem to be conscious artefacts, “the epiphenomenalist can always answer ‘Congratulations! You’ve produced a convincing zombie'”). Therefore, the model we’ve built would not be fit for purpose: such a model would be declaring that the mental is irrelevant, which is not what we wanted to achieve.
    Instead, my bet is that some physical mechanisms are one and the same as PE, the open question is whether it’s possible to build a single framework which shows why. I know my attempt is not convincing, and I’m still not sure if it even shows why (as unconvincingly as it may). Meh, long and hard road ahead: that’s why it’s worth persisting.

    Hope some of the above is intelligible…

    • ihtio says:

      Sergio,

      Are you saying that it is possible for something to be “computational”, while not performing any computations (that is: this something is not a “computation”)?

      What then are the meanings of “computation”, “computational”, “computable”?

  10. Sergio Graziosi says:

    Ihtio,
    please don’t think that I’m not puzzled and surprised by what I’m saying, I am, just as much as you seem to be. This line of thinking is confusing: I fear that I am doing what Jochen suggests, that is, I may be moving the goalposts to suit my taste, invoking “arbitrariness” to readjust my interpretations as I go along.
    This fear is the reason why I need the help of people like you, David and Jochen, constantly probing my position in search for inconsistencies. The second-order fear is that I might be committed to such an extent that, despite your best efforts, I would still fail to recognise fatal flaws in my reasoning.

    Anyway, computations are mathematical abstractions; Turing machines, finite state automata and the like are abstract objects. So far, I don’t need to be scared because I’m just re-stating orthodox positions, right?
    From there, the problem of implementation springs out with disruptive force: the dancing with pixies argument (we know all about it, but for the benefit of the casual reader, an accessible introduction can be found on Conscious Entities – where else?) has traction and needs to be addressed. Once again, I’m going with the mainstream: computations are abstractions, so when we attribute them to the real world – “this thing computes X, this other thing computes Y” – we are doing something arbitrary: there is no a priori way to say if the attributions we made are indeed right (besides the number of identifiable states of “this thing and the other”, every interpretation is formally correct). I’ve discussed this area at length, also on Conscious Entities.
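
    A minimal sketch of that arbitrariness (all labels invented): take one fixed sequence of physical states and read it under two different state-to-symbol mappings. Each mapping is formally exact, yet they attribute different computations to the very same events.

```python
# One "physical" trace, two equally valid interpretations (labels invented).
physical_trace = ["s0", "s1", "s1", "s0"]  # what the system actually does

mapping_A = {"s0": 0, "s1": 1}  # read s0 as the symbol 0, s1 as 1
mapping_B = {"s0": 1, "s1": 0}  # the complementary reading

bits_A = [mapping_A[s] for s in physical_trace]  # -> [0, 1, 1, 0]
bits_B = [mapping_B[s] for s in physical_trace]  # -> [1, 0, 0, 1]

# Nothing in the physics privileges one mapping over the other, so the very
# same events "compute" two different bit-streams depending on interpretation.
print(bits_A, bits_B)
```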

    My view on this goes back to noting that the above follows how cognition works: cognition starts by making distinctions (see my post on essentialism and my comment about apples) and there always is an element of arbitrariness in them.
    Thus, I found the first Rapaport paper (Implementation is semantic interpretation) agreeable (the keyword being “interpretation”), refreshing and reassuring (maybe I haven’t lost my mind, after all). The second paper, however, sends me back into the mud-pit: the distinction Rapaport makes (“That cognition is computable is a necessary—but not sufficient—condition for it to be computation.”) is, once unpacked, negating where I stand. I think Rapaport is giving “being a computation” an observer-independent ontological status. But we already know this isn’t the case: computations are abstract concepts, saying that something physical is computing is a matter of interpretation, and whether some event is a “computation” or not is a matter of convention (including what is being computed). In other words, I think that expecting computations to be observer-independent is the wrong approach, one that is wrong because it leads to unsolvable riddles.

    Instead, I do think that questions such as “can we model cognition computationally?”, or “if we model cognition computationally, how much significant detail will necessarily be left out?” are well formed and very much worth asking. I suppose I could answer your questions by noting that if one can answer my second question (how much is necessarily left out) with “nothing at all”, then our object of study can be said to be truly computational and to perform computations. However, I don’t think we can be sure about this answer for any physical structure at all, so my answer would be moot.

    Final bit: designed computers. These are man-made objects explicitly designed to realise in practice the abstract concept of computation (I’m avoiding the word “implementation” to steer away from the way Rapaport uses it). From theoretical Turing machines (and equivalents) to physical realisations: these are designed in such a way as to make the interpretation obvious; the arbitrariness of the matching between abstract concept and physical states is embedded in the design process (transistors are open or closed / on or off, by design – we don’t want them to be “36.718% closed”). Thus, when we ask ourselves “what is this (designed) computer computing?” we have much better chances of finding a sensible answer – we can take for granted much of the mapping [note that I’ve written “sensible answer”, not “correct answer”!]. This is the same passage as the one I propose for “drinking glasses” in my post on essentialism:
    – Thinking of the essence of a glass (“glassness”) is less wrong than thinking of the essence of a tiger (“tigerness”). In the world, you can find something which approximately corresponds to “glassness” (our own idea of drinking glasses and how they facilitate drinking), and this has visible effects on the shape of those things we put on our dining tables. By contrast, the idea of “tigerness” has zero effect on what tigers are – the relation is inverted: it’s tigers that have an effect on our idea of tigers.
    – (designed) computers are like glasses, our abstract concept of computation was used to design computers, they embody an idea. Brains are like tigers, our idea of computation did not influence the structure of brains. The arbitrariness of interpreting brains as computers is, to my eyes, undeniable. It doesn’t mean that we shouldn’t be doing it, it just negates the ontological – observer independent – side of it.
    Because of design, saying “computers perform computations” is much less wrong than saying “brains perform computations”.

    I hope this makes sense!

    • ihtio says:

      Thanks for a lengthy response, Sergio.

      However I still don’t know what you mean by the terms “computation”, “computational”, “computable”. You only stated that the first one is an abstraction.

      I will give you a “so what” argument here. I hope you’ll like it.
      Most of the concepts we are using are abstract. Most of the words we use are arbitrary. The concept “dog” is abstract as it applies to many things (many dogs), and it leaves out many things (the color of the fur, the size, etc.). The word “dog” is arbitrary, in different languages dogs are called differently. So when you say that “computation” is an abstract concept and that implementations/interpretations are arbitrary you aren’t saying anything interesting, really – because it applies to virtually the whole human communication system. If you would like to somehow critique “computation”, “computationalism” (or use it in a positive manner), you should say something that differentiates these concepts and words from all others. Saying that they are abstract and arbitrary is not the way to go.

      You may say that “computations” don’t exist “in the world”; in essence you are going full Kant on the idea of “computationalism”. However, it’s not something specific to “computation” – it is an argument that applies generally to the way minds conceptualize and think about the world – they use phenomena (thoughts, images, language, etc.), instead of perceiving “directly” (whatever that would mean) what is out there (noumena).

      The last thing is that you may think that “computation” is something abstract and not something that we may come across in the real world. The counterargument that one could give is the following: “metabolism” is something concrete, something realized in many ways. Krebs cycle is a mechanism that produces energy (and other things, etc.) – it is a metabolic mechanism. When we come across Krebs cycle in an organism we come across metabolism. Taking this analogy to the conclusion: computation is something concrete. Sorting children by their height is a computation. When we come across someone sorting children by height we come across a “sorting computation”. And that’s that.

  11. Sergio Graziosi says:

    Nice! I like the so what answer.
    I’m not defining “computation”, “computational” and “computable”, because I’m not a mathematician and I’m scared of making a mistake. Honest, take a mathematical/CS textbook and look it up, I’ll accept the definition you’ll find there.
    That’s the sort of thing described on this Wikipedia page:
    https://en.wikipedia.org/wiki/Theory_of_computation
    but not necessarily this one:
    https://en.wikipedia.org/wiki/Computation

    What I’m adding is something on top of the mathematical theory, and I’m worried that Kant is acting as an undesired (by me) attractor: I’m avoiding mentioning Kant, noumena and the lot because I have the bad feeling that I’m saying something almost identical, but not equivalent, so I’d like people to judge what I am saying, not how close to Kant I’ve got.

    Computations, like dogs, work pretty well as concepts to use in normal life: the dogs next door bark and my computer computes. So far it’s OK; the trouble happens when we try to build an additional theory starting from “dogness” or “computational”, i.e. latching on to the essence of both concepts. Here the concepts work in remarkably different ways:
    “dogness” exists out there only insofar as it is present within minds (not only human ones); even dogs treat each other according to their own habits, and switch to different patterns when dealing with humans. Nevertheless, if we needed to find a precise, 100% reliable criterion to distinguish dog from non-dog, we would not find any – we can find things that aren’t dogs beyond reasonable doubt, but for others we can’t really say: there is a grey area around the concept. By contrast, with computations we can start with a perfect definition (while for dogs, there is no exact definition), but we have the universal version of the same problem: the concept is radically underdetermined. Whenever we have something that transitions across a series of steps (and again, we have infinite ways of identifying/discriminating separate states), we can say that it computes, or that it’s computational, and that the transition between steps is a computation.

    Thus, for the first case, we need to accept the fact that “dogness” is a heuristic, fuzzy class. For computations, we need to find additional criteria so as to reduce the degrees of freedom (I suppose you could say the class is so fuzzy that it can contain everything). Hence my questions: “can we model X computationally?” (i.e. would we be able to capture the full range of observable/observed behaviours of X?), or “if we model X computationally, how much significant detail will necessarily be left out?” (i.e. what observable/observed behaviour of X can’t be described in this way?) – you may think of these questions as add-ons, used to make the original concept less universal. Hint: if X is able to generate truly random output, we do know we can’t describe its behaviour in computational terms (although I’m not sure we can know without doubt whether something is truly random).
    If you want you can say that something is entirely computational if all its behaviours can be modelled via algorithms. I’m not too convinced anything is: even computers eventually break down and start behaving unpredictably, so I’m happy to try capturing as many behaviours as possible, expecting that some will always escape.
    Does this help? There is a difference between “dogness” and “computational”: the first is fuzzy, but we can’t expect it to apply to any arbitrary entity. The second is so fuzzy as to be applicable to anything (and it’s dog! 😉 ), so by itself it can’t do any explanatory work: we need additional criteria (we agree in part, but I’m not proposing to change the canonical definitions of comput*; I am, however, adding criteria, in order to allow the concepts to be used fruitfully).

    Metabolism: that’s another interesting case. It is also an abstract concept (a theoretical notion we’ve invented to describe the world), but unlike computation, it does come with additional constraints: we consider chemical reactions “metabolic” when they happen inside a living organism, and when they are part of the normal things that keep the organism alive. So in this case, it’s not massively underspecified, only a bit, but it’s kind of in between the two other extreme cases.
    Not sure if I’m answering, but I’ll let you tell me!
    Thanks again ^_^

    • ihtio says:

      Sergio,

      If you are not defining “computation”, “computational”, “computable”, then it is virtually impossible to understand what you are trying to say. If you say that you are afraid to define these concepts, and yet state that the term “computational” is underdetermined and that we should rather ask if something can be modelled “computationally”, then one can only guess what you may have in mind. I, for one, cannot come to the dark conclusions you are making from the mathematical definition of computation, nor from the physical (implementation) definition of computation.
      Mathematical computation – what is studied by mathematicians and computer scientists; performed by Turing machines or in lambda-calculus.
      Physical computation – processes that happen in normal computers.

      It seems to me that you started with some ill-construed feeling that “computation” is such and such and/or that it is “any linked steps producing something”, and from there you can come to some crazy conclusions such as “rocks perform all possible computations / implement all possible minds at the same time” or “computation is extremely underdetermined” or “computation is interpretation-specific” or whatever.

      Metabolism is exactly like computation. Metabolism happens in living organisms, computation happens in physical computers.

      You have a problem only because you created the problem yourself – by starting with ill-conceived definitions. You can’t build a coherent line of reasoning on shaky and fuzzy foundations.

  12. Sergio Graziosi says:

    Ihtio,
    We are definitely not understanding each other. I have stated that I understand “computation” as the mathematical approach does, i.e. mathematical computation – what is studied by mathematicians and computer scientists; performed by finite state automata, Turing machines and so forth. I’m not refusing to commit to a given definition, I’m pointing you to the existing ones. I’ve tried to find short-enough definitions, but they all entail lots of text and then branch out, calling for definitions of tricky stuff such as “problem” or “information”.
    Shall we settle for “Computation = symbol manipulation”? (with apologies for cutting all corners!)
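
    To make the corner-cutting slightly less abrupt, here is a bare-bones sketch (all details invented) of computation as pure symbol manipulation: a single rewriting rule applied mechanically, with no meaning attached to the symbols.

```python
# A minimal "symbol manipulation" device (all details invented): one rewriting
# rule, applied mechanically to a tape of symbols -- computation in the
# bare-bones sense of transforming symbol strings.

RULES = {"0": "1", "1": "0"}  # the entire "program": substitute each symbol

def run(tape: str) -> str:
    return "".join(RULES[symbol] for symbol in tape)

assert run("0110") == "1001"  # symbols in, symbols out; no meaning required
```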

    If you remain in the mathematical domain, all is fine, no (new) dark conclusions to find.
    We also build computers, and do so using the work of mathematicians as a guide. Few problems there, as we explicitly design the stuff: we start with plans that allow us to map the abstract concepts onto what happens in hardware.

    Last bit is about the other way round: we have a piece of hardware – to use an example you’ve used recently, we’ll consider some water being poured on a rock. We may ask questions such as “is it computing?”, “is it computational?” or “what does it compute?”. To do so we need to define possible states of our water-on-rock system and assign some meaning to them. The underdetermination problem comes around because the theory we have (“Mathematical computation”, entirely abstract) doesn’t give us any constraint on how to do this mapping – as we have no constraints (besides the number of different states we can identify), we can interpret the system as computing an inordinate number of different things. If you have a problem with this, you should ask Hilary Putnam, Mark Bishop and others, as I don’t own this problem at all.

    My tiny (hypothetical) contribution is slotting the above inside my (sigh, I admit, Kantian) view of cognition, and from that, into knowledge, epistemology and whatnot. In this context, I find that we need to limit ourselves to asking questions such as:
    “can we model X computationally?”, or “if we model X computationally, how much significant detail will necessarily be left out?”. That is: can we predict where water will drip by writing an algorithm that accepts a measure of where and how much water we put on the rock? If we keep going and try to describe all possible water-rock dynamics in this way, what will we be missing? (Will we be able to predict all the eventualities that we care for?)
    I’m also saying that if the answers are all “yes”, then we could say that the system is computational; but, as the example suggests, I’m also implying that the question is moot: its answer doesn’t teach us anything that I can readily consider interesting. I also think that if we are interested in all possible interactions, without exceptions, then the answer to all the questions above can’t be an unqualified “yes”.

    More generally, I don’t know what you are criticising. This conversation started from my explanation of why I liked Rapaport’s “Implementation is semantic interpretation” and why I got lost while reading the newer “Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing”. If I have a problem, it’s that, from my starting point, I can’t plug in what Rapaport is saying in the latter article, because it seems that we adopted incompatible premises.

    So, I (may) have a problem because I’ve accepted the underdetermination issue, but I haven’t invented it: the problems of what is happening when implementing abstract algorithms, and of how to correctly understand what algorithms are executed by not-designed systems, are real, very much open, and are being actively discussed by people who know much better than me. To me (from the inside), however, I don’t have a problem: if anything, I’m noting that my general view allows me to frame the rather unrelated problems posed by the theory of computation in a rather neat way that doesn’t smell of an “ad-hoc” solution at all. What I am worried about is that my general stance seems able to accommodate too much too well, so it feels cheap and makes me wonder whether I’m cheating without noticing.

    OTOH, based on your last comment, it seems that you reject the underdetermination issue; but if you do, you should address the points made by the people who identified it – I’m not qualified to reply on their behalf. All I can say is that yes, they may be wrong, but I can’t see how.
    In your view, does computation happen “only” in physical computers? Is it confined to a particular subset of designed artefacts?

    • ihtio says:

      Sergio,

      I understand very well that you subscribe to the mathematical definition of “computation”. What I would like to suggest is to maybe try some other definition and see what happens :). If we use the mathematical definition then certainly nothing computes, as computations are abstract mathematical objects studied by computer scientists. However, the word “computation” can be understood in a different way – as the important physical processes performed by computers (important as opposed to, for example, heat production). Then you don’t have any arbitrariness, dancing pixies or whatever. You have computers performing computations.

      The two quotes then could be attacked, preferably by you:

      For me, computations are abstract; they are interpretations we use to make sense of mechanisms.

      And

      From my view, nothing is computation. Lots of real-world processes can be modelled as computations, so it’s legitimate to ask if something is computational, but to ask if something is computation is like asking if water is music: the question entails a category error. Similarly, asking if humans “implement cognition computationally” makes no sense to me; the question is malformed to my eyes. A good question is “can we model cognition computationally?”, or “if we model cognition computationally, how many significant details will necessarily be left out?”. If the answer to the latter is “a great deal”, then the quest of classical cognitive neuroscience would be doomed to fail.

      I think you may think that “X is computational” == “X can be modelled using computers”.

      You say you think that we should ask whether we can model cognition computationally. Was there any serious doubt about it?

      Now, back to the immediate comment.

      Last bit is about the other way round: we have a piece of hardware; to use an example you’ve used recently, we’ll consider some water being poured on a rock. We may ask questions such as “is it computing?”, “is it computational?” or “what does it compute?”. To do so we need to define possible states of our water-on-rock system and assign some meaning to them. The underdetermination problem comes around because the theory we have (“mathematical computation”, entirely abstract) doesn’t give us any constraint on how to do this mapping – as we have no constraints (besides the number of different states we can identify), we can interpret the system as computing an inordinate number of different things.

      Well, try to do the same with metabolism. We’ll consider water dripping on a rock. We may also ask questions such as “is it metabolising?”, “what does it metabolise?”, etc. We only have different states of the rock, with no a priori interpretation of these states. Maybe the rock has a Krebs cycle! The rock doesn’t give us any constraints… We can interpret the rock as metabolising various things. Of course it is absurd. It is as absurd as thinking that the rock may be computing anything. A rock is not a living organism. A rock is not a computer. A rock doesn’t metabolise. A rock does not compute.
      Trying to go “the other way round” is not the way to go.

  13. David Duffy says:

    “computations are abstract concepts”

    I personally didn’t like the line of argument that Rapaport [2012] takes, but I haven’t digested his other papers. In the comments about Jochen’s recent paper at Conscious Entities I mentioned a few recent papers on the thermodynamics of information in non-equilibrium systems, which I feel might point to an escape from dancing pixies and the like. In these partitioned systems (there is a Subject and an Environment), perceiving the environment increases the free energy available to S, which in turn allows S to extract work by doing computations (think Maxwell’s demon, or biochemistry). So these are not abstractions occurring ubiquitously in rocks, but concretized abstractions (spending energy doing things of value). What has this to do with consciousness? Friston is one who links these – for example, see around p. 19 of

    Consciousness, dreams and inference (PDF)
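
    To put a rough number on the demon intuition (my own gloss, loosely paraphrasing Sagawa & Ueda’s generalised second law – check the originals for the exact statements): if S’s measurement yields mutual information I about E, the work S can subsequently extract is bounded by

    $$ W_{\text{ext}} \le -\Delta F + k_B T \, I $$

    so each reliably perceived bit is worth up to $k_B T \ln 2$ of extra extractable work – perception literally pays for computation.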

    Friston also explicitly mentions dynamical systems – the Internal Model Principle is also relevant to the idea that the brain is the organ that homeostatically minimizes surprise.

  14. Sergio Graziosi says:

    Ihtio,
    that’s better, I think I understand. However, I’m not changing my position, for one simple reason: we have (long and complicated) mathematical definitions of “computation”, but we don’t, as far as I know, have precise, non-fuzzy, not-ultimately-arbitrary criteria to match physical phenomena to unique “computational descriptions” of them.

    Thus, if I were to try following the path you’re pointing to, immediately at the first step you’d be able to tell me (or anyone trying) that I’m doing it wrong, because my starting definitions are not precise enough. Of course there are people trying to follow that path, but for me, I need to solidify the mess that got on Jochen’s nerves first: I think I’m onto something that will help a great deal, but even if I’m right (and of course, I expect to be somewhat wrong!) I’m clearly unable to make my position intelligible, so I can’t start down that route with any sense of optimism quite yet.
    If you have time to spare, and haven’t been there already, there is an interesting series of posts on the Brains Blog (from August 10, 2015), by Gualtiero Piccinini, accessible here (with plenty of refs to keep you busy for as long as you wish). I haven’t digested the whole of Piccinini’s views (TBH, I’m not even close), but I know enough to think that you are suggesting we should embrace his overall aims.
    In some ways I agree, but I think this opens up many problems, I don’t think it’s a trouble-free route at all.
    You’re probably already following the conversation below GO Ai, but in case you are not, I think it’s relevant:
    Is it legitimate to consider neurons as exclusively computational? If you ask me, the “exclusively” qualifier makes my answer easy: no, not exclusively. But if we change “exclusively” to “mostly” and/or “essentially”, answering becomes really tricky! I don’t think anyone has entirely convincing answers – but hey, what do I know about my own unknown unknowns? (suggestions are welcome)

  15. Sergio Graziosi says:

    David,
    How will I ever be able to keep up with this stuff?
    You are explicitly pointing in the direction I wish to move in; the problem is that I’m stuck at the registration desk, unable even to get enlisted.
    The unifying potential of “prediction-based” and/or “free energy minimisation” approaches is just staggering: if we can plug in Phenomenal Experience and construct a coherent picture, the whole will encompass topics such as homoeostasis, perception, consciousness, sleep & dreams, decision making, semantics, knowledge and science – I’m pretty sure evolution plugs in as well, somewhere/somehow. All on the basis of the same principle, applicable at different scales. My head spins when I think about it.

    I suppose the optimistic view is that people like Friston, Clark, Graziano and others are finally joining the dots – although the cultural barriers between the people who play in this field are a bit scary. Anyway, I’m starting to think that this trend is making my ETC redundant, so perhaps I can relax a bit, and either go with the flow or play more guitar.
