Ecological representations: a reply to Golonka and Wilson

This is going to be a very unusual post: it's an ad-hoc effort, responding directly to Sabrina Golonka and Andrew D. Wilson's call for feedback. They have recently published a pre-print on bioRxiv, entitled "Ecological Representations". In the accompanying blog post, they explicitly ask for comments, and since I've found their paper extremely promising, I've agreed to produce my detailed commentary here. This is also the proper follow-up to a brief discussion I had with Sabrina, mentioned also in my reply to Epstein's essay.

You are about to read a form of pseudo "open peer review": it's not exactly open peer review because I'm not really their peer. I'm not a psychologist, nor even an active academic in a related field. Never mind: I have strong opinions, and when I started blogging I decided I would make them public.
For the remainder of this post I will address Sabrina and Andrew directly (it makes for easier writing); however, note that they are interested in collecting opinions (not my opinions in particular!), so please do feel free to chip in with your own comments.
Before reading what follows, you should probably read their paper in full (time well spent: you won't regret it).

Introduction

Sabrina and Andrew,
thank you for writing this paper: reading it confirmed my high hopes. I think it's a much-needed move in the right direction, and could help cut through an impressive number of conceptual knots. I really hope your paper will become a cornerstone of both psychology and philosophy of mind, so I am thrilled by the opportunity you're giving me to try to contribute. However, I do think you'll find what follows difficult to take on board, for multiple reasons, so I guess it's better to make the main ones explicit.

First, our backgrounds are very different: my formal training is in molecular biology, biophysics and neuroscience (all growing steadily out of date). My interest in psychology is personal, pursued in my private time, not formal in any way (meaning it is patchy, as I only dig into what grabs my interest) and not focussed on Gibson's work at all. Furthermore, in the last few years I've been concentrating more on philosophy of mind, not empirical psychology per se. Thus, my language and point of view are very different from yours, making effective communication harder; it is also quite possible that I'll be barking up the wrong tree (as far as you are concerned), and that my views/comments will simply not apply to what you're trying to achieve.

With this important disclaimer in mind, I can describe the structure of what follows.
In general, I found your paper to be well written, (mostly) clear, easy to follow and very promising. However, I also think that you have overlooked or misplaced an important conceptual step, which in turn explains why I disagree with one important conclusion you reach. The possibility that I am misreading or misunderstanding you is, however, very real: it may be that I simply don't grasp the concept of Ecological Information (EI) well enough, and that my main criticism is therefore misplaced. If that's so, please feel free to expose my mistake/ignorance without sugar-coating. As you'll see, I won't be pulling my punches either: in the interest of clarity, I'll be very direct (knowing/expecting that this is what you'd appreciate most).

The big reason why I'm investing a few days of my spare time is simple (and also insanely ambitious): I hope that addressing the step I believe you've missed will make the paper even more groundbreaking. As it stands, your paper tries to bridge EI with almost "traditional" representations (as used in cognitive psychology), but in my opinion it currently falls just short of the mark. If you find my comments somewhat useful, you might be able to also bridge EI with Shannon's Information (SI) and, for the same price, open the door to the field of prediction-based perception. If I succeed (unlikely, but worth trying) you may eventually see how to unify all four approaches (instead of "just" two), saving the best sides of each and concurrently solving related philosophical problems. A high-stakes, high-risk game, and I'm happy to take the risk as I personally have nothing to lose :-).
As it's unlikely that you'll find my high-risk suggestions usable, I'll add a few low-risk, low-gain suggestions at the bottom of this post, hoping to be at least a little useful.

What follows is rich in quotes (I hope you don't mind); I'll open each one with the page where it appears, to aid navigation.

Take-home message so far: please be aware of the distance between our backgrounds, and do keep in mind that I'm deliberately taking a long shot, so the chances that I'll miss my self-selected distant mark are high. In the rest of this post I will try to:

  1. Expose what I think is a (very big) gap in the picture you are painting.
  2. Explain why I think the presence of this gap doesn’t allow you to justify some of your conclusions.
  3. Propose my own way of filling the gap.
  4. Briefly explain why I think the proposed addition is worth the risk.
  5. Finally, I’ll close up with minor suggestions and general praise.

Main Commentary

I will start by declaring my main problem with your aim and conclusions. You write:

[P3-4] We propose that Gibsonian ecological information perfectly fits the definition of a representation.

I think this is wrong: Gibsonian ecological information (EI) does not fit the definition (on its own), but what you propose as "neural representations" does fit the definition of representation you've adopted. This ties in nicely with what I think is the main gap you've failed to bridge.

The way I read you, you claim that EI is "out there", that it is collected by sensory organs, transformed into "neural representations" of EI and then used to both control and select actions. At worst, this picture is sketchy to the point of being wrong (my biophysics background comes to the fore).

In plain terms, what we do know is that sensory organs collect and transduce (transform into nervous signals) a vast amount of data. I'm not using the term "information" yet, for reasons that should become clear below. At any given time, touch receptors, olfactory cells, photoreceptors, proprioceptors (and more) are all busy doing this: they collect everything that hits them (provided it can influence their specific receptors) and send a corresponding signal towards the brain. Once transduced, what was before unspecific energy or molecules becomes something which can be directly interpreted as a signal (the action potentials travelling along the axons of sensory neurons). Nothing particularly new here, but this very general and universally accepted picture is apparently hard to reconcile with the vision you are proposing.

In your view, EI is out there, it is collected by the senses and then used to control/select action. This isn't plain wrong, but it glosses over important details:

EI is indeed out there, but it is bathed in a sea of unspecific stuff. All this unspecific energy and all these molecules will hit receptors in no particular order, and all of it has the potential to be transduced (within the boundaries of what sensory organs can collect). Thus, the first signals transmitted within an organism are unspecific: they potentially contain EI, but they actually "contain" a lot more. At this point the task for the organism is to discard all the information that isn't currently useful and retain only what counts as EI. Crucially, your paper comes close to acknowledging this need only when you briefly comment on "learning to perceive"; otherwise you ignore the whole subject. This, for me, is a deal breaker: if I were formally peer-reviewing your paper I would recommend rejection unless you were willing to show how organisms manage (or may manage) to extract EI from unspecific stimuli. Unfortunately, doing so potentially negates one of the things you find exciting: EI is indeed present outside, but considering it a representation already at that stage is at the very least misleading, as it is effectively hidden by the vast amount of potentially irrelevant data. At "collection time" EI is present but useless: one needs a way to extract it, and a way flexible enough to accommodate the somewhat unpredictable ecological needs of the perceiving organism.

Thus, one of the main points of the paper:

[P9] we propose that ecological information simply is the representation that closes the poverty of stimulus gap, though it is external and ecological rather than internal and mental.

is strongly undermined by what I'm proposing here (sorry!). Specifically: I'm not convinced that it is useful to consider EI, while still external, as a representation. To consider it as such, one needs to take as a given (gloss over?) the context and internal state of the perceiving organism: depending on contingent factors, including the task at hand, what counts as EI changes all the time, so I think we'd be better off accepting that EI is such in virtue of internal factors, as defined by the organism itself and, crucially, its own ecological needs.
To use your example of coordinated rhythmic movement: the dynamic pattern of "relative direction" used to coordinate action is external in the sense that it is potentially available to any third-party observer. This is of extreme importance when it comes to empirically reverse-engineering how organisms produce behaviour. However, the world out there contains a great many more structures and dynamics, all co-existing in a seemingly chaotic mixture. A priori, any of them may have important ecological implications for a perceiving agent. Importantly, in your own example, what makes the "relative direction" criterion relevant to the subject is determined by something inside the subject (in this case, what the subject is trying to do).

Of the collected signals, what comes from visual (and, I'd guess, proprioceptive) sensors can be used to determine in which direction each limb is moving. From there, you can also derive the relative direction, and use it to coordinate. Thus, functionally, the Shannon-style information (SI) that is available outside is massively filtered at the level of the sensory organs (only some physical properties can affect the activity of a given receptor); it is then processed, and via this processing the ecological information is "extracted". The process is computationally analogous to compression: you have a hell of a lot of bits outside, transduce only some (the potentially relevant ones, as defined by evolutionary processes) and "process" them further in order to progressively reduce them, ending up with the minimal number of bits that is enough to react appropriately (to coordinate, in this case). If we change the task, what information is thrown away and what is retained changes accordingly. Thus, if we are trying to explain how an organism does all this, the fact that the relevant information, AKA ecological Gibsonian information, is out there is indeed important, at least because it enables the design of solid empirical investigations. However, the crucial functionalities that allow an organism to function are (see the sketch after this list):

  1. Ability to collect whatever is potentially relevant.
  2. Ability to extract what is actually relevant for a given task.
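
To make the two-stage picture concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the channel counts, the gain model, the task mask): the point is only that stage 1 is indiscriminate, while stage 2 is defined by the organism's current task.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 500

# Stage 1: indiscriminate transduction. Receptors respond to whatever
# falls within their physical range -- relevant and irrelevant alike.
world = rng.normal(size=n_channels)              # toy "sea of unspecific stuff"
receptor_gain = rng.uniform(0.2, 1.0, size=n_channels)
raw_signal = world * receptor_gain               # what reaches the nervous system

# Stage 2: task-dependent extraction. Which channels count as EI is
# fixed by the organism's current task, not by the world alone.
def extract_ei(raw, task_mask):
    """Keep only the channels the current task makes relevant."""
    return raw[task_mask]

coordination_task = rng.random(n_channels) < 0.02   # ~10 of 500 channels matter now
ei = extract_ei(raw_signal, coordination_task)
print(f"{n_channels} raw channels -> {ei.size} retained as candidate EI")
```

Swap the mask for a different task and the very same raw signal yields different EI, which is exactly the sense in which what counts as EI is internally defined.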

Because what is actually relevant is a function of the organism, the context, and the organism's contingent state, saying that the representation is already external hides the fact that what counts as EI is determined by the organism itself. For this reason, I fail to see why it's useful to declare that external EI is representational (apart from saving a signature aspect of radical embodiment). Photons bouncing around can be seen as ecological information only if someone is detecting them.
Time for a little detour: one factor of extreme importance is that something like the "relative direction" of the organism's limbs can be detected, using the vast amount of (potentially) detectable stuff in the world. Crucially, this data is collectable by virtually any third-party observer: it is an objectively measurable property of the environment, and thus directly amenable to empirical investigation. This enables doing science, justifying the successes of radical embodiment; one can hypothesise: "this particular pattern is what the organism uses to coordinate". With such a hypothesis, you can make predictions and, on this basis, check whether the hypothesis holds. In this way, you get to specify what kind of signals may be collected in step 1 and how they need to be transformed (filtered/compressed) to perform step 2. Paired with good old boring biophysics (specifying what an organism can actually collect, in terms of SI), you've narrowed down the possibilities to a tremendous extent.

Yes, in a sense the EI is out there, it is external; but what makes it "ecological" or, if you prefer, what makes it possible to extract the signal and differentiate it from the irrelevant (not ecological, not relevant to the organism for the current task) is exclusively internal. Since we are interested in understanding how the organism detects the relevant information (from the messy bulk of stimuli collected by lots of sensory organs), the information needed is by definition out there, but it only becomes proper Ecological Information in virtue of how it is internally processed.

This leads to what you call "neural representations". What your paper seems to suggest is that EI is directly collected and transformed into "neural representations". What I'm suggesting is that the "directly" part is (if implied, as I think it is) misleading. Furthermore, how neural representations of EI are generated is exactly the interesting step in the whole story. I appreciate that you probably made a conscious decision not to tackle this aspect, but I think it's a mistake:

a. It makes your paper vulnerable to the kind of criticism I’m making.
b. It misses a tremendous opportunity, while weakening your claims.

Specifically, your paper already tries to unify traditional cognitive psychology and radical embodiment, while keeping the best sides of both views. To do so, you gloss over a major aspect of perception, opening yourselves up to criticism. Instead, you could bite the bullet, strengthen your argument, and gain additional unifying power:

I. As hinted above, the revised story I'm proposing also maps the relation between Shannon's Information and Ecological Information. (See also my attempt to link structure and dynamics to SI.)
II. Showing how information is filtered/compressed in order to extract EI from raw sensory input makes it possible to slot in the other main hot topic in neuroscience: the predictive approach. Doing so solves a problem and reconciles apparently antagonistic views…

I'll allow myself to briefly discuss this second benefit. We already know that all sorts of raw, unspecific signals are collected by sensory organs, and we know they are processed along neural pathways at each identifiable step (at the very least, when signals pass from one cell to the next). The story I've been painting, starting from your paper, then allows us to clearly define the main function of the transformations that happen during and after the first transduction: the aim is to isolate EI and to discard the rest.

The problem is that what counts as EI is both context-dependent and internally defined (it depends on the state of the organism). Thus, the system that extracts EI needs to be potentially universal (we agree on this, apparently), or at least as versatile as possible. It's like designing a targeting system without knowing what kind of projectiles and targets will be used. Such a system needs to be able to dynamically identify the correct kinematic projections from the original (outside-world) dynamics. At any given time, the set of possible kinematic projections is effectively infinite. How can a system optimally isolate the correct ones when it can't make many assumptions about what will make them "correct"? [If you wish, I'm merely restating the frame problem.]
One solution comes from the prediction-based approach: if you can manage to transform input at time A in such a way that it efficaciously predicts input at time A+1, you are guaranteed to keep as much potential EI as possible while discarding everything else – you are distilling the potential EI while filtering out the noise. For brevity, I'll leave this as a hint, but do note that I have a lot more to say, so in case I've tickled your curiosity, feel free to ask. [Note also that, like Andrew, I have yet to read Clark's latest book, but I do usually agree with him. See also this brief article by Tim Genewein on why Bayesian approaches can be understood in terms of lossy compression.]
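
Here is a toy sketch of the idea, not anyone's actual model: all names and numbers below are invented. We mix two slow, lawful components (stand-ins for potential EI) into twenty noisy channels, fit a one-step linear predictor, and observe that predictive power concentrates in the lawful directions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 2000, 20

# Toy world: two slow, lawful components (potential EI) mixed into D noisy channels.
t = np.arange(T)
latent = np.stack([np.sin(0.05 * t), np.cos(0.03 * t)], axis=1)   # (T, 2)
mixing = rng.normal(size=(2, D))
x = latent @ mixing + 0.5 * rng.normal(size=(T, D))               # observed input

# Fit a linear map W so that x[t] @ W approximates x[t + 1] (least squares).
W, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)

# Directions with large singular values are the predictable (lawful) part
# of the signal; the rest is unpredictable noise that can be discarded.
singular_values = np.linalg.svd(W, compute_uv=False)
print("predictive strength per direction:", np.round(singular_values[:5], 2))
```

A real system would of course need nonlinear, adaptive machinery, but the moral survives: optimising next-step prediction is a task-agnostic way of keeping lawful structure while shedding noise.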

This concludes the highly ambitious, propositional side of my comments. To close off the main commentary, I still need to address the one conclusion you draw which I don't think is appropriately justified. It will take just a little longer.

[P18] Our developing solution begins by identifying that information can not only control actions; it can also select them

Yes, no problem with this. Once the organism has isolated an applicable form of EI, it can select actions, not only control them. There's an interesting asymmetry here: to select, one discards most of the collected SI and is left with just the bits needed to discriminate among the available options, so very few bits (see the toy arithmetic after the list below). In controlling action, frequent and highly tuned corrections are needed, so less SI is discarded. This leads to a vision of "higher-order" cognition as the most impoverished form of cognition! It is also the only cognition we consciously experience, so, putting the two things together, you end up explaining a few interesting things:

i. Traditional cognitive psychology starts from the ideas of impoverished signals and of enriching representations because, well, that's what we experience, so it makes intuitive sense. It's also somewhat wrong. The most impoverished signals are objectively poor (few bits) precisely because they are very rich in EI. One could say they are objectively poor and subjectively rich (!).
ii. A signal rich in EI can be used to produce high-level predictions, making it possible that such signals are indeed sometimes used to fill in the blanks, as assumed by cognitive psychology.
iii. The enrichment/filtering process is likely gradual: if used to control movements, the signal can be routed towards outputs without being maximally impoverished.
iv. This also directly explains why [P19] "there is no convincing evidence that we can instantiate a neural representation of information sufficient to support action control unless the relevant information is present in the current environment". We only store the most enriched EI: why would we store anything else? But because of that, the stored information is (objectively) poor and thus not sufficient to drive highly refined behaviours: it only suffices to select behaviours effectively. In other words, I agree with your entire "motivation 3" discussion (from P16), and think it should be extended.
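
Back to the selection/control asymmetry mentioned before the list, here is the toy arithmetic I promised (every number is invented for the sake of the example):

```python
import math

# Selecting one action from a repertoire needs log2(N) bits, once.
n_actions = 16
selection_bits = math.log2(n_actions)      # 4 bits, full stop

# Continuous control needs an ongoing stream of corrections; assume
# (arbitrarily) 50 corrections/s, 8 bits each, over a 2-second movement.
control_bits = 50 * 8 * 2                  # 800 bits

print(f"selection: {selection_bits:.0f} bits; control: {control_bits} bits")
```

Selection throws nearly everything away in one shot; control must keep a steady trickle of SI flowing, which is why far less of it can be discarded.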

This brings us to my last problem, appearing at the bottom of p20:

But these neural representations, while internal, are not the mental representations of standard cognitive theories.

Unfortunately, you have not convinced me that the two kinds of representations aren't one and the same. You describe impoverished representations which can produce perceptions (hearing my inner speech, for example), and can be used both to produce inferences and to select behaviour. Without other context, I would recognise these representations as the classic cognitive-psychology ones. The one thing you've added is showing why they need to be intensional (because by being so they solve the problems associated with representations and therefore make representations ecologically useful!), and thus you are showing why cognitive psychology is wrong when it understands representations in a way that can only make them extensional.
In other words, you are correcting a very big and frequent mistake made in cognitive psychology; you are showing what the representations we talk about actually are, but you are not negating their existence. Perhaps it is my relentless drive towards unification speaking here. [Side note: the intensional/extensional distinction you make is spot on, and the main reason why I agree that your view goes a long way towards naturalising intentionality. I would love to see your paper published for this reason alone.]

Other comments

While re-reading your paper, I took a lot of notes. I will include them here with minimal editing. If the ambitious commentary above proves useless to you, perhaps you'll find something useful in what follows.

Across the whole paper you sometimes refer to "information", sometimes to "ecological information". When I read (unqualified) "information", I automatically understand "Shannon's information"; the problem is, I don't think you ever refer to SI, so the effect is confusing and (for me, a non-Gibsonian) requires extra effort. I guess most scientists would experience a similar effect, so why not introduce the EI acronym and use it throughout?

[P3] They have yet to develop any widely accepted explanations for the ‘high-order’ cognitive activities driving Motivation 3.

Not sure I follow the grammar here. What is the "3" for? Why "driving Motivation"?

Use of “intentionality” on P3: it’s not immediately clear whether you are talking about having “intentions/plans” or intentionality as “aboutness”. P4 clarifies that it’s the latter, but:

[P4] a cognitive system must be able to behave as an (almost) arbitrary function of the environment. In other words, a cognitive system has to be able to be ‘about’ anything it encounters in the world.

I do see the link between these two sentences, but only because I already agree with it, thus I fear this passage might be confusing to others.

[P5] We take Motivation 1 (getting intentionality out of a physical system) to be the primary job of representations. Motivations 2 and 3 are constraints on exactly how Motivation 1 might be implemented given the existence of the two gaps.

Do you need the second sentence? After re-reading the rest of the paper, I don't think you need the qualification.

Note 5 on P6: I don’t understand it! This note confused me more than anything else.

[P14] Ecological Information Supports System-Detectable Error

This is the only section where you hint at the big problem you are otherwise largely ignoring: how is the correct EI isolated? The fact that you do mention this makes me hope that my main criticism may not be too far off the mark.

[P16] Motivation 2 is that representations are required to bridge a poverty of stimulus gap.

I found the bit that follows a little confusing. For me, the poverty of the stimulus refers to the fact that we don't collect all possible signals from the environment, and that sometimes the signals are very indirect (e.g. a paw print isn't the tiger, but is still a worrying sign, I guess). However, as I've explained above, a huge issue is that of isolating EI from the raw incoming signals; it's a matter of reducing a huge number of bits to much, much fewer (to specify whether to do this instead of that, you end up needing only one bit!), but of course, the problem is doing it effectively. Thus, once we have a grasp of how to collect intensional information (see above: I think you can bridge this gap with the predictive approach), reducing it to its bare minimum, AKA impoverishing the signal, is precisely what needs to be done. It goes without saying that the result is necessarily symbolic/representational.

The good stuff

Before concluding, a little praise, in the form of a selection of quotes I’ve absolutely loved (there are many more!).

[P4] These informational representations solve both the symbol grounding and system-detectable error problems [yes, they do], and they constrain the form (and empirical investigation) of neural representations caused by interacting with information.

If we do fill in the blanks (see above), this hits the nail on the head.

[P4] these two ecological representations then address all three motivations for representational accounts described above, including, as we develop below, the major challenge of supporting ‘higher-order’ cognition.

Yes! If you manage to get this view into the mainstream, major problems could finally be overcome. Great stuff.

[P9, on coordinating rhythmic movement] The kinematic information variable ‘relative direction’ is standing-in for the dynamical world property ‘relative phase’ and it requires no additional enrichment from a mental representation in order to do so.

[P12] understanding the function and structure of neural representations requires understanding the structure and environmental cause of ecological information, which is not how cognitive neuroscience currently guides its work.

These two quotes, along with the ones below, summarise why I'm so excited: you are fixing stuff that has been broken for far too long…

[P14] Informational representations [ecological information, once distilled inside the organism], however, are immune to the grounding problem.

Agreed, with the modification I’m proposing.

[P14-15] because, from an ecological perspective, perceiving and acting are fundamentally intensional (Gibson, 1979, Turvey et al., 1981) and because the content of informational representations is accessible to the perceiving-acting organism, organisms can be aware of when these representations are wrong and this awareness can have a consequence on future behavior.

And:

[P16] because information specifies properties and not individuals (Turvey et al, 1981) informational representations can explain our behavioral flexibility. When we encounter a novel object or event, it will likely project at least some familiar information variables (e.g., whether it is moveable, alive, etc), giving us a basis for functional action in a novel context.

In a way, I'm paradoxically disappointed by how quickly (but, in my view, effectively) you sweep through the solution to the problems of intentionality and of grounding representations, and the role of the intensional/extensional distinction. I realise this isn't new, but it does bear repeating and re-proposing, because it is exactly right. Failing to focus on intensional content is the source of so many errors and so much confusion in traditional cognitive approaches.

Conclusion

I am very aware that I've grabbed your ball, taken it to a different court and started playing my own game with it. I'm doing so because I have some hope that you'll like my modified game. If you don't, please do feel free to tell me to give the ball back and eff off. I'll comply and won't hold a grudge, that's a promise. If, instead, you do like some of my lucubrations, please feel free to use them as you wish (no strings attached). What you are doing is immensely useful to what I'm trying to put together here, so I know I'm not wasting my energy (just thinking about these things is useful to me, a priori).
Finally, if, by some extraordinary stroke of luck, you think I can help you some more, please do let me know: I'll be very happy to try.

In all cases, I’m looking forward to your feedback. Thanks for reading!

Bibliography

Golonka, S., & Wilson, A. D. (2016). Ecological Representations. bioRxiv. DOI: 10.1101/058925

7 comments on "Ecological representations: a reply to Golonka and Wilson"
  1. Sabrina says:

    I’ve just had time for a quick read through and I’m really excited to get into this in more detail. Fantastically useful critique! I’m off to a conference tomorrow so will not have much time to get into this for a week or so, but I’m sure I’ll be mulling this over until I can get to it properly.

    • Sergio Graziosi says:

      Excellent!
      Reading the comments on your blog, I was starting to fear that the distance between our cultures might be a real obstacle; glad to hear what I wrote looks useful, at least at first sight. 😉

      I'll be on a short holiday over the weekend, and I normally have little/no spare brainpower during the working week, so your conference is happening at the right moment (hope you'll have a great time).
      Moreover, general rule here is that time to think is required, so the pace is supposed to be slow!
      It might be easier to start by asking me to specify this or that point better, or to respond to peripheral concerns first: otherwise I fear I'm asking you to (re)write a full paper in response, making the task daunting.
      I'm offering an outside view; if it seems worthwhile, I'd like to make it as easy as possible for you to tap into it.

  2. Charles Wolverton says:

    Hi Sergio –

    Having had some interaction with the "two psychologists", perhaps I can fill in for Sabrina on one narrow issue until her return. At the least, perhaps I can reduce what she needs to correct to her corrections of my response.

    First, you don't seem to have fully grasped the distinction made between dynamics – a multi-modal energy array that is indeed just "out there" – and kinematics, the projection of dynamics onto what I take to be a uni-modal energy array that must be "perceived" by an organism based on the sensory stimulation consequent to its impinging on the organism. It is in the kinematics that EI is to be found. Thus, your assumption that the organism is confronted by a multi-sensory mish-mash of stimulation in which the EI is submerged appears to be too pessimistic. As I understand it, in order to get a handle on modeling/analyzing complex environments, they're simplifying the scenario to keep the problem tractable.

    To the extent that I have any specialized knowledge, it's in comm theory. So I too am inclined to use it (at least for my own thinking) as a heuristic. But that heuristic must be applied accurately. Thus, injecting the comm concept "signal" into the early stages of an organism's processing chain, saying:

    Once transduced, what was before unspecific energy or molecules becomes something which can be directly interpreted as a signal

    is misleading and again seems too pessimistic. At that point, the input to further processing, expressed in comm terms, is signal+noise. This may seem like a quibble, but the implication of your description is that your "signal" comprises useful content ("signal" in comm vocabulary) seemingly buried in intimidating mounds of clutter ("noise" in comm vocabulary). But that's a garden-variety situation in comm systems more sophisticated than the ones we all know and love (AM/FM radio, TV, etc). In either case, the comm problem is first to extract the signal from the noise and then to extract the "information" from the signal – straightforward, even in complex comm systems. But now we get to the reason why I'm now a convert to skepticism about the utility of trying to inject the concept of Shannon information into the ecological picture. In a comm system, getting to semantics (what to do in response to the extracted signal) requires further processing. But as I understand the ecological approach (at least à la Andrew and Sabrina), there is no analogous step. When EI is used for control (as opposed to being used to select), the signal containing the EI is used (more or less) directly to dynamically control the organism's behavior.

    In the case of using EI to select behavioral options (e.g., in response to aural stimulation by speech), I think one can think of EI in terms of Shannon info for heuristic purposes, but I'm not sure the cost of doing so outweighs whatever benefit there may be. The problem is that Shannon information requires an agreement between sender and receiver as to the assumed selection space (e.g., symbol set). Clearly, in the case of EI that agreement must be implicit – we don't explicitly negotiate with the environment the semantics of visual sensory stimulation by light from a stop sign. We learn through training/learning to associate a certain pattern of neural activity with a responsive action. The training/learning can be thought of as an implicit "agreement", but again it isn't obvious to me what is gained by doing so.

    Gotta stop at that for now. Hope it helps.

  3. Sergio Graziosi says:

    Hi Charles,
    Nice to greet you here and thanks for the feedback, always appreciated!

    Leaving the niceties aside, I'll jump into the matter, in our usual direct manner.
    I'm afraid I'll have to disagree. I certainly didn't get that the dynamics element is (naturally) expected to be multimodal while the corresponding perceived kinematics is always unimodal. I wouldn't agree to such a sweeping generalisation without some strong supporting arguments, which would necessarily have to include some empirical evidence and a big theoretical argument explaining why the empirically verifiable cases necessarily generalise to all possible ones. Maybe I'm setting the bar too high? Never mind, this is really a small detail, I believe.

    Thus, your assumption that the organism is confronted by a multi-sensory mish-mash of stimulation in which the EI is submerged appears to be too pessimistic. As I understand it, in order to get a handle on modelling/analyzing complex environments, they’re simplifying the scenario to keep the problem tractable.

    We can agree on the “simplify to make it tractable” bit, but I can’t agree on the rest. Actually, I think the “simplify to make it tractable” bit is central to the story and has to be unpacked with plenty of details.
    First part of my rebuttal is actually a repetition of my main points: our sensory organs collect an inordinate mish-mash of stimuli, that much can't be denied. A human retina contains something like six million cones and a hundred million rods; right now, all of them are collecting photons, modulating the activity of downstream interneurons even as I speak. Add all the other receptors and you get an idea of how many "signals" (with scare quotes, see below) are being generated by sensory cells.

    This leads me to the necessity of using the scare quotes: I do appreciate I’ve probably been sloppy with my terminology in the original post, so I am genuinely glad you’re giving me the chance to rectify.
    The problem with standard comms theory is exactly as you depict it: as it stands, it doesn't apply. It doesn't because it relies on the presence of an intentional sender (frequently acting by proxy, but that's another detail), a pre-agreed code and a fully intentional receiver. This is why I wrote the article on "what the hell is information anyway" a long time ago.
    [Without that addendum, Jochen would be right to protest that using standard comms concepts to interpret perceptions implies an infinite regress.]

    Luckily I did remember to link the addendum to my main article here (even my sloppiness is limited!)… 😉
    In the case of perception, the stimuli which affect sensory cells cannot be regarded as signals without some additional gymnastics, precisely because they don't imply an intentional sender, an agreed-upon code, and even less a monolithic intentional receiver. My article on information tries to fill this gap; I'm very much hoping you'll find the time to read it, disagree with me and then explain to me why! In the meantime, I'll take it as agreed (aha!) and move on.

    Sensory receptors transform the stimuli they receive (multiple times, if we count a single cell as our unit) and modulate their resulting activity based on an astonishingly high number of other variables. This suggests that they already integrate plenty of information, understood in the extended Shannon sense I've proposed above. If we agree up to this point, it follows that we can describe the interaction between sensory cells and other parts of the organism (not limited to downstream interneurons) as exchanges of signals. Without this detour, talking about signals requires the addition of scare quotes, because otherwise people like you could ask me where the missing elements are (there is no intentional agent, let alone an agreed code: to agree on something you need to be an intentional agent!).

    The signal a receptor sends to interneurons can be interpreted in infinite ways. If you (the scientist, the third-party "perfect/idealised" observer) already know what you're looking for, these boil down to one definition of signal+noise; but for the organism the signal is just an, ahem, "representation" of the stimulus transduced by the single receptor. It could be all "noise" (containing no EI currently relevant to the organism – do note where I put or omit the scare quotes, it's important!), it could contribute to the detection of many (potentially independent) EI tokens, and so forth. At the level of the single receptor cell, besides the built-in assumptions accumulated through evolution, no other assumption can be made.

    Thus, I take on board the charge that I've been misleading: I have been, if you read what I've written without keeping in mind my addendum about Shannon's information. Guilty as charged. I should have known better! However, I disagree with vigour with the idea that I've been pessimistic. Not at all. The frame problem (very closely related, but not identical, to the symbol grounding problem) is big, scary and very real. Ask the developers of self-driving cars: you can't make many assumptions about what distinguishes signal from noise (in this context, EI from all the rest); in fact this distinction changes all the time. And this is even before we get to understanding what the signal means: for organisms, this too changes all the time.
    For these reasons I haven't framed the issue in terms of noise versus signal at all: in this context (old-fashioned biophysics), noise should be considered the background fluctuations of signalling activity between cells*; all the rest is a sorta-signal (to snatch a shorthand from Dennett), it is (at the periphery) all "potential EI". Thus we reach the need to extract EI from all the incoming sensory signals; my claim is that understanding how this can be done is equivalent to showing how organisms solve the frame problem.
    I also think that Andrew and Sabrina might agree, because they say:

    Gibson made a mistake calling his approach 'direct perception'. The name made a certain amount of sense at the time, but it has come to imply (for supporters and critics alike) that perception is a free ride; that we simply 'see' the world, no cognitive gymnastics required. This isn't true.

    In other words, much of your critique does apply (because of my sloppy language), but I don't think your main point sticks. Andrew and Sabrina did gloss over how EI is identified and extracted, and by doing so, I still think, they missed a valuable opportunity (hence my comment).
    Perhaps the point I failed to communicate can be clarified by temporarily shifting into your position:

    This may seem like a quibble, but the implication of your description is that your “signal” comprises useful content (“signal” in comm vocabulary) seemingly buried in intimidating mounds of clutter (“noise” in comm vocabulary). But that’s a garden variety situation in comm systems

    What you're missing is the reason why it most emphatically isn't "a garden variety comms situation". It isn't because, in ecologically relevant contexts, for actual organisms, the distinction between ("signal" in comm vocabulary) and ("noise" in comm vocabulary) changes all the time (not to mention that at different levels the same "noise+signal" mesh may segregate in different ways, triggering different reactions), and the organism has only (your kind of) "noise+signal" available to work out where to draw the line. The fewer assumptions you make about where this distinction lies, while still being able to make the right distinctions most of the time, the more adaptive potential you have, leading to learning and to what we refer to as "meaning".
    Overall, I'm inclined to stick to my "compression as decision making" metaphor, and I can even sort-of agree that direct application of Shannon's comms concepts isn't that interesting, if you accept my own definition (* above) of what to consider noise (background fluctuations).

    Hope this helps clarify! (with apologies for being very direct)

    • Charles Wolverton says:

      Sergio –

      “simplify to make it tractable” bit is central to the story and has to be unpacked with plenty of details.

      Your criticism seems to be, at base, that they are glossing over the details of an undeniably complex environment. I assumed from this that, prior to reading this paper, you hadn't followed their blog, in which actual modeling and results are presented. Inherent in such modeling is simplification of the actual environment. If you are familiar with their work, then I have nothing to add, since I'm not competent to defend their methodology. If not, then perhaps reviewing their work will allay some of your concerns.

      What you’re missing is the reason why it most emphatically isn’t “a garden variety comms situation”.

      Which is why I use comm terminology only as a "heuristic", not as an analytical tool to be applied to the problems they tackle.

  4. Sergio Graziosi says:

    Charles,
    thanks again. I'm no expert on Andrew and Sabrina's work, but I do follow their blog (presuming my own ignorance is another heuristic that usually works ;-)). Nevertheless, I'm still not entirely sure what you mean by "inherent in such modeling is simplification of the actual environment"; if you have a precise example, please do share.
    From their latest "mechanisms week" I got the impression that they are trying to produce a full mechanistic model. If you like, my point is that what they have is definitely good enough for a "functional model", while I'm proposing one more step towards the final goal.
    Anyway, do keep sharing your thoughts and criticisms; sometimes I'm slow to pick them up, but I do appreciate them.

  5. […] on Krakauer et al. (2016) [Highly recommended reading!]. From the other side of the fence, my discussions (see also) with Golonka and Wilson might provide some insight on why and how classic cognitive […]

