The predictive brain (part one): what is this about?

LSE hosted quite an exciting public event (15/01/2015): Is the Brain a Predictive Machine? I had the pleasure of attending it and now find myself toying with a handful of ideas that are perhaps worth sharing. But first, the basics:

The event was a public debate between Prof. Paul Fletcher (Bernard Wolfe Professor of Health Neuroscience at the University of Cambridge), Prof. Karl Friston (Professor of Neuroscience at UCL), Dr. Demis Hassabis (Vice President of Engineering at Google DeepMind) and Prof. Richard Holton (Professor of Philosophy at the University of Cambridge); the chair was Dr. Benedetto De Martino (Sir Henry Dale Senior Research Fellow at the Department of Psychology at the University of Cambridge).

The topic of the debate was the radical idea that perception, understanding and action, and thus all that happens in the brain (conceived here as the information-processing organ that mediates between input, i.e. sensory stimuli, and output, i.e. action), can be modelled (explained) by a single framework based on prediction. This idea comes with many related labels, including “Predictive Brain”, “Bayesian Brain”, “Predictive Coding” and “Free Energy Principle”. To the uninitiated the whole business may look quite confusing, esoteric and maybe even extravagant; in any case, I find that it is relatively difficult to locate a good starting point from which to get a basic grasp of the concept. This post is therefore my own attempt to provide a short, non-technical explanation (with links added at the bottom). The second part will cover a couple of ideas that came out of the interesting discussion at the LSE event.

I will start with the same example used by Dr. De Martino at the beginning of the event: consider a room fitted with a heater that is controlled by a thermostat. The typical understanding of the thermostat sees it as rather passive: it has a sensor that measures the temperature, encodes it in some internal signal, compares it with the desired temperature and switches on the heater if the measured temperature is lower than a given threshold. The predictive mind hypothesis posits that biological perception works in a completely different way: there is some internally generated “expected value” of the predicted temperature, which is compared with the measured signal; the difference is computed and used to refine the system that generated the expected/predicted value. On a more detailed account, all the proposals about the predictive mind posit that this general circuitry is repeated across a series of layers, so that what is actively modelled are different features of the original input, starting from very basic characteristics and getting progressively more “high level”, or conceptual, as the signal progresses through the layered system.
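
To make the contrast concrete, here is a minimal sketch in Python (entirely my own illustration: all names and numbers are invented, and no specific neural model is implied):

```python
def passive_thermostat(measured_temp, setpoint=20.0):
    """Classic view: compare the measurement with a fixed threshold."""
    return measured_temp < setpoint  # True -> switch the heater on

class PredictiveUnit:
    """Predictive view: keep an internal prediction and use the
    prediction error to refine whatever generated that prediction."""
    def __init__(self, initial_prediction=20.0, learning_rate=0.1):
        self.prediction = initial_prediction
        self.learning_rate = learning_rate

    def step(self, measured_temp):
        error = measured_temp - self.prediction        # prediction error
        self.prediction += self.learning_rate * error  # refine the generator
        return error                                   # only the error is passed on

unit = PredictiveUnit()
for temp in (18.0, 18.2, 18.1, 18.3):
    print(unit.step(temp))  # errors shrink as the prediction improves
```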

This may still look rather extravagant: why would natural selection promote the emergence of such a complicated system? The standard answer is that such a layered, gradual evaluation solves a wide range of remarkable and interconnected problems. The first is sometimes referred to as the “view from inside the black box”, but I much prefer the visualisation provided by Dennett in “Intuition pumps and other tools for thinking” (chapter 23). Imagine that you wake up in a strange, closed room; the room is full of dials, indicators and buttons, and the only explanation is provided in a note: “you are trapped in the control room of a robot; dials and indicators are controlled by a number of sensors, while buttons and levers control the robot. In order to survive, you’ll need to understand the world outside the robot and make sure the robot’s integrity is preserved. Good luck.” In this situation, how could you ever hope to understand what the various dials measure, what the buttons command, and thus how to control the robot effectively? Without a starting point, an initial seed of reliable information, you would probably have no chance. However, if an initial seed of information is provided, you may be able to formulate a hypothesis (for example, “one of the dials measures external temperature”) and try to test it. In other words, you would use pre-existing information (a set of priors, for example “this lever tells the robot to crouch” and the knowledge that hot air tends to rise) to make a prediction: if the robot crouches, the dial that measures temperature should drop its reading by a tiny bit. This would allow you to identify the “external temperature” indicator, establish a useful fact, and proceed to more hypothesis-testing cycles.

Now, consider a brain: in many ways, it is in a very similar situation. A newborn brain needs to work out what the different inputs mean and what happens when a given output is sent to the muscles. A predictive brain will be in the business of generating a prediction (the hypothesis above) based on some pre-existing information (in this case, provided by the optimisation system that is natural selection) and evaluating whether, and how well, the prediction holds. The predictive brain hypothesis thus provides an explanation of how it is possible for a brain to understand the world around it: genetically encoded priors, plus cycles of hypothesis generation and testing.

In practice, this general mechanism is thought to operate in a multitude of successive and conceptually similar steps: for example, at the first level, the activation of a single photoreceptor is used to predict that adjacent ones will also be activated; at the second level, the fact that this prediction holds along the X but not the Y axis is used to predict what more distant receptors will measure; at a third level, the system recognises and predicts that the perceived horizontal line of light is moving in a given direction. In more detail, the idea is that each higher (more conceptual) layer generates a hypothesis about what the lower layer (the one closer to the signal receptors) should be receiving; the prediction is sent down, and what is sent back up is not the original perception, but only the difference: perceived signal minus prediction. If the prediction is perfect (perfect understanding), no signal is sent back; if the prediction is completely wrong, a lot of data will bounce back.
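
Here is a toy sketch of this residual-passing arrangement (my own illustration, with the simplifying assumption that each layer holds a fixed prediction of the same size as its input; in the actual proposals, predictions are generated and continuously updated by the layer above):

```python
import numpy as np

class PredictiveLayer:
    """Toy layer: holds a top-down prediction of its input and
    passes upward only what the prediction failed to account for."""
    def __init__(self, prediction):
        self.prediction = np.asarray(prediction, dtype=float)

    def forward(self, signal):
        return signal - self.prediction  # only the residual travels up

def process(layers, raw_input):
    signal = np.asarray(raw_input, dtype=float)
    for layer in layers:
        signal = layer.forward(signal)
    return signal

stimulus = np.array([1.0, 1.0, 0.0, 0.0])
perfect  = [PredictiveLayer([1.0, 1.0, 0.0, 0.0]), PredictiveLayer([0.0, 0.0, 0.0, 0.0])]
clueless = [PredictiveLayer([0.0, 0.0, 0.0, 0.0]), PredictiveLayer([0.0, 0.0, 0.0, 0.0])]

print(process(perfect, stimulus))   # all zeros: nothing left to report upward
print(process(clueless, stimulus))  # the whole stimulus bounces back
```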

Interestingly, this kind of processing is useful as a compression algorithm (it is in this context that the original idea was conceived). In the example above, at the end of the three steps we can describe the perceived scene in terms of extracted features (a horizontal line of light that moves in a given direction) instead of having to report in detail what every single photoreceptor measured. So here is a second reason why such a seemingly overcomplicated system may turn out to be handy: it allows the system to gradually extract useful information from the original stimuli, reducing the amount of data that needs to be transmitted and analysed as we progress from raw data to more conceptual representations. This should start resonating with your intuitions: if you function more or less like me, introspection should tell you that when we perceive reality, we grasp the significant information and are not overwhelmed by the sheer amount of data that our sensory organs produce. In the predictive mind model, this happens because the conscious level sits at the top of many “predictive” layers and thus receives already “classified” information (in the example above, “a horizontal line, moving that way”).
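
To see the compression angle at work, here is a tiny illustrative encoder/decoder pair (a DPCM-style scheme of my own choosing, using the simplest possible predictor: “same as the previous sample”):

```python
def encode(samples):
    prediction, residuals = 0, []
    for s in samples:
        residuals.append(s - prediction)  # transmit only the surprise
        prediction = s                    # predictor: "same as last time"
    return residuals

def decode(residuals):
    prediction, samples = 0, []
    for r in residuals:
        prediction += r                   # rebuild each sample from the residual
        samples.append(prediction)
    return samples

signal = [10, 11, 12, 13, 13, 13, 14]
encoded = encode(signal)          # [10, 1, 1, 1, 0, 0, 1]: mostly tiny numbers
assert decode(encoded) == signal  # nothing is lost, yet less needs transmitting
```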

A third interesting observation: a layered predictive system would be able to recognise more and more abstract features of reality, so that by simply adding more layers it should be possible to generate more and more understanding. Evolutionarily, this makes a lot of sense: once the (presumably rather complicated) genetic instructions needed to build the first predictive circuit have evolved, making the circuit even more useful requires little additional change, essentially just the instructions to “pile these circuits one on top of the other”. For obvious probabilistic reasons, evolution usually invents little and recycles a lot.

But we are far from the complete picture… What happens when a given signal wasn’t predicted? After all, unexpected things happen all the time, at every level of “abstraction”. Here the general idea is that the neural circuitry registers and focuses on what was not predicted, and uses the mismatched information to decide what to change in the predictive machinery. The result is a continuously refined model of the external world: at any given layer, the information received is used to improve the prediction, and since this happens at more and more abstract levels, the top layers will be in the business of keeping up to date a general, conceptual model of the world out there. Thus, the predictive brain hypothesis unifies perception and understanding in a single, evolutionarily plausible model. Furthermore, one can extend the general idea to include attention and learning as well: a big prediction error will automatically increase the amount of data that is sent to the next layer, and this data will automatically trigger the model-refining (learning) machinery. Surprising data are the only kind of input that can teach us something (if it isn’t surprising, you knew it already), and in this framework, surprising data are also precisely what is not removed while travelling across the layers. Thus, the conscious level will be receiving primarily what is worth attending to, providing a hint of what may be the basic mechanism of attention.
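
The learning side can be hinted at with a delta-rule-style update (again my own sketch, not the specific update rule any of these proposals prescribes): the model is adjusted in proportion to the prediction error, so big surprises drive big revisions, while a perfect prediction changes nothing.

```python
def refine(weights, inputs, observed, learning_rate=0.05):
    """One error-driven update: adjust the model in proportion to the surprise."""
    predicted = sum(w * x for w, x in zip(weights, inputs))
    error = observed - predicted                # zero error -> nothing to learn
    new_weights = [w + learning_rate * error * x
                   for w, x in zip(weights, inputs)]
    return new_weights, error

weights = [0.0, 0.0]
for _ in range(50):
    weights, error = refine(weights, inputs=[1.0, 0.5], observed=2.0)
print(weights, error)  # the error shrinks towards zero as the model absorbs the regularity
```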

A short recap. We started with what looked like an extravagant idea: our brains don’t passively receive and classify sensory information; they continuously try to guess what will be perceived. Exploring this idea, we found that it provides a plausible way to generate viable understanding of the world, provided that minimal “seeding” information is available. We also found that the predictive brain idea explains how information is analysed and made more abstract along the way, how unnecessary data is discarded, how a model of the world is generated and refined (i.e. how we learn), and how we figure out what is worth our attention. As far as explanatory scope goes, the predictive brain hypothesis seems to be pretty powerful! But wait, there is more.

In recent years, an even more general proposal has emerged: the idea that the same basic architecture may be used not only for perceiving, but also for acting. Really? Surely this is over-stretching an already bold hypothesis… Well, maybe, but it’s worth mentioning: action-wise, the same basic kind of system may be used to generate motor signals. Dissecting what I’ve summarily described above, we have an internally generated prediction and an incoming (passively collected) signal. Comparing the two generates a third signal, which is normally referred to as the prediction error. This is what a single layer produces; it is its output. Hang on, did I mention an output? When the last layer is reached, where does this output go? If we conceive of the brain as what stands between input and output, nothing but a long series of layers that process information in the way I’ve described, then the last layer will be generating genuine output, or, in other words, a signal that is sent outside the brain. Of course brains do have outputs: they are known as motor signals. So maybe there is a connection here… The hypothesis, therefore, is that the same sort of organisation may be used to produce motor signals based on expectations. This time, instead of predictions, we have expectations, and it should be quite intuitive that the two concepts are closely related. Where the perceiving layer received a sensory stimulus and a prediction, and produced the difference, an “acting” layer may receive a sensory stimulus (of what the body/muscles are doing) and an “expectation” of what should be perceived instead. The layer would compute the difference, and this signal would be the one that proceeds towards the effector organs. Makes sense? No? To me neither, not yet!
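
Perhaps a sketch helps here too (mine, and heavily simplified): the layer receives a sensed state and an expectation, and the residual, instead of travelling upward to refine a model, travels outward and moves the body until the expectation comes true.

```python
def acting_layer(expected, sensed, gain=1.0):
    """An 'acting' layer: the prediction error leaves the system as a motor command."""
    return gain * (expected - sensed)

position, expectation = 0.0, 1.0      # "my hand should be at 1.0"
for _ in range(6):
    command = acting_layer(expectation, position)
    position += 0.5 * command         # the body responds, shrinking the error
    print(round(position, 3))         # 0.5, 0.75, 0.875, ... creeping towards 1.0
```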

I can try to explain this other idea by going back to the beginning and recalling our thermostat. If we consider the thermostat+heater as a discrete system that does one thing (heats the room), we can say the following: the temperature at which the thermostat is set is our “expectation”, the measured temperature is the stimulus, and the “difference” is a binary variable, 1 or 0, on or off. From this angle, the thermostat is a system that acts like our predictive layer: it computes the difference between the measured state and the expectation. If the measured temperature is higher than expected, it outputs zero/off; if the temperature is lower, it outputs 1, or “on”, and switches the heater on. Here we have a system that computes the same kind of information I have described for perception, but one that integrates perception and action by using an expectation instead of a prediction. The other difference is that here there is no “expectation/prediction-generating” system: the expectation is set a priori.
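
In code (sketching again, with my own invented names), the thermostat+heater becomes a one-bit “acting layer”:

```python
def thermostat(expected_temp, measured_temp):
    """A one-bit 'acting layer': output the thresholded prediction error."""
    error = expected_temp - measured_temp
    return 1 if error > 0 else 0      # 1 = heater on, 0 = heater off

# The expectation is fixed a priori; the error acts on the world
# (via the heater) instead of refining any internal model.
print(thermostat(20.0, 18.5))  # 1: too cold, switch the heater on
print(thermostat(20.0, 21.0))  # 0: warm enough, leave it off
```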

In other words, there is something missing: we haven’t quite closed the circle yet. How to fill this last gap is the subject of the intuition I had at the LSE event, and it will be covered in the next post.

Credits

To conclude, I would like to clarify that I owe the idea of considering the thermostat+heater as a predictive machine to Prof. Friston himself: Dr. De Martino introduced the discussion by describing the thermostat as a typical example of how we understand a passive receptor; Prof. Friston immediately turned this description on its head and suggested (but, as far as I can recall, didn’t quite explain) that one could see the same system as an active predictor instead.

Micah Allen produced a super-fast and accurate summary of what was discussed with the audience at the LSE event. It was really useful in helping me reconstruct my own train of thought and thus write this post.

Links, bibliography and further reading

If you are a proper geek, and thrive amongst maths and formulas, you may want to read more at the very source of all this. The academic works of Prof. Friston himself are summarised here.

If you are geeky, but would rather avoid maths, the best entry point that I know of is:
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204. DOI: 10.1017/S0140525X12000477
(Full text is here)

If you are interested, but not a glutton for self-punishment, you may want to read Andy Clark’s essay “Do Thrifty Brains Make Better Minds?”.

Also cited: Dennett, D. C. (2014). Intuition pumps and other tools for thinking. Penguin.

Comments
  1. Hey, Sergio! Thanks for this post. I read Friston’s 2010 article about the free energy principle last year, found it interesting, but didn’t go into researching more. Your post made me visit his page and there is a lot of interesting stuff! Intuitively, this all makes a lot of sense, and I find it interesting that there are several languages/ways to conceptualize this one overarching principle of brain operation. I myself usually think of the brain as a best-match machine, not a predictive machine; in this conceptualization prediction is just a side product, but truly it is all the same – all these cells growing, dying and communicating, you know )
    For me it is also valuable, as I am not a part of any scientific community, to be sort of accidentally on the same page with the scientists on the interwebs, so thanks for that too)
    P.S.: Someday I’m going to revisit our Information theory discussion, when I have enough time!

    • Sergio Graziosi says:

      Alexander, I’m glad you liked my high level introduction.
      Yes, the affair with the thermostat makes it clear that what really happens can be described in many alternative and possibly equivalent ways: pattern-matching, predictive modelling, whatever.
      It took me some effort to find a way to make sense of the whole predictive-coding business (a couple of years ago, I think) and when I finally did, the light bulbs in my brain started blinking like mad, so I thought it was time to try and see if I could pass on the feeling.

      But the real juice should come with part two… Again based on Friston’s work: I can’t say they’ll be my own ideas, just ideas on how to express what’s already out there…

  2. Very well, I’ll wait for part two!

  3. ihtio says:

    Sergio, what an interesting post!

    I wonder whether you know of any computer simulations that would be based on this predictive brain theory. If the theory is indeed correct, then we should try to make machines act/think in similar ways. Maybe they will become more intelligent and we’ll know more about our own brains.

    In your post you pointed to memory and learning as having natural (that is, in line with the theory) explanations according to Predictive Brain theory. However, you’ve only remarked that learning happens when the prediction error is very large. Do you plan on writing a bit more on that? That is, how would learning actually work?

    I must disagree with your treatment of attention. Surely, attention tends toward things that are worth attending to :), and most unexpected events automatically grab our attention. However, this is by no means an exhaustive account of attention. We can just as easily attend to really boring stimuli. How could Predictive Brain theory explain that?

    Predictive Brain theory may be useful in explaining perception and maybe some types of action, but the way it is formulated (the prediction error is propagated between layers) makes it of little use in explaining the vastness of mental phenomena we experience: for example emotions, planning and decision making, dreaming and imagination, creativity, learning, communication and language, and many, many others.
    The explanations of imagination and dreaming in the context of Predictive Brain theory that I have read state that, in the case of impoverished or missing external stimuli, lower brain layers “run amok” and what we experience (our thoughts, imaginations, dreams) are mostly predictions unconstrained by errors. That is an interesting hypothesis, but hardly a plausible one. Why would those unconstrained predictions/expectations be so coherent and form consistent stories?…

    Either way, a fascinating and very well written article. Can’t wait for part two!

  4. Sergio Graziosi says:

    Ihtio,
    Thanks for reading and the feedback.

    Is there some functioning General Artificial Intelligence that uses the Predictive Brain (PB) paradigm? I don’t know, but I don’t think so – I may be wrong!
    Are there less ambitious, smaller scale attempts to reproduce PB circuitry in silico? I am sure there are, but I haven’t researched the field well enough to provide pointers. If you find something, let me know.

    Learning and Memory: at some level the answer would be very simple. The “difference” signal can be interpreted as instructions telling the “prediction source” how to change its output. However, time, time-integration and other issues (both theoretical and biological) get in the way, so it becomes complicated. That’s why I felt I couldn’t unpack it here; maybe in another post.

    Attention: I’ve limited myself to concluding that PB provides “a hint of what may be the basic mechanism of attention”, nothing more than that. [You are lucky if you can “as easily attend to really boring stimuli”, it surely costs me a lot of effort! 😉 ] Seriously though, of course you need more than vanilla PB to explain all human behaviour (see below).

    So, what’s missing? The most serious researchers will agree that PB can’t account for all mental phenomena, and I agree. However, I do think that one obvious trick is frequently overlooked: what happens at the intersection of different pathways? For example, when you match hearing and vision to locate the source of a sound? On the computational level, it is possible (I think) to keep describing/hypothesising a PB system also at intersections, and this perhaps is what makes it very powerful. On the conceptual level, however, it strikes me as something very different (or not intuitive), so there is “room for thought” on this subject (and I’m not rushing it).
    This is somehow related to what I’m planning for part two: I’m trying to see if/how the general paradigm looks a little too universal, but I’m still exploring my thoughts, so I can’t write it quite yet.

    • Sergio Graziosi says:

      [PS to Ihtio]
      On the subject of dreaming: I do agree there is something very different going on there; have you read my three (old) posts on the subject? Of course there is plenty more that needs explaining.
      The posts started from “Threat Simulation Theory” and moved on from there; they are:
      Dreaming and Sleeping, getting closer to an explanation, Dreaming and Sleeping: a follow up on Threat Simulation Theory, and the grand finale “More on sleep: what we know and what we don’t”.
      Feedback is always welcome, but replies may be very slow, depending on chance alone.

    • ihtio says:

      Sergio,

      Regarding AI: I had in mind some simpler applications of PB theory to construct AI algorithms, for example artificial neural networks that would perform better at classifying images, predicting and generating words, etc. GAI is still too far in the future for us, I guess.
      The thing is, if the PB is really a good approximation of what happens in the brain, then it would make sense to build algorithms that also work in analogous ways.

      When memory is concerned, I don’t see how knowing the difference (error) would make learning that much easier. Today we use the prediction/classification error in artificial neural networks, but the process is extremely slow. And in the real world we also have to take into account events that are linked causally in various ways, and that would make things even worse for such algorithms. In fact, it does make things worse in current AI algorithms: when the outcome depends on an event that happened 10 events ago, how do you compute the error?
      In AI/Machine Learning we already use the error, you can get a glimpse of it here: http://en.wikipedia.org/wiki/Delta_rule

      I agree that we would need more than PB (or any other theory) to explain all human behavior. I have two points referring to this problem:
      First, I have pointed only to the most basic cognitive capacities, and not the extremely high-level ones, like playing games, programming computers, creating art, building bridges, etc.
      And second, as far as I understand it, PB postulates that prediction and error propagation are fundamental functional elements of the brain. If that is the case, then, I think, it makes it hard to incorporate other theories with other functional explanations of other brain capacities.

      I will take a look at your posts on dreaming.

  5. This is a cute approach, but to me it seems that many proponents of this often succumb to false advertisement. In particular, there is often an advertisement that the predictive brain will help us understand the human mind or brain or some important aspect of it. Yet, there is nothing particularly human or animal about this; all the explanations and arguments we use would apply just as well to C. elegans, or in fact to any living organism capable of action, even ones that don’t have nervous systems. If nothing about the predictive brain distinguishes between C. elegans and H. s. sapiens, then we should not be satisfied with the predictive brain as an explanation of any feature of life that we believe we have and nematodes lack.

    Of course, if one considers, for instance, Alva Noë’s mind as abstract as “driven, active, compelling engagement” then something like the predictive brain might be enough to separate ‘intelligence’ from artificial intelligence.

  6. Sergio Graziosi says:

    Artem and Ihtio,
    you are both trying to push me outside my comfort zone, please keep it up!
    I’m slowly forming the skeleton of the next post, and I do hope to finish it up during the weekend (I can think and read during the week, but weekends are required for serious writing). I also think that the next post will include (at least obliquely) some replies to your main points.
    Watch this space, and I hope to be able to fulfil (at least) my own expectations.

    Artem: I find that over-hype in science is as common as it is infuriating, especially when it comes to the public face of science, but it’s really a quasi-ubiquitous issue. Grrrrr. Don’t get me started or I’ll rant for 5k words without interruption.

  7. Tim Klaassen says:

    Some time ago I read Clark’s article that you mention. To be honest, at the time I didn’t really understand what makes the predictive brain hypothesis new and significant (it reminded me a bit of Dennett’s multiple drafts theory). After reading your article, I still don’t quite understand what is so revolutionary about it. But I may be missing something here.

    To say that the brain only has very limited informational input from the environment, and that it must supplement this information from within itself by applying all sorts of hypotheses to the limited data that is available: isn’t this an old idea within classical cognitive science?

    Am I perhaps missing the significance of the predictive brain because its significance lies in the details of how the brain accomplishes this?

    Thanks for clearing it up!

    • Sergio Graziosi says:

      Tim (thanks for asking):
      I don’t think I’ll be tackling your question head-on in the next post, so I’ll use this comment as a kind of (very rushed) pre-emptive integration. I’ll cut many corners to keep this short, so please do come back in case I don’t manage to make myself clear (and/or if you disagree!).

      The first thing to note, as hinted in the other comments (I will tackle this one in the next post), is that, as expressed/summarised here, the Predictive Brain idea (PB) is not yet a standard scientific hypothesis. This is because I think that anything that acts as a more or less independent agent can in principle be modelled as a “predictive machine”. See Artem’s comment for a hint and my next post for the details, but in short, the Predictive Brain idea does not in itself allow us to produce a unique set of verifiable predictions.

      However, what does make PB interesting and potentially ground-breaking is what happens when it’s used as a general framework to produce more detailed hypotheses about different systems (cf. “Predictive Coding” and the “Free Energy Principle”, for example). One could (and many do) use PB to formulate specific hypotheses about what a particular neural system does, and how it does it. E.g. “this subpopulation of neurons generates predictions, these neurons compute the differences, these connections carry the prediction error” and so forth. At the same time, as Ihtio points out, people can try to apply the PB framework to the problem of designing artificial intelligence.
      In both contexts, the generality of PB works as a conceptual aid: it allows us to look at known unsolved problems (for example “how do we store the concept of a tree in our brain?”) with a new and very promising searchlight (with apologies for the very inadequate metaphor). The overarching hypothesis is that this searchlight should allow us to “see” the patterns that have eluded us so far, with the added hope that these patterns will be present in many different places (and possibly at different scales) within brains. If this is correct, then we should expect to learn how to use the new understanding to produce smarter AIs as well.

      It’s all a big if, but the hype (and I hate hype!) and expectation around the idea stem from this “if”. It’s exciting because until now whatever we thought was understood about brain function had very little scope: we may have a decent model of how we detect basic shapes and movement, but understanding it tells us almost nothing about how we determine whether what we are looking at is a tree.
      Personally, I am excited about PB (and its various more detailed implementations) because it promises to provide a general, over-arching meta-theory that could potentially help solve lots of problems by guiding the formulation of plausible, understandable and testable detailed hypotheses. So far, neuroscience as I know it looks like a seemingly infinite amount of little bits of data (lots of data), scarcely connected by a very diverse collection of quite provisional fragments of (often very complex) knowledge. PB has the potential to provide a quasi-unifying way to understand/interpret what we observe. Whether this potential will be fulfilled is too early to say.

  8. ihtio says:

    I have just watched a talk given by Karl Friston, titled “Consciousness and the Bayesian brain” (https://www.youtube.com/watch?v=HeQfO4byFhg).
    I must say I enjoyed the talk, and I like the idea some more.

    I would just add two notes that are relevant to our discussion here:
    1. The terminology is misleading: the theory should not be called Predictive Brain, because the brain does much more than prediction.
    2. “In order to see something you have to know what you are looking for” (an inexact quote) — so in effect we can never learn anything 🙂

    • Sergio Graziosi says:

      Ihtio,
      glad to hear that I’ve managed to stimulate your interest. No surprise in finding out that Friston is better than me in explaining his own thoughts.
      On 2.: you forget the role of priors. We inherited them from our ancestry, and they have been shaped throughout the whole history of evolution. You can include your sensory architecture in the list of priors, along with the innate ways of interpreting sensory signals that are built into our brains at birth. Things like: movement in the visual field is important, but not the movement caused by our own motion, which is effectively cancelled out automatically; the list would include thousands more. Thus, when you are born you already know something, and what you know is indeed very reliable knowledge, because it has allowed all your ancestors (think about it: all of them, from the first living organism on) to survive and reproduce. Hence, you can learn a lot of things, because you are born with plenty of already built-in and exceptionally reliable knowledge [in a sense, this is what allows us to escape the frame problem: we are born “expecting” to perceive certain things and not others].

      [You may also want to insert here the usual warnings on the dangers of induction: black swans and the like. You can learn plenty of new things, but you can never have absolute certainty that any knowledge, prior or not, is 100% reliable]

    • ihtio says:

      Sergio,

      Surely an hour-long presentation must have much more detail than a blog post, so it’s understandable that some things are easier to grasp from a full talk or a paper.

      Concerning point 2: priors are priors, I get it. What I’m after is the learning of high-level concepts, such as “a ship”, “a can of soda”, “a machine”. It would be hard to incorporate the learning of such concepts into the PB framework. At least that is the impression I still have.


