In our exploration of the Predictive Processing (PP) framework, it is time to complete the overall theoretical sketch by discussing how action and action control fit into the overall picture. This will finally allow us to appreciate the astonishing explanatory power that precision weighting is proposed to carry.[Note: this series of posts concentrates on Clark’s “Surfing Uncertainty” book. For a general introduction to the idea, see this post and the literature cited therein. The concept of precision weighting and how it fits in the PP framework is discussed in the previous post of this series.]
So far, we’ve seen that sensory input is processed by trying to anticipate it across a series of hierarchical layers, each comparing mini top-down predictions with the bottom-up signal coming from the sensory pathways. One concept that I find important to fully grasp is this: when a sensory organ transduces a stimulus into a nervous signal, only the first PP layer actually receives what we can easily consider the nervous representation of the original stimulus (probably including the expected precision of the signal itself). The next level up receives only the prediction error, meaning that if the prediction was spot-on, no further signal is sent to the higher levels at all. The absence of an error signal must then be considered a signal in itself, meaning: “prediction was correct”. In terms of action and action control, this special quality of the PP signalling pattern will play a crucial role, which we are about to explore.
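To make this signalling pattern concrete, here is a minimal Python sketch (my own illustration, not a model taken from the book): a layer forwards only the residual between the bottom-up signal and the top-down prediction, so a perfect prediction produces literal silence upstream.

```python
import numpy as np

def pp_layer(bottom_up, prediction, tol=1e-6):
    """One highly simplified PP layer: compare the top-down prediction with
    the bottom-up signal and pass only the residual (prediction error)
    upward.  A perfect match sends nothing at all."""
    error = bottom_up - prediction
    if np.all(np.abs(error) < tol):
        return None  # silence: the absence of error *is* the message
    return error

stimulus = np.array([0.8, 0.2, 0.5])

# A spot-on prediction generates no upward traffic whatsoever:
print(pp_layer(stimulus, np.array([0.8, 0.2, 0.5])))  # -> None

# A mismatch propagates only the residual, never the raw stimulus
# (here the residual is 0.2 in the second channel, zero elsewhere):
print(pp_layer(stimulus, np.array([0.8, 0.0, 0.5])))
```

Note how the higher layer never sees the stimulus itself, only what the layer below failed to predict.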
Clark discusses the problem of action control and the solution proposed by PP from a biology-centric perspective: he does not ignore the engineering perspective (i.e. action control of manufactured robots and effectors), but doesn’t quite put it centre stage. Clark’s approach makes a lot of sense, of course. However, I found that appreciating it in full requires a large amount of multidisciplinary knowledge, which I wouldn’t be able to summarise here. For this post, I will try to explore the same topic starting from an engineering point of view, which I hope will make the subject easier to follow, even for non-specialists.
As we saw in the case of measuring instruments, action control is another problem that has been extensively studied by engineers. It turns out that allowing mechanical artefacts to act autonomously on the world is a hard problem to solve, especially if high precision is a requirement. Since the world is noisy, even in a highly controlled environment (such as an automated factory) noise, in the form of random deviations from the perfect “action” (as idealised in the engineers’ “plan”), will interfere with the movements enacted by a given robot/effector. This poses the problem of detecting such deviations and correcting them in real time. The intuitively sensible way to allow machines to interact with the world with high precision is to add feedback loops, where the robot finely readjusts its movements according to the aforementioned “plan”. This strategy is potentially very powerful, but it is extremely difficult to implement in practice, as it requires designing complex control systems: these define how the robot will detect each possible deviation and how it should dynamically readjust its actions while they are already occurring. The standard way to tackle this problem would be a long sequence of logical “if/then” steps. In the real world, this approach quickly becomes impractical, as it entails an explosion of interacting possibilities; it is really hard to produce robots able to run the program quickly enough to intervene on their own actions in a timely fashion. Moreover, the situation becomes unmanageable once one realises that changing the action plan while it is being executed inherently changes what should count as a new anomaly. If the “plan” itself keeps changing, the systems used to detect deviations also need to readjust dynamically, while what would count as an appropriate reaction to further departures from an already changing plan would change at the same time!
If you sensed the danger posed by bottomless recursion, I’d say you have grasped the computational difficulty inherent in action control.
Realising how hard this problem is has a direct consequence in our context: it is self-evident that animals in general, and humans in particular, are very good at what we have just found to be computationally difficult (to put it mildly). The questions that should therefore puzzle more or less every neuroscientist interested in action control are:
How can nervous systems achieve what seems to be almost impossible?
How does it happen that extremely complex dynamic actions such as walking along a hiking trail (where the surface is uneven and each step requires different and fine adjustments) are normally fluid and feel effortless?
As you’re probably guessing, PP promises to solve this particular conundrum. Let’s see how.
It is well known that proprioception (the ensemble of sensory signals that report the position of our movable parts, along with the forces applied to them) follows its own sensory pathways, which, somewhat surprisingly, are still hard to fully understand. In PP, the “prediction-based” sensory architecture is expected to apply to proprioception as well, with the added expectation that proprioceptive error signals are also used to control effectors (muscles). In this context, predictions represent, as before, the best guess the organism can produce about what a given sensory signal should be in the current context. Importantly, the last sentence implicitly contains a major twist in our story: in the proprioceptive arena, the context necessarily includes what the body is doing, or, if you prefer, it includes action. Better still, action (how the sensed body is moving) is inevitably a major ingredient of the signals produced by proprioceptive organs. This means that context-dependent predictions have to be heavily influenced by what the organism is doing; this is a strict requirement for the PP model to apply to proprioception at all. Thus, according to PP, at any given level of a proprioceptive pathway, a higher PP layer would produce a prediction of what the proprioceptive signals would be if the body were moving in the expected way.
As a consequence, if PP does apply to proprioception, the relevant prediction-error signals become concise descriptions of what isn’t moving according to the “original plan” (the prediction). PP theorists therefore propose that the prediction error, besides participating in the usual PP pattern, can also be recycled to control muscles. The key element here is that error signals are “distilled” representations of the deviations from the expected action plan: they are inherently the exact kind of information required to readjust, and can therefore be used more or less directly to control muscles **[Update 01/07/2017: following Clark’s kind feedback, please see note below for two important addenda]. Moreover, because the same error signals also participate in the multilayered PP pathway, large deviations will get a chance to travel upwards to higher-level layers, and will thus be able to influence the overall plan and/or trigger a radical re-evaluation of the current high-level hypotheses. In this way, the overall PP architecture is able to directly explain how finely tuned control is even possible, as well as the role that proprioception is expected to play in our ability to understand what is going on in the real world. Depending on the strength and amount of prediction error, error signals may trigger fine movement readjustments, and/or a change of plan, and/or force the organism to realise that the current best hypothesis about the state of the world was wrong and needs to be re-evaluated.
Naturally, real-time control must be supported, and this is inherently included: the lower layers will be able to produce quick and small adjustments (with minimal impact on the overall plan), while big prediction errors will fail to be ironed out by the lower layers and will keep travelling upwards, where, if necessary, the original plan itself might change in more significant ways (which would, unsurprisingly, take more time). If even major changes to the action plan fail to minimise proprioceptive prediction errors, the overall increase in error signals will force a re-evaluation of the context itself, as this condition inevitably occurs if/when the current state of affairs is likely to be quite different from the currently active “best explanation” computed by the overall PP system.
Going back to our engineering perspective, it is worth noting that, for control problems that include more than one linear degree of freedom (which applies to virtually all action-control problems encountered by complex organisms), common artificial controllers end up being error-minimising feedback circuits (see for example Proportional-Integral-Derivative controllers / PID-controllers), which are at the very least analogous to a single PP layer.
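For readers unfamiliar with PID controllers, here is a minimal, self-contained Python sketch (a textbook illustration, nothing specific to PP or to Clark’s book): the controller continuously converts the measured deviation from the setpoint into a corrective command, which is exactly the error-minimising feedback pattern described above.

```python
class PID:
    """Minimal PID controller: the control output is a weighted sum of the
    present error (P), its accumulated history (I), and its rate of change (D)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement               # deviation from the "plan"
        self.integral += error * dt                  # history of uncorrected error
        derivative = (error - self.prev_error) / dt  # trend of the deviation
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy integrator plant (its velocity equals the control signal)
# towards a setpoint of 1.0: no if/then rules, just error minimisation.
pid, state, dt = PID(kp=2.0, ki=0.5, kd=0.1), 0.0, 0.01
for _ in range(500):
    state += pid.step(1.0, state, dt) * dt
print(round(state, 2))  # settles close to the 1.0 setpoint
```

Note that the controller never represents the plant or the possible disturbances explicitly; it only ever reacts to the current error, which is what makes the analogy with a single error-cancelling PP layer tempting.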
For single PP action-controlling (proprioceptive) layers (as well as for PID-controllers), if the computed error signal is entirely cancelled, it means that “everything is proceeding according to plan”; therefore the effectors receive no new control signal and can continue operating as planned. This chimes with the observation I’ve reported above: the absence of an error signal becomes a signal that means “all is well, no adjustment is needed”.
To complete the “action-control” picture I’ve tried to summarise, one element is still missing: the role of precision weighting. As with “passive” sensory pathways, proprioceptive organs will also have their inherent precision; thus, the initial sensory stimulus will still carry an associated precision, which can be weighted against top-down confidence. In terms of action planning and control (fine-tuning of an action plan can be conceptualised as action planning and control at high spatio-temporal resolution), the confidence we have in a given action plan would be a direct function of how confident we are in our assessment of the current situation, as well as of how robust the current plan seems to be. In PP, this confidence measure can be obtained by recycling the residual error produced by whichever PP layer is issuing the relevant “prediction for action”. Since all PP layers are expected to report a prediction error along with its precision/confidence weighting, this information is always available, making it theoretically possible for any PP layer to control action. This is important, but requires a long digression, which I plan to pursue separately. For now, I will concentrate on the proposed function of the precision-weighting signal in action control.
At one level, it is obvious: low confidence in a given action plan (justified either by low confidence in our current evaluation of the external state of affairs, or by low confidence in the effectiveness of the plan itself) means that deviations from the plan will have higher relative importance. Thus, error signals will have a bigger chance of travelling towards higher-level PP layers and less propensity to be “explained away” by adjustments to the action itself. This mechanism follows the general PP architecture without any ad-hoc change and seems entirely appropriate: the lower the confidence in our action plan, the higher our propensity to radically change the plan should be. Moreover, one interesting case is what happens when confidence is minimal (I don’t think it can be zero*). In this case, such “extremely low confidence action predictions” will have minimal chances of actually controlling action, perhaps to the point of having no chance of initiating and/or influencing any movement at all. Thus, such predictions will remain output-less: they should be understood as action plans that are not expected to be acted upon.
… !!! …
Yes, what you are thinking is what I meant! Adding precision weighting to the proposed PP action-control mechanism immediately explains how brains may become able to produce and evaluate alternative action plans and, by extension, allows us to start building an explanation of how imagination and day-dreaming can be implemented. Along the way, the basic mechanism underpinning actual dreams is also implied. QED: if you are reading this, I hope you are starting to understand the huge explanatory potential of PP in general and of precision weighting in particular.
Bibliography and notes
* In mainstream PP implementations, precision weighting is encoded as the gain of a given signal (irrespective of its direction). Thus, a prediction issued with zero confidence would be implemented as a signal with zero gain, which means “no signal at all”.
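To make the “precision as gain” idea concrete, here is a toy sketch (my own illustration, borrowing the standard Kalman-style formulation often used in the predictive-coding literature, not a formalism from the book): the influence of a signal on the updated estimate is scaled by its relative precision, so a prediction issued with zero precision carries no weight at all.

```python
def precision_weighted_update(prediction, observation, pi_pred, pi_obs):
    """Kalman-style precision weighting: the prediction is corrected by the
    error, scaled by the precision of the evidence relative to the
    precision (confidence) of the prediction itself."""
    gain = pi_obs / (pi_obs + pi_pred)  # relative weight of the evidence
    error = observation - prediction
    return prediction + gain * error

# Confident prediction (high pi_pred): the deviation is largely ignored.
print(precision_weighted_update(1.0, 0.4, pi_pred=10.0, pi_obs=1.0))  # ~0.945

# Shaky prediction (low pi_pred): the same deviation dominates.
print(precision_weighted_update(1.0, 0.4, pi_pred=0.1, pi_obs=1.0))   # ~0.455

# Zero-precision prediction: zero gain on the top-down signal, i.e.
# "no signal at all" -- the evidence wins outright.
print(precision_weighted_update(1.0, 0.4, pi_pred=0.0, pi_obs=1.0))   # -> 0.4
```

The same arithmetic read in the action-control direction gives the behaviour described in the main text: the lower the confidence assigned to an action prediction, the less it can drive the effectors.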
** [Update] Andy Clark has very kindly (thanks!) made me realise that it may be useful to make two additional points explicit.
1. In the special case where an action is being initiated, the error signal will be maximal (the prediction would be entirely wrong, as the expected movement isn’t happening at all). In this situation, the error signal itself would contain precisely the information needed to get the planned action started. To be translated into actual movement, precision weighting must, in this case, markedly favour the prediction itself. In this way, PP becomes a unified framework that may be able to encompass perception, action selection (issuing the prediction in question), action control, and learning (see below).
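A toy sketch of this first point (my own illustration, under the usual active-inference reading; all names are hypothetical): a high-precision proprioceptive prediction that the limb “is” somewhere it is not generates a maximal error, and feeding that error back as a motor command is precisely what moves the limb until the prediction comes true.

```python
def motor_command(predicted_pos, actual_pos, prediction_precision):
    """Active-inference sketch: the proprioceptive prediction error, weighted
    by the precision assigned to the prediction, is recycled as the motor
    command that moves the limb towards the predicted (intended) position."""
    return prediction_precision * (predicted_pos - actual_pos)

# Initiation: the limb is at rest (0.0) but the prediction says it "is"
# at 1.0 -- a maximal error which, because precision strongly favours the
# prediction, starts and then steers the movement until the error vanishes.
pos = 0.0
for _ in range(50):
    pos += motor_command(1.0, pos, prediction_precision=0.2)
print(round(pos, 3))  # -> 1.0
```

Once the limb reaches the predicted position the error (and with it the command) falls to zero, which is the action-side counterpart of “absence of error means all is well”.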
2. Importantly, the proposed architecture is also able to learn. The whole idea is that error signals that can’t be cancelled by issuing more accurate predictions will ignite additional mechanisms dedicated to finding new and better predictions. I confess that I don’t have a clear idea of how such mechanisms are expected to operate (in terms of precise neurophysiological detail; I might tackle this point in a later post), but in this context it is important to note that the multilayered architecture allows for a concurrent “search” for more apt predictions across the whole stack, from perception to action control, passing through action planning/initiation. This allows the system to dynamically accommodate deviations due to noise, as well as bigger changes (say, in the case of a damaged limb, extreme tiredness, or a change of situation – swimming, for example).
The proposed architecture also (at least theoretically) allows action control itself to be bootstrapped: in fact, this view directly affects how we might interpret the uncoordinated movements of newborns. The main purpose of such relatively random (or apparently aimless) movements might be to allow the whole stack of layers to search for and select appropriate predictions, based on the feedback signals triggered by the movements themselves.
Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press. DOI: 10.1093/acprof:oso/9780190217013.003.0011