Here I will explore the idea of Cognitive Attractors: it stems directly from what I’ve stated already and can be seen as another important foundational concept.
According to Wikipedia (as of 07/07/2013) an attractor is “a set towards which a variable, moving according to the dictates of a dynamical system, evolves over time. That is, points that get close enough to the attractor remain close even if slightly disturbed”.
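To make the definition concrete, here is a tiny numerical sketch (my own toy example, not one taken from the Wikipedia article): repeatedly applying x → cos(x) drags every starting value towards the same fixed point, and a small perturbation of a settled value gets reabsorbed rather than amplified, which is exactly the “remain close even if slightly disturbed” property.

```python
# Toy illustration of an attractor: iterating x -> cos(x) pulls every
# real starting point towards the same fixed point (roughly 0.739),
# and a small perturbation is absorbed rather than amplified.
import math

def iterate(x, steps=100):
    """Apply x -> cos(x) repeatedly and return where the value settles."""
    for _ in range(steps):
        x = math.cos(x)
    return x

a = iterate(0.1)        # start near zero
b = iterate(3.0)        # start far away
c = iterate(a + 0.01)   # slightly disturb the settled value, then iterate again

# All three trajectories end up at the same fixed point.
print(round(a, 6), round(b, 6), round(c, 6))
```

Thoughts are obviously not governed by a cosine, but the qualitative behaviour — convergence plus resistance to small disturbances — is the property I want to borrow.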
Of course, thoughts are dynamic: they change over time, and I find it striking that we are in fact immersed in cognitive attractors and their results. However, for some reason this concept doesn’t seem to be commonly used, and since I find it useful on more than one level, it makes sense to put the idea in written form.
Why are cognitive attractors relevant to everyday life? If you think that something is true, but even a slight perturbation may change that belief, then it’s likely that you’ll never hold the same idea again: your own thoughts will drift away, and since their degrees of freedom are virtually infinite, the drift will very likely never generate the same belief again. If that’s true, the chances that you’ll take some significant action because of the original, one-off belief are almost zero. The more significant that action is for you, the less likely you are to do it because of an unstable idea: if the thought is about something important, you would probably think about it at least twice, and if the thought is not under the influence of an attractor, reconsidering your own assumptions would make it change.
On the other hand, if your belief is the result of a cognitive attractor, reconsidering it will not make it drift significantly, so it’s very likely that you will, one day or another, take some action because you hold that particular belief.
The first observation, therefore, is that almost everything you see, including houses, cars, televisions, clothes, foods, social structures, and every artefact (material or not) that has some value, is the result of one or more cognitive attractors.
I’m writing this here, in my second post, because the consequence is that if I want to understand the world I live in, I need to investigate the rules of cognitive attraction: what causes thoughts to be stabilised in our brains? My point here is that understanding cognitive attraction seems like a good way to start understanding most of the (human) world.
But I’m getting ahead of myself: this conclusion is based on some premises that I have neither stated nor tested:
- Most human actions are the consequence of their beliefs.
- Most human beliefs are reasonably stable.
To test these assumptions, let’s start from my definition of belief: as suggested above, a belief is something that the subject accepts as true. That’s a very wide definition and includes very basic thoughts, starting with our ordinary understanding of Newtonian physics.
So, yes, it does seem that all of my actions are the result of at least one of my beliefs. I open my front door because I expect it to be solid and I know/believe that I can’t just pass through it.
[Note: together with my first post, this is already getting philosophically intense. I’ve stated before that all knowledge is some sort of model, that all models are by definition approximations, and that therefore they are never entirely true. Now I’m talking about beliefs, and using “knowledge” and “belief” more or less as synonyms. That’s deliberate: the word “belief” suits me better than “knowledge” because I accept that all knowledge is, in some way, never entirely true. However, on the instrumental level, I still use my knowledge to make predictions, generally ignoring its inherent fallibility. The word “belief” is therefore better at describing what is going on, as it includes some level of uncertainty along with the tendency to ignore the possibility of error. Hence, from now on, I will use “knowledge” and “belief” as synonyms, even if I will probably prefer the word “belief” whenever possible.]
Furthermore, I clearly hold a huge number of beliefs, so it’s safe to say that, when I’m thinking about any subject, I may change one belief or two, but I hold millions of them. This point can be reinforced by observing that most of my everyday actions reflect a large number of stable beliefs: I allow my alarm clock to wake me up because I believe that I do need to go to work; I’ll get up and go to the bathroom first, and to the kitchen afterwards, because I believe that’s where I will be able to fulfil my physiological needs in the most convenient way, and so on…
The result is promising: it does seem that cognitive attractors shape at least a significant proportion of human activity, and therefore that trying to understand what stabilises our thoughts is a worthy effort. This may seem trivial, but it does have consequences: it makes understanding what makes us create new thoughts a secondary question, in a literal sense. Assuming that, in one form or another, we do create new thoughts, one has to admit that understanding how we do so carries less explanatory power than understanding how we select “successful” thoughts and discard the others. The other consequence is that we have already established one important detail: we know that, in order to change any belief, tiny amounts of (mental) energy will generally not suffice, because it will be necessary to push the belief outside the sphere of influence of its own attractor.
Even better, we can also predict that the age of a belief will carry a strong indication of how much effort is necessary to change it. If we admit that life is, to some extent, unpredictable, we can assume that old beliefs have been subjected to many challenges and re-evaluations, and that each time the force of the attractor was strong enough to preserve the stability of that thought. It’s therefore likely that the attractor is pretty strong, otherwise that particular idea would have already drifted away.
Cognitive attraction therefore already explains a few interesting observations: that it’s pretty difficult to make someone change their mind, and that the most resilient ideas are usually the oldest. But the same concept also has consequences for how we understand growing old: as experience accumulates, we inevitably come under the influence of more and more powerful cognitive attractors, and consequently become less and less able to change our own ideas.
So far it seems like a cute idea, but how do cognitive attractors get established? Or, if you prefer, what are the laws that underpin thought stabilisation? These aren’t trivial questions, and a proper discussion would probably be long enough to fill a full-length book. Here I will only sketch a generic first approximation.
If all knowledge is part of a model of reality, and if each model is defined by its purpose (in general, the domain of reality over which the model allows us to make predictions), it follows that reality is the source of most cognitive attraction, but, and this is crucial, in an indirect way. The normal process is that whenever a belief is used to make a choice, the outcome of the choice will be evaluated:
• If the outcome essentially fits the expectation, the underlying belief will receive a confirmation, and therefore the belief will be strengthened.
• If the outcome doesn’t fit the expectation, the belief will be somewhat destabilised. How this happens and when it’s enough to push the thought outside the influence of the original attractor is a complex matter that I will briefly consider below.
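The two bullet points above can be caricatured in a few lines of code. This is a deliberately naive sketch with made-up numbers (the gains, losses and threshold are my own illustrative inventions, not a real cognitive model): confirmations strengthen a belief’s stability, failures erode it, and only when stability drops below a threshold does the thought escape the influence of its attractor.

```python
# Naive sketch of belief stabilisation; all parameter values are invented
# purely for illustration, not derived from any real cognitive model.
def update_belief(stability, outcomes, confirm_gain=0.1, fail_loss=0.3,
                  escape_threshold=0.0):
    """Run a belief through a sequence of True/False prediction outcomes.

    Returns (final_stability, escaped): 'escaped' means the belief was
    pushed outside its attractor's sphere of influence.
    """
    for confirmed in outcomes:
        # A confirmation strengthens the belief; a failure destabilises it.
        stability += confirm_gain if confirmed else -fail_loss
        if stability <= escape_threshold:
            return stability, True  # pushed outside the attractor's basin
    return stability, False

# A mostly-confirming life experience leaves the belief stronger than before.
print(update_belief(1.0, [True] * 8 + [False] * 2))
# A run of unambiguous failures can destabilise it entirely.
print(update_belief(1.0, [False] * 4))
```

Note the asymmetry baked into the defaults: single failures only nudge the belief, so it takes either many failures or a large one to overcome the attractor, which matches the claim that tiny amounts of mental energy will generally not suffice.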
The interesting observation is that the influence of reality is indirect in more than one way. First of all, reality acts through the prediction made by a pre-existing model; this means that it may (and frequently does) reinforce beliefs that are spectacularly wrong. For example, this explains why we needed many millennia to figure out that gravity imparts the same acceleration to all objects, regardless of their mass.
Secondly, the outcome evaluation generally acts on elements of the model, but it doesn’t normally challenge the model as a whole. For example, let’s consider an ancient sailor who believed that the earth is flat, and that its edge could be found not far beyond Gibraltar. If this sailor got caught by a storm and pushed all the way to the Canary Islands, he would be forced to reconsider his model, because, after all, he didn’t fall off the mighty cliff. However, it’s likely that the re-evaluation would merely move the supposed location of the earth’s edge further away, while keeping the (spectacularly wrong) idea that the earth is flat.
In other words, it is true that all our beliefs are ultimately generated and, to some extent, validated by reality, but it is also true that the validation process is indirect enough not to guarantee that our models will be constantly refined in meaningful ways. It’s also important to note that there clearly is a hierarchy of attractors, where elements of a model depend on their own attractor, but also on the one that keeps the whole model in place. This hierarchy is conceptual, but it is also reflected in the relative attraction strengths: it will be relatively easy to change an element of a model, while it is always much more difficult (or less likely to happen) to change or reposition the model as a whole.
This consideration has some important consequences for my own discourse:
- When an unexpected thing happens, it may be the sign that a single element in the model is inadequate. While it’s easy to address this case, it is much more difficult to properly react to (and even detect) the other possibility: that’s when the whole model is wrong.
- Therefore, an important source of human error is to be found in the wrong assumptions that underlie fundamental models.
- Crucially, showing that some predictions made by the model are wrong, or showing that some other model does not suffer the same shortcomings, is usually not enough to break the link between the model and its own attractor.
- In fact, the only way to successfully challenge the stability of a whole model is to directly challenge, in a non-ambiguous way, the founding beliefs of the model in question. That is: one needs to tackle the cognitive attractor that keeps the model in place. Systematically destroying the attractors that live within the model is easier, but does not usually challenge the model itself: new constructs can be used to replace the destroyed elements, and the overall model will remain unchallenged.
This explains why it is so difficult to make people change their mind about any fundamental subject. First of all, when arguing, most people will challenge single concepts, and fail miserably in their attempt to shake a whole model. Secondly, the effort required to break the model’s attractor is guaranteed to be bigger than what’s required to affect the model’s contents. Because of this hierarchy, challenging a fundamental view (such as the validity of one religion or another) is extremely difficult: it requires addressing a root attractor that is itself the source of a vast tree of subsequent attractors/models, and is therefore the strongest of the whole hierarchy. This also explains my obsession with premises: when facing a seemingly insurmountable disagreement, I look for the root causes, for the attractors that support the diverging models, not for the different attractors that generate the actual argument. Even if the effort needed to change an underlying model is by definition bigger than the effort needed to act on lower-order attractors, exposing the underlying differences is in itself more powerful, and can, when successful, produce better results.
Let’s see why: first of all, discussing the premises makes people understand why it is difficult for the other party to accept an alternative view, so that, even if the disagreement doesn’t get resolved, at least it becomes more bearable. Secondly, in the unlikely event that tackling the sources of disagreement does succeed, and has an effect on one of the “disagreeing” models, this will reduce the chances that the two models will clash in the future; that’s a result in itself, and one that is likely to produce more reliable knowledge as well.
However, my interest in cognitive attractors doesn’t end here. In fact, the idea first occurred to me by observing another, quite frequent effect of cognitive attraction: it happens when the behaviours produced by an attractor influence the observable world in a way that strengthens the attractor itself. Although this kind of mechanism frequently applies to human behaviour, I’ll explain it with an example that involves dogs.
Surely we have all encountered many annoying, nervous small dogs, the ones that start yapping obsessively whenever another dog is around. This usually puzzles the casual observer because it’s clearly a stupid reaction to a possible threat. The little dog is clearly aware that other (bigger and stronger) dogs represent a danger, but instead of trying to remain unnoticed, or to show a non-threatening, submissive posture, it ramps up the aggression, increasing the danger significantly. My explanation is that the little dog is trapped in a nasty cognitive attractor. Since the little bugger behaves aggressively, the overwhelming majority of other dogs will respond in an unfriendly way, and because of that, the little one will be reinforced in its belief that all other dogs represent a clear and present danger. Furthermore, since dogs (at least in the urban settings where I usually observe this phenomenon) are generally supervised, the little dog will frequently get away not only unharmed, but even untouched: chances are that it will be rescued by one of the two owners before any physical contact can happen. So, judging by outcomes alone, from the little dog’s perspective, its own pre-emptive declaration of aggression indeed looks like a winning strategy.

The poor little dog is trapped by a sub-optimal attractor because some illuminating facts will never (or rarely) happen, and therefore its thoughts will never get pushed away from the unhelpful attractor. First of all, the fact that the danger isn’t necessarily so clear and present, but is mostly created by the dog’s own behaviour, will never be observable. Secondly, in the majority of cases the strategy will be successful, and the few failures will confirm the danger assumption, reinforcing the vicious circle.
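The little dog’s predicament can be sketched as a toy feedback loop (all the probabilities below are invented for illustration only): aggression makes a hostile response much more likely, and every hostile-but-harmless encounter confirms the danger belief, so a fearful dog drifts towards maximum fear while a relaxed one drifts the other way, each trapped by its own attractor.

```python
# Toy feedback loop for the nervous-little-dog story; every probability
# and increment here is a made-up illustrative number, not data.
import random

def simulate(danger_belief, encounters=1000, seed=0):
    """Evolve the dog's 'other dogs are dangerous' belief over many encounters."""
    rng = random.Random(seed)
    for _ in range(encounters):
        # The stronger the belief, the more likely the dog acts aggressively.
        acts_aggressively = rng.random() < danger_belief
        # Aggression makes a hostile response far more likely than calm does.
        hostile_response = rng.random() < (0.9 if acts_aggressively else 0.2)
        if hostile_response:
            danger_belief = min(1.0, danger_belief + 0.01)  # belief confirmed
        else:
            danger_belief = max(0.0, danger_belief - 0.01)  # belief challenged
    return danger_belief

print(simulate(0.8))  # a fearful start tends to drift towards maximum fear
print(simulate(0.1))  # a relaxed start tends to drift towards minimum fear
```

The two starting points end up in different self-sustaining regimes even though the “world” (the response rules) is identical for both, which is the point of the dog story: the behaviour produced by the belief manufactures the very evidence that sustains it.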
For humans, the same sort of feedback circle happens all the time. The parallel with violence and aggression amongst humans should be obvious to most readers: we even use the expression “cycle of violence” to describe a very similar kind of deadlock. However, what is probably not so self-evident is how frequently such self-sustaining cognitive attractors shape our experiences, especially when it comes to relationships. As with dogs, humans usually respond to like with like: an open and friendly attitude will favour friendly responses, while mistrust or aggression will foster negative reactions. This changes our life experiences in such a fundamental way that we can say that the worlds experienced by two opposite types, the friendly and the aggressive, are truly different (and I have no doubt we can all agree that one of these worlds is certainly preferable). But we don’t need to look at extreme cases to find self-sustaining cognitive attractors; they also happen within single interpersonal relationships. By now you should be able to recognise plenty of examples of this mechanism within your own personal experience, but I will mention one: the situation that made me develop the concept itself.
Imagine a very attractive girl, one that puts a lot of effort into optimising and displaying her good looks. Guys like me, who can easily be hooked with intellectual baits, and who expect to be found interesting rather than (physically) attractive, will be unlikely to consider such a girl a likely and worthwhile long-term partner. On the other hand, the one-night-stand chasers, and those who bet mostly on their own physical appearance, will flock around her in droves. Furthermore, if she doubles up with provocative behaviours, innuendos and the like, most men in her presence will probably tune into a sexually charged mental disposition. As you have probably guessed, I once knew a girl exactly like this, and a very smart girl as well. She was convinced, beyond reasonable doubt, that 99% of men are worthless, sex-crazed, selfish idiots, and that the few that aren’t are for some reason beyond her reach. So she kept trying to maximise her attractiveness, hoping to increase her chances of catching the rare “good one”.
At the time I was happily engaged, and because of work we had a chance to become friends: this allowed me to notice all of this, to see how she was trapped in her own world, and to realise how different that world was from mine!
Luckily, I am still trapped by an opposite attractor, one that comes with the advantage that I tend to come across as interesting to the kind of women that I do find fascinating; this explains why I’ve had few, very stable relationships, and why my old friend had many short, damaging ones instead. The key point here is that although we live in the same world, because our behaviours shape the world around us, what we perceive changes accordingly; hence, what is true for me can be, and often is, irremediably irreconcilable with what is true for my next-door neighbour. This happens because self-sustaining cognitive attractors shape our own subjective experience, and since subjective experience is the only way we have to sustain our own knowledge, one can actually say that people who are influenced by diverging attractors often live in different worlds, places that are genuinely dissimilar, even if the reality that creates them is one and the same.
Overall, I find promise in the concept of cognitive attraction, and if you’ve read this and disagree, it means that I’ve written a lousy explanation. Either this, or you are trapped by a powerful attractor that is incompatible with the concept itself!
PS: if you do like the idea, and are not scared of maths and formulas, you may want to check: Henrich, Joseph, and Robert Boyd. “On modelling cognition and culture.” Journal of Cognition and Culture 2.2 (2002): 87–112.