Sources of error: Essentialism Fallacy

There is a mistake we all make, and keep making, over and over, on every possible occasion, and will keep making even after realising that we do. This happens because it is, in general terms, the most useful cognitive mistake that one could dream of. It underlies much of cognition (human or otherwise) and is almost as necessary to us as oxygen itself. I will not try to get rid of it, but I believe that naming and dissecting it is of paramount importance: the process should deliver insights into the limits of human cognition, and hopefully also some hints on how to work around it.

The latest Edge annual question “What Scientific Idea is Ready for Retirement?” generated at least three answers that argue explicitly against Essentialism (Lisa Barrett, Richard Dawkins and Peter Richerson), but there are plenty of other answers that can be linked to the same sort of criticism. What puzzles me (and what prompted this post) is that it is apparently still necessary to point out the limits of Essentialism; even more surprisingly, I have the somewhat well-founded suspicion that many professional thinkers do not start their journeys well aware of these limits and of their cognitive foundations. In this post, I will look at the biology of cognition first (a quick, speculative glance, as we still know very little!), where the glaringly obvious seed of Essentialism is to be found. I will then very briefly observe the connection with logic and maths, arrive at fully formed Essentialism, and provide a couple of examples of its ugliest consequences. In the process I hope to explain a few things:

  1. Essentialism is here to stay: in its most basic forms, it’s inevitable.
  2. This introduces the mother of most errors: it “essentially” (!) explains why we can only approach a full understanding of reality, but will never be able to reach it.
  3. There are some very important exceptions, as I have noted before.
  4. Every self-respecting thinker should be very aware of all this, and should always understand whether her current thoughts apply to the general rule (2.) or to the exceptions (3.); failure to do so is one of the reasons why pure thought (and maths) can pull our conclusions in random, and utterly wrong, directions.

The Biological side:

The starting point here is quite simple: our brains deal with symbols. They manipulate representations of reality, and use these virtual manipulations to drive our behaviour. To use an example Paul Bloom has recently employed: if I’m thirsty, I’ll grab a glass and fill it at the tap, because I know what glasses and taps are for. This requires symbolic reasoning, where “glass” and “tap” are symbols that my brain can manipulate, and come associated with the information of what defines them: a glass is a tool that makes drinking easier, a tap is a water-delivery system, and so on. Sure, this description is speculative, but there are plenty of accredited scientists who share this view; for an excellent discussion, see: Marcus, G. (2009). How Does the Mind Work? Insights from Biology. Topics in Cognitive Science, 1(1), 145-172 (incidentally, this is one of the best articles I’ve ever read, and comes highly recommended).

The Formalisation:

One way to describe the idea of symbols comes from logic/maths, in the form of Equivalence Classes (ECs). I will not bore you with the formal definition, and will use the description I was given when the concept was introduced to me at school: Equivalence Classes are labels that can be used as shorthand descriptions of a given collection of qualities. A glass is a glass if it can be used to facilitate drinking in such and such a way. ECs are fantastically powerful cognitive tools, because they allow us to brush aside all the trivial details, and consider only what is relevant. Another way of saying this is that ECs are the building blocks of all models, and models (of one sort or another) are necessary for cognition, reasoning and communication.
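
To make this a little more concrete, here is a minimal, purely illustrative sketch (the objects and the affords_drinking test are invented stand-ins of mine, not something taken from the literature cited above): partitioning many physically distinct objects by the single property we care about, while brushing every other detail aside, is exactly what building an Equivalence Class amounts to.

```python
# Illustrative sketch: an equivalence class as "everything that shares the
# one property we care about", with all other details brushed aside.
# The objects and the affords_drinking() test are invented examples.

from collections import defaultdict

objects = [
    {"name": "wine glass", "material": "glass", "holds_liquid": True,  "open_top": True},
    {"name": "mug",        "material": "clay",  "holds_liquid": True,  "open_top": True},
    {"name": "sieve",      "material": "metal", "holds_liquid": False, "open_top": True},
    {"name": "bottle cap", "material": "metal", "holds_liquid": True,  "open_top": False},
]

def affords_drinking(obj):
    """The only property the 'glass' label cares about."""
    return obj["holds_liquid"] and obj["open_top"]

# Partition the objects: two things are "equivalent" iff the test gives the
# same answer for both. Material, shape, history, etc. simply disappear.
classes = defaultdict(list)
for obj in objects:
    classes[affords_drinking(obj)].append(obj["name"])

print(classes[True])   # ['wine glass', 'mug']   -> the glass-like class
print(classes[False])  # ['sieve', 'bottle cap'] -> everything else
```

The fallacy described below begins the very moment we start treating the glass-like bucket as something that exists out there in the world, rather than as the output of a test we happened to choose.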

The Essentialism Fallacy:

The problem is that ECs are so useful (and so unavoidable, as all reasoning can do is manipulate symbols, not the actual objects) that once they are applied to something, it is terribly easy to forget that the symbol is not the real object. If our particular symbol is a very good one, the properties associated with it may indeed include all the information we need to know about an actual object, and sooner or later we may become completely unaware that by dealing with ECs we are trading off precision in return for handiness. We are applying a useful simplification. This is what I had in mind in my foundation posts, and is another way to explain why reality is, to some extent, unknowable. I cannot stress this point enough, because it is of massive importance for both science and philosophy. You can see some examples of its importance for science in the Edge links at the top of this post: the common denominator is that scientists use ECs to build useful models, but then get carried away and, for example, start thinking that tigers are characterised by some absolute and objective “tigerness”, whereas this “tigerness” doesn’t really exist, and is only the direct result of how our brains work. Philosophers make the same mistake, and of course, following Plato, may even do it in an explicit and systematic way. Instead, while we are in the business of understanding reality, we should always be aware that “understanding reality” is the process of finding, defining and later exploiting useful simplifications; we are not, in any way or form, identifying and isolating the true essence of real objects, and we are emphatically not finding out what is more real than the real thing. Falling for the Essentialism Fallacy generates all sorts of mistakes, and some of them can explain the worst atrocities of human history.

Essentialist horrors:

I shouldn’t really need to spend too many words on this, as Dawkins’ article makes the case convincingly, but in case you don’t wish to read it, I will reiterate here, in my own words. Let’s start with what should be a scientific subject: life. When does life start and finish? At first sight, it seems pretty obvious: in everyday circumstances, we have little trouble in deciding whether a person is dead or alive. The trouble is that life (like all ECs, which exist because they are extremely useful simplifications) doesn’t have an essence: its boundaries are blurred, and it is impossible to define them precisely. If I pull a hair out of my head, chances are that the bulb will still be attached. The bulb is made of living cells, so we could conclude that it’s alive, but is it really? Those cells will certainly die outside my body, and never had a chance of “living” independently. My blood is full of living cells, but who considers a drop of blood alive? What about cultured cells on a Petri dish in a lab? They are most certainly alive, but how exactly are they different from the drop of blood? They aren’t: who said it must be impossible to keep alive the white cells found in a drop of blood on a Petri dish?
The examples above are trivial, and I used them specifically because troublesome moral implications may otherwise obfuscate my point. The basic idea is that once we start looking at the area where life starts and ends, we don’t find clear-cut dividing lines, and therefore we can’t isolate and define life in any objective way. The concept of “life” becomes meaningless when you require an objective definition. It is meaningful only if one accepts (or better: ignores) the impossibility of a precise definition. The Essentialism Fallacy thus finds fertile ground to show all its nastiness. Most people ignore the impossibility of defining life, and will readily use the concept even where it is genuinely uncertain, namely at the boundaries where life begins and ends. This leads to mistakes, horrors and absurdities, where in the name of life we may prolong the agony of quasi-dead bodies (in some cases it’s certainly the equivalent of “saving” a drop of blood by cultivating it on a Petri dish), or arbitrarily decide that a zygote is a “person”, giving a single cell a disproportionate importance, and often forgetting about the fully formed person that carries it. As you can see, these latter cases are not trivial “armchair philosophy” subjects: they are real and they matter to all of us. Still, the general attitude is to approach them in essentialist terms, even if it should be clear to all that doing so is not only conceptually wrong, but also dangerous and counter-productive.
Not convinced? Think of race: we refer to people as Caucasian, Black, Brown and in a thousand different ways, based on their external appearance. But it is a scientific fact that it’s impossible to establish clear boundaries of ethnicity: we are all “mixed” to some extent. In this case the blurred edges are so wide that they probably spread across the whole of the possible ethnicity-space. And yet we even base our policies on such ungrounded classifications. I am not saying that we shouldn’t: in some cases, even this absurd way of defining people could be useful, for better or for worse. The problem is that most of us are happily oblivious to the fact that ethnicity-based definitions are amongst the crudest simplifications possible, and should be treated as such. Instead, many human beings are happy to think of races as defined by one essence or another, a process that sustains (I wouldn’t go so far as to say that it generates) racism and all the horrors that follow from it.
The same dangers apply to most moral considerations: we judge people and events based on broad and objectively undefinable classes, and then make decisions as if these classes were real. This generates an awful lot of harmful mistakes, and (following the usual pattern) we mostly don’t even notice. What I’m trying to say is that the Essentialism Fallacy is not an obscure and irrelevant intellectual trick: it is the source of some of the most consequential errors ever made. And it’s everywhere: it affects all of us, including scholars, scientists and philosophers; what’s worse, it is ubiquitous amongst clerics, politicians and citizens.

The exceptions:

This is where my own thoughts become intriguing (to me, at least). What I have expounded above is a conceptual argument: it itself deals with symbols, and it is worth exploring because it is (hopefully) a useful simplification. Therefore, one prediction is obvious: it can’t be an absolute truth, exceptions must exist. And they do. The most notable is maths, but in general, ideas themselves may not be susceptible to Essentialist errors. Let me clarify: if I am trying to build an understanding of how having a particular idea changes and influences my own thoughts, the subject of my reasoning is already made of ECs. This means that for once my reasoning can, at least theoretically, deal with the real subject itself, and not just a symbol of it. In theory, if I’m thinking about ECs, I’m thinking about the only kind of constructs that really have (and are defined by) an essence. The result is that when thinking about concepts, one can (at least theoretically) establish absolute truths, and this is radically different from “finding useful simplifications”. The typical example is Mathematics: because all of it applies exclusively to abstract concepts, there are plenty of absolute truths to be found. Two plus two does equal four, and there are no exceptions. The same applies to the process of evaluating different and competing theories (or systems of “useful simplifications”): I can, without doubt, conclude that the flat-earth idea is less precise than the approximation of the world as a sphere; both are wrong, but the latter is less wrong. The same goes for creationism and evolution: there can be no doubt that the theory of evolution is a better approximation of the truth than creationism. In both cases, I can claim that the conclusion is objective because it pertains to concepts that are themselves made of equivalence classes.
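
As a tiny illustration of what an “absolute truth about abstract concepts” looks like when pinned down, the arithmetic claim above can even be checked mechanically; the snippet below (written for the Lean proof assistant, and offered purely as an illustration of my own, not something from the sources cited above) is accepted because both sides reduce to the same natural number by definition, with no empirical caveats attached.

```lean
-- "Two plus two equals four" as a statement about abstract objects:
-- `rfl` (reflexivity) closes the proof because both sides compute to
-- the same natural number; no observation of the world is involved.
example : 2 + 2 = 4 := rfl
```

Compare this with the flat-earth versus sphere case: there the verdict (“less wrong”) is still objective, but it concerns the relative fit of two simplifications, not an identity between concepts.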

If this isn’t confusing enough, the consequences of this line of thought are even more puzzling. Consider the glass mentioned above: it is man-made, and is the result of EC-driven thoughts; it was made in a particular form, shape and material because its designer did have an idea of what a glass should be: at least a part of what all drinking glasses are is the result of essentialist reasoning. The first consequence is that the most useful EC that I can apply to the actual glass is indeed closely related to the original idea of what a glass is. The second, and more intriguing, consequence is that giving weight to the “glassness” class, and consequently less importance to the actual object, is in this case less of a fallacy (but not quite objectively right: remember, we are dealing with blurred boundaries). Because the glass was created with the intent of instantiating a token of the conceptual “glass” class, one could say that it does indeed have an essential side. This is true, but understanding manufactured objects exclusively in terms of their (intended) essence is still denying (or brushing aside) their own peculiar physicality. The end result is another blurred edge: designed objects do have, to some variable extent, an essence, and are therefore somewhat less vulnerable to the fallacy, but they are still physical constructs, and as such they also have some qualities that can never be fully described in essentialist terms. The more conceptual an object is (I’m thinking, for example, of software, but books and fiction apply as well), the more susceptible to essentialist analysis it will be.

Conclusions

The consequences of this line of thought are difficult to grasp, and I expect to keep mumbling about them for quite a while. For my own purposes, the following observation is paramount: if you conclude that I drank some water because I was thirsty, you may be right or wrong, but the underlying question is meaningful. On the other hand, if you conclude that the glass fell because of gravity, you are already prey to the Essentialism Fallacy: was it gravity or the curvature of space-time? This latter question is meaningless, because both options are “useful simplifications” and not reality itself. One could and should reformulate the question, and ask “which account of the fall is more useful?” instead. It may be a subtle difference, but it’s very important when one tries to understand reality: the epistemological limits of what can really be understood, and reflexively, a correct understanding of what “understanding reality” really means, are of paramount importance to all scientific endeavours.

Therefore, the general conclusion is that every intellectual effort should proceed in full awareness of the Essentialism Fallacy, adjusting its methods and claims according to the varying degree to which the fallacy applies to its specific domain. Failing to do so may (and frequently does) lead to catastrophic errors, and some of them have indeed contributed to the darkest moments of human history.

13 comments on “Sources of error: Essentialism Fallacy”
  1. I suggest you sweep “essentialism” into the dustbin as it appears to be a useless and unnecessarily troubling concept.

    The concept of a glass to hold water evolved from practical experience. “I can hold a little water on this rock.” “I can hold more water if the rock is concave or indented”. “I can hold a lot of water in this clay pitcher”. Etc. This is where the “concept” of the glass of water came from.

    “Thinking about” the glass of water is not the same as actually “having a” glass of water.

    I’m not sure what you refer to by “equivalence classes”, but all of thinking is relevant only so far as it improves the possibility of obtaining the water we need to quench our thirst.

    You can’t blame “modeling itself” for impractical actions. You can only blame “inappropriate” or “inaccurate” modeling for your lack of water.

    As to when life begins, most “paradoxes” are literally “doublespeak”. The practical question is the moral implication of abortion. Morality is about benefits and harms, ideally achieving the best good and least harm for everyone. Although the embryo is a form of human life, it is not a person, that is, someone capable of experiencing benefit or harm. Therefore it does not become morally relevant until the fetus reaches sufficient maturity to experience itself within its environment.

    Morality must not become inaccessible to everyday human beings. The introduction of complex but extraneous constructs does not help.

  2. “I’ll grab a glass and fill it at the tap, because I know what glasses and taps are for. This requires symbolic reasoning,”

    It does not require “symbolic” reasoning as you describe it. The model is based in experience. Once shown the glass and the tap, and how they work together to obtain water to drink, it becomes unnecessary to define the glass or the water in any symbolic way. It is only when you wish to communicate your need to someone else that a common set of symbols must be employed.

    “ECs are fantastically powerful cognitive tools, because they allow to brush aside all the trivial details, and consider only what is relevant.”

    If I may suggest a far simpler model, there are two primary tools: (a) generalization and (b) discrimination. Generalization seeks to consolidate details into categories. Discrimination seeks to correct generalizations that are inappropriate to the given context.

    “…it is terribly easy to forget that the symbol is not the real object…”

    Really? You’ve tried to drink from the glass you’re thinking of rather than the one in your hand? I’d say it is terribly difficult to confuse the symbol with the real object.

    But I’ve heard this “problem” before. I think it is imaginary. If you have a case, I’d like to examine it.

    “…whereas this “tigerness” doesn’t really exist, and is only the direct result of how our brains work. ”

    To say that “tigerness” does not exist as a material object in the real world is pretty trivial. To say that it is a “result of how our brains work” is also trivial. These things are not disputed by actual people in the real world. Why “philosophers” should dispute them can only be attributed to the philosopher’s attraction to dispute, for dispute’s sake.

    But “tigerness” does exist as a useful concept for dealing with the real tigers we encounter in the real world. The fact that a hungry tiger will eat your children is something useful to know about tigers if you happen to have any living near you. Therefore “tigerness” cannot be dismissed.

    “When does life start and finish?”

    That depends upon the subject (a cell versus a person versus a virus, etc) and the context. Things become much clearer as the context of the question is specified.

    “The concept of “life” becomes meaningless when you require an objective definition.”

    Quite the opposite, lacking an objective definition, any word becomes meaningless (by definition!).

    “…arbitrarily decide that a zygote is a “person” … ”

    And why would anyone “arbitrarily” decide anything? We can explore the meaning of “person” in real and objective terms. And we’ve a history of experience to draw from. We know that a person ceases to exist when the brain and nervous system cease the functions necessary to support that entity. Similarly, we can say that a person does not come into existence until the fetus’s nervous system can support those functions. We can get pretty detailed these days before we have to draw the final lines.

    “Because the glass was created with the intent of instantiating a token of the conceptual “glass” class, one could say that it does indeed have an essential side.”

    The prehistoric man put his face in the water to drink. Then put his hand in the water to drink. When his hand was flat, it did not work as well as when it was cupped. So he found or fashioned other things that were cupped to hold the water. The concept was born when he discovered “cupness” as a generalization for efficient water holding.

    Concepts are born from reality. Whether some concepts became instinctual and hereditary rather than learned is probably up for grabs.

    • Sergio Graziosi says:

      Marvin,
      Apologies for the late reply: my day job drains my intellectual reserves quite regularly, leaving me only the weekend for writing.
      As you may know, I do appreciate when my views are challenged, so thanks for taking the time!
      Unfortunately however, I fear that this time round we will find very little common ground to build on: we clearly come from very distant backgrounds, and we probably have a language barrier to overcome as well. But let’s start from what we agree on:

      Morality must not become inaccessible to everyday human beings. The introduction of complex but extraneous constructs does not help.

      On this, I think we are 100% on the same page, but we clearly disagree on whether the essentialist fallacy is a necessary concept (my view) or an overly-complex and unnecessary sophism (your point, I would guess). As it happens, I feel that I’ve been circling around the problem of essentialism for a long time, and now that I’ve managed to put it down in what I thought were intelligible terms, I find myself more and more convinced that the problem needs to be very well understood, not brushed aside as irrelevant (more on this below).

      Another thing that we agree on is the role of reality in creating concepts:

      If I may suggest a far simpler model, there are two primary tools: (a) generalization and (b) discrimination. Generalization seeks to consolidate details into categories. Discrimination seeks to correct generalizations that are inappropriate to the given context. […]
      The prehistoric man put his face in the water to drink. Then put his hand in the water to drink. When his hand was flat, it did not work as well as when it was cupped. So he found or fashioned other things that were cupped to hold the water. The concept was born when he discovered “cupness” as a generalization for efficient water holding.
      Concepts are born from reality.

      I must have explained myself very poorly, because what you are describing here as “categories” first, and “concepts” later on, are exactly what I’ve called “equivalence classes”. I use the latter term for precision’s sake, as the mathematical definition is perfectly adherent to what I’m trying to explain. What we disagree on, and this may just be a linguistic/semantic issue, is on symbolic reasoning. You write:

      “I’ll grab a glass and fill it at the tap, because I know what glasses and taps are for. This requires symbolic reasoning,”
      It does not require “symbolic” reasoning as you describe it. The model is based in experience. Once shown the glass and the tap, and how they work together to obtain water to drink, it becomes unnecessary to define the glass or the water in any symbolic way. It is only when you wish to communicate your need to someone else that a common set of symbols must be employed.

      And I can only disagree strongly with this statement. Please bear in mind that I’m talking about cognitive processes, and I try to keep my reasoning as close as possible to what (cognitive) neuroscience tells us. There is a very specific reason for my quoting Gary Marcus in the main article: he provides a strong case for arguing that our cognitive abilities rely on symbol manipulation, he covers both the evolutionary and developmental perspectives, and he goes as far as discussing the limits of symbolic manipulation (when compared with computer systems) that seem to apply to humans. But if my word and that of a prominent scientist are not enough (and I do apologise for quoting an article that is behind a paywall), I’ll try to add the weight of good old-fashioned philosophy. The following citation comes from:
      Churchland, P. S. (2013). Touching a nerve: The self as brain. WW Norton & Company (Page 34) (do read it, it’s a very special book!).

      First, consider the brain circuitry organised to generate a neural model of the world outside the brain. Processes in this neural organization model events in roughly the same way that the features of the map model the features of the environment. […]
      Caution: Before getting too cozy with the map analogy, let me be clear about where it breaks down. When I consult a map, there is the map in my hand and, quite separately, there is me. The map in my hand and I are not one. In the case of the brain, there is just the brain. My brain does what brains do; there is no separate thing, me, that reads my brain’s maps.

      Now, I’m quoting Patricia Churchland for two reasons: first, the map (ordinary or in-the-brain) needs symbols to represent the real things. You agree that one creates “concepts” from experience, and I would argue that these concepts are ultimately the symbols (another synonym, in this context, of “equivalence classes”) that the brain manipulates. Second, if our brains use symbols to “do what brains do”, we will, by definition, find it difficult to grasp that: a) what brains do is manipulating symbols, and b) this comes with some limitations (inherent sources of bias). We’ll find it difficult to grasp all this because “there is no separate thing, me, that reads my brain’s maps”.

      When you reach for the glass with the intention to fill it at the tap, your brain is allowing you to do the right thing by putting together the appropriate symbols/concepts and thus directing your action in a useful way. I can’t and will not waver from this vision. Now, my original point is about exploring the limits of cognition itself: because cognition uses symbols, and can’t use anything else, its powers have intrinsic limits. It doesn’t matter how good and useful the symbols that we manipulate are: they are not, and never will be, 100% adherent to reality. And sometimes (at the edges of their discrimination power) the error or approximation that they necessarily introduce will become relevant, and unfortunately, this not uncommonly leads to death, killings, and plenty of human suffering.
      Therefore, it is paramount to accept and recognise the limits of our cognitive tools: this is useful because by definition we can’t predict what sorts of mistakes we’ll make if we ignore the problem and treat all of the real world as a collection of instances of our own concepts. It is guaranteed that doing so will generate mistakes, and therefore it is a good idea to remember that our ideas/concepts/symbols/equivalence classes are what they are: more or less accurate, more or less useful approximations. We can’t avoid using them, and we should seek to maximise their usefulness, but we should also remember how they may lead us astray.

      This whole argument may be difficult to grasp, and regretfully complex, but I can’t ignore it for this reason alone. I haven’t chosen how our brains/minds work, I can only cope with it.

    • Hi Sergio,

      Sorry if I came across too heavy-handed in the first note, but I was running on a 25oz Bud Lime-A-Rita and the brakes were off.

      Regarding the brain, I agree that there is a lot of logic going on there. But symbol manipulation is preceded by a more direct manipulation of sensory memory. We replay actual empirical experiences during problem-solving, imagination, and dreaming. The experience of the water, the hand, the thirst, the satisfaction, etc. tie our symbols to reality in a significant way.

      I’m not well-read. I find it difficult to distinguish essentialism from equivalence classes. All I’m able to discern from your article is that both deal with the qualities by which we define things. They seem to be of the same semantic stuff. Therefore there should be a similar likelihood that either will prove useful or produce errors.

      As I said, I don’t believe there is a problem distinguishing what is happening in the head from what is happening in the real world. The real problem is using concepts appropriately and in the correct context in which they apply.

      Take the “free will vs. determinism” problem, for example. In a law enforcement context, we would not hold someone responsible for what he was forced to do at gun point. Determinism suggests that all personal actions are predetermined by preceding events. Therefore, some people mistakenly conclude (a) that no one can be held responsible for anything, or, (b) that the idea of cause and effect is somehow false. But both conclusions are wrong, of course. And yet philosophers will waste hours on this koan, leaving many young people in confusion.

      So I’m just as frustrated over describing misunderstandings as “confusing concepts with the things they represent in the real world” which I believe is seldom if ever the source of error. Specifically, I am suggesting that the following sentences give no real guidance as to how to actually solve real world problems: (1) “We’ll find it difficult to grasp all this because “there is no separate thing, me, that reads my brain’s maps” or (2) “And sometimes (at the edges of their discrimination power) the error or approximation that they necessarily introduce will become relevant, and unfortunately, this does not uncommonly lead to death, killings, and plenty of human suffering.”

      The fact that we use our limited, biased minds to solve problems cannot be said to be the source of “death, killings, and plenty of human suffering”, because, since it is a constant, it offers no cause or resolution of anything. This “fact” is interesting, but necessarily irrelevant (and “sophist”).

      Since we know it is the only tool we are likely to have to solve problems, we must look for ways to improve our use of it, work together to get a better handle on the real-world issues, and to perform cooperative cross-checking of our assumptions, and other means to achieve better results.

  3. Sergio Graziosi says:

    Marvin,

    Since we know [cognition] is the only tool we are likely to have to solve problems, we must look for ways to improve our use of it, work together to get a better handle on the real-world issues, and to perform cooperative cross-checking of our assumptions, and other means to achieve better results.

    This is exactly my point. I’m trying to show where the only intellectual tools we have available predictably and invariably introduce errors, and that’s the first stage necessary to improve them. If failures are predictable, we can implement pretty powerful countermeasures.
    Examples of these failures abound; I didn’t linger on them because they look obvious to me (thanks again for showing that they are not obvious).
    Someone may think that “He behaves like that because”:
    – he’s white/a nigger/paki/any racial definition you could think of, offensive or not
    – of mental illness
    – he’s a male
    My point is that all these “explanations” have some (big or small) explanatory power. And I mean this in the very empirical sense that they can be used to make (somewhat) useful predictions. Only, all of them rely on categories that don’t exist in the real world: there are no white men, mental illness is just a social convention (on which we can never agree), and male/female is not a clear-cut dichotomy; it allows all sorts of physically, genetically and psychologically ambiguous mixtures.

    Now, the above should be somewhat obvious, but we all have a tendency to forget (the more reliable a category is, the more easily we may forget) that all these categories are just handy approximations. Once you forget this, you can go on and say “homosexuality is an aberration” or “all Jews are evil”, etcetera. I think that you would argue that the way to deal with these terrible mistakes is to refine our categories, and I agree with this. But I’m also looking at the larger picture, and realise that All categories allow for such mistakes. Using better classifications minimises the mistakes they produce, and needs to be done, but at the same time, realising that these classifications are in fact Not Real makes it impossible to sustain that “homosexuality is an aberration” or “all Jews are evil”; it addresses the source of all these problems, not each single problem individually.

    It’s true that all this is of little use out there in everyday life, but it is very important (because it is usually ignored) for both science (in particular biology, medicine, psychology, the social sciences, economics and similar “liquid” sciences) and philosophy. That’s where new knowledge is produced, and both disciplines specifically look for the “hard to tell” situations, where the commonly used categories break down; it’s therefore very important that the people who engage in these activities understand the intrinsic limitations of the “categorising” approach. Unfortunately, many scientists (following the positivist naïvety) and, perhaps more worryingly, many philosophers (following Platonic Idealism) not only overlook the problem, but squarely deny its existence, and this slows down and sometimes spectacularly derails our collective intellectual progress. Not a good thing!

    In terms of our own discussion (thanks again, I do appreciate your input!) the last paragraph offers a way to understand our differences: you seem concerned with the everyday implications, the usefulness on the street. But I am exploring the very limits of knowledge (in my own humble inner world; I write publicly for many reasons, including: 1. it forces me to think hard before writing down anything and 2. I may get hints, corrections and valuable insights from people like you) and therefore my ideal audience (if any) is more specifically intellectual.

    I’m sorry if I’ve been snotty: being a blogger yourself I’m sure you understand. The danger of being drawn in hopeless circular discussions is always present and I didn’t know how to handle this one. I’m glad you helped me out!

    • Another term that may be helpful is “prejudice”. A prejudice is an inappropriate generalization that cannot be supported by the facts. The worst case of prejudice is taking a few examples of bad behavior and presuming they apply to everyone of that race, or gender, or sexual orientation. Discrimination based in prejudice is morally wrong (not to mention unscientific).

      Racial slavery in the U.S. was “justified” by a prejudice that black skinned persons were something less than human, and therefore could be treated like farm animals: bought and sold, forced to work the cotton fields, etc. The fact that African slaves were not Christians also came into play. Some considered slavery to be doing them a favor, which was a false ideation.

      So, yes, I’m totally in agreement with you that prejudice generally leads to social and personal harm and injury on a large scale.

      And yes, I certainly agree that prejudice is an error in thinking, an error that humans are prone to, because they need to categorize things to make dealing with reality a little simpler.

      Science is normally self-correcting. A prejudice in science may be challenged with more science. Other scientists may challenge existing data by doing their own research to support or modify or discard the hypothesis of other scientists.

      A certain humility and a certain skepticism are essential to obtaining the best truth. But I think that’s where you are coming from as well. There’s more agreement than disagreement between us.

  7. I like your post, especially the sentiment that essentialism cannot be discarded and maybe we don’t necessarily want to discard it as much as better understand it. However, I always have something to nitpick, so let me put on my troll mask.

    Although you start talking about essentialism, at some point I feel like you throw away its, well, essence. Most of your discussion seems to be about categorical perception and dichotomy. This is a very well studied subject, and I think most people agree with you that the root of a lot of the need to categorize is in the mind’s way of understanding, or more basic yet, in perception (not necessarily clear that these can be easily distinguished; see what I did there?). In particular, I usually hear characterization attributed (at least by philosophers) to the self-other dichotomy. This also suggests routes to transcend categorical perception, like some kinds of Buddhism.

    The reason I say that categorization is not the same as essentialism is because I feel like essentialism is categorization where there is a simple defining property that is internal to the objects and in some way eternal. I also feel like essentialism requires us to imagine a prototypical representative element of the class that in some way embodies all the essences and nothing else (this is what essentialism meant for Plato and Aristotle, at least). Categorical perception more generally, however, doesn’t require this. In particular (to go to the domain of math), I can create a category of shapes that are “squares or circles” (again, unfortunately I am building categories on top of categories); this category does not have an easy essentialist interpretation. Since there is no such thing as a square circle, there is no perfect form which completely captures the essence of this category. In this way, I think general categorization is easier to reconcile with fuzziness.

    I agree with you in the belief that categorical perception is probably impossible to transcend, even if some gurus claim otherwise. As such, I will not dig too deep here for fear of muddle-headedness or emptiness.

    Finally, now that we are talking about categorical perception, I want to discuss the notion of natural categories, because I believe such things do exist. For me, categorization is fundamentally linked to the “distance” from which we are viewing a task. For many continuous processes, there are quick phase transitions, which from far away look like discrete steps; this makes it natural (at least for me) to separate the two phases into two categories. Of course, if we zoom in (and try to study the phase-transition point) then these categories become useless. I think that the essentialism fallacy is not so much that we categorize things (as you describe) but that we forget that the best (or most natural) categories to use are problem dependent.

  8. Sergio Graziosi says:

    Thanks Artem!
    Please keep your trolling hat on: I enjoy being challenged. You are right, I do somewhat conflate essentialism and categorisation. I find it difficult to separate the two because there can’t be any essentialism without categorisation, and striving to generate better, more precise/useful categories certainly feels like “trying to distil the essential features of something” and thus nudges us toward essentialism.

    You forced me to re-read myself, and I’m glad you did, for I couldn’t find claims that I’m no longer prepared to defend.
    So, in particular:

    I feel like essentialism is categorization where there is a simple defining property that is internal to the objects and in some way eternal.

    I think you are over-simplifying essentialism. The defining property does not need to be simple or unique: if we define the essence of something as the combination of multiple essential qualities (which could themselves be defined by other essences), we can still fall for the essentialism fallacy; it requires more brainpower, but it still happens.

    I also feel like essentialism requires us to imagine a prototypical representative element of the class that in some way embodies all the essences and nothing else

    How about “essentialism requires us to imagine that the prototypical representative element of the class is somehow more (or equally) real than the instantiated object, and it somehow transcends both the object and the observer”?
    What I’m claiming is that the essentialism fallacy consists of thinking that categories exist in absolute terms, and are not merely an artefact of symbolic reasoning: if you think that categories existed before cognition I would call you an essentialist.
    We can still think of reality before cognition and apply categories to what existed back then, in fact, that’s the only way we can think of anything. This however does not in any way or form mean that therefore prototypical representative elements existed before some substrate for representation came into existence.

    I think that the essentialism fallacy is not so much that we categorize things (as you describe) but that we forget that the best (or most natural) categories to use are problem dependent.

    This may be just my inability to communicate my thoughts/intuitions clearly: do my clarifications above explain it better?
    Of course natural categories exist, in our heads!

    • The concept does “exist” within the brain separate from all the examples of the concept out there in the real world. If you melt plastic and pour it into a mold to make a chair, then you have “something to sit on”, which is a definition of a chair. If you melt the same plastic and pour it into the mold of a table, then you have “something to set the food on”.

      The infant caveman learns that he needs “something to sit on” when he tires of standing and falls on his butt. He may notice some adults sitting on rocks. So he finds a rock his size and sits on it, turning it into “something to sit on”.

      The monkey in the tree faces the same problem as the human on the ground. If his arms tire, he gives them a rest by sitting on a branch. The concept of “something to sit on” does not require speech at all.

      But humans have speech and give words to things. Today, “chair” means “something to sit on that is more comfortable than a rock”.

      That a person needs “something to sit on” is part of being a person. It is not part of the rock. Nevertheless, the specific height and smoothness of the rock may be pre-requisites for it being “something to sit on”, and height and smoothness are qualities of the rock, because they remain constant whether we choose to sit on it, or choose to build a wall with it.
