What the hell is “Information” anyway?

DNA, ideas, knowledge, books, computations, schedules, job descriptions, money(!), bank accounts, music, culture, beliefs and every last thing that has some importance in our lives has something to do with “information”. Still, all my attempts to find a no-nonsense, unique definition of information that can be directly applied to all of the above have failed. Result: I will try to make up my own definition. It ain’t easy, so I’ll start my quest by doing two things:
1. Use this post to put some order in my thoughts.
2. Call for help: hopefully, this post may be thought-provoking enough to get some smarter and more knowledgeable people to challenge it. If this happens, I expect to learn valuable lessons.

Hence, the all-important disclaimer: what follows is work in progress. Unlike most of my posts, which usually aim to reach some definite (but never definitive) conclusion, this current effort is intended to be fluid: I expect my conclusions to change as I learn more.

Why now? Two reasons. First of all, this is long overdue. I’ve been talking about the nature of knowledge for quite some time, but clearly, knowledge is somehow linked to (or based on) information, so I can’t possibly consider my understanding sufficiently solid without providing (or adopting) a convincing definition of information, and without clarifying how it relates to knowledge. In other words, I’ve been building on top of shaky foundations, and I would like to rectify that.
Second reason: my vague ideas on the subject have been simmering for ages, and this thought-provoking post by John S. Wilkins provided the catalytic powers needed to start solidifying them. If you’re vaguely interested in my writings, Wilkins’ post is a must-read: please click the link and make sure to read the comments; that’s where most of the good stuff is.

The problem

Information is a slippery concept. It is usually understood and/or defined on the basis of Shannon’s Information Theory, but this has never satisfied me because, to my eyes, Shannon’s theory is a highly abstract theory of signal transmission: something that is about moving information around, and not really useful for defining what information is. What I am trying to find is a precise conceptual definition of information, one that can be applied to all the domains mentioned in the first sentence of this post, and that would therefore help to discern what they have in common. This is because I have a strong intuition that they do have something in common, and that understanding what it is will be immensely useful.
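For concreteness, what Shannon’s theory does quantify is the statistical unpredictability of a signal, computed purely from symbol frequencies and silent about meaning. A minimal sketch (plain Python; the function name is mine):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information content in bits per symbol: a property of
    symbol frequencies only, saying nothing about what the message means."""
    freq = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in freq.values())

print(shannon_entropy("aaaa"))  # perfectly predictable: zero bits per symbol
print(shannon_entropy("abab"))  # two equiprobable symbols: one bit per symbol
```

Note that “aaaa” and any other perfectly repetitive message score zero: the measure tracks surprise, not content.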


What I’m after is a useful concept, not a universal truth. I am trying to find a definition that will help make sense of the world; as such, it will be symbolic, and I’ve clarified already that I don’t expect any symbol to have perfect equivalence with anything in the real world. Still, the definitions of information that I was able to find don’t satisfy me because they can only be applied to some domains (again, Wilkins’ post and discussion are perfect illustrations of this point), and I’m actively trying to find a better definition, where “better” means more universal: a definition that applies to as many domains as possible.

Being an empiricist, I will start by looking at physics, and try to build and generalise my definition from there, moving into more “abstract” domains in small incremental steps.

Step one: information in fundamental Physics

Luckily, I don’t need to invent much on this level. When you look for useful intersections between the laws of physics and Information Theory, you’ll find a neat and beautiful starting point in Landauer’s principle. In layman’s terms, the laws of thermodynamics tell us that whenever you do something, some of the energy you use will be irreversibly dispersed by increasing the overall entropy of the system. Entropy is closely related to (almost exactly coincides with) disorder, and in the kind of world we inhabit it usually takes the form of kinetic energy distributed among molecules, a quantity also known as heat. Information Theory can be boiled down to bits, the minimal, atomic amount of “information” (with scare quotes, as it’s Shannon’s kind of information) that can be stored or transmitted. Storing or transmitting a single bit is “doing something”, right? Therefore, thermodynamics tells us that doing it will always have a minimum energetic cost, dispersed in the form of increased entropy. To calculate it, one needs to observe that storing or transmitting one single bit requires reducing the level of disorder (or uncertainty) of the medium that contains the signal (where our single bit is located) to virtually zero: what we are doing is in fact the tiniest possible entropy pump, as it removes disorder/unpredictability from whatever we are using to host our bit. The result is that one can use the laws of thermodynamics to calculate the minimal energetic cost (the bare minimum, for a perfectly efficient system) of manipulating one single bit; strictly speaking, Landauer’s principle fixes this cost for erasing (resetting) a bit. I will spare us the maths, and observe instead that Landauer’s principle provides a first neat and direct link between physics and (Shannon’s sort of) information. Not bad at all. But not quite enough to open the champagne.
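The maths I’m sparing is in fact a one-liner: Landauer’s principle puts the minimum dissipation for erasing one bit at kT·ln 2. A quick sketch, assuming room temperature:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # an assumed room temperature, in kelvin

# Landauer limit: minimum energy dissipated per bit erased
landauer_limit = k_B * T * math.log(2)

print(f"{landauer_limit:.3e} J per bit")  # about 2.9e-21 joules
```

A vanishingly small cost per bit, but strictly greater than zero: that non-zero floor is what anchors “a bit” to physics.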

We now have a link between bits (whatever they are) and the physical world, but no idea of what to do with these bits. We know that, whatever the medium, we can store or transmit these bits at an energetic cost, and that’s all. One problem remains open: the interesting property of signal transmission (storage can be understood as signal transmission over time, so an “and/or storage” qualifier would be redundant) is that it does nothing without a receiver. And even if we can define such a receiver in general-enough terms, we still have to acknowledge that the receiver has an additional need: it has to “know” how to interpret the bits that it receives. In other words, signal transmission needs a system to “decode” the incoming information, otherwise it will be a meaningless series of zeroes and ones. Of course, we may also need a sender (whatever generates the signal) and a system to encode the signal into bits, to start the whole process. Shannon’s theory does a very good job of defining these additional elements (the sender, the receiver, and one or two codes used to encode and decode the signal), but this doesn’t quite make me happy because, at least in my understanding, it does not provide a generic/universal way to describe such elements in terms of fundamental physical properties. Without that, I see no reason to believe that we’ll be able to generate one single model (a useful set of concepts) that can be applied across the wide range of domains where information is typically invoked (see the non-comprehensive list at the top).

Step two: reducing “decoding and receiver” into bits

(Pun intended, with apologies!)
Before proceeding, I need to better explain what I think is still missing. If we take the complex domains I’ve listed above (contexts such as cultural transmission, financial transactions, musical performances, literature, and the lot), they can all be easily described in terms of (optional) sender and encoding (not all signals are deliberately created as such!), transmission medium, decoding, and receiver. Therefore, Shannon’s theory is good enough for most “complex” domains, but it still fails to be fully general, because it breaks down when one moves in the other direction: if one tries to decrease the level of abstraction and map Shannon’s concepts onto straightforward physics, the only reduction that is readily available applies to the signal (via Landauer’s principle), while the other elements become a puzzle instead. Not good: some more thinking is needed.

I will continue from what we’ve established: the signal itself is contained in a physical object of any kind that can be modified so that it “stores” one or more bits. A simple switch (even if connected to nothing) can store one bit: flicking it one way or the other is reversible, and I could agree a convention with anybody, establish a code, and thus use any switch to transmit a “yes/no” message. Waveforms, magnetic fields, transistor states and capacitor charges are all media that can be reversibly influenced to mean one thing or the other. But what do they have in common? They all have some structural properties that can (at an unavoidable energetic cost) be changed between two or more states. No mystery here; what remains unclear is what the minimal common features shared by all sorts of decoding and all sorts of receivers are.
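The switch example can be made fully concrete in a few lines: the medium is a boolean state, and the “convention agreed with anybody” is just a pair of lookup tables. Everything here is illustrative, not any real protocol:

```python
# An agreed convention (the "code"): maps meanings to switch positions.
ENCODE = {"yes": True, "no": False}
DECODE = {True: "yes", False: "no"}  # the receiver's copy of the code

def transmit(message: str) -> bool:
    """Sender: set the switch according to the shared convention."""
    return ENCODE[message]

def receive(switch_state: bool) -> str:
    """Receiver: without DECODE, the state is just a meaningless bit."""
    return DECODE[switch_state]

print(receive(transmit("yes")))  # prints: yes
```

The physical switch does all the storing; all the “meaning” lives in the two tables, which is exactly the part that still needs a physical account.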

Note that, for the time being, sender and encoding can be left aside for two reasons. First, not all signals have a deliberate sender that encodes them. When something makes a noise by falling, the sound I may hear is a signal, but sending and encoding are, at best, accidental. Second, one could probably work backwards: if we define decoding and receiving in clear-enough terms, it’s possible that we’ll also learn something about encoding and sending.

Once more, the open question is: how do I reduce decoding and receiver to something that can be modelled in terms of fundamental physics? Here is how: the signal is a (usually reversible) property of a given medium, a structure, and a structure that will have some effect (of any sort) on the receiver.

Not helpful? Maybe not yet.

So far we have shifted the mystery into the expressions “some effect, of any sort” and “structure”. But hey, we only need one more step: linking all of the above with the concept of catalyst. The definition of catalyst, from the Oxford Dictionary, is:

Catalyst – A substance that increases the rate of a chemical reaction without itself undergoing any permanent chemical change: chlorine acts as a catalyst promoting the breakdown of ozone.

In practice, imagine that when you dissolve A and B in a glass of water (where G is the whole system: glass with water, A and B), nothing else happens: A and B remain dissolved in water. You then add a catalyst C, and this enables a chemical reaction: A and B, in the presence of C, combine to form substance Q (and some thermal energy); in the process, your element C remains unmodified. What happened here is that the presence of C (equivalent to signal delivery), via a catalytic mechanism (equivalent to decoding), has produced an effect on G, creating Q.

Distilled and generalised, the above becomes: signals (or transmitted information) are (usually reversible) structures that have a catalytic effect. The code is the catalytic mechanism; the receiver is the system on which the catalytic effect occurs.
But one does not need to stop here: in fact, the “reversible” qualifier is unnecessary. What remains is that information is a structure that has some effects (even when the structure does not survive such effects, and is therefore not a perfect catalyst).
If you prefer, at the ultimate abstraction level, the definition becomes: information is a structure that makes a difference.

Phew, so what? And more importantly: does this generalisation really apply? I think I’d be able to apply it to all the domains I’ve mentioned above (and more), but for brevity’s sake, I won’t. Instead, I would like you (the reader) to challenge me by proposing an example where my definition doesn’t apply (if you can!). [Or, alternatively, tell me why this isn’t news at all]

Before concluding, I’ll address the “so what?” question instead.
The explicit aim of all this is to generate some useful conceptualisation or, in other words, to produce some way of understanding reality that allows us to easily discern patterns that can then be recruited to generate predictions. In this context, what I’m proposing is immediately useful, as it makes an otherwise problematic generalisation straightforward. The seminal intuition is a famous one: it is the hope that it must be possible to extend Darwinism to other fields, or the seemingly reasonable expectation that natural selection operates on all forms of information, not “just” genetic material. In terms of what I’ve outlined so far, this intuition becomes self-explanatory.

We start with a structure that makes a difference: something that has measurable effects on its environment, and not just on the basis of its basic ingredients; the effects occur specifically because of how these ingredients (or components) are combined (assembled) to form a specific structure. The transformation that said structure generates can have three possible effects on the structure itself: it can have no effect whatsoever (unlikely but possible; if it happens, we talk of perfect catalysis), or it could make the structure either more or less likely to persist. In some rare cases, the effect of the structure will indeed be to make it more likely that the structure (let’s call it sA) will remain intact for longer; therefore, the probability of finding said structure at time N will be higher when sA is present at time zero. The consequence is that, assuming new instances of sA can appear out of pure chance, the probability of finding an instance of sA increases with time. A structure that has the effect of making itself durable will become more and more frequent, until this effect is reversed. This is the basis of the accumulation of information that we can observe on planet Earth. Furthermore, a structure (sB) that increases the probability of instantiating another “copy” of sB will favour the creation of more and more instances of sB. This is classic natural selection, of the “selfish gene” (modern synthesis) sort. But if we accept the idea that all information is a structure that makes a difference, then both mechanisms will apply to ideas, knowledge, books, computations, websites, and much, much more.
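The sA argument is easy to check numerically. In the sketch below (all rates invented for illustration), two structure types appear by pure chance at the same rate, but sA halves its own per-instance decay probability, i.e. it is “a structure that has the effect of making itself durable”; its average frequency ends up roughly double that of sB.

```python
import random

random.seed(1)  # fixed seed, so the run is reproducible

def simulate(steps=20_000, spawn=0.05, decay=0.01, self_protect=0.5):
    """Birth-death toy model: both types spawn by chance at the same
    rate; sA's only 'effect' is to dampen its own decay probability."""
    counts = {"sA": 0, "sB": 0}
    totals = {"sA": 0, "sB": 0}
    for _ in range(steps):
        for kind in counts:
            if random.random() < spawn:  # spontaneous appearance
                counts[kind] += 1
            p = decay * (self_protect if kind == "sA" else 1.0)
            # each existing instance independently decays with probability p
            counts[kind] -= sum(random.random() < p
                                for _ in range(counts[kind]))
            totals[kind] += counts[kind]
    return {k: totals[k] / steps for k in totals}

averages = simulate()
print(averages)  # sA settles around twice the average frequency of sB
```

No copying is involved here at all: mere differential persistence is enough to make sA accumulate, which is the weakest form of the selection argument in the text.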

In other words, the definition of information offered above, based on fundamental physical principles, makes it absolutely clear why and how natural selection operates universally: it applies to everything we intuitively recognise as “information-based”. It is therefore reasonable to expect that there will be some rules that actually do apply to all such fields. I am not saying that natural selection operates on the design of internal combustion engines in exactly the same way as it operates on genes; all I am saying is that it should be possible to discern and distil some common patterns, some general rules that, albeit instantiated in specific ways, apply to widely different domains. If true, that’s certainly useful enough for me. If false, I’ll be glad if you can show me why.

Posted in Evolution, Philosophy, Science
18 comments on “What the hell is “Information” anyway?”
  1. ihtio says:

    A very lovely post!

    I have been thinking about the term “information” for some time now. I am now much more inclined to seeing “information” as a useless concept, when generalized. For each one domain (physics, psychology, biology, etc.) we can introduce a specific definition of “information”. However these definitions are very different. Problems arise when people try to integrate two or more domains (computer science, evolution, psychology) with the linking concept being “information”. Then people understand such attempts through the lenses of their scientific disciplines, and often no one knows exactly what the discussion is about.

    I applaud your efforts, regardless of the effects, as I see such endeavors are of great importance to all of science.

    I will take your definition of information as “information is a structure that has some effects”.

    Problems that I see with such an understanding are:
    – “information” has been equated with “cause”. We can easily say that cause is a structure that has some effects.
    – me hitting you in the face surely makes a “difference”, the structure “my fist” can be regarded as information according to your definition (damage to your face is the effect of me hitting you), but how is this useful in any way?
    – information defined in this way doesn’t push us forward into thinking about what about this information is, or what does it inform us about?
    – how could we differentiate between information and non-information? Is this even possible? If not, then “information” means “all that exists”,
    – what would processing of information mean?
    – can information be created? Is information always conserved (like energy)? We can create and destroy structures, so it seems that information is also impermanent.
    – a structure can have various effects, depending on the receiver. How can we think about information in this way?

    I would very much like to continue this discussion. In fact, I’m thinking hard about writing a blog post about this very issue. However, instead of trying to devise YAIC (yet another information concept), I would like to critique the very idea of “information” and the usefulness of the term.

  2. Sergio Graziosi says:

    Thank you Ihtio, your reply is exactly the sort of challenge I was hoping to receive. I’ll try to provide some answers, although I doubt I’ll be able to systematically address all your questions.
    I’ll start from the very general: what I’m trying to do is to reduce Shannon’s account to something more fundamental, or, if you prefer, less abstract. As explained in the post, the puzzling (for me) parts of classic information theory are about code and receiver (at least, if we accept my reasons to leave sender and encoding aside).
    What I’ve done is:
    – Taken the definition that is usually considered to belong to Bateson “a difference that makes a difference”, and adapted it in a way that allows to untangle the issues I’m considering. [See this essay (by Aaron Sloman) for a brief description of the problems with the common and probably incorrect understanding of Bateson’s definition, it’s also one reason why I’ve decided to leave Bateson out of my discussion]
    – My modification is necessary, because the idea of “structure” is specific enough to link basic thermodynamics to the ideas of receiver and decoding.
    The result is that we can keep all the useful parts of standard information theory, and have added some guidance on how to match the concepts of “signal”, “receiver” and “decoding” to elements of whichever system we may be studying. Now, this operation may or may not be useful, interesting or appropriate for plenty of situations, and crucially, it doesn’t add anything at all to subjects that could be described in Shannon’s terms already (pretty much all domains where information is already considered important). That’s a way of saying that my little addition is just that: small. But it does provide a straightforward way to describe why Darwinistic explanations seem to make sense in a very wide range of domains, and not only in DNA-based biology/genetics.

    In this sense, I am directly challenging your own hunch, namely that the idea of information is domain-specific and that we shouldn’t even try to generalise it.
    So, I’ll dive back into the detail:

    – “information” has been equated with “cause”. We can easily say that cause is a structure that has some effects.
    – me hitting you in the face surely makes a “difference”, the structure “my fist” can be regarded as information according to your definition (damage to your face is the effect of me hitting you), but how is this useful in any way?

    Well, no. There are two tricky bits hidden in this objection (or these objections). First of all, the idea of “cause” is a concept that bugs me, and I struggle to define it in a way that is useful and not counter-productive. Almost nothing has a single cause, so I tend to think in terms of mechanisms instead of cause-effect, precisely because it allows me to stop being inclined to identify “the (single?) cause” of something. In your own example, you can say that the damage you did to my face was caused by the punch, the structure of your hand (of your whole body, gravity, etc.), but also by the fact that I didn’t expect you to punch me (I don’t even know you! 😉 ), that you are not pathetically weak, etc. So, in this case, “cause” can be used to describe pretty much everything that preceded and somehow influenced your punching, and yes, it isn’t very useful.
    Second: Sloman’s essay (link above) beautifully explains the problem you are referring to, but starting with the original definition “a difference that makes a difference”. In this case, yes, you end up with something that is general enough to be meaningless and unhelpful. In my case, I’ve specified that the first “difference” is a “structure”. Mixing up all of your atoms will prevent you from punching me, right? The important point is that we can describe our hypothetical encounter in terms of information-transfer, if we wish.
    Which leads me to one thing we agree on: in such a case, doing so is not “useful in any way”. I agree, but don’t see this conclusion as a problem: I have some definitions that allow us to interpret our observations and model them in a particular way. Fine: depending on what we are doing, this may be useful or not, I’ve provided at least one reason to explain why the particular definitions I’ve adopted can be useful (generalising Darwinism), and finding other situations where they are not has no impact on this. Sure, you can find plenty of contexts where modelling what happens in terms of information (or information transfer) has little or no explanatory power, but you can also find situations where it is a useful approach. One doesn’t exclude the other.

    – information defined in this way doesn’t push us forward into thinking about what about this information is, or what does it inform us about?

    Aah! Are you a mind reader? This is amongst the various problems that I plan to tackle. How do you move from information, signal transmission and signal processing to semantics? Short answer: I don’t know! Plenty of (illustrious) people think it’s impossible, but I do hope they are wrong. My post above does not try to do so, though: it moves in the opposite direction, towards fundamental physics and away from abstract concepts and semantics. But I do hope it will help: to me it helps because it allows me to consider Shannon’s theory as a legitimate tool, having shown that it employs concepts that I could, if in need, track down to physical entities. As explained in the introduction: I’m trying to verify whether my foundations are solid. This is necessary for me to have the confidence to try addressing semantics, but that is a long route, and the key points will be discussed in proper papers, if I manage to meet the expectations of my peer reviewers.

    – what would processing of information mean?

    Look at standard information theory, computer science, etc. Once again: I’ve tried to move from conceptual to physical, not from conceptual to even more conceptual. I haven’t changed anything on the standard definitions of information processing, etc. Turing machines are safe ;-).

    – can information be created? Is information always conserved (like energy)? We can create and destroy structures, so it seems that information is also impermanent.

    Indeed, we agree: yes, information can be created, and no, it’s not necessarily conserved.

    – a structure can have various effects, depending on the receiver. How can we think about information in this way?

    Yes, exactly as information will have different effects, and even different meanings, depending on who receives it. It seems obvious to me, even a confirmation that I’m not talking nonsense, so I think it’s likely that I’m not getting your point here. Sorry!

    I’m proposing a way to further generalise (expand the possible domains of applicability of) Shannon’s information theory, and have provided at least one reason why this may be useful. You suggest ditching the idea/concept of information altogether (I’ve considered and discarded this possibility myself). I did what I could to clarify and respond to your objections: did it work? If not, why not?

    • ihtio says:

      Thanks for the comprehensive reply, Sergio. Your clarifications are extremely helpful. My reply will be much shorter as I think I grasped your ideas more fully.

      It certainly is interesting to see work that tries to explain how information can evolve. Maybe memetics could use your contribution.

      Information / cause is a vague concept.
      I agree that the term “cause” can be confusing. We can’t always precisely identify the cause of some event. When we define “information” as “a structure that has some effects”, we find ourselves in a similar situation. What structure (or what relevant part of a larger structure) should be considered in a given situation? If I accidentally drop an apple, was the relevant structure the Earth or the apple, or maybe the structure of ? If you say that the term “cause” is confusing, then I must point out that I feel similarly about information defined as a causal structure.

      Information has effects.
      When a computer program processes some simple information (bits and bytes), we tend to think that the program is some sort of an “agent”, an actor that acts on data / information. A sophisticated computer program can react differently when provided the same information, or can completely ignore it. When information is defined as a structure (in this case of bytes) that has some effects, we are asked to see the “acting force” in the information itself. This blurs the distinction between information / data and programs.

      Our intuitions about information.
      “Information” has many connotations. We think of it as related to data, meaning, semantics, language, signals, communication. If we say that information is a structure that has some effects, then we have to say that an apple is a portion of information that we transfer into our guts, and we probably even process this information. This is very counter-intuitive. I don’t think we should use a term that has all these connotations in this new, maybe peculiar way. Why not devise a new term, if it is so different?

      Usefulness of a general concept of “information”.
      When I said that I don’t see how such a general concept could be useful, I meant something like this: you say that your conceptualization is useful in some contexts and may not be so useful in others. Fine. It is perfectly understandable. My objection should rather be that your conceptualization of information is too general and forces us to view too many things as information. Every physical object is information; many collections of objects are information. Your conceptualization forces upon us a perspective through which we have to tackle transfer and processing of various ordinary objects. Scientific concepts are useful because they restrict the world, or highlight specific aspects of the world, so we can differentiate between these aspects and all-that-is-not-that. When everything is information, we no longer have a useful distinction between information and everything-that-is-not-information.
      To put it simply: a concept that is too general can be vague, and because too many things fall under the definition, it loses its power of describing the world. For example, we have good reasons to say that everything has some effects. Why not say that “a thing is a structure that has some effects”?

      Information and semantics.
      It is one of the most interesting problems to integrate information, physical structure and semantics, meaning. My personal ideas regarding this revolve around seeing the effects of structures as meaning (using your terminology). That is, a structure is information and the effect of this structure is the meaning.

      Information is about something.
      We often talk about information about something, for example information about the temperature in a room. I don’t know if this problem can be removed from semantics. However, I can hardly see how a structure can be about something.

      Structures don’t have effects.
      Events have effects. Water doesn’t have evaporation as an effect, the event of boiling the water has such an effect. My hand doesn’t have the damage to your face as an effect, the event of a close encounter between my fist and your face has this damage as an effect. Sunlight doesn’t have photosynthesis as an effect, but the event of shining on a living leaf has photosynthesis as an effect.

      I think I would have to see explanations of various information-related concepts, mechanisms and processes from various fields (physics, cognitive science, computer science, etc.) done with your definition to be able to appreciate your ideas more fully and to understand them better. As of now, I am not convinced that such a general definition of information is possible and/or useful.

  3. Sergio Graziosi says:

    Ihtio, quick answer this time.
    I think most of your objections are sensible, but nevertheless should not apply. Yes, what I’m saying is so general that one can question its possible application. That is true. But still, most of your objections can be answered by using classic information theory; all I’m doing is helping to take the last deductive step: from IT to the physical.
    Also: I agree that saying “this is what information is” is probably too strong, but I don’t feel bold enough to propose a new word, something like “prenfromation = a structure that makes a difference”, and I don’t want to, because in classic IT we talk about signal transmission, and I’ve never been satisfied with “information is about signal transmission” types of explanations; they are just too vague.

    In this sense, I see where your scepticism comes from (and I agree with you remaining sceptical, I’d do the same), and sympathise (I really do). I’ll try to provide some practical examples of how to map my own little addendum, like in:
    (domain) -> (some theory of that domain) -> IT -> my very small additional step; but
    I can’t promise when, as mental energy is the one scarce resource here, sorry!

    So yes, your first apple/earth objection is sensible, but why does it apply? What can we learn by thinking about it in terms of information? We can, but why should we? I am proposing to work in the other direction: when we are already talking of information, then I suggest that it will be possible to find the relevant structure that makes a difference.
    As I’ve said, this post was meant to be preliminary, so with some luck, one day I’ll find something useful to add…
    Thanks for the discussion, you really did help to make that “one day” more likely to happen.

  4. Hi there! I’ve finally read your post, here are a couple of thoughts:

    1) Your core intuition is in accordance with what I wrote in one of my comments on Artem’s post about pseudoscience. There I’ve proposed the discerning ability as the basis of all epistemology.

    2) “information is a structure that makes a difference.” This definition misses an important point, and doesn’t actually distinguish information from a broader class of phenomena that are selected in the selfish gene way. The key concept that is lacking here, from my point of view, is the representational nature of all information. I am going to point you towards this fairly recent article: http://cogprints.org/7961/1/Vigo_Information_Sciences.pdf that introduces the representational approach which might help, it seems to me, develop your definition (you would have to put it into your own words). The approach is summed up actually in such a straightforward place as the wikipedia article about information: http://en.wikipedia.org/wiki/Information#As_representation_and_complexity ))

    Consider this:
    Let’s imagine a simple abstract physical system. Let the system be in state 1.0 at t_0 (x.y means state x, caused by previous state y); at t_1 it became 1.1; at t_2 its state changed from 1.1 to 2.1; on the next step, t_3, it became 2.2. In this system state 2 can be caused by either state 1 or 2 and stores information about it. At the same time it is not unreasonable that 2.1 and 2.2 are different states, because there is no way to store information about a cause immaterially. But still, to the left of the dot is the state itself, and to the right is the part of the state which is at the same time information (incomplete, by the way, because it doesn’t store the previous cause), because it represents another state.

    • ihtio says:

      Alexander Yartsev,

      You write that information should be representational in nature. What, then, do “0100” and “0101” represent? In my opinion, they don’t represent anything; at least not if you don’t have a decoder that could somehow interpret these strings of bits.

      The rest of your post only confirms that every person thinks of “information” in his/her own way, rendering the concept not really practical in general. It is useful only when narrowed down to a particular domain, to “count” or “measure” something, but “information” is certainly not useful as a general concept. It’s more of a framework, an abstraction that has to be “instantiated” before use.

    • Sergio Graziosi says:

      Hi Alexander, thanks for jumping in; I’m glad you did, and I should apologise for my slow reply.

      I think we have a problem with a vocabulary that is too restricted: ihtio is probably right when suggesting (to me) that an additional concept should be introduced, so as to distinguish meaning from information.
      When talking about meaning, (philosophical) intentionality, or, as you say, the “representational nature of all information”, we are excluding the purely mechanical: all the signal-transformation cascades that are described in molecular/cellular biology and so forth (I think we agree). On the other hand, you are relying on the mysterious ability of the receiver to understand what the incoming information is about, and thus opening up to thorny issues such as the frame problem, intrinsic/extrinsic intentionality and whatnot. Vigo’s proposal looks quite interesting, but I’m not sure it helps overcome the philosophical puzzles.

      Back to my own idea: I don’t see how your point 2) applies. All I’m saying is that “a structure that makes a difference” is always present whenever we intuitively recognise that information is a concept that applies to the system we are looking at. This system could be strictly physical (intracellular signal transduction, synaptic transmission, genetic transmission, electronic circuitry, etc.) or it could be “mental”, directly or indirectly. In the second case, the representational nature of information enters the picture (and introduces plenty of unsolved riddles), but still, I am arguing that you can always find a “structure that makes the difference”.

      Bridging the gap between the strictly physical and the “mental”/intentional/semantic layers isn’t something we can do here (but hey, I am trying, so stay tuned!).

      I also have the bad feeling that I may be missing your point, so do correct me if I’m responding to a straw man…

  5. Thanks for answering. I have one question and another paper for you to consider:

    1) If not seen as representational or in terms of intentionality, how would information differ from ordinary causality in your thinking? Why can’t we call a “structure that makes the difference” a cause? In my view, signalling cascades are seen by us as information transfer when put into a certain information-theoretical framework, but in actuality they are ordinary cause-effect processes. Seeing cell signalling as information processing simply proves beneficial for our understanding (modelling, simulation, etc.).

    2) Are you aware of IIT, and have you read the Tononi et al. 2014 article that introduces his approach to conscious information processing? I find this approach very illuminating. What do you think about it from your perspective?

    This topic in general is very interesting. As you can see, I don’t believe that what you are trying to achieve is possible, but it’s very engaging to think about these issues. All in all, I believe information is mental, and physical only as much as the mental is physical (IIT fits here very well).

    • Sergio Graziosi says:

      Your question 1) is getting closer to the bone: as ihtio noted, practically every cause has a “structural component”, so yes, your objection does apply, and I don’t have a very good answer, only a tentative one.

      The idea is: take three RNA bases in a row, UAG. That’s a stop codon: if a ribosome reaches it within the right reading frame, translation stops there. Shuffle exactly the same molecules around, say into GAU, and protein synthesis continues, adding an aspartic acid. The cause of the difference is the structure; therefore the structure (i.e. the order of the RNA bases) carries information.
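      A quick Python illustration of the point, using only a three-entry excerpt of the standard RNA codon table (the full table has 64 entries):

      ```python
      # Same three bases, different order, dramatically different effect.
      CODON_TABLE = {"UAG": "STOP", "GAU": "Asp", "AUG": "Met"}  # tiny excerpt

      def translate(codon):
          """Look a single RNA codon up in the (partial) table."""
          return CODON_TABLE.get(codon, "unknown")

      print(translate("UAG"))  # STOP -- translation halts here
      print(translate("GAU"))  # Asp -- an aspartic acid is added instead
      ```

      Nothing about the molecules themselves changes between the two calls; only their order does, and the order alone decides the outcome.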
      Another example: shower me with 100 bricks from high enough and you’ll kill me. If you invert the order in which the bricks are released, you’ll get the same effect: in this case, the order doesn’t carry information. If instead you shuffle the constituent parts of the bricks, reducing them to sand, and shower me with that, I may survive or suffocate. So there is still information, but it is carried by the shape of the bricks, not by their order (and you could say that the shape of the individual RNA bases carries information as well).
      Now, the objection here is that for every cause there is a structure, so at some level of approximation, using my own definitions, you will always find that information = cause. I see this as a problem, but I can’t make myself care about it. In some cases, such as the RNA example, thinking in terms of information makes sense: that’s when the effect of a particular structure is dramatic while all the other properties of the entity we’re looking at (say, its chemical-physical properties) remain the same. In other cases (the bricks), you have to change the structure dramatically to get an equally different effect, and there thinking about information doesn’t help much. For me, it’s like relativity versus Newtonian physics: you can use relativity in all the contexts where classical physics is appropriate, but in many cases, why would you? Same here: you can think of the “information delivered” if you bury me under falling bricks, but why would you? (And by the way, thinking in representational/intentional terms, by throwing bricks at me you would deliver quite a strong message ;-).)
      I hope I’m making myself understood: information is just another way to model what’s happening. When the structure counts for 99.9% and the ingredients are almost irrelevant, you think in terms of information (a book contains the same information as its electronic version); when the situation is the inverse, you forget about information and use the other properties.

      I am very much aware of IIT 3.0, and I do see its appeal, but I also wish it were marketed differently; as it stands, it has much more to do with how to represent information processing and integration than with what it’s supposed to be about: consciousness. While reading Vigo’s paper I could not avoid relating it to IIT and thinking: eh, Vigo’s attempt is in the right frame, I wish Tononi could see why.
      Intentional information is mental, “and physical only as much as mental is physical”: we agree on that. I’m trying to push the boundaries and see what comes before intentional information, exploring the domain between structures that make differences and the mental. I think this is necessary because IIT does not fit well enough: it does not explain how a structure becomes “about” something else.

      It’s a long and madly ambitious project, and I’m not even officially started, but I’m sure not getting bored!

      Hope some of the above makes sense to you…

  6. Answering ihtio: “What then do ‘0100’ and ‘0101’ represent? In my opinion, they don’t.” I agree: of course they don’t, intrinsically, and therefore they are not information in themselves.

    “The rest of your post only confirms that every person thinks of ‘information’ in his/her own way, rendering the concept not really practical in general.” I wouldn’t agree with that, or with what follows. I think we should work on the notion of information to make it practical, and not bemoan information theory being imperfect or there being several information theories.

    • ihtio says:

      Apologies for my responding after such a long hiatus!

      Our confusion concerning binary strings stems from the fact that information theory was originally conceived to enable reliable communication, as Sergio noticed in his post. The idea of information presupposes an entity that is able to interpret the signals. I’ll get back to that at the end of my response.

      I do not bemoan information theory, because it is not about information per se, but about how to calculate information, and for that purpose it performs perfectly. The problem is with the notion of information, not with the measure of information.

      The idea of grounding information in physics is of course neat and promising. You may have a chain of atoms and flip them as you please, identifying some of the states as zeroes and ones. You can easily calculate how much information there is, and you could even transmit it, by moving the structure around or whatever.
      Now, the decoding part: the information is the cause of something happening, or the interpretation is the effect of the structure. So far so good. I’m loving it!
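      The “easily calculate” step is just Shannon’s entropy formula; a minimal sketch for a chain of two-state atoms read as a bit string:

      ```python
      import math
      from collections import Counter

      def shannon_entropy(symbols):
          """Average information per symbol, in bits: H = sum p * log2(1/p)."""
          counts = Counter(symbols)
          n = len(symbols)
          return sum(c / n * math.log2(n / c) for c in counts.values())

      print(shannon_entropy("0101"))  # 1.0 -- an even 0/1 mix: one bit per symbol
      print(shannon_entropy("0000"))  # 0.0 -- no uncertainty, nothing to learn
      ```

      Note what the measure does and doesn’t do: it quantifies how surprising the string is, while saying nothing about what, if anything, the string is about; which is exactly the gap under discussion.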

      When I digest a cookie, I’m basically processing (or interpreting) information. Photosynthesis is processing of information: information = electromagnetic field + water + some other stuff => interpretation = energy stored in sugars, etc. When someone hits me in the face, we observe a transmission of information.
      The problem here is that the concept of information, under this interpretation, has no bounds: everything, on some level, can be construed as information.

      That’s why my critique is as follows: the method of measuring is fine, but the concept of information isn’t. You either have many interpretations, or you have one interpretation that is applicable to everything and is still very subjective (with chemicals A and B in a glass of water, after adding chemical C the level of the water rises a little bit: is that not information? Or is it just not relevant for anyone?).
      Instead of looking for a general idea, maybe we could do just as well with concepts that are suitable for particular domains (molecular biology, chemical reactions, fighting). See also: http://en.wikipedia.org/wiki/Family_resemblance

  7. Yeah, it kind of makes sense, but I’ll have to think about it a bit more, to see whether the intuitive distinction in my head is clear enough to refute the possible middle ground you are proposing. I’ll get in touch if I have something in hand.

    You say “When the structure counts 99.9% and the ingredients are almost irrelevant”… so you would agree that information is a subtype of causality?

    I agree with you on the framing of IIT. I think that the identity they are proposing between experience and irreducible information is quite a stretch; it would suffice to say that consciousness might possess such processing properties. Still, the theory is better than all the previous attempts to say something constructive about consciousness.

    • Sergio Graziosi says:

      Alexander: please do come back when you have some more; I need challenges like yours to understand whether my own intuitions lead somewhere, whether they require more scaffolding and/or whether they make any sense at all. Also: watch this space if you’re interested in consciousness, I hope I’ll have some surprises.

      You say “When the structure counts 99.9% and the ingredients are almost irrelevant”… so you would agree that information is a subtype of causality?

      Tentatively, and with all the caveats that the term causality compels me to use, I may risk a “Yes”, but in all honesty, I don’t know: it’s my turn to think some more!

      IIT: I liked v2.0 so much more! Version 3.0 introduces some changes (the Exclusion axiom) that seem to be there to defend the indefensible, and that’s never a good sign.

  8. Also, my email is alexyartsev@gmail.com, if it is convenient, you might wanna drop me a line and we’ll continue this discussion there!

  9. […] above seems tightly related to the main intuition that I’ve tried to express in my post about Information. The mechanism that Friston describes in his paper applies to dynamic systems in general, and can […]

  10. […] the big picture, I can see a pattern emerging by the action of natural selection: information (as structures that make a difference) keeps accumulating. It does so by aggregating in clusters of ever-growing superstructures: […]

  11. […] precisely why it’s a useful concept. Furthermore, it is entirely possible and appropriate to describe information in terms of underlying […]

  12. […] between Shannon’s Information and Ecological Information. (See also my attempt to link structure and dynamics to SI.) II. Showing how information is filtered/compressed in order to extract EI from raw sensory […]

