Tuesday, May 13, 2008

Self-Information

Good morning, class. In The Problem of Design - Part 2, the Wikipedia article on self-information was referenced as a potential "measuring stick" for information in nature. Let's explore this, shall we?

We'll start off first with an equation.

(collective shudder from the class)

No! No! No! Class! There is nothing to fear from equations. Equations are our friends. OK, I lied. Equations aren't our friends. They actually have no mind or personality whatsoever (like some politicians and others who shall remain nameless for legal reasons). Equations are merely mathematical representations of the real world and are used every day by almost everyone.

Um, sir, the equation...

Oh yes. Pardon my rambling: I(x) = log2[1/Pr(x)] (in units of bits)

Expressionless stares from the class.

Translation: the amount of information in event x is the base-2 logarithm of one divided by the probability of event x.

More silent stares...

Translation of translation: the more unlikely event x is, the more information it contains.

"Ah"

Before we look at some examples, let's look at logarithms first. I believe the class should know exponents by now, such as 10^2 = 100 and 2^4 = 16, right?

Silent bobble heads bobbing...

Well, logarithms are similar to exponents, but the reverse. At a more general level, take the expression A^B = C: A is multiplied by itself B times to get C. With me so far?

More silent bobbing...

Good. Now for logarithms, the expression goes logA(C) = B. In other words, we are trying to calculate B, the number of times we must multiply A by itself to get C.

"Ooooooh! Aaahhhh!"

Excellent! Now onto the examples from the article. For tossing a fair coin, the probability of heads (or tails) is 50%, or 0.5, or 1/2 [Pr(heads) = 0.5], and 1/0.5 = 2. Therefore, when tossing a fair coin, the outcome of heads carries I(heads) = log2(2) = 1 bit of information.

For tossing a single six-sided die, the probability of getting a specific number (say a six) is Pr(six) = 1/6. Therefore, the amount of information in rolling a six with a single six-sided die is I(six) = log2(6) = 2.585 bits.

For tossing two six-sided dice independently, the probability of a specific two-number combination (say snake eyes, or two ones) is Pr(snake eyes) = 1/6 x 1/6 = 1/36. Therefore, the amount of information in rolling snake eyes with two independently tossed six-sided dice is I(snake eyes) = log2(36) = 5.170 bits.

Thus, as the probabilities get smaller (1/2, 1/6, 1/36), the amount of information gets larger (1 bit, 2.585 bits, 5.170 bits, respectively).
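
For the keeners who want to reproduce these numbers, here is a small Python sketch of the formula above (the function name self_information is just my own label, not anything standard):

```python
import math

def self_information(p):
    """Self-information in bits: I(x) = log2(1 / Pr(x))."""
    return math.log2(1 / p)

print(self_information(1 / 2))   # fair coin, heads: 1.0 bit
print(self_information(1 / 6))   # one die, a six: ~2.585 bits
print(self_information(1 / 36))  # two dice, snake eyes: ~5.170 bits
```

Notice that the two-dice value is exactly double the one-die value: independent probabilities multiply (1/6 x 1/6 = 1/36), so their information contents add (2.585 + 2.585 = 5.170 bits).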

Now class, can anyone see a problem with using self-information as a "measuring stick" for information in nature?

A curious and bright student raises a hand.

Yes, the student with the inquisitiveness in the back...

Stands up sheepishly. "Um, is it because self-information deals only with events and not objects?"

YES! Very good! Determining the amount of information in an event is all fine-and-dandy, but what is really required is the amount of information contained in any event OR object. Also, an object is not necessarily an event. If you do equate objects with events, then on what objective grounds do you do so? IOW, what probability do you assign to an object, and how do you obtain it without any subjectivity?

To sum up, self-information is a good start, but not sufficient as a generic "measuring stick" for information in objects. Class dismissed.

"Um, any homework, sir?"

For the keener, yes. For the rest of you, I hear Game 3 of the Pens-Flyers East Finals is on TV tonight. I expect a full report tomorrow. Class dismissed!

5 comments:

  1. How many bits of information are in this question? Naively, each location in the question could be any of, say, 50 or so different characters. Assuming each character were equally likely, calculating the information would be trivial: about 5.6 bits per location. But in English, not every location is equally likely to have every value. Consequently, other methods of estimating information are needed.

    For example, Thomas Cover estimates an entropy of 1.3 bits per letter in "A Convergent Gambling Estimate of the Entropy of English" (and please don't share this reference with wikedpedia).

    It would be interesting to devise an appropriate strategy to estimate the entropy of DNA sequences, if it has not already been done. (It probably has been done, I just haven't researched it yet).
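
    As a rough illustration only, a naive frequency-count version of such an estimate might look something like the Python sketch below (the function name and the toy sequence are made up for illustration; a serious estimate would need real data and a model of the dependencies between neighbouring symbols, just as with English above):

    ```python
    import math
    from collections import Counter

    def empirical_entropy(sequence):
        """Naive entropy estimate in bits per symbol:
        H = sum over symbols of p * log2(1/p),
        using the observed symbol frequencies as the probabilities."""
        counts = Counter(sequence)
        n = len(sequence)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    # Toy, made-up sequence -- not real DNA data.
    toy_dna = "ACGTACGTAATTCCGGACGT"
    print(empirical_entropy(toy_dna))  # exactly 2.0 bits/symbol here, since each base appears equally often
    ```

    Like the 5.6-bits-per-character figure above, this per-symbol estimate ignores context between neighbouring symbols, which is exactly why cleverer estimators (such as the gambling estimate) give much lower numbers for English.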

  2. William, again I must stress I am an amateur when it comes to Information Theory. While I can easily grasp the concept of self-information, it is a little more difficult for me (at this time) to grasp the concept of information entropy.

    At Wikipedia, information entropy is defined as "...a measure of the uncertainty associated with a random variable" (emphasis mine). My question is: why measure uncertainty/improbability unless, like self-information, it is related (likely inversely) to the amount of information? And if that is the case, how does information entropy differ from self-information?

    Even once I do eventually grasp the concept of information entropy, I don't think it is necessarily applicable (in its current form) to measuring the amount of information in natural objects. I'll go into more detail in a future post regarding what I envision the information "measuring stick" looking like.

    No worries on the wiki thing. I am a "user" not an editor/"chosen one". However, wiki can be a good starting point - nothing more and nothing less.

  3. "Even once I do eventually grasp the concept of information entropy, I don't think it is necessarily applicable (in its current form) to measuring the amount of information in natural objects."

    You could be right, though my intuition says entropy is useful, even if problematic due to the a priori need to either know, assume, or define probabilities.

    I look forward to your next post in this series, and regarding your comprehension, I suspect you're being too modest.

    I'll recommend another, more mathematical book later if necessary, but a book that attempts to describe the concepts in more layman's terms, one that I find useful and only sometimes annoying, is "An Introduction to Information Theory: Symbols, Signals, and Noise" by John R. Pierce. It also has a chapter relating entropy in information theory to entropy in physics. The book uses math at about the level you've used in this entry. A good book will be much better than the articles at Wikipedia (which aren't bad in general for non-controversial topics, but an encyclopedia serves a different purpose than a book on the subject).

    By the way, I was once something of a very minor information theory (IT) guru, and am refreshing my memory, in part thanks to your prodding.

    That being said, don't be afraid to correct me or assume I am wrong, as IT is something of a long shelved hobby.

  4. And this Information Theory Primer looks useful if you're going where I think you're going...

  5. William, thanks for the Primer link; now I have to squeeze in time to read it between work, family, and reviewing my former thesis supervisor's comments on a journal paper - one that I submitted to him A YEAR AND A HALF AGO! (I only say that in jest and not in frustration; well, partially anyway). ;)

    You realise you just delayed "resolution of the cliffhanger", eh?

    And thanks for the plug. Much appreciated.
