Thursday, October 23, 2008

Information Theory Post at PT

I'd like to think that I have a reputation for being fair to both sides of the ID/evolution debate, giving credit and criticism where due.

That said, I enjoyed a recent post at Panda's Thumb by PvM entitled "Information Content of DNA" (see my blog roll at the right). I found the post to be interesting and fair. I would have preferred more detail (e.g. equations, reference papers) to support the claims made, but I'm sure someone will point me in the right direction.

As some of you know, I believe information theory is a key area of research in biology and could help shed light on the ID/evolution debate. The tricky part is determining exactly what constitutes information in biology, something the PT post commented on.

3 comments:

  1. This is too easy.

    PvM wrote: "However, information theory shows that random sequences have the lowest information content..."

    Wrong.

    In information theory, constant sequences have the lowest information content.

    Purely random sequences have the highest information content.

    I'm not sure where he's studying information theory.
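
    To make the distinction concrete, here is a minimal Python sketch (my own illustration, not anything from the PT post) estimating per-symbol Shannon entropy from observed symbol frequencies:

        import math
        import random
        from collections import Counter

        def entropy_bits_per_symbol(seq):
            # Plug-in Shannon entropy, H = sum_i p_i * log2(1/p_i), in bits per
            # symbol, with each p_i estimated from observed symbol frequencies.
            n = len(seq)
            return sum(c / n * math.log2(n / c) for c in Counter(seq).values())

        constant = "A" * 10000                                          # constant sequence
        uniform = "".join(random.choice("ACGT") for _ in range(10000))  # i.i.d. uniform over 4 symbols

        print(entropy_bits_per_symbol(constant))  # 0.0: the lowest possible
        print(entropy_bits_per_symbol(uniform))   # ~2.0: the maximum for a 4-letter alphabet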

  2. Biologists and math don't mix well. A certain sock, Pimp Van Pickle, is mopping the floor with PvM. But hopefully PvM will provide his reference for the claim that the two regions of DNA sequences (coding and non-coding) have different entropies. If true, it is a piece of information worth thinking about; a toy sketch of such a comparison follows.
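
    A toy version of that comparison (my own sketch; the two sequences below are invented placeholders, not real genomic data):

        import math
        from collections import Counter

        def H(seq):
            # Per-symbol Shannon entropy from observed symbol frequencies.
            n = len(seq)
            return sum(c / n * math.log2(n / c) for c in Counter(seq).values())

        coding = "ATGGCCATTGTAATGGGCCGCTGA"     # placeholder stand-in for a "coding" region
        noncoding = "ATATATATATATATATATATATAT"  # placeholder stand-in for a "non-coding" region

        print(H(coding))     # ~1.97 bits/symbol
        print(H(noncoding))  # 1.0 bits/symbol (two symbols, equally frequent)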

  3. PvM @ PT: "Depends on the definition of information you use, you are correct in a Kolmogorov sense but wrong in a Shannon sense and I believe the person was describing the Shannon sense."

    Based on my brief Wikipedia research, it would appear PvM is referring to Kolmogorov complexity theory:

    "In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov-Chaitin complexity, stochastic complexity, algorithmic entropy, or program-size complexity) of an object such as a piece of text is a measure of the computational resources needed to specify the object."

    Based on this Wikipedia article, it would appear that PvM has it wrong. In information theory as developed by Shannon, the information content of an event is -log2(p): the less likely the event is to occur (i.e. the lower its probability), the more information it carries. A maximally random source therefore has the highest average information content (entropy), not the lowest, so the claim fails in the Shannon sense as well.
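
    For what it's worth, the Kolmogorov side, which PvM concedes, is easy to illustrate with a compression proxy (my own sketch; true Kolmogorov complexity is uncomputable, so a zlib-compressed length is only a rough upper bound):

        import random
        import zlib

        # True Kolmogorov complexity is uncomputable; the length of a compressed
        # description (here via zlib) is a crude, computable upper-bound proxy.
        constant = b"A" * 10000
        rand = bytes(random.choice(b"ACGT") for _ in range(10000))

        print(len(zlib.compress(constant, 9)))  # a few dozen bytes: highly compressible
        print(len(zlib.compress(rand, 9)))      # thousands of bytes: nearly incompressible

        # The random string needs the longer description, i.e. the higher
        # (approximate) Kolmogorov complexity, agreeing with the Shannon picture.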
