[01] >>

Information etc.

Introduction

CSE454 2002 : This document is online at   http://www.csse.monash.edu.au/~lloyd/tilde/CSC4/CSE454/   and contains hyper-links to other resources - Lloyd Allison ©.

 
<< [02] >>

Information

Defn: The information in learning that an event `A' of probability P(A) has occurred is I(A) = -log2 P(A) bits.
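
For example, the definition can be evaluated directly; the following small Python sketch (an illustration added here, not part of the original slides) computes I(A) from P(A):

from math import log2

def information(p):
    # information, in bits, of learning that an event of probability p occurred
    return -log2(p)

print(information(0.5))    # 1.0 bit, e.g. a fair-coin head
print(information(0.25))   # 2.0 bits: the rarer the event, the more information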


<< [03] >>

Entropy

Entropy is the average information in a probability distribution over a sample (data) space S.
Defn:

   H = - SUM_{v in S} P(v) log2 P(v)

<< [04] >>

e.g. Fair coin: P(head) = P(tail) = 1/2;
H = -1/2 log2(1/2) - 1/2 log2(1/2)
= 1/2 + 1/2 = 1 bit

e.g. Biased coin: P(head) = 3/4, P(tail) = 1/4;
H = -3/4 log2(3/4) - 1/4 log2(1/4)
= 3/4 {2 - log2 3} + 2/4
= 2 - 3/4 log2 3 ≈ 0.81 bits < 1 bit
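
Both coin examples can be checked numerically. A Python sketch (illustrative only, assuming a distribution given as a dictionary of probabilities):

from math import log2

def entropy(dist):
    # H = - SUM_{v in S} P(v) log2 P(v), in bits; zero-probability values contribute nothing
    return -sum(p * log2(p) for p in dist.values() if p > 0)

print(entropy({'head': 0.5,  'tail': 0.5}))    # 1.0 bit    (the fair coin)
print(entropy({'head': 0.75, 'tail': 0.25}))   # ~0.81 bits (the biased coin)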


<< [05] >>

Theorem H1

If {pi}, i=1..N, and {qi}, i=1..N, are probability distributions then

     N
  SUM { - pi log2(qi) }
   i=1
is minimised when qi=pi.
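
A numeric illustration of H1 (a Python sketch with a made-up two-outcome distribution, not part of the original): SUM -pi log2(qi) is the expected code length when data from {pi} are coded as if they came from {qi}, and it is smallest at qi = pi.

from math import log2

def expected_length(p, q):
    # SUM_i -p_i log2(q_i): expected bits per symbol when p-distributed data are coded using q
    return sum(-pi * log2(qi) for pi, qi in zip(p, q))

p = [0.75, 0.25]
for q in ([0.5, 0.5], [0.75, 0.25], [0.9, 0.1]):
    print(q, expected_length(p, q))
# 1.0, ~0.81 and ~0.94 bits respectively: the minimum is at q = p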


<< [06] >>

Kullback Leibler distance

Defn: The K-L distance (also relative entropy) of prob' dist'n {qi} from {pi} is

     N         pi
  SUM pi log2( -- )  >= 0
   i=1         qi
NB. >=0 by H1. Not necessarily symmetric. (Use an integral for continuous prob' dist'ns.)
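
The same small distributions illustrate the K-L distance (a Python sketch, illustrative only): it is never negative, and it changes when the roles of {pi} and {qi} are swapped.

from math import log2

def kl(p, q):
    # K-L distance of {q_i} from {p_i}: SUM_i p_i log2(p_i / q_i)
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.75, 0.25]
q = [0.5, 0.5]
print(kl(p, q))   # ~0.19 bits
print(kl(q, p))   # ~0.21 bits -- not the same, so not symmetric
print(kl(p, p))   # 0.0: a distribution is at distance zero from itself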

<< [07] >>

Coding

We work with prefix codes: No code word is a prefix of another code word.

      Sender     ----------->   Receiver

 source     ----->  (0|1)*  ----->   source
 symbols*   encode          decode   symbols*

Average code-word length, i.e. message length, is SUM_{v in S} P(v) |code(v)|.
It cannot be less than the entropy H; this lower bound is achieved if |code(v)| = -log2 P(v).
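
For instance (a Python sketch with a made-up three-symbol source, not from the original): a prefix code whose word lengths are exactly -log2 P(v) has average length equal to the entropy.

from math import log2

prob = {'a': 0.5, 'b': 0.25, 'c': 0.25}          # an illustrative source
code = {'a': '0', 'b': '10', 'c': '11'}          # a prefix code: no word is a prefix of another

H          = -sum(p * log2(p) for p in prob.values())
avg_length = sum(prob[v] * len(code[v]) for v in prob)

print(H, avg_length)   # both 1.5 bits, since |code(v)| = -log2 P(v) for every symbol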



<< [08] >>
Shannon (1948) Mathematical Theory of Communication
1. Any uniquely decodable code for S has avg length >= H.
2. There exists a prefix code for S with avg length <= H+1.

Huffman (1952)
an algorithm to construct an optimal prefix code

Arithmetic coding
gets arbitrarily close to the coding lower bound

compression = modelling + coding,   and coding is "solved"
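
A minimal sketch of Huffman's construction (Python, using the standard heapq module; the four-symbol distribution is made up for illustration): repeatedly merge the two least probable entries, prefixing their code words with 0 and 1.

import heapq
from itertools import count

def huffman(prob):
    # return a prefix code {symbol: word} for the distribution prob = {symbol: probability}
    tick = count()                                     # tie-breaker, so equal probabilities never compare dicts
    heap = [(p, next(tick), {s: ''}) for s, p in prob.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)                # the two least probable subtrees...
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c0.items()}
        merged.update({s: '1' + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tick), merged))   # ...are merged into one
    return heap[0][2]

print(huffman({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}))
# {'a': '0', 'b': '10', 'c': '110', 'd': '111'} -- average length 1.75 bits, equal to H here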

<< [09] >>
Probability, message length and coding

<< [10] >>

Minimum Message Length (MML)

If we want to learn (infer) a hypothesis (theory | model) or parameter estimate(s), H, from data D,

devise a code for the shortest expected two-part message:
(i) H, and then (ii) D|H.

Note the trade-off: the complexity of H v. the complexity of D|H, both measured in bits, prevents over-fitting;

this issue includes the precision used to state continuous-valued parameters.
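
A rough numeric illustration of the precision trade-off (a Python sketch under simplifying assumptions -- a uniform prior over the stated parameter values and a binomial likelihood; it is not the exact MML construction): stating a coin's bias to more bits lengthens part (i) but shortens part (ii), and the total is smallest at a moderate precision.

from math import log2

def two_part_length(heads, tails, bits):
    # (i) state a bias to `bits` bits of precision, (ii) encode the data given that bias
    n = 2 ** bits
    grid = [(2 * k + 1) / (2 * n) for k in range(n)]         # the n biases that can be stated
    mle = heads / (heads + tails)
    theta = min(grid, key=lambda t: abs(t - mle))             # nearest stateable value to the data
    part1 = bits                                              # -log2 of a uniform prior over the n values
    part2 = -heads * log2(theta) - tails * log2(1 - theta)    # length of the data given theta
    return part1 + part2

for bits in range(1, 9):
    print(bits, round(two_part_length(heads=130, tails=70, bits=bits), 1))
# part (i) grows with the precision while part (ii) shrinks towards its minimum;
# for this data the total dips to about 189 bits at 2 bits of precision and then slowly rises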


<< [11] >>
. . . MML

Sender-receiver paradigm keeps us "honest"

      Sender     ----------->   Receiver
  1. Sender and receiver get together and agree on priors, code-book, algorithms etc. Can discuss expected data, but have no "real" data.
  2. Sender and receiver are separated.
  3. Sender gets real data.
  4. Sender uses code-book, algorithms etc. to encode data.
  5. Receiver gets encoded data.
  6. Receiver uses code-book, algorithms etc. to decode data.

No hidden information is passed "under the table". Safe: one cannot make a hypothesis look better than it really is.
Data-compression techniques can be used to design a "good" code if an optimal code is not known.



<< [12] >>

Estimators

An estimator is a function from the sample (data) space, S, to the hypothesis (model, parameter) space H.

  e: S ---> H
E.g. different estimators may give different results in different situations.
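
For instance (a Python sketch; the two estimators are standard textbook ones and are only illustrative here), two different estimators map the same coin-toss data to different estimates of the bias:

def ml_estimate(heads, tails):
    # maximum-likelihood estimator: e(D) = #heads / #tosses
    return heads / (heads + tails)

def laplace_estimate(heads, tails):
    # Laplace's rule of succession: e(D) = (#heads + 1) / (#tosses + 2)
    return (heads + 1) / (heads + tails + 2)

print(ml_estimate(2, 0),  laplace_estimate(2, 0))     # 1.0 v. 0.75 -- they differ most on small samples
print(ml_estimate(60, 40), laplace_estimate(60, 40))  # 0.6 v. ~0.6 -- and largely agree on larger ones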



<< [13] >>

Prediction

Prediction is different from inference.

Inference (learning) is about finding an explanation (hypothesis, model, parameter est') for the data; prediction is about predicting future data, and an explanation might or might not help.

The best that can be done in prediction is to use an average over all hypotheses weighted by their posterior probabilities.
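
A small numerical sketch of the difference (Python, with a made-up three-hypothesis space and a uniform prior; illustrative only): the single most probable hypothesis and the posterior-weighted average predict the next toss differently.

def posterior(heads, tails, biases):
    # posterior over a discrete set of coin biases, from a uniform prior and binomial likelihoods
    likelihood = [b ** heads * (1 - b) ** tails for b in biases]
    total = sum(likelihood)
    return [l / total for l in likelihood]

biases = [0.2, 0.5, 0.8]                          # the (illustrative) hypothesis space
post = posterior(heads=3, tails=1, biases=biases)

best = biases[post.index(max(post))]              # predict with the single best hypothesis
avg  = sum(p * b for p, b in zip(post, biases))   # posterior-weighted average prediction

print(best, round(avg, 2))   # 0.8 v. about 0.67 -- averaging hedges between the hypotheses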


<< [14] >>

<< [15]

© L. Allison, School of Computer Science and Software Engineering, Monash University, Australia 3168.