Lecture04 Summary
• Applying the rules to more than two symbols (e.g. three symbols x1, x2, x3)
1. Chain Rule
H(x1, x2, x3) = H(x1) + H(x2|x1) + H(x3|x1, x2)
2. Conditional Mutual Information
I(X; Y | Z) = H(X|Z) − H(X|Y, Z)

Z is considered Side Information.
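Both identities can be checked numerically. The sketch below uses assumed toy distributions (all probability values are illustrative, not from the lecture): a joint p(x1, x2, x3) for the chain rule, and a joint p(x, y, z) for I(X; Y | Z) in which Z is independent side information and Y is a noisy copy of X.

```python
from math import log2

def H(dist):
    """Entropy in bits of a distribution {outcome: probability}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def marginal(joint, idxs):
    """Marginalize a joint distribution onto the coordinates in idxs."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in idxs)
        out[key] = out.get(key, 0.0) + q
    return out

def cond_H(joint, target, given):
    """H(target | given), computed directly from the definition."""
    pab = marginal(joint, given + target)
    pb = marginal(joint, given)
    return -sum(q * log2(q / pb[key[:len(given)]])
                for key, q in pab.items() if q > 0)

# Chain rule, checked on an assumed joint p(x1, x2, x3)
# (illustrative values; they sum to 1).
p3 = {
    (0, 0, 0): 0.20, (0, 0, 1): 0.10,
    (0, 1, 0): 0.15, (0, 1, 1): 0.05,
    (1, 0, 0): 0.10, (1, 0, 1): 0.10,
    (1, 1, 0): 0.10, (1, 1, 1): 0.20,
}
lhs = H(p3)  # H(x1, x2, x3)
rhs = H(marginal(p3, (0,))) + cond_H(p3, (1,), (0,)) + cond_H(p3, (2,), (0, 1))
assert abs(lhs - rhs) < 1e-9  # H(x1) + H(x2|x1) + H(x3|x1,x2)

# Conditional mutual information with side information Z, on an assumed
# toy joint p(x, y, z): Z a fair coin independent of (X, Y), X fair,
# Y a noisy copy of X that agrees with probability 0.9.
pxyz = {(x, y, z): 0.25 * (0.9 if y == x else 0.1)
        for x in (0, 1) for y in (0, 1) for z in (0, 1)}
# coordinate indices: 0 = X, 1 = Y, 2 = Z
I_cond = cond_H(pxyz, (0,), (2,)) - cond_H(pxyz, (0,), (1, 2))
assert I_cond >= 0
```

Here I(X; Y | Z) comes out equal to I(X; Y) because Z was chosen independent of (X, Y); with correlated side information the two would differ.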

• Using the probability distribution to derive mutual information.
• Partial Mutual Information
• Directed graphs are a convenient and useful way to represent an information channel: the symbols (inputs & outputs) are shown as nodes, and the connecting directed arrows indicate transition probabilities.
For example, a binary channel can be drawn with the input symbols on the left, the output symbols on the right, and directed arrows between them labelled with the transition probabilities p(y|x).

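Such a graph can also be sketched in code; a minimal representation, assuming a binary symmetric channel with crossover probability 0.1 (illustrative values):

```python
# A hypothetical binary symmetric channel, stored as a directed graph:
# edges (input node, output node) -> transition probability p(y|x).
channel = {
    ("x0", "y0"): 0.9, ("x0", "y1"): 0.1,
    ("x1", "y0"): 0.1, ("x1", "y1"): 0.9,
}

# Each input node's outgoing transition probabilities must sum to 1.
for x in ("x0", "x1"):
    total = sum(p for (src, _), p in channel.items() if src == x)
    assert abs(total - 1.0) < 1e-12

# Show the graph in an adjacency-list style: node --p--> node.
for (src, dst), p in sorted(channel.items()):
    print(f"{src} --{p:.1f}--> {dst}")
```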
• Steps to determine information lost or gained.
1. Compute entropy H(X).
2. Compute the mutual information I(X; Y).
3. Compute Information Loss.
H(X|Y) = H(X) − I(X; Y)
H(X|Y) > 0 implies that information is lost through the information channel.
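The three steps can be worked numerically; a sketch assuming a binary symmetric channel with uniform input and crossover probability 0.1 (illustrative values, not from the lecture):

```python
from math import log2

# Assumed setup: uniform binary source, BSC with crossover 0.1.
px = {0: 0.5, 1: 0.5}
pyx = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.9}  # p(y|x)

joint = {(x, y): px[x] * pyx[(x, y)] for x in px for y in (0, 1)}
py = {y: sum(joint[(x, y)] for x in px) for y in (0, 1)}

def H(dist):
    """Entropy in bits of a distribution {outcome: probability}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# Step 1: source entropy H(X).
Hx = H(px)                      # 1 bit for a uniform binary source
# Step 2: mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).
Ixy = Hx + H(py) - H(joint)
# Step 3: information loss H(X|Y) = H(X) - I(X;Y).
Hx_given_y = Hx - Ixy
assert Hx_given_y > 0  # nonzero loss through the noisy channel
```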
• Hard Decision
If there is no ambiguity (transition probability ≈ 1), then the occurrence of the output symbol is essentially certain. The decision resulting in such an output symbol is called a hard decision (the output symbols are therefore called hard decisions).
• Soft Decision
If there is ambiguity (transition probabilities well away from 0 and 1), then the occurrence of the output symbol is not definite. The decision resulting in such an output symbol is called a soft decision (the output symbols are therefore called soft decisions).
• Both hard- and soft-decision models result in information loss, with
I(X; Y)_soft > I(X; Y)_hard
H(X|Y)_soft < H(X|Y)_hard
so a hard decision loses more information than a soft decision. Thus, quantization is information-lossy.
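One way to see the inequality numerically: take an assumed soft-decision channel with four output levels and merge its outputs into a two-level hard decision; merging (quantization) can only reduce I(X; Y). All transition probabilities below are illustrative:

```python
from math import log2

# Assumed soft-decision channel: binary input, four output levels
# (strong 0, weak 0, weak 1, strong 1), rows are p(y|x).
px = {0: 0.5, 1: 0.5}
soft = {
    0: {"s0": 0.6, "w0": 0.25, "w1": 0.1, "s1": 0.05},
    1: {"s0": 0.05, "w0": 0.1, "w1": 0.25, "s1": 0.6},
}
# Hard decision = quantize: merge each side's levels into one symbol.
merge = {"s0": "0", "w0": "0", "w1": "1", "s1": "1"}

def mutual_info(px, pyx):
    """I(X;Y) in bits from input dist px and channel rows pyx[x][y]."""
    joint = {(x, y): px[x] * p for x, row in pyx.items() for y, p in row.items()}
    py = {}
    for (x, y), q in joint.items():
        py[y] = py.get(y, 0.0) + q
    return sum(q * log2(q / (px[x] * py[y]))
               for (x, y), q in joint.items() if q > 0)

hard = {x: {} for x in soft}
for x, row in soft.items():
    for y, p in row.items():
        z = merge[y]
        hard[x][z] = hard[x].get(z, 0.0) + p

I_soft = mutual_info(px, soft)
I_hard = mutual_info(px, hard)
assert I_soft > I_hard  # quantizing to hard decisions loses information
```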
• Mechanics of Information Loss
There is less information in Y than in the combined source–noise pair C = (X, N) (i.e., H(Y) < H(C) = H(X, N)), but more information in Y than in either X alone or the noise N alone (H(Y) > H(X) and H(Y) > H(N)).
Thus, noise information has been added to the source information.
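These inequalities can be illustrated with an assumed additive-noise model Y = X ⊕ N (binary XOR channel; the source and noise distributions below are illustrative):

```python
from math import log2

# Assumed model: binary source X, independent binary noise N, Y = X XOR N.
px = {0: 0.8, 1: 0.2}   # source distribution (illustrative)
pn = {0: 0.9, 1: 0.1}   # noise distribution (illustrative)

joint_xn = {(x, n): px[x] * pn[n] for x in px for n in pn}
py = {}
for (x, n), q in joint_xn.items():
    y = x ^ n           # the channel output
    py[y] = py.get(y, 0.0) + q

def H(dist):
    """Entropy in bits of a distribution {outcome: probability}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

Hy, Hx, Hn, Hxn = H(py), H(px), H(pn), H(joint_xn)
assert Hy < Hxn             # H(Y) < H(X, N): Y holds less than the pair
assert Hy > Hx and Hy > Hn  # but more than X alone or N alone
```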