Mechanics of Information Loss (contd.)

As a continuation of our study of the mechanics of information loss begun in Lecture04 and Lecture05, recall our example of the Automatic Repeat reQuest (ARQ) system: the DMS X emits symbols x0, x1; treating both as a single symbol ⟨x0, x1⟩, the pair is fed into an encoder E, which emits the encoded sequence ⟨x0, x1, Π⟩, where the parity Π ≜ x0 ⊕ x1. Since Π is a deterministic function of ⟨x0, x1⟩, the total information for the DMS is H(C) = H(x0, x1, Π) = 2H(X).
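The claim H(x0, x1, Π) = 2H(X) can be checked numerically. A minimal sketch, assuming the source probabilities p(x=0) = 0.6, p(x=1) = 0.4 (an assumed DMS for illustration; any other distribution works the same way):

```python
import math

# Assumed source distribution for illustration (not fixed by this section).
p = {0: 0.6, 1: 0.4}

def H(dist):
    """Shannon entropy in bits of a probability distribution (dict of probs)."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# Joint distribution of the code word <x0, x1, Pi>, with Pi = x0 XOR x1.
# Pi is a deterministic function of (x0, x1), so it adds no entropy.
code = {}
for x0 in (0, 1):
    for x1 in (0, 1):
        code[(x0, x1, x0 ^ x1)] = p[x0] * p[x1]

print(H(code))    # H(C) = H(x0, x1, Pi)
print(2 * H(p))   # 2 H(X) -- the two agree
```

The parity bit costs one channel use but contributes no information, which is exactly why the coded rate below is 2H(X)∕3 rather than 3H(X)∕3.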

Furthermore, the erasure channel is used 3 times to transmit the 3-symbol code, resulting in a coded information rate of 2H(X) × 1∕3 per channel use. The erasure channel is represented as a directed graph. [Figure: directed-graph representation of the erasure channel, used for each of the 3 transmitted symbols.] Note that p(xi) and p(Π) at the encoder output will, in general, be different.

Since most channel outputs lead to "error detected, re-transmit", consider the decodable outcomes: for each of the 4 possible input pairs ⟨x0, x1⟩ there are 4 decodable received patterns (the triplet arrives intact, or exactly one of its three symbols is erased and is recovered via the parity Π). Therefore, Decodable Outcomes = 4 × 4 = 16. With erasure probability p = 0.2, the probability of a decodable outcome is Ps = (1 − p)³ + 3p(1 − p)² = 0.512 + 0.384 = 0.896.
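The count of 16 decodable outcomes and the value Ps = 0.896 can be verified by enumeration. A sketch, assuming erasure probability p = 0.2 per channel use (the value consistent with Ps = 0.896 above):

```python
from itertools import product

p_erase = 0.2          # assumed per-use erasure probability (see text)
p_ok = 1 - p_erase

# Enumerate all 2^3 erasure patterns of the triplet <x0, x1, Pi>.
# A pattern is decodable iff at most one symbol is erased,
# since the parity Pi can repair a single erasure.
patterns = list(product((0, 1), repeat=3))           # 1 = erased
decodable = [pat for pat in patterns if sum(pat) <= 1]

print(4 * len(decodable))   # 4 input pairs x 4 patterns = 16 outcomes
Ps = sum(p_erase**sum(pat) * p_ok**(3 - sum(pat)) for pat in decodable)
print(Ps)                   # ~0.896
```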

Therefore, for all the remaining outcomes (for which the sink requests a re-transmit), the probability of a re-transmit is Pr = 1 − Ps = 0.104. The question then is, for perfect transmission (i.e., without delivery of wrong symbols):

What is the information rate penalty?

In other words,

How often, on average, must a triplet be transmitted (to successfully deliver the information)?

To answer this, let us represent the transmission process graphically. Consider three states: a ◯ Transmit state, a ◯ Decode state, and a ◯ Accept state, such that the probability of transmission is P = 1 (Transmit → Decode). For the decoding step, the probability of successful decoding is Ps = 0.896 (Decode → Accept). For each input pair there are four decodable possibilities; every other received pattern causes the sink to request a re-transmission, with probability Pr = 1 − Ps = 0.104 (Decode → Transmit).
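The three-state process can be simulated directly. A minimal Monte Carlo sketch of the Transmit → Decode → Accept chain, using only Ps from the text:

```python
import random

random.seed(0)
Ps = 0.896   # probability that a transmitted triplet is decodable

def transmissions_until_accept():
    """Walk the chain: each loop is one Transmit -> Decode step;
    on decode failure the sink requests a re-transmission."""
    n = 0
    while True:
        n += 1                       # Transmit state (probability 1)
        if random.random() < Ps:     # Decode succeeds -> Accept state
            return n

trials = 100_000
T_est = sum(transmissions_until_accept() for _ in range(trials)) / trials
print(T_est)   # close to 1 / Ps = 1.11607
```

The empirical mean converges to the expected value T derived analytically below.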

This is called a Markov process. Let T denote the expected number of times the triplet must be transmitted. Then, by the definition of expected value,

T = Σ_{k=1}^{∞} k Ps Pr^{k−1} = 1 ∕ (1 − Pr) = 1 ∕ Ps.

Thus for our example, Pr = 0.104 ⇒ T = 1 ∕ 0.896 = 1.11607. The expected number of times the triplet ⟨x0, x1, Π⟩ has to be transmitted therefore represents an ≈12% increase in the number of transmissions (T ≈ 1 + 0.12 = 1.12).
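The geometric series for T can be checked against the closed form 1∕Ps; truncating the sum far enough gives agreement to machine precision:

```python
Ps = 0.896
Pr = 1 - Ps   # 0.104

# T = sum_{k>=1} k * Ps * Pr^(k-1); Pr^k decays fast, so 200 terms suffice.
T_series = sum(k * Ps * Pr**(k - 1) for k in range(1, 200))
T_closed = 1 / Ps

print(T_series)   # ~1.11607
print(T_closed)   # 1.1160714...
```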

Let us also define the Effective Information Rate as the coded rate divided by the expected number of transmissions: R_eff = 2H(X) ∕ (3T) = (2∕3) Ps H(X). Thus for our case, with H(X) = 0.97095, R_eff = (2 × 0.97095 × 0.896) ∕ 3 ≈ 0.580. Recall that I(X; Y) = 0.77676; thus R_eff < I(X; Y). [If I(X; Y) is maximized w.r.t. p(xi) at the source to obtain the channel capacity C, and R_eff < C, then transmission with arbitrarily small information loss is possible (e.g., Turbo Codes).]
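These rate figures follow from the numbers already in play. A sketch, assuming p(x) = (0.6, 0.4) and erasure probability p = 0.2 (the values consistent with H(X) = 0.97095 and I(X; Y) = 0.77676 quoted in the text):

```python
import math

px = (0.6, 0.4)                              # assumed source distribution
H_X = -sum(q * math.log2(q) for q in px)     # 0.97095 bits

p_erase = 0.2                                # assumed erasure probability
Ps = (1 - p_erase)**3 + 3 * p_erase * (1 - p_erase)**2   # 0.896
T = 1 / Ps                                   # expected transmissions

R_eff = 2 * H_X / (3 * T)      # bits actually delivered per channel use
I_XY = (1 - p_erase) * H_X     # BEC mutual information = 0.77676

print(R_eff)   # ~0.580
print(I_XY)    # ~0.77676 -- R_eff < I(X; Y), so the code is below capacity
```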

Referring to the above directed graph: H(X) = 0.97095 and H(Y) = 1.4987, thus H(X) < H(Y). This extra information (H(Y) − H(X)) is noise information. In other words, some form of coding is necessary precisely because of the above inequality.
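The inequality H(X) < H(Y) for the erasure channel can be computed directly: the output alphabet gains a third symbol (the erasure), so H(Y) exceeds H(X). A sketch under the same assumed parameters as above (p(x) = 0.6/0.4, erasure probability 0.2):

```python
import math

p_erase = 0.2
px = {0: 0.6, 1: 0.4}

# Output distribution of the erasure channel: each input survives
# with probability 1 - p_erase; otherwise the erasure symbol 'e' appears.
py = {0: px[0] * (1 - p_erase),
      1: px[1] * (1 - p_erase),
      'e': p_erase}

H = lambda d: -sum(q * math.log2(q) for q in d.values() if q > 0)
print(H(px))   # H(X) ~ 0.97095
print(H(py))   # H(Y) ~ 1.4987 > H(X): the excess is noise information
```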

Recall that information may be classified in terms of quality as useful and useless information. Using this classification

• Noise information is regarded as useless by a communication engineer.
• Noise information is regarded as useful by a cryptographer.