Turbo Code Project Page



It is proposed that an iterative decoding scheme be used. The decoding algorithm is based on the Viterbi algorithm but produces soft outputs: while the Viterbi algorithm outputs either 0 or 1 for each estimated bit, the turbo decoding algorithm outputs a continuous value for each bit estimate. While the goal of the Viterbi decoder is to minimize the code word error by finding a maximum-likelihood estimate of the transmitted code word, the soft-output decoding attempts to minimize the bit error by estimating the posterior probabilities of the individual bits of the code word. We call this decoding algorithm Soft Decision Viterbi Decoding.

The turbo decoder consists of M elementary decoders - one for each encoder in the turbo encoder. Each elementary decoder uses Soft Decision Viterbi Decoding to produce a soft decision for each received bit. After each iteration of the decoding process, every elementary decoder shares its soft-decision output with the other M - 1 elementary decoders.

In theory, as the number of these iterations approaches infinity, the estimate at the output of the decoder approaches the maximum a posteriori (MAP) solution.

An Example of Decoding of Turbo Code

We reuse the previous Turbo-code encoding example to illustrate the decoding. The encoded output is

[1 1 0 0 1 1 1 0 1 0 1 0 1 1 0]

Suppose the channel is noisy, so that some bits are corrupted during transmission. The received bits become

[1 1 0 1 1 1 1 0 1 0 0 0 1 1 1] (the 4th, 11th and 15th bits are corrupted)
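The corrupted positions can be found by comparing the transmitted and received sequences. A minimal check in Python, using the two bit sequences from the example above:

```python
# Transmitted codeword and received sequence from the example above.
encoded  = [1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0]
received = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1]

# Positions (1-based) where the channel flipped a bit.
corrupted = [i + 1 for i, (a, b) in enumerate(zip(encoded, received)) if a != b]
print(corrupted)  # -> [4, 11, 15]
```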

The Overall Schematic

The turbo-code decoder can be described in the following diagram:

Assuming zero decoder delay in the turbo decoder, decoder 1 computes a soft output from the systematic data (x0), the code information of encoder 1 (y1) and the a-priori information (La2). From this output, the systematic data (x0) and the a-priori information (La2) are subtracted. The result is multiplied by a scaling factor called the channel reliability, Lc, to compensate for the distortion. The result is uncorrelated with xk and is denoted Le1, the extrinsic data from decoder 1.

Decoder 2 takes as input the interleaved version of Le1 (the a-priori information La1), the code information of the second encoder (y2) and the interleaved version of the systematic data (a(xk)). Decoder 2 generates a soft output, from which the systematic data (Lca(x0)) and the a-priori information (La1) are subtracted. The result is again multiplied by the channel reliability Lc to compensate for the distortion. The extrinsic data from decoder 2 (Le2) is deinterleaved to produce La2, which is fed back to decoder 1, and the iterative process continues.
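The data flow just described can be sketched as follows. This is only a wiring sketch: `sova_decode` is a hypothetical stand-in for the elementary decoder described in the next section, and the exact Lc scaling of the systematic input is simplified.

```python
def interleave(seq, perm):
    """Reorder seq according to the interleaver permutation perm."""
    return [seq[i] for i in perm]

def deinterleave(seq, perm):
    """Invert the interleaver permutation."""
    out = [0] * len(seq)
    for j, i in enumerate(perm):
        out[i] = seq[j]
    return out

def turbo_iteration(x0, y1, y2, la2, perm, lc, sova_decode):
    """One pass around the loop in the schematic above.

    sova_decode(systematic, parity, a_priori) is a hypothetical stand-in
    for the elementary SOVA decoder described in the next section.
    """
    ax0 = interleave(x0, perm)          # interleaved systematic data a(xk)
    # Decoder 1: soft output from x0, parity y1 and a-priori information La2.
    d1 = sova_decode(x0, y1, la2)
    # Subtract x0 and La2, then scale by the channel reliability Lc.
    le1 = [lc * (d - a - x) for d, a, x in zip(d1, la2, x0)]
    # Interleaved extrinsic data becomes the a-priori input of decoder 2.
    la1 = interleave(le1, perm)
    d2 = sova_decode(ax0, y2, la1)
    le2 = [lc * (d - a - x) for d, a, x in zip(d2, la1, ax0)]
    # Deinterleave decoder 2's extrinsic data -> new La2 for decoder 1.
    return deinterleave(le2, perm)
```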

Soft-Output Viterbi Algorithm (SOVA)

Step 1: Form the Trellis

Inside the decoder, the Soft-Output Viterbi Algorithm (SOVA) is used to determine the maximum-likelihood result. The SOVA process is similar to that of the Viterbi algorithm. A trellis is formed first.

The trellis shows how the code bits are output by the encoder. The bits in each node represent the state of the encoder. Each line represents a transition on receipt of code bits from the encoder. The encoder outputs (decoder inputs) for all combinations of encoder state and input are summarized as follows.

Current State   Input   Output   Next State   Decoder Input (x0 x1)
    0 0           0       0         0 0              00
    0 0           1       1         1 0              11
    0 1           0       0         1 0              00
    0 1           1       1         0 0              11
    1 0           0       1         1 1              01
    1 0           1       0         0 1              10
    1 1           0       1         0 1              01
    1 1           1       0         1 1              10

Step 2: Determine the Accumulated Maximum Likelihood for Each State

From the above table, each transition has its own decoder input bit pattern. When the bit stream is input to the decoder, each received pair can be compared with the expected input (x0 x1) to see whether the bits match. The result is represented by the likelihood of input:

Li = -1 if no bits are matched
      0 if 1 bit is matched
      1 if both bits are matched

The overall likelihood of a transition is the sum of the likelihood of input and the a-priori likelihood information (Lp):

L = Li + Lp

Lp =  0.5 x Lan (the a-priori information) if x0 is 1
     -0.5 x Lan if x0 is 0
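These definitions translate directly into a small helper. This is a sketch; `la` stands for the a-priori value Lan of the current bit:

```python
def branch_likelihood(received, expected, la):
    """Overall likelihood L = Li + Lp for one trellis transition.

    received, expected: 2-bit tuples (x0, x1) of decoder input bits.
    la: a-priori information Lan for this bit.
    """
    matches = sum(r == e for r, e in zip(received, expected))
    li = matches - 1            # -1 (no match), 0 (one match), +1 (both match)
    lp = 0.5 * la if expected[0] == 1 else -0.5 * la
    return li + lp

print(branch_likelihood((1, 1), (1, 1), 0))  # -> 1 (both bits match, La = 0)
```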

The algorithm is as follows: starting from state 00, the overall likelihood of each transition is evaluated. The likelihood of each node is the maximum accumulated likelihood over the transitions entering it. With this algorithm, the following will be obtained.

Step 3: Find the Surviving Path

The surviving path is thus derived by tracing back from the last stage (stage 5) to the first one (stage 0); it is the result of the hard-output Viterbi algorithm:

[1 0 1 0 1]
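Steps 1 to 3 can be sketched end to end in Python. The trellis table from Step 1, the decoder-1 inputs (x0, y1) taken from the received sequence above, and zero a-priori information are assumed; the forward pass and trace-back then reproduce the hard decision:

```python
# Trellis from Step 1: (state, input bit) -> (next state, decoder input x0 x1).
TRELLIS = {
    ("00", 0): ("00", (0, 0)), ("00", 1): ("10", (1, 1)),
    ("01", 0): ("10", (0, 0)), ("01", 1): ("00", (1, 1)),
    ("10", 0): ("11", (0, 1)), ("10", 1): ("01", (1, 0)),
    ("11", 0): ("01", (0, 1)), ("11", 1): ("11", (1, 0)),
}
STATES = ["00", "01", "10", "11"]

# Decoder-1 inputs (x0, y1) per stage, from the received sequence above.
received = [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]

def likelihood(r, e):
    """Li = matches - 1 (a-priori term omitted: La = 0 on the first pass)."""
    return sum(a == b for a, b in zip(r, e)) - 1

# Step 2: accumulate the maximum likelihood for each state (forward pass).
metric = {s: (0 if s == "00" else float("-inf")) for s in STATES}
history = []                     # per stage: state -> (previous state, input)
for r in received:
    new_metric = {s: float("-inf") for s in STATES}
    back = {}
    for s in STATES:
        for u in (0, 1):
            nxt, expect = TRELLIS[(s, u)]
            m = metric[s] + likelihood(r, expect)
            if m > new_metric[nxt]:
                new_metric[nxt] = m
                back[nxt] = (s, u)
    history.append(back)
    metric = new_metric

# Step 3: trace back from the best final state to recover the surviving path.
state = max(STATES, key=lambda s: metric[s])
bits = []
for back in reversed(history):
    state, u = back[state]
    bits.append(u)
bits.reverse()
print(bits)  # -> [1, 0, 1, 0, 1]
```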

Step 4: Determine the Soft Output

To find the soft output, non-surviving paths are considered. A non-surviving path is one produced by making a different decision at one of the stages during trace-back. For example, when tracing back from the last stage (stage 5), one non-surviving path is found by tracing back to state 00 instead of state 01 between stage 5 and stage 4. Following the arrows, bit 1 is still 1. Bit 1 takes the following values when the decision is changed at different stages:

Stage   1   2   3   4   5
Bit 1   X   X   0   1   1
Delta   -   -   3   1   2

(X means the path cannot be traced back to state 00 at stage 0)

Here we define a quantity delta to describe the cost of taking a non-surviving path. It is the difference in overall likelihood when a different decision is made at a particular stage.

The soft output of bit 1 is evaluated by the following formula:

soft output = bit_value x min(delta over the stages whose changed decision flips bit 1 away from bit_value)

bit_value =  1 if the bit output is 1
            -1 if the bit output is 0

For bit one, the decision only changes when the state changes at stage 3. The minimum such delta is 3, so the soft output of bit one is 1 x 3 = 3.
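The rule can be expressed compactly. In this sketch, `alternatives` is a hypothetical list of (delta, bit value) pairs, one per stage where the trace-back decision could be changed; the X entries are simply omitted:

```python
def soft_output(hard_bit, alternatives):
    """Soft output = bit_value x min(delta over decisions that flip the bit).

    hard_bit:     the bit on the surviving path (0 or 1).
    alternatives: (delta, bit value on the non-surviving path) pairs.
    """
    sign = 1 if hard_bit == 1 else -1
    flips = [delta for delta, bit in alternatives if bit != hard_bit]
    return sign * min(flips)

# Bit one: surviving value 1; changing the stage-3 decision gives 0 (delta 3),
# while changing stage 4 or 5 leaves it at 1 (deltas 1 and 2).
print(soft_output(1, [(3, 0), (1, 1), (2, 1)]))  # -> 3
```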

Repeating this for bit two (originally bit two = 0):

Stage   2   3   4   5
Bit 2   X   1   1   1
Delta   -   3   1   2

(X means the path cannot be traced back to state 00 at stage 0)

From the above results, the decision for bit two changes when the state changes at stage 3, 4 or 5. The minimum delta is 1, so the soft output of bit two is -1 x 1 = -1.

Applying this procedure to every bit, the soft output becomes

[3 -1 1 -1 2]

Feeding Data to Another Decoder

After the soft output is evaluated by the SOVA decoder, the data is passed to the second decoder for further decoding. Before the data is passed to the second decoder, two operations are performed on the decoder output:

1. The a-priori information (La2) and the systematic data (x0) are subtracted:

   d1    3  -1   1  -1   2
   La2   0   0   0   0   0
   x0    1   1   1  -1   1
   Le1   2  -2   0   0   1

2. The result is multiplied by a scaling factor called the channel reliability, Lc. This factor is needed because the SOVA algorithm suffers a major distortion caused by over-optimistic soft outputs; Lc compensates for this distortion.

Lc = 2 x mean(Le1) / var(Le1)
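Both operations can be checked numerically with the values above (the population variance is assumed for var(Le1)):

```python
from statistics import mean, pvariance

d1  = [3, -1, 1, -1, 2]    # soft output of decoder 1
la2 = [0,  0, 0,  0, 0]    # a-priori information (zero on the first pass)
x0  = [1,  1, 1, -1, 1]    # systematic data

# Subtract the a-priori information and the systematic data.
le1 = [d - a - x for d, a, x in zip(d1, la2, x0)]
print(le1)                 # -> [2, -2, 0, 0, 1]

# Channel reliability used to scale the extrinsic data.
lc = 2 * mean(le1) / pvariance(le1)
```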

Iterative Decoding

The data from the first decoder, LcLe1, together with the interleaved systematic data a(xk) and the code information from the second encoder (y2), is then fed into the second decoder. The decoding algorithm is the same as in the first decoder.

After decoding, the output of the second decoder is processed in the same way and fed back to the first decoder, and the process continues. The number of iterations is up to the designer: usually, the more iterations, the more accurate the data, but the longer the decoding takes.

Decision of Output

After the iterations of decoding, the decoding result is the sign of the soft output of the last decoder. Taking the results of the first decoder as an example, the output becomes:

decoder output   3  -1   1  -1   2
result           1   0   1   0   1

which is the same as the input bit stream u, i.e. the errors have been recovered.
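The final hard decision is simply the sign of each soft output, e.g.:

```python
soft = [3, -1, 1, -1, 2]           # soft output of the last decoder
result = [1 if s > 0 else 0 for s in soft]
print(result)                      # -> [1, 0, 1, 0, 1]
```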


All rights reserved.
Web page created and maintained by Kenny Chan
For problems or questions regarding this web, please contact [ProjectEmail].
Last updated: December 21, 1997.