Turbo Code Project


It is proposed that an iterative decoding scheme be used. The decoding algorithm is similar to the Viterbi algorithm in the sense that it produces soft outputs. While the Viterbi algorithm outputs either 0 or 1 for each estimated bit, the turbo code decoding algorithm outputs a continuous value for each bit estimate. While the goal of the Viterbi decoder is to minimize the code word error by finding a maximum-likelihood estimate of the transmitted code word, the soft-output decoding attempts to minimize the bit error by estimating the a-posteriori probabilities of the individual bits of the code word. We call this decoding algorithm Soft Decision Viterbi Decoding.

The turbo decoder consists of *M* elementary decoders - one for each
encoder in the turbo encoding part. Each elementary decoder uses Soft
Decision Viterbi Decoding to produce a soft decision for each received bit.
After each iteration of the decoding process, every elementary decoder shares its
soft decision output with the other *M* - 1 elementary decoders.

In theory, as the number of these iterations approaches infinity, the estimate at the decoder output will approach the maximum a-posteriori (MAP) solution.

We make use of the previous turbo encoding example to illustrate the decoding of the turbo code. The encoded output is

[1 1 0 0 1 1 1 0 1 0 1 0 1 1 0]

Suppose the channel is noisy and some bits are corrupted during transmission; the received bits become

[1 1 0 **1** 1 1 1 0 1 0 1 0 1 1 0] (the bit in bold is corrupted)

The turbo-code decoder can be described in the following diagram:

Assuming zero decoder delay in the turbo decoder, decoder 1 computes a
soft output from the systematic data (*x*_{0}), the code
information of encoder 1 (*y*_{1}) and the a-priori
information (*La*_{2}). From this output, the systematic
data (*x*_{0}) and the a-priori information
(*La*_{2}) are subtracted. The result is multiplied by a
scaling factor called the channel reliability *L*_{c} to
compensate for the distortion. The result is uncorrelated with
*x*_{k} and is denoted *Le*_{1}, for extrinsic data from decoder 1.

Decoder 2 takes as input the interleaved version of
*Le*_{1} (the a-priori information *La*_{1}), the code
information of the second encoder (*y*_{2}) and the
interleaved version of the systematic data (*a*(*x*_{0})). Decoder 2
generates a soft output, from which the systematic data
(*L*_{c}*a*(*x*_{0})) and the a-priori
information (*La*_{1}) are subtracted. The result is
multiplied by the scaling factor *L*_{c} to
compensate for the distortion. The extrinsic data
from decoder 2 (*Le*_{2}) is deinterleaved to produce
*La*_{2}, which is fed back to decoder 1, and the
iterative process continues.
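The data flow described above - two elementary decoders exchanging extrinsic information through the interleaver - can be sketched as follows. This is a structural sketch only: `sova_decode` is a hypothetical placeholder that simply sums its inputs element-wise, not a real SOVA implementation, and the function names are our own.

```python
def interleave(data, perm):
    """Reorder data according to the interleaver permutation a(.)."""
    return [data[i] for i in perm]

def deinterleave(data, perm):
    """Invert the interleaver permutation."""
    out = [0.0] * len(data)
    for j, i in enumerate(perm):
        out[i] = data[j]
    return out

def sova_decode(systematic, parity, a_priori):
    # Placeholder only: a real SOVA decoder performs the trellis
    # search described below.  Here we just model the data flow.
    return [s + p + la for s, p, la in zip(systematic, parity, a_priori)]

def turbo_iteration(x0, y1, y2, perm, la2):
    # Decoder 1: soft output from x0, y1 and the a-priori info La2.
    d1 = sova_decode(x0, y1, la2)
    # Subtract systematic data and a-priori info -> extrinsic data Le1.
    le1 = [d - la - x for d, la, x in zip(d1, la2, x0)]
    # Interleaved Le1 becomes the a-priori info La1 for decoder 2.
    la1 = interleave(le1, perm)
    x0i = interleave(x0, perm)
    d2 = sova_decode(x0i, y2, la1)
    le2 = [d - la - x for d, la, x in zip(d2, la1, x0i)]
    # Deinterleaved Le2 is fed back to decoder 1 as La2.
    return deinterleave(le2, perm)
```

Each call to `turbo_iteration` performs one full round of the exchange; in a real decoder the returned *La*_{2} would be passed straight back in as the next iteration's a-priori input.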

Inside the decoder, the Soft-Output Viterbi Algorithm (SOVA) is used to determine the maximum-likelihood result. The SOVA process is similar to the Viterbi algorithm: a trellis is formed first.

The trellis shows how the codes are output by the encoder. The bits in each node represent the state of the encoder, and each line represents a transition made by the encoder on receipt of an input bit. The encoder output (decoder input) for all combinations of encoder state and input is summarized as follows.

| State 1 | State 2 | Input | Output | Next State 1 | Next State 2 | Decoder Input |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 00 |
| 0 | 0 | 1 | 1 | 1 | 0 | 11 |
| 0 | 1 | 0 | 0 | 1 | 0 | 00 |
| 0 | 1 | 1 | 1 | 0 | 0 | 11 |
| 1 | 0 | 0 | 1 | 1 | 1 | 01 |
| 1 | 0 | 1 | 0 | 0 | 1 | 10 |
| 1 | 1 | 0 | 1 | 0 | 1 | 01 |
| 1 | 1 | 1 | 0 | 1 | 1 | 10 |
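The state-transition table above can be written as a small lookup structure. A minimal sketch in Python, assuming states and decoder inputs are written as bit strings:

```python
# Key: (current state, input bit) -> (next state, decoder input x0 x1),
# taken row by row from the state-transition table.
TRELLIS = {
    ("00", 0): ("00", "00"),
    ("00", 1): ("10", "11"),
    ("01", 0): ("10", "00"),
    ("01", 1): ("00", "11"),
    ("10", 0): ("11", "01"),
    ("10", 1): ("01", "10"),
    ("11", 0): ("01", "01"),
    ("11", 1): ("11", "10"),
}
```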

From the above table, each transition has its own decoder-input bit pattern. When bit
streams are input to the decoder, they can be compared with the decoder input
(*x*_{0} *x*_{1}) to see whether
they match. The result is represented by the likelihood of the input:

*L*_{i} = -1 if no bits are matched

*L*_{i} = 0 if 1 bit is matched

*L*_{i} = 1 if both bits are matched

The overall likelihood of a transition is the sum of the likelihood of the input and
the a-priori likelihood information (*L*_{p}).

*L* = *L*_{i} + *L*_{p}

*L*_{p} = 0.5 × *La*_{n} if *x*_{0} is 1

*L*_{p} = -0.5 × *La*_{n} if *x*_{0} is 0

where *La*_{n} is the a-priori information.
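The two likelihood terms can be computed directly from these definitions. A minimal sketch, assuming the decoder input and received bits are given as two-character bit strings:

```python
def input_likelihood(decoder_input, received):
    """L_i: -1 if no bits match, 0 if one matches, 1 if both match."""
    matches = sum(a == b for a, b in zip(decoder_input, received))
    return matches - 1  # 0 matches -> -1, 1 -> 0, 2 -> 1

def a_priori_likelihood(x0_bit, l_an):
    """L_p: +0.5*L_an if the transition's x0 bit is 1, else -0.5*L_an."""
    return 0.5 * l_an if x0_bit == "1" else -0.5 * l_an

def overall_likelihood(decoder_input, received, l_an):
    """Overall transition likelihood L = L_i + L_p."""
    return (input_likelihood(decoder_input, received)
            + a_priori_likelihood(decoder_input[0], l_an))
```

For example, decoder input `"11"` against received `"10"` matches one bit (*L*_{i} = 0); with a-priori information 2.0 and *x*_{0} = 1, the overall likelihood is 0 + 1.0 = 1.0.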

The algorithm is as follows: starting from state 00, the overall likelihood of each transition is evaluated, and the overall likelihood of each node is taken as the maximum accumulated likelihood over the transitions entering it. With this algorithm, the following will be obtained.

The surviving path is then derived by tracing back from the last stage (stage 5) to the first one (stage 0), and gives the result of the hard-output Viterbi algorithm:

[1 0 1 0 1]
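The trellis search can be sketched as a standard Viterbi pass that uses the likelihood *L*_{i} as the branch metric. The sketch below assumes the transition table summarized earlier and zero a-priori information (*L*_{p} = 0); decoder 1 sees the (*x*_{0}, *y*_{1}) pairs of the received stream.

```python
# Key: (current state, input bit) -> (next state, decoder input x0 x1).
TRELLIS = {
    ("00", 0): ("00", "00"), ("00", 1): ("10", "11"),
    ("01", 0): ("10", "00"), ("01", 1): ("00", "11"),
    ("10", 0): ("11", "01"), ("10", 1): ("01", "10"),
    ("11", 0): ("01", "01"), ("11", 1): ("11", "10"),
}

def viterbi(received_pairs):
    """Hard-output Viterbi pass with L_i as branch metric (L_p = 0)."""
    acc = {"00": 0.0}          # accumulated likelihood; start in state 00
    history = []               # per stage: state -> (previous state, input)
    for pair in received_pairs:
        nxt, back = {}, {}
        for state, likelihood in acc.items():
            for bit in (0, 1):
                next_state, dec_in = TRELLIS[(state, bit)]
                # L_i = number of matching bits minus one.
                l_i = sum(a == b for a, b in zip(dec_in, pair)) - 1
                cand = likelihood + l_i
                if next_state not in nxt or cand > nxt[next_state]:
                    nxt[next_state] = cand
                    back[next_state] = (state, bit)
        acc = nxt
        history.append(back)
    # Trace the surviving path back from the most likely final state.
    state = max(acc, key=acc.get)
    bits = []
    for back in reversed(history):
        state, bit = back[state]
        bits.append(bit)
    return bits[::-1]

decoded = viterbi(["11", "11", "10", "01", "11"])
# decoded == [1, 0, 1, 0, 1]
```

Running this on the example's (*x*_{0}, *y*_{1}) pairs recovers the hard output [1 0 1 0 1] above.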

To find the soft output, the non-surviving paths are considered. A non-surviving path is one produced by making a different decision at one of the stages during traceback. For example, when tracing back from the last stage (stage 5), one non-surviving path is found by tracing back to state 00 instead of state 01 from stage 5 to stage 4. Following the arrows, bit 1 is still 1. Bit 1 takes the following values when the decision is changed at different stages:

| Stage | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Bit 1 | X | X | 0 | 1 | 1 |
| Delta | - | - | 3 | 1 | 2 |

where X means the path cannot be traced back to state 00 at stage 0.

Here we define a quantity delta to describe the tendency to take a non-surviving path. It is the difference in overall likelihood caused by making a different decision at a particular stage.

The soft-output of bit 1 is evaluated by the following formula:

soft output = bit_value × min(delta over the stages at which a changed decision flips the bit to a value other than bit_value)

where bit_value = 1 if the bit output is 1, and -1 if the bit output is 0.

For bit one, the decision only changes when the decision at stage 3 is changed. The minimum delta is 3, so the soft output of bit one is 3.

Repeating this for bit two (originally bit two = 0):

| Stage | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| Bit 2 | X | 1 | 1 | 1 |
| Delta | - | 3 | 1 | 2 |

where X means the path cannot be traced back to state 00 at stage 0.

From the above results, for bit two the decision changes when the decision at stage 3, 4 or 5 is changed. The minimum delta is 1, and since bit two = 0, bit_value = -1; thus the soft output of bit two is -1.
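The soft-output rule can be sketched directly from the delta tables. The encoding of the tables below (with `None` for the X entries) is our own:

```python
def soft_output(bit_value, alt_bits, deltas):
    """Soft output = bit_value * min(delta over bit-flipping stages).

    bit_value: +1 (bit output 1) or -1 (bit output 0).
    alt_bits[s], deltas[s]: the bit value and delta obtained by
    changing the traceback decision at stage s; None marks the X
    entries (path cannot be traced back to state 00 at stage 0).
    """
    hard_bit = 1 if bit_value == 1 else 0
    # Keep only the stages whose changed decision actually flips the bit.
    flips = [d for b, d in zip(alt_bits, deltas)
             if b is not None and b != hard_bit]
    return bit_value * min(flips)
```

For bit one, `soft_output(1, [None, None, 0, 1, 1], [None, None, 3, 1, 2])` gives 3; for bit two, `soft_output(-1, [None, 1, 1, 1], [None, 3, 1, 2])` gives -1, matching the values derived above.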

Using this algorithm, the soft output becomes

[3 -1 1 -1 2]

After the soft output is evaluated by the SOVA decoder, the data is passed to the second decoder for further decoding. Before the data is passed to the second decoder, two operations are performed on the decoder output:

1. The a-priori information (*La*_{2}) and the systematic data (*x*_{0}) are subtracted:

| *d*_{1} | 3 | -1 | 1 | -1 | 2 |
|---|---|---|---|---|---|
| *La*_{2} | 0 | 0 | 0 | 0 | 0 |
| *x*_{0} | 1 | 1 | 1 | -1 | 1 |
| *Le*_{1} | 2 | -2 | 0 | 0 | 1 |
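The subtraction step can be checked with the example numbers from the table above:

```python
d1  = [3, -1, 1, -1, 2]   # soft output of decoder 1
la2 = [0,  0, 0,  0, 0]   # a-priori information (zero on the first pass)
x0  = [1,  1, 1, -1, 1]   # received systematic data, in +/-1 form

# Extrinsic data: Le1 = d1 - La2 - x0, element by element.
le1 = [d - la - x for d, la, x in zip(d1, la2, x0)]
# le1 == [2, -2, 0, 0, 1]
```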

2. The result is multiplied by the scaling factor called the channel
reliability *L*_{c}. This factor is needed because the SOVA algorithm
suffers from a major distortion caused by over-optimistic soft outputs;
the factor compensates for this distortion.

*L*_{c} = mean(*Le*_{1}) × 2 / var(*Le*_{1})
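The channel reliability for the example extrinsic data follows directly from this formula. The source does not say which variance convention is intended; the sketch below assumes the population variance:

```python
le1 = [2, -2, 0, 0, 1]   # extrinsic data from decoder 1

mean = sum(le1) / len(le1)                            # 0.2
var = sum((v - mean) ** 2 for v in le1) / len(le1)    # population variance
lc = mean * 2 / var                                   # channel reliability
```

With these numbers mean(*Le*_{1}) = 0.2, var(*Le*_{1}) = 1.76, so *L*_{c} ≈ 0.227.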

The data from the first decoder,
*L*_{c}*Le*_{1}, together with
the interleaved systematic data *a*(*x*_{0}) and the code
information of the second encoder (*y*_{2}), is then
fed into the second decoder; the decoding algorithm is the same as for
the first decoder.

After decoding, the output of the second decoder is processed in the same way and fed back to the first decoder, and the process continues. The number of iterations depends on the designer. Usually, the more iterations, the more accurate the data, but the longer decoding takes.

After the iterations of decoding, the decoding result is the sign of the soft output of the last decoder. Taking the results of the first decoder as an example, the output becomes:

| decoder output | 3 | -1 | 1 | -1 | 2 |
|---|---|---|---|---|---|
| result | 1 | 0 | 1 | 0 | 1 |
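The final hard decision can be sketched as taking the sign of each soft output:

```python
soft = [3, -1, 1, -1, 2]   # soft outputs of the last decoder

# Positive soft value -> bit 1, negative -> bit 0.
result = [1 if v > 0 else 0 for v in soft]
# result == [1, 0, 1, 0, 1], the original input bit stream u
```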

which is the same as the input bit stream *u*, i.e. the error has been
corrected.

Web page created and maintained by Kenny Chan

For problems or questions regarding this web, please contact [ProjectEmail].

Last updated: December 21, 1997.