The paper "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate", which introduced the BCJR decoding algorithm, is what I am currently studying. The following are my notes on this paper; some passages are quoted verbatim from the original.
Abstract
The general problem of estimating the a posteriori probabilities (APP) of the states and transitions of a Markov source observed through a discrete memoryless channel (DMC) is considered. The decoding of linear block and convolutional codes to minimize symbol error probability is shown to be a special case of this problem.
Introduction
Viterbi algorithm → maximum-likelihood decoding method → minimizes the probability of word error for convolutional codes, not the symbol (or bit) error probability (a toy comparison of the two error rates is sketched below)
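As a small illustration of this distinction (not from the paper), the following Python sketch counts word errors and symbol (bit) errors for hypothetical transmitted and decoded codewords; the two rates can differ, which is why minimizing one does not automatically minimize the other.

```python
# Minimal sketch (hypothetical data, not from the paper): word error rate vs.
# symbol/bit error rate for a list of equal-length codewords.

def word_and_bit_error_rates(sent, decoded):
    """sent, decoded: lists of equal-length bit tuples (codewords)."""
    word_errors = sum(1 for s, d in zip(sent, decoded) if s != d)
    bit_errors = sum(sum(sb != db for sb, db in zip(s, d))
                     for s, d in zip(sent, decoded))
    total_bits = sum(len(s) for s in sent)
    return word_errors / len(sent), bit_errors / total_bits

# Example: one decoded word has a single bit error, the other has three.
sent    = [(0, 0, 0, 0), (1, 1, 1, 1)]
decoded = [(0, 0, 0, 1), (0, 0, 0, 1)]
print(word_and_bit_error_rates(sent, decoded))  # (1.0, 0.5)
```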
the authors: L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv → BCJR algorithm
The General Problem
source → discrete-time finite-state Markov process → $M$ distinct states → indexed by the integer $m, m = 0,1,\cdots,M - 1$
time $t$: source state $S_t$ → output $X_t$
time $t$ to $t'$: source sequence $\boldsymbol{S}_t^{t'} = S_t,S_{t+1},\cdots,S_{t'}$ → output sequence $\boldsymbol{X}_t^{t'} = X_t,X_{t+1},\cdots,X_{t'}$
state transitions → transition probabilities
\[p_t(m|m') = \Pr \{S_t = m \mid S_{t-1} = m'\}\]
the outputs are governed by the probabilities ($X_t$ takes values in some finite discrete alphabet)
\[q_t(X|m',m) = \Pr\{X_t = X \mid S_{t-1} = m'; S_t = m\}\]
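To make the setup concrete, here is a minimal simulation sketch (not from the paper) of such a source: a hypothetical 2-state, time-invariant Markov source with a binary output alphabet, where the arrays `p` and `q` play the roles of $p_t(m|m')$ and $q_t(X|m',m)$.

```python
# Minimal sketch, assuming a toy 2-state, time-invariant source with a binary
# output alphabet; the names mirror the paper's notation but the numbers are
# made up for illustration.
import random

M = 2                  # number of states m = 0, 1
ALPHABET = [0, 1]      # finite discrete output alphabet for X_t

# p[m_prev][m] = Pr{S_t = m | S_{t-1} = m_prev}
p = [[0.9, 0.1],
     [0.2, 0.8]]

# q[m_prev][m][x] = Pr{X_t = x | S_{t-1} = m_prev, S_t = m}
q = [[[0.95, 0.05], [0.5, 0.5]],
     [[0.5, 0.5], [0.05, 0.95]]]

def simulate(T, s0=0):
    """Draw a state sequence S_1..S_T and output sequence X_1..X_T."""
    states, outputs = [], []
    s_prev = s0
    for _ in range(T):
        s = random.choices(range(M), weights=p[s_prev])[0]   # next state
        x = random.choices(ALPHABET, weights=q[s_prev][s])[0] # output symbol
        states.append(s)
        outputs.append(x)
        s_prev = s
    return states, outputs

print(simulate(10))
```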
Original post: The Note of the Paper "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate" (1), http://www.cnblogs.com/xiyi2013/p/3766540.html