Rethinking the Minimum Distance: Channels With Varying Sampling Rate and Iterative Decoding of LDPC Codes
Lara Dolecek
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2007-168
December 19, 2007
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-168.pdf
In this dissertation we develop novel coding-theoretic approaches for two problems relevant to modern communication systems. In the first part we address the issue of reliable communication under a varying sampling rate, while in the second part we focus on an analytic understanding of the performance of low-density parity-check (LDPC) codes in the low bit-error-rate (BER) regime. The underlying theme in both of these somewhat non-standard yet relevant problems is that the notion of a fundamental performance metric, typically taken to be the minimum distance of an additive error-correcting code, needs to be rethought when the standard assumptions on the communication channel no longer hold.
We first investigate the problem of overcoming inadequate timing recovery from a coding-theoretic perspective. We show how to systematically thin first-order Reed-Muller codes, taken as a representative example of highly structured additive error-correcting codes with good minimum distance properties, such that the resulting thinned code is immune to both additive errors and a synchronization error. The method relies heavily on novel run-length properties of these codes, also developed here.
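As a rough illustration of the thinning idea (and not the construction developed in the dissertation), the following Python sketch enumerates the first-order Reed-Muller code RM(1, m) and keeps only codewords whose longest run of identical symbols stays below a bound; the run-length filter and the bound are hypothetical choices made here only to show the shape of such a selection.

```python
import itertools

def rm1_codewords(m):
    """Enumerate RM(1, m): evaluations of affine Boolean functions
    a0 + a1*x1 + ... + am*xm (mod 2) over all points of {0,1}^m."""
    points = list(itertools.product([0, 1], repeat=m))
    for coeffs in itertools.product([0, 1], repeat=m + 1):
        a0, a = coeffs[0], coeffs[1:]
        yield tuple((a0 + sum(ai * xi for ai, xi in zip(a, p))) % 2
                    for p in points)

def max_run(word):
    """Length of the longest run of identical symbols in the word."""
    best, run = 1, 1
    for prev, cur in zip(word, word[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def thin(m, run_bound):
    """Keep only codewords with short runs (hypothetical criterion):
    within a long constant run, a repeated or dropped sample cannot be
    located, which is what makes synchronization errors problematic."""
    return [c for c in rm1_codewords(m) if max_run(c) <= run_bound]

full = list(rm1_codewords(4))   # 2^(m+1) = 32 codewords of length 16
thinned = thin(4, 4)            # illustrative run-length bound
print(len(full), len(thinned))
```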
We also propose and study number-theoretic constructions of sets of strings immune to multiple repetitions, and show that these sets have good cardinalities. These constructions are then used to develop a prefixing-based method that improves the immunity of an arbitrary code to repetition errors; the judiciously chosen prefix carries asymptotically negligible redundancy. We also provide a low-complexity message-passing decoding algorithm that can handle both repetitions and additive errors.
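For a flavor of such number-theoretic constructions, the sketch below builds the classical Varshamov-Tenengolts-style set of binary strings satisfying a weighted checksum, which protects against a single repetition (a duplicated bit is a special case of an insertion); this is a well-known baseline in the same spirit, not the multi-repetition construction of the dissertation. The brute-force verification is only an illustrative check, not the decoding algorithm.

```python
from itertools import product
from collections import defaultdict

def vt_set(n, a=0):
    """Binary strings x of length n with sum(i * x_i) = a (mod n+1),
    positions 1-indexed: the Varshamov-Tenengolts construction."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

def single_repetitions(x):
    """All strings obtained from x by repeating exactly one bit."""
    return {x[:i + 1] + x[i:] for i in range(len(x))}

def immune_to_one_repetition(code):
    """True iff every corrupted string traces back to a unique codeword."""
    origins = defaultdict(set)
    for c in code:
        for y in single_repetitions(c):
            origins[y].add(c)
    return all(len(srcs) == 1 for srcs in origins.values())

code = vt_set(8)
print(len(code), immune_to_one_repetition(code))  # code size, then True
```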
In the second part, we study the performance of iteratively decoded LDPC codes in the low BER regime. Owing to the limited analytical tools previously available for characterizing and guaranteeing their performance in this regime, the deployment of these powerful codes has yet to meet their original promise.
In order to gain a better understanding of the low BER performance of LDPC codes, we introduce a combinatorial object that we call an absorbing set. An absorbing set is a stable point of the bit-flipping algorithm, which can itself be viewed as an asymptotic 1-bit approximation to many message-passing decoding algorithms. Since absorbing sets act as fixed points of these message-passing algorithms, the decoder can get stuck in an absorbing set that is not a codeword. In particular, if there are absorbing sets smaller than the minimum distance of the code, the decoder is likely to converge to these objects. As a result, under iterative decoding the low BER performance is dominated by the number and size of the dominant absorbing sets, rather than by the number of minimum-distance codewords and the minimum distance itself, which is the standard performance metric, and key code property, under maximum likelihood decoding.
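To make the definition concrete, the following sketch (our illustration, using the standard (a, b) parameterization: a variable nodes, b unsatisfied checks) tests whether a candidate set D of variable nodes is an absorbing set for a binary parity-check matrix H: with exactly the bits in D set to 1, a check is unsatisfied iff it touches an odd number of nodes in D, and D absorbs the bit-flipping decoder when every node in D sees strictly more satisfied than unsatisfied neighboring checks.

```python
import numpy as np

def absorbing_set_profile(H, D):
    """Return (a, b) if the variable-node set D (column indices of H)
    is an absorbing set, else None."""
    D = sorted(D)
    odd_checks = (H[:, D].sum(axis=1) % 2).astype(bool)  # unsatisfied checks
    for v in D:
        nbrs = H[:, v].astype(bool)                      # checks touching v
        unsat = np.count_nonzero(nbrs & odd_checks)
        sat = np.count_nonzero(nbrs & ~odd_checks)
        if unsat >= sat:        # bit-flipping would flip v: not absorbing
            return None
    return len(D), int(np.count_nonzero(odd_checks))

# Toy example: the support of any nonzero codeword satisfies all checks,
# so it forms a (weight, 0) absorbing set.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(absorbing_set_profile(H, [0, 1, 2, 3]))  # (4, 0)
```

Non-codeword absorbing sets (b > 0) are precisely the troublesome fixed points: all their internal majorities favor the erroneous assignment, so the decoder cannot escape.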
As a case study, we analyze the minimal absorbing sets of high-rate array-based LDPC codes and provide a comprehensive analytic description of these sets. In particular, we demonstrate the existence of absorbing sets whose weight is strictly smaller than the minimum distance of the code. Experiments confirm that these minimal absorbing sets, rather than minimum-distance codewords, dominate the low BER performance, further underscoring the importance of our theoretical contributions.
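For reference, array-based LDPC codes have a standard construction, sketched below under the usual assumptions (p prime, column weight gamma <= p): the parity-check matrix is a gamma x p grid of p x p circulant blocks, with block (i, j) equal to the cyclic shift matrix raised to the power i*j. The parameters in the demo are arbitrary small values chosen for illustration.

```python
import numpy as np

def array_ldpc_H(p, gamma):
    """Parity-check matrix of the (p, gamma) array-based LDPC code:
    a gamma x p grid of p x p blocks, block (i, j) = sigma^(i*j),
    where sigma is the p x p cyclic shift of the identity."""
    sigma = np.roll(np.eye(p, dtype=int), 1, axis=0)
    blocks = [[np.linalg.matrix_power(sigma, (i * j) % p) for j in range(p)]
              for i in range(gamma)]
    return np.block(blocks)

H = array_ldpc_H(5, 3)   # 15 x 25, column weight 3, row weight 5
print(H.shape, H.sum(axis=0).min(), H.sum(axis=0).max())
```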
Advisor: Venkat Anantharam
BibTeX citation:
@phdthesis{Dolecek:EECS-2007-168,
  Author = {Dolecek, Lara},
  Title = {Rethinking the Minimum Distance: Channels With Varying Sampling Rate and Iterative Decoding of LDPC Codes},
  School = {EECS Department, University of California, Berkeley},
  Year = {2007},
  Month = {Dec},
  Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-168.html},
  Number = {UCB/EECS-2007-168}
}
EndNote citation:
%0 Thesis %A Dolecek, Lara %T Rethinking the Minimum Distance: Channels With Varying Sampling Rate and Iterative Decoding of LDPC Codes %I EECS Department, University of California, Berkeley %D 2007 %8 December 19 %@ UCB/EECS-2007-168 %U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-168.html %F Dolecek:EECS-2007-168