
# Displacement Convexity in Spatially Coupled Scalar Recursions

Rafah El-Khatib, Nicolas Macris, Rüdiger Urbanke

*IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019*

Journal paper


Abstract

We introduce a technique for the analysis of general spatially coupled systems that are governed by scalar recursions. Such systems can be expressed in variational form in terms of a potential function. We show, under mild conditions, that the potential function is displacement convex and that the minimizers are given by the fixed points (FPs) of the recursions. Furthermore, we give the conditions on the system such that the minimizing FP is unique up to translation along the spatial direction. The condition matches that of Kudekar et al. [20] for the existence of spatial FPs. Displacement convexity applies to a wide range of spatially coupled recursions appearing in coding theory, compressive sensing, random constraint satisfaction problems, as well as statistical-mechanics models. We illustrate it with applications to low-density parity-check (LDPC) and generalized LDPC codes used for transmission over the binary erasure channel or general binary memoryless symmetric channels within the Gaussian reciprocal channel approximation, as well as compressive sensing.
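The kind of scalar recursion and potential function in question can be sketched numerically. The sketch below is not from the paper: the (3,6)-regular LDPC ensemble on the BEC and ε = 0.45 are illustrative choices. It uses the uncoupled density-evolution map x ← εf(x) and a generic scalar potential φ(x) = x²/2 − ε∫₀ˣ f(z) dz, whose stationary points are exactly the fixed points of the map; the paper's potential functional is the spatially coupled analogue of this construction.

```python
import numpy as np

# Uncoupled density-evolution recursion x <- eps * f(x) for the
# (3, 6)-regular LDPC ensemble on the BEC (eps = 0.45 is illustrative).
l, r, eps = 3, 6, 0.45

def f(x):
    return (1.0 - (1.0 - x) ** (r - 1)) ** (l - 1)

# Generic scalar potential phi(x) = x^2/2 - eps * int_0^x f(z) dz:
# phi'(x) = x - eps*f(x), so stationary points of phi are exactly the
# fixed points of the recursion.
xs = np.linspace(0.0, 1.0, 100001)
dphi = xs - eps * f(xs)

# Run the recursion; eps = 0.45 is above the BP threshold (~0.4294) of
# this ensemble, so it converges to a nonzero fixed point.
x = eps
for _ in range(2000):
    x = eps * f(x)

i = np.searchsorted(xs, x)
print(abs(x - eps * f(x)) < 1e-10)           # x is a fixed point of the map
print(bool(dphi[i - 2] < 0 < dphi[i + 2]))   # phi' changes sign there: a minimum of phi
```

The sign change of φ' at the iterate illustrates the variational statement in the abstract in the simplest (uncoupled, scalar) setting.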




Related concepts (11)

System

A system is a group of interacting or interrelated elements that act according to a set of rules to form a unified whole. A system, surrounded and influenced by its environment, is described by its boundaries, structure, and purpose.

Low-density parity-check code

In information theory, a low-density parity-check (LDPC) code is a linear error-correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse bipartite (Tanner) graph.
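The object behind this definition is the sparse parity-check matrix H: a word c is a codeword iff Hc = 0 (mod 2). A toy membership test, using the [7,4] Hamming code's check matrix as a stand-in (far too small and dense to be "low-density", but the check itself is the same):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code in systematic form [P^T | I].
# (Toy example: real LDPC matrices have thousands of columns and very few
# ones per row and column, hence "low-density".)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def is_codeword(c):
    """A word is a codeword iff every parity check is satisfied: H c = 0 mod 2."""
    return not np.any(H @ np.asarray(c) % 2)

print(is_codeword([1, 0, 0, 0, 1, 1, 0]))  # True: a valid codeword
print(is_codeword([1, 1, 0, 0, 1, 1, 0]))  # False: one bit flipped
```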

Binary erasure channel

In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly or receives a message that the bit was not received ("erased").
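A minimal simulation of this channel model (the erasure probability 0.3 and the block length are arbitrary choices):

```python
import random

def bec_transmit(bits, eps, rng=random.Random(42)):
    """Binary erasure channel: each bit is independently erased
    (replaced by None) with probability eps, else received correctly."""
    return [None if rng.random() < eps else b for b in bits]

sent = [0, 1, 1, 0, 1] * 2000
received = bec_transmit(sent, eps=0.3)

# Non-erased bits are always correct; about an eps-fraction is erased.
frac_erased = sum(b is None for b in received) / len(sent)
print(round(frac_erased, 2))  # close to eps; the BEC capacity is 1 - eps bits/use
```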

Related publications (11)

For the past 70 years or so, coding theorists have aimed at designing transmission schemes with efficient encoding and decoding algorithms that achieve the capacity of various noisy channels. It was not until the '90s that graph-based codes, such as low-density parity-check (LDPC) codes, and their associated low-complexity iterative decoding algorithms were discovered and studied in depth. Although these schemes are efficient, they are not, in general, capacity-achieving. More specifically, these codes perform well up to some algorithmic threshold on the channel parameter, which is lower than the optimal threshold. The gap between the algorithmic and optimal thresholds was finally closed by spatial coupling. In the context of coding, the belief propagation algorithm on spatially coupled codes yields capacity-achieving low-complexity transmission schemes. The reason behind the optimal performance of spatially coupled codes is the "seeding" of perfect information on the replicas at the boundaries of the coupling chain. This extra information makes decoding easier near the boundaries, and the effect then propagates into the coupling chain as the decoding algorithm iterates. Spatial coupling has also been applied to various other problems governed by low-complexity message-passing algorithms, such as random constraint satisfaction problems, compressive sensing, and models in statistical physics. Each system has an associated algorithmic threshold and an optimal threshold. As with coding, once the underlying graphs are spatially coupled, the algorithms for these systems exhibit optimal performance. In this thesis, we analyze the performance of iterative low-complexity message-passing algorithms on general spatially coupled systems, and we specialize our results to applications in coding theory. To do this, we express the evolution of the state of the system (along iterations of the algorithm) in variational form, in terms of the so-called potential functional, in the continuum-limit approximation. This thesis consists of two parts. In the first part, we consider the dynamic phase of the message-passing algorithm, in which iterations of the algorithm modify the state of the spatially coupled system. Assuming that the boundaries of the coupled chain are appropriately "seeded", we find a closed-form analytical formula for the velocity with which the extra information propagates into the chain. We apply this result to coupled irregular LDPC code ensembles with transmission over general BMS channels and to coupled general scalar systems. We perform numerical simulations for several applications and show that our formula matches the empirically observed velocity. This confirms that the continuum limit is an approximation well suited to the derivation of the formula. In the second part of this thesis, we consider the static phase of the message-passing algorithm, when it can no longer modify the state of the system. We introduce a novel proof technique that employs displacement convexity, a mathematical tool from optimal transport, to prove that the potential functional is strictly displacement convex under an alternative structure on the space of probability measures. We hence establish the uniqueness of the state to which the spatially coupled system converges, and we characterize it. We apply this result to the (l,r)-regular Gallager ensemble with transmission over the BEC and to coupled general scalar systems.

This thesis addresses the topic of Low-Density Parity-Check (LDPC) code analysis, both asymptotically and for finite block lengths. Since it is in general difficult to analyze individual code instances, ensemble averages are studied, following Gallager's original idea and the results of Luby et al. Often, one can relate the insights gained by studying ensemble averages to statements about individual codes by proving that most elements in the ensemble behave "close" to the average. One important such average is the average weight distribution; another is the average pseudo-codeword distribution, where pseudo-codewords play the same role under iterative decoding as codewords do for maximum likelihood decoding.
One of the main contributions of this thesis is the calculation of such averages for fully irregular LDPC code ensembles. Much of classical coding theory is aimed at the construction of codes with large minimum distance. This is because, away from capacity, accurate bounds on the performance of a code can be given in terms of its minimum distance (or, more generally, its weight distribution). It is therefore of interest to investigate what role the minimum distance plays for iteratively decoded LDPC code ensembles. In particular, sequences of capacity-achieving LDPC code ensembles of increasing length, with transmission over the Binary Erasure Channel (BEC), are investigated. It is shown that, under certain technical conditions, the minimum distance of such ensembles grows sub-linearly in the block length, a result that is somewhat surprising from a classical point of view. Specific attention is also given to the design of LDPC code ensembles, where an inherent trade-off is observed between achieving large minimum distance (relevant for the so-called error-floor regime) and achieving a large threshold (which, in the limit of long block lengths, determines the worst channel over which transmission can be accomplished reliably). Most results on iterative coding systems to date address their asymptotic performance, i.e., their performance when the block length tends to infinity. For small and moderate block lengths, the behavior of a code can deviate significantly from its asymptotic limit. It is therefore of high practical value to be able to analyze the finite-length performance of LDPC code ensembles. In this thesis, an exact analysis of this kind is presented for iteratively decoded LDPC code ensembles over the BEC. In particular, expressions for the exact average bit and block erasure probabilities are computed by solving a set of recursions.
Such an analysis is the starting point for a finite-length optimization, a topic slated for future work. The methods used in this thesis include a combinatorial approach (familiar to the coding community) as well as powerful techniques developed in the statistical physics community, for example the replica method and the mean-field approximation. Although the statistical physics techniques are ideally suited to the analysis of iterative coding systems, they are to date accessible only to a relatively small community. Therefore, an overview of these techniques is first presented in a language familiar to the coding theory community. Next, in order to highlight the differences and common points between the combinatorial and the statistical physics approaches, both techniques are applied to the weight distribution problem. It turns out that for regular ensembles both methods yield the same result, while for irregular ensembles they differ in general.
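For the BEC, the threshold of a regular ensemble under iterative decoding can be computed directly from the scalar density-evolution recursion. A sketch for the (3,6)-regular ensemble (the iteration cap and tolerance are ad-hoc choices, not from the thesis):

```python
def bp_threshold(l=3, r=6, iters=5000, tol=1e-11):
    """Bisect for the BP threshold of the (l, r)-regular ensemble on the BEC:
    the largest eps for which x <- eps * (1 - (1 - x)**(r-1))**(l-1),
    started at x = eps, converges to 0."""
    lo, hi = 0.0, 1.0
    for _ in range(50):
        eps = (lo + hi) / 2
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (r - 1)) ** (l - 1)
            if x < tol:
                break
        if x < tol:
            lo = eps   # decoding succeeds: threshold lies above eps
        else:
            hi = eps   # recursion is stuck at a nonzero fixed point
    return (lo + hi) / 2

print(round(bp_threshold(), 4))  # ~ 0.4294 for the (3, 6)-regular ensemble
```

The value ≈ 0.4294 is the algorithmic (BP) threshold; the gap to the optimal (MAP) threshold ≈ 0.4881 is what spatial coupling closes.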

This work deals with coding systems based on sparse graph codes. The key issue we address is the relationship between iterative (in particular belief propagation) and maximum a posteriori decoding. We show that between the two there is a fundamental connection, which is reminiscent of the Maxwell construction in thermodynamics. The main objects we consider are EXIT-like functions. EXIT functions were originally introduced as handy tools for the design of iterative coding systems. It gradually became clear that EXIT functions possess several fundamental properties. Many of these properties, however, apply only to the erasure case. This motivates us to introduce GEXIT functions that coincide with EXIT functions over the erasure channel. In many aspects, GEXIT functions over general memoryless output-symmetric channels play the same role as EXIT functions do over the erasure channel. In particular, GEXIT functions are characterized by the general area theorem. As a first consequence, we demonstrate that in order for the rate of an ensemble of codes to approach the capacity under belief propagation decoding, the GEXIT functions of the component codes have to be matched perfectly. This statement was previously known as the matching condition for the erasure case. We then use these GEXIT functions to show that in the limit of large blocklengths a fundamental connection appears between belief propagation and maximum a posteriori decoding. A decoding algorithm, which we call Maxwell decoder, provides an operational interpretation of this relationship for the erasure case. Both the algorithm and the analysis of the decoder are the translation of the Maxwell construction from statistical mechanics to the context of probabilistic decoding. We take the first steps to extend this construction to general memoryless output-symmetric channels. More exactly, a general upper bound on the maximum a posteriori threshold for sparse graph codes is given. 
It is conjectured that the fundamental connection between belief propagation and maximum a posteriori decoding carries over to the general case.
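Over the erasure channel, the matching condition mentioned above takes a fully explicit form: BP decoding at erasure probability ε succeeds iff ε λ(1 − ρ(1 − x)) < x for all x in (0, 1], where λ and ρ are the edge-perspective degree polynomials. A sketch checking this for the (3,6)-regular ensemble (the grid resolution is an arbitrary choice):

```python
import numpy as np

# Edge-perspective degree polynomials of the (3, 6)-regular ensemble:
# every variable node has degree 3 and every check node degree 6.
def lam(z):
    return z ** 2

def rho(z):
    return z ** 5

def bp_succeeds(eps, n=200001):
    """BP decoding over the BEC succeeds iff the EXIT-style curves do not
    cross: eps * lam(1 - rho(1 - x)) < x for all x in (0, 1]."""
    x = np.linspace(1e-9, 1.0, n)
    return bool(np.all(eps * lam(1.0 - rho(1.0 - x)) < x))

print(bp_succeeds(0.42), bp_succeeds(0.44))  # True False (BP threshold ~ 0.4294)
```

In this erasure-channel form, "perfect matching" of the component-code curves at capacity is exactly the condition tightening to equality, which is the picture the GEXIT functions generalize.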