# On the Construction of Polar Codes

Seyed Hamed Hassani, Ramtin Pedarsani, Emre Telatar

*IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA*, 2011

Conference paper

Abstract

We consider the problem of efficiently constructing polar codes over binary memoryless symmetric (BMS) channels. The complexity of designing polar codes via an exact evaluation of the polarized channels to find which ones are "good" appears to be exponential in the block length. In [3], Tal and Vardy show that if the evaluation is instead performed approximately, the construction has only linear complexity. In this paper, we follow this approach and present a framework in which the algorithms of [3], and new related algorithms, can be analyzed for complexity and accuracy. We provide numerical and analytical results on the efficiency of such algorithms; in particular, we show that one can find all the "good" channels (except a vanishing fraction) with almost linear complexity in the block length (up to a polylogarithmic factor).
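For the binary erasure channel (BEC), the one case where the construction is exact and cheap, the polarized channels can be tracked in closed form: one polarization step maps a channel with erasure probability z to the pair 2z − z² (the degraded channel) and z² (the upgraded one). A minimal sketch of this recursion — illustrative code with our own function names and an arbitrary threshold, not the paper's algorithms:

```python
def polarized_erasures(eps, n):
    """Erasure probabilities of the 2**n channels synthesized from BEC(eps).

    One polarization step maps erasure probability z to the pair
    (2*z - z*z, z*z). For the BEC this recursion is exact, so computing
    all probabilities costs O(N) for block length N = 2**n.
    """
    zs = [eps]
    for _ in range(n):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z, lambda z: z*z)]
    return zs

def good_channels(eps, n, threshold=1e-3):
    """Indices of the synthesized channels deemed "good", i.e. whose
    erasure probability falls below the (arbitrarily chosen) threshold."""
    return [i for i, z in enumerate(polarized_erasures(eps, n)) if z < threshold]
```

As n grows, the fraction of good channels approaches the BEC capacity 1 − eps. For general BMS channels the output alphabet grows exponentially under the analogous recursion, which is what motivates the approximate evaluations analyzed in the paper.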

Related concepts (9)

Polar code (coding theory)

In information theory, a polar code is a linear block error-correcting code. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical …

Time complexity

In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the …

Algorithm

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. …

Related publications (9)

The two central topics of information theory are the compression and the transmission of data. Shannon, in his seminal work, formalized both these problems and determined their fundamental limits. Since then, the main goal of coding theory has been to find practical schemes that approach these limits. Polar codes, recently invented by Arikan, are the first "practical" codes that are known to achieve the capacity for a large class of channels. Their code construction is based on a phenomenon called "channel polarization". The encoding as well as the decoding operation of polar codes can be implemented with O(N log N) complexity, where N is the block length of the code.

We show that polar codes are suitable not only for channel coding but also achieve optimal performance for several other important problems in information theory. The first problem we consider is lossy source compression. We construct polar codes that asymptotically approach Shannon's rate-distortion bound for a large class of sources. We achieve this performance by designing polar codes according to the "test channel", which naturally appears in Shannon's formulation of the rate-distortion function. The encoding operation combines the successive cancellation algorithm of Arikan with a crucial new ingredient called "randomized rounding". As for channel coding, both the encoding and the decoding operation can be implemented with O(N log N) complexity. This is the first known "practical" scheme that approaches the optimal rate-distortion trade-off.

We also construct polar codes that achieve the optimal performance for the Wyner-Ziv and the Gelfand-Pinsker problems. Both these problems can be tackled using "nested" codes, and polar codes are naturally suited for this purpose. We further show that polar codes achieve the capacity of asymmetric channels, multi-terminal scenarios like multiple access channels, and degraded broadcast channels. For each of these problems, our constructions are the first known "practical" schemes that approach the optimal performance.

The original polar codes of Arikan achieve a block error probability decaying exponentially in the square root of the block length. For source coding, the gap between the achieved distortion and the limiting distortion also vanishes exponentially in the square root of the block length. We explore other polar-like code constructions with better rates of decay. With this generalization, we show that close to exponential decays can be obtained for both channel and source coding. The new constructions mimic the recursive construction of Arikan and, hence, inherit the same encoding and decoding complexity. We also propose algorithms based on message-passing to improve the finite-length performance of polar codes.

In the final two chapters of this thesis we address two important problems in graphical models related to communications. The first problem is in the area of low-density parity-check (LDPC) codes. For practical lengths, LDPC codes using message-passing decoding are still the codes to beat. The current analysis, using density evolution, evaluates the performance of these algorithms on a tree. The tree assumption corresponds to using an infinite-length code, but in practice the codes are of finite length. We analyze the message-passing algorithms for this scenario. The absence of the tree assumption introduces correlations between the various messages. We show that despite this correlation, the prediction of the tree analysis is accurate.

The second problem we consider is related to code division multiple access (CDMA) communication using random spreading. The current analysis mainly focuses on the information-theoretic limits, i.e., using a Gaussian input distribution. However, in practice we use modulation schemes like binary phase-shift keying (BPSK), which is far from Gaussian. The effects of the modulation scheme cannot be analyzed using traditional tools, which are based on the spectrum of large random matrices. We follow a new approach using tools developed for random spin systems in statistical mechanics. We prove a tight upper bound on the capacity of the system when the user input is BPSK. We also show that the capacity depends only on the power of the spreading sequences and is independent of their exact distribution.
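The O(N log N) encoding complexity mentioned above comes from the recursive structure of the Arikan kernel F = [[1, 0], [1, 1]]: the transform of a length-N block reduces to two transforms of length N/2 after N/2 XORs. A minimal sketch of the encoder, in natural (non-bit-reversed) order — our own toy code, not the thesis's implementation:

```python
def polar_transform(u):
    """Compute x = u * F^{(kron n)} over GF(2), with kernel F = [[1,0],[1,1]].

    Writing u = (u_a, u_b) as two halves, the block structure of the
    Kronecker product gives x = ((u_a xor u_b) G, u_b G), where G is the
    transform of half the size. Each level costs N/2 XORs -> O(N log N).
    """
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    a = [u[i] ^ u[half + i] for i in range(half)]
    return polar_transform(a) + polar_transform(u[half:])

def polar_encode(info_bits, good_indices, n_block):
    """Place information bits on the reliable ("good") positions, freeze
    the remaining positions to zero, and apply the transform."""
    u = [0] * n_block
    for bit, i in zip(info_bits, sorted(good_indices)):
        u[i] = bit
    return polar_transform(u)
```

Since F squared is the identity over GF(2), the transform is its own inverse, which the test below exploits as a sanity check.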

The invention of Fountain codes is a major advance in the field of error-correcting codes. The goal of this work is to study and develop algorithms for source and channel coding using a family of Fountain codes known as Raptor codes. From an asymptotic point of view, the best currently known sum-product decoding algorithm for non-binary alphabets has a high complexity that limits its use in practice. For binary channels, sum-product decoding algorithms have been extensively studied and are known to perform well.

In the first part of this work, we develop a decoding algorithm for binary codes on non-binary channels based on a combination of sum-product and maximum-likelihood decoding. We apply this algorithm to Raptor codes on both symmetric and non-symmetric channels. Our algorithm shows the best performance in terms of complexity and error rate per symbol for blocks of finite length on symmetric channels.

In the second part, we examine the performance of Raptor codes under sum-product decoding when the transmission takes place on piecewise stationary memoryless channels and on channels with memory corrupted by noise. We develop algorithms for joint estimation and detection, employing expectation maximization to estimate the noise and the sum-product algorithm to correct errors. We also develop a hard-decision algorithm for Raptor codes on piecewise stationary memoryless channels. Finally, we generalize our joint LT estimation-decoding algorithms to Markov-modulated channels.

In the third part of this work, we develop compression algorithms using Raptor codes. More specifically, we introduce a lossless text compression algorithm that obtains results competitive with existing classical approaches. Moreover, we propose distributed source coding algorithms based on the paradigm proposed by Slepian and Wolf.
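As a rough illustration of the fountain-coding principle behind LT and Raptor codes (not the decoders developed in this work): each coded symbol is the XOR of a random subset of source symbols, and on an erasure channel decoding can proceed by iteratively "peeling" equations with a single unknown, a special case of sum-product decoding. The uniform degree distribution below is a simplification; real LT codes draw degrees from a robust soliton distribution, and Raptor codes additionally apply a fixed-rate precode:

```python
import random

def lt_encode_symbol(source, rng):
    """One LT-coded symbol: pick a degree d, choose d distinct source
    symbols uniformly at random, and XOR them together. Returns the
    chosen index set (the symbol's "neighbors") and the XOR value."""
    k = len(source)
    d = rng.randint(1, k)          # simplification: uniform degree
    idx = rng.sample(range(k), d)
    val = 0
    for i in idx:
        val ^= source[i]
    return set(idx), val

def peel_decode(k, coded):
    """Peeling decoder over an erasure channel: repeatedly find an
    equation with exactly one unknown source symbol and solve it."""
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for idx, val in coded:
            unknown = [i for i in idx if i not in known]
            if len(unknown) == 1:
                v = val
                for i in idx:
                    if i in known:
                        v ^= known[i]
                known[unknown[0]] = v
                changed = True
    return known   # partial if decoding stalls
```

With a well-chosen degree distribution, slightly more than k received symbols suffice for the peeling process to complete with high probability.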

Sofien Bouaziz, Bailin Deng, Mario Moacir Deuss, Alexandre Kaspar, Mark Pauly, Yuliy Schwartzburg

In architectural design, surface shapes are commonly subject to geometric constraints imposed by material, fabrication or assembly. Rationalization algorithms can convert a freeform design into a form feasible for production, but often require design modifications that might not comply with the design intent. In addition, they only offer limited support for exploring alternative feasible shapes, due to the high complexity of the optimization algorithm. We address these shortcomings and present a computational framework for interactive shape exploration of discrete geometric structures in the context of freeform architectural design. Our method is formulated as a mesh optimization subject to shape constraints. Our formulation can enforce soft constraints and hard constraints at the same time, and handles equality constraints and inequality constraints in a unified way. We propose a novel numerical solver that splits the optimization into a sequence of simple subproblems that can be solved efficiently and accurately. Based on this algorithm, we develop a system that allows the user to explore designs satisfying geometric constraints. Our system offers full control over the exploration process, by providing direct access to the specification of the design space. At the same time, the complexity of the underlying optimization is hidden from the user, who communicates with the system through intuitive interfaces.
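The local/global flavor of such a splitting solver can be sketched on a toy problem: each constraint independently projects the current positions onto its own feasible set (a simple local subproblem), and a least-squares blend (here reduced to plain averaging, with all weights equal) combines the projections into updated positions. This is our own simplified illustration with soft edge-length constraints on a 2D polyline, not the paper's solver:

```python
def project_edge_length(p, q, target):
    """Local step: closest pair of points with |p - q| == target,
    moving both endpoints symmetrically along the edge direction."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-12   # guard degenerate edges
    corr = 0.5 * (dist - target) / dist
    return ((p[0] + corr * dx, p[1] + corr * dy),
            (q[0] - corr * dx, q[1] - corr * dy))

def solve(points, edges, targets, iters=100):
    """Alternate local projections with a global averaging step."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        acc = [[0.0, 0.0] for _ in pts]   # accumulated projected positions
        cnt = [0] * len(pts)              # number of constraints per vertex
        for (i, j), t in zip(edges, targets):
            pi, pj = project_edge_length(pts[i], pts[j], t)
            for k, pk in ((i, pi), (j, pj)):
                acc[k][0] += pk[0]
                acc[k][1] += pk[1]
                cnt[k] += 1
        for k in range(len(pts)):
            if cnt[k]:   # unconstrained vertices stay put
                pts[k] = [acc[k][0] / cnt[k], acc[k][1] / cnt[k]]
    return pts
```

Each subproblem is trivial to solve exactly, and the averaging step is a special case of the global least-squares solve; hard constraints and a proper weighted global step are what a full solver would add on top of this pattern.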