
# Coding for Communications and Secrecy

Abstract

Shannon, in his landmark 1948 paper, developed a framework for characterizing the fundamental limits of information transmission. Among other results, he showed that reliable communication over a channel is possible at any rate below its capacity. In 2008, Arikan discovered polar codes, the only class of explicitly constructed low-complexity codes that achieve the capacity of any binary-input memoryless symmetric-output channel. Arikan's polar transform turns independent copies of a noisy channel into a collection of synthetic almost-noiseless and almost-useless channels. Polar codes are realized by sending data bits over the almost-noiseless channels and recovering them at the receiver with a low-complexity successive-cancellation (SC) decoder. In the first part of this thesis, we study polar codes for communications. When the underlying channel is an erasure channel, we show that almost all correlation coefficients between the erasure events of the synthetic channels decay rapidly. Hence, the sum of the erasure probabilities of the information-carrying channels is a tight estimate of the block-error probability of polar codes used for communication over the erasure channel. We then study SC list (SCL) decoding, a method for boosting the performance of short polar codes, and prove that the method has a numerically stable formulation in log-likelihood ratios. In hardware, this formulation increases the decoding throughput by 53% and reduces the decoder's size by about 33%. We present empirical results on the trade-off between the length of the CRC and the performance gains in a CRC-aided version of the list decoder. We also make numerical comparisons of the performance of long polar codes under SC decoding with that of short polar codes under SCL decoding. Shannon's framework also quantifies the secrecy of communications. Wyner, in 1975, proposed a model for communications in the presence of an eavesdropper.
He showed that, at rates below the secrecy capacity, there exist reliable communication schemes in which the amount of information leaked to the eavesdropper decays exponentially in the block length of the code. In the second part of this thesis, we study the rate of this decay. We derive the exact exponential decay rate of the ensemble average of the information leaked to the eavesdropper in Wyner's model when a randomly constructed code is used for secure communications. For codes sampled from the ensemble of i.i.d. random codes, we show that the previously known lower bound on the exponent is exact. Our ensemble-optimal exponent for random constant-composition codes improves on the lower bound extant in the literature. Finally, we show that random linear codes have the same secrecy power as i.i.d. random codes. The key to securing messages against an eavesdropper is to exploit the randomness of her communication channel so that, whatever the sent message, the statistics of her observation resemble those of a pure noise process. We study the effect of feedback on this approximation and show that feedback does not reduce the minimum entropy rate required to approximate a given process. However, we give examples where, in the presence of feedback, variable-length schemes achieve much larger exponents in this approximation than any system without feedback. By upper-bounding the best exponent that block codes attain, we conclude that variable-length coding is necessary for achieving the improved exponents.
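The polarization phenomenon behind the abstract's erasure-channel results has a particularly clean form: under one step of Arikan's transform, a BEC with erasure probability z splits into two synthetic channels with erasure probabilities 2z − z² and z². The sketch below (function names are illustrative, not from the thesis) iterates this recursion and sums the erasure probabilities of the best channels — the block-error estimate the abstract refers to.

```python
def polar_bec_erasure_probs(eps, n_levels):
    """Erasure probabilities of the 2**n_levels synthetic channels
    obtained by polarizing a BEC(eps)."""
    probs = [eps]
    for _ in range(n_levels):
        nxt = []
        for z in probs:
            nxt.append(2 * z - z * z)  # "minus" (degraded) channel
            nxt.append(z * z)          # "plus" (upgraded) channel
        probs = nxt
    return probs

def union_bound_block_error(eps, n_levels, rate):
    """Sum the erasure probabilities of the most reliable synthetic
    channels, i.e., those that would carry the information bits."""
    probs = sorted(polar_bec_erasure_probs(eps, n_levels))
    k = int(rate * 2 ** n_levels)
    return sum(probs[:k])
```

As a sanity check, the transform preserves total capacity, so the erasure probabilities always sum to N·eps; polarizing further drives most of them toward 0 or 1.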


Shannon, in his seminal work \cite{paper:shannon}, formalized the problems of digital communication and storage of information and quantified the fundamental limits on compression and transmission rates. He coined the term \textit{channel capacity} to give operational meaning to the ultimate limit on information transfer between two entities, a transmitter and a receiver. Using a random coding argument, he showed that reliable communication over any noisy channel is achievable as long as the rate is below the channel capacity. More precisely, he argued that, with a fixed-composition random codebook at the sender and a maximum-likelihood decoder at the receiver, one can communicate with an error probability approaching zero. Shannon's result was a promise on the amount of data one can transfer over a medium, and fulfilling that promise has spurred scientific interest since 1948. Significant effort has been spent on finding practical codes that get close to the Shannon limits, and for some classes of channels capacity-achieving codes have since been found. Low-density parity-check (LDPC) codes are one such capacity-achieving family for the class of binary erasure channels. More recently, Arikan proposed a family of codes, known as polar codes, which are capacity achieving for binary-input memoryless channels. Slowly but steadily, the longstanding problem of achieving the Shannon limit is thus getting settled. To realize a practical system that achieves the Shannon limits, two ingredients are necessary: an efficient capacity-achieving code sequence and an implementable decoding rule. Both require knowledge of the probabilistic law of the channel through which communication takes place: the transmitter needs to know the channel capacity, whereas the receiver needs perfect knowledge of the channel to build the optimum decoder.
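The channel-capacity notion invoked above has simple closed forms for the two binary channels this document keeps returning to. A minimal sketch (standard textbook formulas, not code from the report):

```python
import math

def binary_entropy(p):
    """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover p:
    C = 1 - h(p) bits per channel use."""
    return 1.0 - binary_entropy(p)

def bec_capacity(eps):
    """Capacity of the binary erasure channel with erasure probability
    eps: C = 1 - eps bits per channel use."""
    return 1.0 - eps
```

For example, a BSC with crossover probability 0.5 is useless (capacity 0), while a noiseless channel (p = 0) carries one bit per use.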
This raises the question of whether a decoder can be designed, and reliable communication achieved, without knowledge of the underlying channel; that question is the subject of this report. Consider a situation where a communication system is to be designed without explicit knowledge of the channel: neither the transmitter nor the receiver knows the exact channel law, but the list of possible channels is made available upfront to both entities. Can we devise a reliable communication scheme that works irrespective of which channel is actually in effect, with neither the transmitter nor the receiver being aware of it? This is the central theme of what is known\footnote{Also referred to as \textit{coding for a class of channels}, as discussed in \cite{paper:blackwell}, \cite{book:Wolfowitz}.} as \textit{coding for compound channels} \cite{book:Csiszar}. Designing a reliable communication system for compound channels essentially involves building a codebook at the transmitter and a decoding rule at the receiver; together, the encoder-decoder pair must guarantee reliable communication over a set of channels. In the general setting, no feedback is assumed from the receiver to the transmitter, so a single coding strategy (codebook) must be devised and fixed prior to transmission, and the decoding strategy must likewise be devised without any knowledge of the channel. Of course, the codebook used at the sender is perfectly known to the receiver. The answer to whether such a communication system can be built is affirmative. Before discussing the existence of reliable communication strategies, one must first identify the maximum possible rate, the capacity. The highest achievable rate in the compound setting is known as the \textit{compound capacity}, or capacity of the family, analogous to the famous Shannon capacity being the ultimate limit on the achievable rate for a single channel.
The capacity of a compound set of memoryless channels was studied by Blackwell et al. in \cite{paper:blackwell}, and that of linear dispersive channels was investigated by Root and Varaiya in \cite{paper:root:varaiya}. Lapidoth and Telatar considered the compound capacity of families of channels with memory, more specifically finite-state channels \cite{paper:lapidoth-telatar1998}; as a special case, they also derived the compound capacity of a class of Gilbert-Elliot channels. A decoder that operates without knowledge of the channel in this setup is called a universal decoder. It is known that the maximum mutual information (MMI) decoders proposed by Goppa are universal: an MMI decoder computes the empirical mutual information between the received word and each codeword in the codebook and declares the best-matching codeword as its estimate. The complexity of the MMI decoder remains the same even if structured codes are used, which motivates the question of whether we can build a universal decoder that offers more structure. Decoding rules with an additive form have been considered in the literature as a potential scheme, and our work is driven by this line of thought. In this work, we focus on the class of discrete memoryless channels (DMCs) and, more specifically, on binary memoryless channels (BMCs). We show that it is possible to build a linear universal decoder for any compound set of binary memoryless channels. The recent introduction of polar codes motivated us to examine their suitability for the compound channel setup, and we have carried out a preliminary investigation in this direction. While it is not clear whether polar codes are universal under optimum universal decoding, we find that they are universal for certain restricted classes of compound BMC sets.
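The MMI rule described above can be sketched in a few lines: compute the empirical mutual information of the joint type of (codeword, received word) for each codeword, and pick the maximizer. This is an illustrative toy implementation, not code from the report:

```python
import math
from collections import Counter

def empirical_mutual_information(x, y):
    """Empirical mutual information (in bits) between two equal-length
    binary sequences, computed from their joint type."""
    n = len(x)
    joint = Counter(zip(x, y))   # joint type of (x_i, y_i) pairs
    px = Counter(x)              # marginal type of x
    py = Counter(y)              # marginal type of y
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) )
        mi += p_ab * math.log2(p_ab * n * n / (px[a] * py[b]))
    return mi

def mmi_decode(codebook, y):
    """Goppa's MMI rule: return the index of the codeword whose
    empirical mutual information with y is largest."""
    return max(range(len(codebook)),
               key=lambda i: empirical_mutual_information(codebook[i], y))
```

Note that the rule never consults a channel law — only empirical types — which is exactly what makes it universal; its cost, as the text notes, is that every codeword must be scored, regardless of code structure.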

In any communication system, there exist two dimensions through which the information at the source becomes distorted before reaching the destination: the noisy channel and time. Messages transmitted through a noisy channel are susceptible to modification of their content due to the action of the channel noise. Claude E. Shannon, in his seminal 1948 paper "A Mathematical Theory of Communication", introduces the bit as a unit of measure of information and lays down the theoretical foundations needed to understand the problem of sending bits reliably through a noisy channel. The distortion measure he used to quantify reliability is the error probability. In his paper, Shannon shows that any channel is characterized by a number that he calls capacity: it represents the highest rate at which information can be transmitted through this channel while guaranteeing a negligible error probability. On the other hand, even if messages are sent through a perfect channel, the time they take to reach their destination causes the receiver to acquire a distorted view of the status of the source that generated them. For instance, take the case of a monitor interested in the status of a distant process. A sender observes this process and, to keep the monitor up to date, sends updates to it. However, if, at any time t, the last received update at the monitor was generated at time u(t), then the information at the receiver reflects the status of the process at time u(t), not at time t. Hence, the monitor has a distorted version of reality; in fact, it has an obsolete version with an age of t-u(t). The concept of age as a distortion measure in communication systems was first used in 2011 by Kaul et al., in order to assess the performance of a given vehicular network. The aim of the authors was to come up with a transmission scheme that would minimize an age-related metric: the average age. Since then, a growing body of work has used this metric to evaluate the performance of multiple communication systems. The drive behind this interest lies in the importance that status-update applications are gaining in today's life (in vehicular networks, warehouse and environment surveillance, news feeds, etc.). In this thesis, we choose age as a distortion measure and derive expressions for the average age and the average peak age (another age-related metric) for different communication systems. We therefore divide this dissertation into two parts: in the first part, we assume that the updates are transmitted through a noiseless channel that has a random service time; in the second part, we consider a special category of noisy channels, namely the erasure channel. In the first part of this thesis, in order to compute the age-related metrics, we employ queue-theoretic concepts. We study and compare the performance of various transmission schemes under different settings. We show that the optimal transmission scheme when the monitor is interested in a single source loses its optimality when another source of higher priority shares the system. In the second part of this thesis, we introduce, in our age calculations, the distortion caused by the erasure channel on the transmitted updates. In order to combat the erasures of the channel, we first consider two flavors of the hybrid automatic repeat request (HARQ). Finally, we focus on the optimal average age that could be achieved over an erasure channel.
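The average age discussed above is the time average of the sawtooth curve Δ(t) = t − u(t): the age grows linearly and drops at each delivery to the delay of the freshest delivered update. A small sketch of that integration (illustrative, not from the thesis — the update format is an assumption):

```python
def average_age(updates, horizon):
    """Time-average of the age Delta(t) = t - u(t) over
    [first delivery, horizon].

    updates: list of (generation_time, delivery_time) pairs, sorted by
    delivery time, with delivery_time >= generation_time.  The sawtooth
    age curve is piecewise linear, so each segment is a trapezoid.
    """
    area = 0.0
    t_start = updates[0][1]          # start measuring at first delivery
    gen_prev, t_prev = updates[0][0], t_start
    for gen, deliv in updates[1:]:
        if deliv > horizon:
            break
        # age grows linearly from (t_prev - gen_prev) to (deliv - gen_prev)
        a0 = t_prev - gen_prev
        a1 = deliv - gen_prev
        area += 0.5 * (a0 + a1) * (deliv - t_prev)
        t_prev, gen_prev = deliv, gen
    # final segment up to the horizon, with no further resets
    a0 = t_prev - gen_prev
    a1 = horizon - gen_prev
    area += 0.5 * (a0 + a1) * (horizon - t_prev)
    return area / (horizon - t_start)
```

For periodic updates generated every T seconds and delivered with a fixed delay d, this reproduces the textbook value d + T/2 over whole cycles.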

The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. 
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
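The road from polar to Reed-Muller codes traced above runs through a single matrix: both code families pick rows of the m-fold Kronecker power of Arikan's kernel [[1,0],[1,1]] — polar codes by synthetic-channel reliability, Reed-Muller codes by Hamming weight (RM(r, m) keeps the rows of weight at least 2^(m−r)). A minimal sketch of the Reed-Muller selection (function names are illustrative):

```python
def kronecker_power(m):
    """Rows of the m-fold Kronecker power of the kernel [[1, 0], [1, 1]],
    as lists of 0/1 entries (arithmetic over GF(2) is implicit)."""
    rows = [[1]]
    for _ in range(m):
        # kron([1,0], r) = r followed by zeros; kron([1,1], r) = r twice
        rows = ([r + [0] * len(r) for r in rows] +
                [r + r for r in rows])
    return rows

def reed_muller_generator(r, m):
    """Generator rows of RM(r, m): the rows of the Kronecker power whose
    Hamming weight is at least 2**(m - r).  A polar code would instead
    select rows by the reliability of their synthetic channels."""
    return [row for row in kronecker_power(m) if sum(row) >= 2 ** (m - r)]
```

For instance, RM(0, m) keeps only the all-ones row (the repetition code), while RM(1, 3) has dimension 1 + 3 = 4, matching the binomial-sum formula for RM dimensions.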