
# Channel capacity

Summary

Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel.
Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.
The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, where novel error-correction coding mechanisms now achieve performance very close to the limits promised by channel capacity.
The basic mathematical model for a communication system is the following:

$$ W \xrightarrow{\text{encoder } f_n} X^n \xrightarrow{\text{channel } p(y|x)} Y^n \xrightarrow{\text{decoder } g_n} \hat{W} $$

where:
- $W$ is the message to be transmitted;
- $X$ is the channel input symbol ($X^n$ is a sequence of $n$ symbols) taken in an alphabet $\mathcal{X}$;
- $Y$ is the channel output symbol ($Y^n$ is a sequence of $n$ symbols) taken in an alphabet $\mathcal{Y}$;
- $\hat{W}$ is the estimate of the transmitted message;
- $f_n$ is the encoding function for a block of length $n$;
- $p(y|x)$ is the noisy channel, which is modeled by a conditional probability distribution; and
- $g_n$ is the decoding function for a block of length $n$.

Let $X$ and $Y$ be modeled as random variables. Furthermore, let $p_{Y|X}(y|x)$ be the conditional probability distribution function of $Y$ given $X$, which is an inherent fixed property of the communication channel. Then the choice of the marginal distribution $p_X(x)$ completely determines the joint distribution $p_{X,Y}(x,y)$ due to the identity

$$ p_{X,Y}(x,y) = p_{Y|X}(y|x)\, p_X(x), $$

which, in turn, induces a mutual information $I(X;Y)$. The channel capacity is defined as

$$ C = \sup_{p_X} I(X;Y), $$

where the supremum is taken over all possible choices of $p_X(x)$.
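As a concrete illustration (not part of the original page), the capacity of a discrete memoryless channel can be computed numerically with the Blahut–Arimoto algorithm, which iterates toward the capacity-achieving input distribution. The sketch below checks the result against the closed-form capacity $1 - H_2(p)$ of a binary symmetric channel:

```python
import numpy as np

def blahut_arimoto(P, iters=1000):
    """Capacity (in bits) of a discrete memoryless channel.

    P[x, y] = p(y|x): row x is the output distribution for input x.
    The input distribution is iteratively re-weighted toward the
    capacity-achieving one.
    """
    r = np.full(P.shape[0], 1.0 / P.shape[0])   # input distribution p(x)
    for _ in range(iters):
        q = r @ P                               # output distribution p(y)
        # relative entropy D(P[x] || q) for each input symbol x, in bits
        log_ratio = np.log2(P / q, out=np.zeros_like(P), where=P > 0)
        d = np.sum(P * log_ratio, axis=1)
        r *= np.exp2(d)
        r /= r.sum()
    return float(r @ d)

# Binary symmetric channel with crossover probability p = 0.1
p = 0.1
C = blahut_arimoto(np.array([[1 - p, p], [p, 1 - p]]))
C_exact = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)  # 1 - H2(p)
```

For the BSC the uniform input distribution is already optimal, so the iteration converges immediately; for asymmetric channels the re-weighting step does real work.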



Information, Calcul, Communication: Introduction à la pensée informatique

In a first part, we will study very concretely how to solve a problem by means of an algorithm, which will then lead us to one of the major questions of…



Error correction code

In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors.
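As a toy illustration of this idea (not one of the codes used in practice), a rate-1/3 repetition code sends each bit three times, so the receiver can outvote any single flipped bit per block:

```python
def encode(bits, n=3):
    """Repetition code: send each bit n times (n odd)."""
    return [b for bit in bits for b in [bit] * n]

def decode(received, n=3):
    """Majority vote over each block of n received bits."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

msg = [1, 0, 1, 1]
code = encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
code[1] ^= 1                # flip one bit in the first block
code[6] ^= 1                # and one in the third
assert decode(code) == msg  # both single-bit errors are corrected
```

Practical FEC codes (Hamming, LDPC, polar, Reed-Muller) achieve the same protection with far less redundancy, which is exactly what operating near channel capacity means.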

Bit rate

In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time. The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second.
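A small sketch of the decimal-prefix arithmetic described above (illustrative only; note the factor of 8 between bytes and bits):

```python
# Decimal SI prefixes used for bit rates (powers of 1000, not 1024)
SI = {"": 1, "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

def to_bit_per_s(value, prefix=""):
    """Convert a rate given with a decimal SI prefix to bit/s."""
    return value * SI[prefix]

assert to_bit_per_s(1, "M") == 1_000_000               # 1 Mbit/s
assert to_bit_per_s(1, "G") == 1000 * to_bit_per_s(1, "M")

# Transferring 1 gigabyte (8 * 10**9 bits) over a 100 Mbit/s link:
seconds = 8 * 10**9 / to_bit_per_s(100, "M")           # 80.0 s
```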

Related lectures (82)

Wireless data traffic continues to grow, driven by several factors. New technologies and capabilities enable new use cases for which new products emerge, and growing user adoption further increases data traffic over time. The actual growth numbers vary from year to year, but the trend is sustained. People now expect ubiquitous mobile connectivity and use cloud services or stream music and video, and an increasing number of machines are being connected as well. While mobile network operators (MNOs) deploy outdoor cellular networks, a significant portion of mobile data traffic is consumed or originates indoors or while traveling in trains. MNOs therefore constantly upgrade their networks to satisfy the capacity demand, and regulators make new and wider frequency bands available. However, the mid-band frequencies between 3 GHz and 6 GHz penetrate buildings poorly, and outdoor cells on millimeter-wave (mmWave) frequencies (24 GHz to 100 GHz) are practically unusable indoors and inside modern trains, whose metallic hulls and coated windows act as Faraday cages. Increasing network densification with local cell sites (including inside shielded structures) can provide a partial solution, but the required fiber roll-out is very costly and takes time. Providing sufficient capacity for the growing data traffic demand inside buildings and trains thus remains difficult wherever an optical fiber backhaul is too costly or impossible.

We analyze state-of-the-art solutions to the indoor capacity challenge but find that they rely on, or affect, the existing outdoor cellular network and thus cannot provide additional capacity to the network used indoors. As a solution, we propose the mmWave bridge, an amplify-and-forward out-of-band repeater concept. It is a radio-access-technology (RAT) transparent and cost-efficient method to fronthaul mobile cells, providing wireless data capacity inside buildings, vehicles, or concealed outdoor areas where low signal levels drastically limit the achievable capacity or prevent communication with distant base stations. The cellular signal of a base station is fronthauled outdoors over newly available mmWave frequencies without interfering with the existing cellular network, while mid-band frequencies are used indoors to provide coverage beyond single rooms and, at the same time, to benefit from the outdoor-to-indoor attenuation that reduces possible interference.

We describe the benefits of the mmWave bridge and discuss the corresponding challenges and solutions. A hardware prototype was developed from commercial off-the-shelf components and tested in three use cases to demonstrate functionality and compatibility with commercial infrastructure and mobile terminals; the measurement results show that the entire capacity of a mobile cell can be fronthauled over distances consistent with mmWave propagation and the available signal power. Finally, the amplify-and-forward out-of-band repeater concept is generalized with respect to the fronthaul carrier frequency: we investigate the use of beamforming antennas on RAT-transparent repeaters without access to in-band beam control and present a solution whose impact is a throughput reduction of only a few percent.

Secret Key Generation: Polar Coding (COM-622: Topics in information-theoretic cryptography)

Explores secret key generation using polar coding for short blocklengths, discussing key capacity, rate-leakage pairs, and practical implementation.

Achievable Rate & Capacity (EE-543: Advanced wireless receivers)

Explores achievable rate, channel capacity, spectral efficiency, and fading channels in wireless communication systems.

Information Theory: Source Coding & Channel Coding (COM-404: Information theory and coding)

Covers the fundamentals of information theory, focusing on source coding and channel coding.

EE-342: Systèmes de télécommunications

Master the basic concepts of an information transmission system and identify the decisive criteria for planning a telecommunication system.
Evaluate the performance of…

EE-543: Advanced wireless receivers

Students extend their knowledge on wireless communication systems to spread-spectrum communication and to multi-antenna systems. They also learn about the basic information theoretic concepts, about c

COM-404: Information theory and coding

The mathematical principles of communication that govern the compression and transmission of data and the design of efficient methods of doing so.

We experimentally solve the problem of maximizing capacity under a total supply power constraint in a massively parallel submarine cable context, i.e., for a spatially uncoupled system in which fiber Kerr nonlinearity is not a dominant limitation. By using multi-layer neural networks trained with extensive measurement data acquired from a 12-span 744-km optical fiber link as an accurate digital twin of the true optical system, we experimentally maximize fiber capacity with respect to the transmit signal's spectral power distribution based on a gradient-descent algorithm. By observing convergence to approximately the same maximum capacity and power distribution for almost arbitrary initial conditions, we conjecture that the capacity surface is a concave function of the transmit signal power distribution. We then demonstrate that eliminating gain flattening filters (GFFs) from the optical amplifiers results in substantial capacity gains per Watt of electrical supply power compared to a conventional system that contains GFFs.
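The power-allocation problem above has a classical textbook analogue for parallel Gaussian channels, where the capacity-maximizing distribution under a total power constraint is the water-filling solution. The sketch below (illustrative, with made-up gains, and unrelated to the paper's neural-network digital twin) finds it by bisection on the water level:

```python
import numpy as np

def water_filling(gains, P_total, tol=1e-12):
    """Power allocation maximizing sum_i log2(1 + p_i * g_i)
    subject to sum_i p_i = P_total and p_i >= 0.

    The optimum is p_i = max(0, mu - 1/g_i); the water level mu
    is found by bisection on the total power actually used.
    """
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P_total + 1.0 / g.min()   # at mu = hi, usage exceeds P_total
    while hi - lo > tol:
        mu = (lo + hi) / 2
        if np.maximum(0.0, mu - 1.0 / g).sum() > P_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, mu - 1.0 / g)

p = water_filling([2.0, 1.0, 0.1], P_total=1.0)  # hypothetical per-channel gains
assert abs(p.sum() - 1.0) < 1e-6
# stronger subchannels get more power; the weakest gets none here
assert p[0] > p[1] > p[2] == 0.0
```

Because the objective is concave in the power distribution, any local optimum found by such gradient-free or gradient-based search is global, which is consistent with the convergence behavior reported above.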

The beginning of the 21st century provided many answers on how to reach channel capacity. Polarization and spatial coupling are two techniques for achieving the capacity of binary memoryless symmetric channels under low-complexity decoding algorithms. Recent results prove that another way to achieve capacity is via symmetry, which is the case for the Reed-Muller and extended Bose-Chaudhuri-Hocquenghem (BCH) codes. However, this proof holds only for the erasure channel under maximum a posteriori decoding, which is computationally intractable for general channels.

In the first part of this thesis, we discuss the performance improvements that the automorphism group of a code can bring. We propose two decoding algorithms for Reed-Muller codes, which are invariant under a large group of permutations and are expected to benefit the most. The former plugs codeword permutations into successive cancellation decoding, and the latter uses the representation of the code as evaluations of Boolean monomials. Despite the performance improvements, however, the decoding complexity grows quickly and becomes impractical for moderate-length codes.

In the second part of this thesis, we provide an explanation for this observation. We use the Boolean polynomial representation of the code to show that polar-like decoding of sufficiently symmetric codes asymptotically requires exponential complexity. The automorphism groups of the Reed-Muller and eBCH codes thus limit the efficiency of their polar-like decoding for long codes, so we should either focus on short lengths or find another approach. We show that the same asymptotic restrictions (although with slower convergence) hold under a more relaxed condition that we call partial symmetry. The developed framework also enables us to prove that the automorphism group of polar codes cannot include a large affine subgroup.

In the last part of this thesis, we address a completely different problem. Device-independent quantum key distribution (DIQKD) aims to provide private communication between parties, with security guarantees that come mostly from quantum physics rather than from potentially unrealistic assumptions about the nature of the communication devices. After the quantum part of a DIQKD protocol, the parties share a secret key that is not perfectly correlated. To synchronize, some information must be revealed publicly, which makes this formulation equivalent to the asymmetric Slepian-Wolf problem, solvable with binary linear error-correction codes. Since any revealed information reduces the secrecy of the key, the code used should operate close to the finite-length limits. The channel in question is non-standard and, due to its experimental nature, can differ slightly from the models considered. To solve this problem, we designed a simple scheme using universal SC-LDPC codes, which was used in the first successful experimental demonstration of a DIQKD protocol.