
# Code word

Summary

In communication, a code word is an element of a standardized code or protocol. Each code word is assembled in accordance with the specific rules of the code and assigned a unique meaning. Code words are typically used for reasons of reliability, clarity, brevity, or secrecy.
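For instance, a spelling alphabet assigns a fixed code word to each letter so that similar-sounding letters cannot be confused over a noisy voice channel. A minimal sketch (illustrative only, using a small subset of the NATO alphabet):

```python
# Code words from a spelling alphabet: one fixed, unambiguous word per letter.
NATO = {"A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo"}

def spell(word):
    """Encode each letter as its code word for clarity over a voice channel."""
    return [NATO[ch] for ch in word.upper()]

spell("cab")  # → ['Charlie', 'Alfa', 'Bravo']
```

Here every code word is "assembled in accordance with the specific rules of the code" (one word per letter) and carries a unique meaning (the letter it stands for).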
See also

- Code word (figure of speech)
- Coded set
- Commercial code (communications)
- Compartmentalization (information security)
- Duress code
- Error correction and detection
- Marine VHF radio
- Password
- Safeword
- Spelling alphabet



Related concepts (1)
Error detection and correction

In information theory and coding theory, with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels.
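The simplest instance of error detection (an illustrative sketch, not from this page) is a single even-parity bit appended to each word; any odd number of bit flips is then detectable, though not correctable:

```python
def add_parity(bits):
    """Append an even-parity bit so the transmitted word has an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Detect (but not locate) any odd number of flipped bits."""
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])   # → [1, 0, 1, 1, 1]
received = sent.copy()
received[2] ^= 1                  # channel flips one bit
parity_ok(sent)                   # → True
parity_ok(received)               # → False: the single-bit error is detected
```

Correcting errors, rather than merely detecting them, requires more redundancy, e.g. Hamming or LDPC codes.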


Related courses (4)

COM-406: Foundations of Data Science

We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques from probability, information theory, and signal processing.

EE-543: Advanced wireless receivers

Students extend their knowledge of wireless communication systems to spread-spectrum communication and multi-antenna systems. They also learn about basic information-theoretic concepts, channel coding, and bit-interleaved coded modulation.

COM-404: Information theory and coding

The mathematical principles of communication that govern the compression and transmission of data and the design of efficient methods of doing so.

Related publications (19)

Raj Kumar Krishna Kumar, Amir Hesam Salavati, Mohammad Amin Shokrollahi

We consider ensembles of binary linear error correcting codes, obtained by sampling each column of the generator matrix G or parity check matrix H independently from the set of all binary vectors of weight d (of appropriate dimension). We investigate the circumstances under which the mutual information between a randomly chosen codeword and the vector obtained after its transmission over a binary input memoryless symmetric channel (BIMSC) C is exactly n times the capacity of C, where n is the length of the code. For several channels such as the binary symmetric channel (BSC) and the binary-input additive white Gaussian noise (AWGN) channel, we prove that the probability of this event has a threshold behaviour, depending on whether n/k is smaller than a certain quantity (that depends on the particular channel C and d), where k is the number of source bits. To show this, we prove a generalization of the following well-known theorem: the expectation of the size of the right kernel of G has a phase transition from 1 to infinity, depending on whether or not n/k is smaller than a certain quantity depending on the chosen ensemble.
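The ensemble described above can be sampled directly. As a small numerical sketch (an illustration of the quantities involved, not the paper's proof technique; all function names are hypothetical), one can draw G with independent weight-d columns over GF(2) and compute the size of its right kernel as 2^(n − rank(G)):

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(A):
    """Rank of a binary matrix over GF(2), by Gaussian elimination with XOR row ops."""
    A = (A.copy() % 2).astype(np.uint8)
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]        # swap pivot row into place
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                    # eliminate the column elsewhere
        rank += 1
    return rank

def random_weight_d_generator(k, n, d):
    """k x n matrix whose columns are independent uniform weight-d vectors in F_2^k."""
    G = np.zeros((k, n), dtype=np.uint8)
    for j in range(n):
        G[rng.choice(k, size=d, replace=False), j] = 1
    return G

def right_kernel_size(G):
    """|{x in F_2^n : Gx = 0}| = 2^(n - rank(G))."""
    return 2 ** (G.shape[1] - gf2_rank(G))
```

Averaging `right_kernel_size` over many samples while increasing n/k shows the kernel going from trivial (size 1) to large, which is the phase transition the abstract refers to; the exact threshold depends on d and is what the paper characterizes.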

This thesis addresses the topic of Low-Density Parity-Check (LDPC) code analysis, both asymptotically and for finite block lengths. Since it is in general a difficult problem to analyze individual code instances, ensemble averages are studied, following Gallager's original idea and the results of Luby et al. Often, one can relate the insights gained by studying ensemble averages to statements regarding individual codes by proving that most elements in the ensemble behave "close" to the average. One important such average is the average weight distribution; another is the average pseudo-codeword distribution, where pseudo-codewords play the same role under iterative decoding as codewords do for maximum likelihood decoding. One of the main contributions of this thesis is the calculation of such averages for fully irregular LDPC code ensembles.

Much of classical coding theory is aimed at the construction of codes with large minimum distance, because away from capacity accurate bounds on the performance of a code can be given in terms of its minimum distance (or, more generally, in terms of its weight distribution). It is therefore of interest to investigate what role the minimum distance plays for iteratively decoded LDPC code ensembles. In particular, sequences of capacity-achieving LDPC code ensembles of increasing length are investigated when transmission takes place over the Binary Erasure Channel (BEC). It is shown that, under certain technical conditions, the minimum distance of such ensembles grows sub-linearly in the block length, a result which is somewhat surprising from a classical point of view.

Specific attention is also given to the design of LDPC code ensembles, where an inherent trade-off is observed between achieving large minimum distance (which is relevant for the so-called error-floor regime) and achieving a large threshold (which, in the limit of long block lengths, determines the worst channel on which transmission can be accomplished reliably).

Most results on iterative coding systems to date address their asymptotic performance, i.e., their performance when the block length tends to infinity. For small and moderate block lengths the behavior of a code can deviate significantly from its asymptotic limit. It is therefore of high practical value to be able to analyze the finite-length performance of LDPC code ensembles. In this thesis, such an exact analysis is presented for iteratively decoded LDPC code ensembles over the BEC. In particular, expressions for the exact average bit and block erasure probabilities are computed by solving a set of recursions. Such an analysis is the starting point for a finite-length optimization, a topic which is slated for future work.

The methods used in this thesis include a combinatorial approach (familiar to the coding community) as well as powerful techniques developed in the statistical physics community, for example the replica method and the mean-field approximation. Although the statistical physics techniques are ideally suited for the analysis of iterative coding systems, they are to date only accessible to a relatively small community. Therefore, an overview of these techniques is first presented using a language familiar to the coding theory community. Next, in order to highlight the differences and common points between the combinatorial and the statistical physics approaches, both techniques are applied to the weight distribution problem. It turns out that for regular ensembles both methods yield the same result, while for irregular ensembles they differ in general.
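Iterative decoding over the BEC, as studied in this thesis, reduces to a "peeling" process: any parity check involving exactly one erased bit determines that bit, and decoding stalls precisely on stopping sets. A minimal sketch (illustrative only; the (7,4) Hamming parity-check matrix below is just a convenient small example, not from the thesis):

```python
def peeling_decode(H, y):
    """Erasure decoding over the BEC.
    H: list of parity-check rows (0/1 lists); y: received word with None for erasures.
    Repeatedly solve any check that involves exactly one erased position."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            involved = [j for j, h in enumerate(row) if h]
            erased = [j for j in involved if y[j] is None]
            if len(erased) == 1:
                j = erased[0]
                # The parity constraint fixes the single unknown bit.
                y[j] = sum(y[i] for i in involved if i != j) % 2
                progress = True
    return y  # any remaining None entries form a stopping set

# (7,4) Hamming parity-check matrix, used here only as a toy example
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
peeling_decode(H, [0, 0, None, 0, None, 0, 0])  # → recovers the all-zero codeword
```

The exact finite-length analysis mentioned in the abstract computes the average probability that this process stalls, over a code ensemble and over the random erasure pattern.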

Nikolaos Makriyannis, Bertrand Meyer

Given a code C ⊆ F_2^n and a word c ∈ C, a witness of c is a subset W ⊆ {1, …, n} of coordinate positions such that c differs from every other codeword c' ∈ C on at least one index in W. If every codeword possesses a witness of a given length w, C is called a w-witness code. This paper gives new constructions of large w-witness codes and proves, with a numerical method, that their sizes are maximal for certain values of n and w. Our technique is in the spirit of Delsarte's linear programming bound on the size of classical codes and relies on the Lovász theta number, semidefinite programming, and reduction through symmetry.
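The witness definition can be checked by brute force on a toy code. A sketch (exhaustive search, illustrative only; this is not the paper's semidefinite-programming method):

```python
from itertools import combinations

def is_witness(code, c, W):
    """W is a witness for c: every other codeword differs from c on some index in W."""
    return all(any(c[i] != x[i] for i in W) for x in code if x != c)

def smallest_witness(code, c):
    """Exhaustively find a minimum-size witness for c (exponential; toy sizes only)."""
    n = len(c)
    for w in range(n + 1):
        for W in combinations(range(n), w):
            if is_witness(code, c, W):
                return set(W)

code = [(0, 0, 0), (1, 1, 0), (1, 0, 1)]
smallest_witness(code, (0, 0, 0))  # → {0}: position 0 alone distinguishes (0,0,0)
```

Note the quantifier order: a single set W must separate c from all other codewords at once, which is what makes constructing large w-witness codes nontrivial.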