# Quantization (signal processing)

## Summary

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a smaller (countable) set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, since representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer.
## Example

For example, rounding a real number x to the nearest integer forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with step size Δ can be expressed as Q(x) = Δ · ⌊x/Δ + 1/2⌋, where ⌊·⌋ denotes the floor function.
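The mid-tread quantizer above can be sketched in a few lines of Python; the function name and the step value are illustrative choices, not part of any standard API:

```python
import numpy as np

def quantize_mid_tread(x, step):
    """Uniform mid-tread quantizer: maps x to the nearest multiple of `step`.

    `step` is the quantization step size (Delta); step = 1.0 reduces to
    rounding to the nearest integer.
    """
    return step * np.floor(x / step + 0.5)

x = np.array([0.24, 0.25, -1.13, 2.71])
xq = quantize_mid_tread(x, step=0.5)
error = x - xq  # quantization error, bounded in magnitude by step / 2
```

For any input, the quantization error of this quantizer stays within ±Δ/2, which is why rounding is the canonical example of quantization.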


## Related concepts (51)

### Signal-to-noise ratio

Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels.
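The power ratio is usually reported in decibels. A minimal sketch, assuming power is estimated as the mean squared value of each sequence (the signal and noise arrays below are illustrative):

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels: 10 * log10(P_signal / P_noise), with power
    estimated as the mean squared value of each array."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_signal / p_noise)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)       # unit-amplitude sine, power ~ 0.5
noise = 0.1 * rng.standard_normal(1000)  # noise power ~ 0.01
value = snr_db(signal, noise)            # on the order of 17 dB
```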

### Data compression

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation.
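As a toy illustration of how quantization underlies lossy coding, the sketch below re-encodes 16-bit samples with 4 bits each. Real codecs combine quantization with transforms and entropy coding, so this is only a caricature; all values are illustrative:

```python
import numpy as np

# Lossy "compression" in miniature: represent int16 samples with 4 bits.
samples = np.array([-30000, -12000, 0, 5000, 32000], dtype=np.int16)
levels = 16                                 # 4 bits -> 16 quantization levels
step = 65536 // levels                      # uniform step over the int16 range
codes = np.clip((samples.astype(np.int32) + 32768) // step, 0, levels - 1)

# Decoding reconstructs each sample at the center of its quantization bin,
# so the reconstruction error is bounded by step / 2.
decoded = codes * step - 32768 + step // 2
```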

### Sampling (signal processing)

In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples".
A sample is a value of the signal at a point in time and/or space.
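A minimal sketch of the idea, assuming an illustrative 5 Hz sine sampled at 100 Hz (both values chosen for this example only):

```python
import numpy as np

# A 5 Hz sine sampled at fs = 100 Hz, comfortably above the
# Nyquist rate of 10 Hz.
fs = 100.0                # sampling rate in Hz (assumed for this sketch)
n = np.arange(50)         # sample indices covering 0.5 s
samples = np.sin(2 * np.pi * 5 * n / fs)

# samples[n] is the continuous-time signal x(t) = sin(2*pi*5*t)
# evaluated at t = n / fs.
```

Note that sampling discretizes time while quantization discretizes amplitude; a practical analog-to-digital converter performs both.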

## Related courses (51)

### EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce the physiological basis, signal acquisition solutions (sensors), and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital-sign monitoring and diagnosis.

### EE-350: Signal processing

In this course, we present the basic methods of signal processing.

### COM-406: Foundations of Data Science

We discuss a set of topics that are important for understanding modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques from probability, information theory, and signal processing.

## Related publications (100)

Christina Fragouli, Ayan Sengupta, I-Hsiang Wang

Quantize-Map-and-Forward (QMF) relaying has been shown to achieve the optimal diversity-multiplexing trade-off (DMT) for arbitrary slow fading full-duplex networks as well as for the single-relay half-duplex network. A key reason for the DMT-optimality of QMF is that quantizing at the noise level suffices to achieve the cut-set bound approximately to within an additive gap, without the requirement of any instantaneous channel state information (CSI). However, DMT only captures the high-SNR performance, and limited CSI at the relay can potentially help improve the performance in moderate SNR regimes. In this work we propose an optimization framework for QMF relaying over slow fading channels. Focusing on vector Gaussian quantizers, we optimize the outage probability for the full-duplex single relay by finding the best quantization level according to the available CSI at the relays. For the half-duplex relay channel, we find jointly optimal quantizer distortions and relay schedules using the same framework. Also, for the $N$-relay diamond network, we derive a universal quantizer that uses only knowledge of the network topology. The universal quantizer sharpens the additive approximation gap of QMF from the conventional $\Theta(N)$ bits/s/Hz to $\Theta(\log(N))$ bits/s/Hz. Analytical solutions to channel-aware optimal quantizers for two-relay and symmetric $N$-relay diamond networks are also derived. In addition, we prove that suitable hybridizations of our optimized QMF schemes with Decode-Forward (DF) or Dynamic DF protocols provide significant finite-SNR gains over the individual schemes.

2013

Over the past few decades we have been experiencing an explosion of information generated by large networks of sensors and other data sources. Much of this data is intrinsically structured, such as traffic evolution in a transportation network, temperature values in different geographical locations, information diffusion in social networks, functional activities in the brain, or 3D meshes in computer graphics. The representation, analysis, and compression of such data is a challenging task and requires the development of new tools that can identify and properly exploit the data structure. In this thesis, we formulate the processing and analysis of structured data using the emerging framework of graph signal processing. Graphs are generic data representation forms, suitable for modeling the geometric structure of signals that live on topologically complicated domains. The vertices of the graph represent the discrete data domain, and the edge weights capture the pairwise relationships between the vertices. A graph signal is then defined as a function that assigns a real value to each vertex. Graph signal processing is a useful framework for handling such data efficiently, as it takes into consideration both the signal and the graph structure. In this work, we develop new methods and study several important problems related to the representation and structure-aware processing of graph signals in both centralized and distributed settings. We focus in particular on the theory of sparse graph signal representation and its applications, and we bring some insights towards a better understanding of the interplay between graphs and signals on graphs. First, we study a novel yet natural application of the graph signal processing framework for the representation of 3D point cloud sequences. We exploit graph-based transform signal representations to address the challenging problem of compressing data that is characterized by dynamic 3D positions and color attributes.
Next, we depart from graph-based transform signal representations to design new overcomplete representations, or dictionaries, which are adapted to specific classes of graph signals. In particular, we address the problem of sparse representation of graph signals residing on weighted graphs by learning graph structured dictionaries that incorporate the intrinsic geometric structure of the irregular data domain and are adapted to the characteristics of the signals. Then, we move to the efficient processing of graph signals in distributed scenarios, such as sensor or camera networks, which bring important constraints in terms of communication and computation in realistic settings. In particular, we study the effect of quantization in the distributed processing of graph signals that are represented by graph spectral dictionaries, and we show that the impact of the quantization depends on the graph geometry and on the structure of the spectral dictionaries. Finally, we focus on a widely used graph process, the problem of distributed average consensus in a sensor network where sensors exchange quantized information with their neighbors. We propose a novel quantization scheme that depends on the graph topology and exploits the increasing correlation between the values exchanged by the sensors throughout the iterations of the consensus algorithm.
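The quantized-consensus setting in this abstract can be illustrated with a generic, textbook-style scheme (not the topology-adaptive quantizer the thesis proposes); the step size, iteration count, and update weight below are all illustrative:

```python
import numpy as np

def quantized_consensus(values, neighbors, step=0.01, iters=200, weight=0.2):
    """Distributed average consensus where each node broadcasts a uniformly
    quantized copy of its state to its neighbors each round.

    Because the graph is undirected, the pairwise exchange terms cancel and
    the network average is preserved exactly; quantization only limits how
    tightly the states can agree.
    """
    x = np.asarray(values, dtype=float).copy()
    for _ in range(iters):
        q = step * np.round(x / step)  # quantize states before the exchange
        for i, nbrs in enumerate(neighbors):
            # move toward the average of the quantized neighbor values
            x[i] += weight * sum(q[j] - q[i] for j in nbrs)
    return x

# 4 sensors on a ring; the true average of the initial values is 2.5
values = [1.0, 2.0, 3.0, 4.0]
ring = [[1, 3], [0, 2], [1, 3], [2, 0]]
final = quantized_consensus(values, ring)
```

With this naive scheme the states settle near the average but cannot agree beyond the quantization resolution, which is the obstruction that topology-aware quantizers such as the one above aim to mitigate.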

Christina Fragouli, Ayan Sengupta, I-Hsiang Wang

Quantize-Map-and-Forward (QMF) relaying has been shown to achieve the optimal diversity-multiplexing trade-off (DMT) for the slow fading single-antenna relay channel, regardless of whether the relay is full-duplex or half-duplex. A key reason for the DMT optimality is that quantizing at the noise level suffices to achieve the cut-set bound approximately, and hence the relay does not need any instantaneous channel state information (CSI). However, DMT only captures the high-SNR performance, and limited CSI at the relay can potentially help improve the performance in moderate SNR regimes. In this work we propose an optimization framework for QMF relaying in slow fading relay channels. Focusing on vector Gaussian quantizers, for the full-duplex relay we optimize the outage probability by finding the best quantization level according to the available CSI at the relay and the channel statistics. For the half-duplex relay we find good relay schedules using the same framework. Numerical evaluations show the improvement from taking CSI into account, compared with noise-level quantization and static schedules, in moderate SNR regimes.