
# Nyquist–Shannon sampling theorem

## Summary

The Nyquist–Shannon sampling theorem is an essential principle for digital signal processing linking the frequency range of a signal and the sample rate required to avoid a type of distortion called aliasing. The theorem states that the sample rate must be at least twice the bandwidth of the signal to avoid aliasing distortion. In practice, it is used to select band-limiting filters to keep aliasing distortion below an acceptable amount when an analog signal is sampled or when sample rates are changed within a digital signal processing function.
The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.
Strictly speaking, the theorem only applies to a class of mathematical functions whose Fourier transform is zero outside of a finite region of frequencies, that is, to bandlimited functions.
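The aliasing predicted by the theorem can be seen numerically: when a tone above half the sample rate is sampled, its samples coincide exactly with those of a lower-frequency "folded" tone. A minimal sketch (illustrative frequencies chosen for this example):

```python
import math

# A 7 Hz tone sampled at 10 Hz violates the Nyquist criterion (10 < 2 * 7),
# so it aliases down to fs - f = 3 Hz: both tones yield identical samples.
fs = 10.0  # sample rate in Hz
samples_7hz = [math.cos(2 * math.pi * 7.0 * n / fs) for n in range(20)]
samples_3hz = [math.cos(2 * math.pi * 3.0 * n / fs) for n in range(20)]

# The two sampled sequences are numerically indistinguishable:
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_7hz, samples_3hz))
```

Once sampled, no processing can tell the two tones apart, which is why an analog anti-aliasing filter must act before the sampler.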

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


## Related concepts (41)

In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples". A sample is a value of the signal at a point in time and/or space.

In signal processing and related disciplines, aliasing is the overlapping of frequency components resulting from a sample rate below the Nyquist frequency. This overlap results in distortion or artifacts when the signal is reconstructed from its samples.

In electronics, an analog-to-digital converter (ADC, A/D, or A-to-D) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal.
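A toy model of an ideal N-bit ADC makes the conversion concrete: clamp the input to the converter's range, then map it to the nearest of 2^N digital codes. This sketch ignores noise, nonlinearity, and timing jitter that real converters exhibit:

```python
def adc(value, bits=8, v_min=0.0, v_max=1.0):
    """Map an analog value to the nearest of 2**bits digital codes.

    Idealized model: uniform quantization over [v_min, v_max].
    """
    levels = 2 ** bits
    # Clamp to the converter's input range.
    clamped = min(max(value, v_min), v_max)
    # Scale to [0, levels - 1] and round to the nearest code.
    return int(round((clamped - v_min) / (v_max - v_min) * (levels - 1)))

# An 8-bit converter over 0-1 V: full-scale input maps to code 255,
# and out-of-range inputs saturate at the extremes.
adc(1.0)   # full scale
adc(2.0)   # clamped to full scale
```

The quantization step (here (v_max - v_min) / 255) sets the converter's resolution, independently of the sample-rate question addressed by the sampling theorem.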

## Related courses (64)

EE-205: Signals and systems (for EL&IC)

This class teaches the theory of linear time-invariant (LTI) systems. These systems serve both as models of physical reality (such as the wireless channel) and as engineered systems (such as electrical circuits, filters and control strategies).

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce physiological basis, signal acquisition solutions (sensors) and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital sign monitoring and diagnosis purposes.

EE-350: Signal processing

In this course, we present the basic methods of signal processing.

## Related lectures (139)

Sampling theory has prospered extensively in the last century. Its elegant mathematics and vast number of applications are the reasons for its popularity. The applications treated in this thesis lie in signal processing and communications and draw on mathematical notions from Fourier theory, spectral analysis, basic linear algebra, and spline and wavelet theory. The thesis is divided into two parts: Chapters 2 and 3 consider uniform sampling of non-bandlimited signals, while Chapters 4, 5, and 6 treat different irregular sampling problems.

In the first part we address the problem of sampling signals that are not bandlimited but are characterized as having a finite number of degrees of freedom per unit of time. These signals are called signals with a finite rate of innovation. We show that such signals can be uniquely represented given a sufficient number of samples obtained using an appropriate sampling kernel. The number of samples must be greater than or equal to the number of degrees of freedom of the signal; in other words, the sampling rate must be greater than or equal to the rate of innovation of the signal. In particular, we derive sampling schemes for periodic and finite-length streams of Diracs and piecewise polynomial signals using the sinc, differentiated sinc, and Gaussian kernels. Sampling and reconstruction of piecewise bandlimited signals and filtered piecewise polynomials are also considered. We also derive local reconstruction schemes for infinite-length piecewise polynomials with a finite "local" rate of innovation, using compact-support kernels such as splines. Numerical experiments on all of the reconstruction schemes are shown.

The first topic of the second part is the irregular sampling of bandlimited signals with unknown sampling instants. The locations of the irregular set of samples are found by treating the problem as a combinatorial optimization problem. Search methods for the locations are described, and numerical simulations on a random set and a jittery set of locations are presented. The second topic is the periodic nonuniform sampling of bandlimited signals, where the set of sample locations is irregular yet periodic. We develop a fast scheme that reduces the complexity of the problem by exploiting the special pattern of the locations; the motivation for developing a fast scheme is that the periodic nonuniform set also arises in the sampling-with-unknown-locations problem, for which a fast search method was sought. Finally, the last topic is the irregular sampling of signals that are linearly and nonlinearly approximated in Fourier and wavelet bases. We present variants of the Papoulis–Gerchberg algorithm that take into account the information given in the approximation of the signal. Numerical experiments are presented in the context of erasure correction.
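The classical baseline this work generalizes is uniform sampling of a bandlimited signal, where reconstruction between sample instants is given by the Whittaker–Shannon interpolation formula, x(t) = Σ x[n] sinc(fs·t − n). A minimal pure-Python sketch with illustrative values (the thesis itself goes well beyond this bandlimited setting):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation, truncated to the available samples."""
    return sum(x_n * sinc(fs * t - n) for n, x_n in enumerate(samples))

# Sample a 3 Hz tone at fs = 10 Hz (above the 6 Hz Nyquist rate) over a
# long window, then evaluate the reconstruction between sample instants.
fs = 10.0
samples = [math.cos(2 * math.pi * 3.0 * n / fs) for n in range(400)]

t = 20.05  # mid-window, halfway between two sample instants
approx = reconstruct(samples, fs, t)
exact = math.cos(2 * math.pi * 3.0 * t)

# Truncating the infinite sum leaves only a small residual error:
assert abs(approx - exact) < 0.05
```

The slow (roughly 1/N) decay of the truncation error is one practical motivation for the alternative kernels, such as splines, studied in the thesis.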

New advances in the field of image sensors (especially in CMOS technology) call into question the conventional methods used to acquire images. Compressive sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters that generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: CS both consistently reduces the amount of data to be converted and suppresses digital processing otherwise performed off the sensor chip. For the moment, the main route of exploration for CS in image sensors, as well as the intended applications, aims at reducing the power consumption of these components (ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS questions, or at least extends, Nyquist–Shannon sampling theory.

This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Advances are presented in hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been pursued, aiming to design suitable architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of compressive sensing relax sensor constraints and improve acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?

A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling was developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid-crystal devices to combine hyperspectral imaging with spatial super-resolution.

The conclusion of this study can be summarized as follows: CS should now be considered a toolbox for more easily defining trade-offs between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared with so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder must be defined.
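The core CS idea of acquiring fewer measurements than the Nyquist count can be shown with a toy example that is deliberately simpler than any sensor architecture in the thesis: a 1-sparse signal of length 64 is recovered from 32 random linear measurements by matched filtering, i.e. a single step of orthogonal matching pursuit. All values below are illustrative assumptions:

```python
import random

rng = random.Random(0)
N, M = 64, 32  # signal length vs. number of compressive measurements

# 1-sparse signal: a single nonzero coefficient.
true_index, true_value = 17, 3.0
x = [0.0] * N
x[true_index] = true_value

# Random Gaussian measurement matrix A (M x N); measurements y = A x.
A = [[rng.gauss(0.0, 1.0) for _ in range(N)] for _ in range(M)]
y = [sum(A[i][j] * x[j] for j in range(N)) for i in range(M)]

def column(j):
    return [A[i][j] for i in range(M)]

# Recover the support: the column of A most correlated with y.
correlations = [abs(sum(a * v for a, v in zip(column(j), y))) for j in range(N)]
est_index = max(range(N), key=lambda j: correlations[j])

# Estimate the coefficient by projecting y onto the chosen column.
a = column(est_index)
est_value = sum(ai * yi for ai, yi in zip(a, y)) / sum(ai * ai for ai in a)

assert est_index == true_index
assert abs(est_value - true_value) < 1e-6
```

The recovery uses only half as many measurements as unknowns, which is the "compressed" acquisition the abstract describes; practical CS imagers use richer sparsity models and solvers than this one-step sketch.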

Despite the high number of investments in data-based models in the expansion of Industry 4.0, too little effort has been made to ensure the maintenance of those models. In a data-streaming environment, data-based models are subject to concept drifts. A concept drift is a change in data distribution which will, at some point, decrease the accuracy of the model. To address this problem, various frameworks are presented in the literature, but there is no optimal methodology for implementing them. This paper presents a methodology for implementing a problem-oriented, complete solution to ensure the maintenance of an industrial data-based model. The final drift-handling solution is composed of a sampling decision system and an update system. The methodology begins with a concept-drift identification phase. Solutions are then pre-selected based on the identified concept drifts. Next, an optimization problem is designed to select the solution that optimizes the costs and respects the constraints. To better link concept-drift characteristics and drift-handling solutions, a causal concept-drift classification system is proposed. The industrial implementation of such a solution is discussed and several questions are raised. This paper presents an original and detailed methodology that shows encouraging results for the model-maintenance challenge; however, concept-drift identification, and the links between concept-drift characteristics and drift detection, require further research.
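A minimal sketch of the drift-detection step such a maintenance loop relies on (a hypothetical helper, not the paper's sampling or update systems): compare a sliding window of recent prediction errors against a frozen reference window and flag drift when the mean error shifts beyond a threshold:

```python
from collections import deque

class MeanShiftDetector:
    """Flag a drift when the windowed mean error moves past a threshold.

    Illustrative thresholds and window sizes; real deployments tune these
    and often use statistical tests instead of a fixed margin.
    """

    def __init__(self, window=50, threshold=0.5):
        self.reference = deque(maxlen=window)  # frozen once full
        self.recent = deque(maxlen=window)     # sliding window
        self.threshold = threshold

    def update(self, error):
        """Feed one prediction error; return True if drift is flagged."""
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(error)  # still filling the reference
            return False
        self.recent.append(error)
        if len(self.recent) < self.recent.maxlen:
            return False
        ref_mean = sum(self.reference) / len(self.reference)
        cur_mean = sum(self.recent) / len(self.recent)
        return abs(cur_mean - ref_mean) > self.threshold

detector = MeanShiftDetector()
# Stable errors around 0.1, then a concept drift raises the error level.
flags = [detector.update(0.1) for _ in range(100)]
flags += [detector.update(0.8) for _ in range(100)]

assert not any(flags[:100])  # no alarm during the stable phase
assert any(flags[100:])      # drift flagged after the shift
```

In the paper's terms, a positive flag would trigger the sampling decision and update systems that retrain or adapt the model.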