# Sampling theorem

Summary

The sampling theorem, also known as the Shannon theorem or the Nyquist-Shannon theorem, establishes the conditions under which a signal of limited spectral width and amplitude can be sampled.
When more characteristics of the signal are known, it can be described with fewer samples, through a process called compressed sensing.
Definition
In the general case, the sampling theorem states that sampling a signal requires a number of samples per unit of time greater than twice the difference between its minimum and maximum frequencies.
In the most common case, the minimum frequency of the signal is negligible compared with the maximum frequency, and the theorem then simply states that the sampling frequency must be greater than twice the maximum frequency of the signal.
Conversely, sampling with regularly spaced samples can describe a signal provided it contains no frequency higher than half the sampling frequency, known as the Nyquist frequency.
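The criterion can be checked numerically. In this sketch (assuming NumPy; the frequencies are illustrative), the dominant frequency of a sampled sinusoid is located with an FFT: a tone below half the sampling rate is recovered exactly, while a tone above it shows up at a false, aliased frequency.

```python
import numpy as np

def apparent_frequency(f_signal, f_sample, n=1024):
    """Sample a sinusoid of frequency f_signal at rate f_sample and
    return the dominant frequency found in the samples via an FFT."""
    t = np.arange(n) / f_sample
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1.0 / f_sample)
    return freqs[np.argmax(spectrum)]

# Sampling above the Nyquist rate (8 > 2 * 3): the 3 Hz tone is recovered.
print(apparent_frequency(3.0, 8.0))   # -> 3.0
# Sampling below the Nyquist rate (4 < 2 * 3): the 3 Hz tone aliases to 1 Hz.
print(apparent_frequency(3.0, 4.0))   # -> 1.0
```

The aliased value follows the folding rule: a 3 Hz tone sampled at 4 Hz is indistinguishable from a tone at |4 - 3| = 1 Hz.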

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all the other pages of this site. Be sure to verify the information against official EPFL sources.

Related people (18)

Related concepts (41)

Sampling (signal)

Sampling consists of taking the values of a signal at defined, generally regular, intervals. It produces a sequence of discrete values called samples.
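A minimal sketch of this operation, assuming NumPy and an illustrative 1 Hz sine read out every 0.25 s:

```python
import numpy as np

# Sampling: read the value of a continuous-time signal x(t) every T seconds.
T = 0.25                             # sampling period (illustrative choice)
t = np.arange(8) * T                 # sampling instants 0, 0.25, ..., 1.75
samples = np.sin(2 * np.pi * t)      # discrete samples of a 1 Hz sine
```

The resulting sequence alternates 0, 1, 0, -1, ...: the continuous sine has been reduced to a discrete set of values.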

Aliasing

Figure: aliasing of a sinusoidal signal of frequency f = 0.9, indistinguishable from a signal of frequency f = 0.1 when sampled with period T = 1.0.
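The effect described in the figure caption can be checked numerically; a minimal NumPy sketch using the caption's frequencies:

```python
import numpy as np

# Samples taken at period T = 1.0 (sampling frequency 1, Nyquist frequency 0.5).
n = np.arange(32)

# A 0.9 Hz cosine and a 0.1 Hz cosine produce identical samples,
# because 0.9 = 1.0 - 0.1 folds back below the Nyquist frequency.
high = np.cos(2 * np.pi * 0.9 * n)
low = np.cos(2 * np.pi * 0.1 * n)

print(np.allclose(high, low))   # -> True
```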

Analog-to-digital converter

Figure: standard symbol of an analog-to-digital converter.
An analog-to-digital converter (ADC, sometimes called an A/D converter) is a device that converts an analog signal into a digital value.
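A real ADC is an electronic device, but its ideal behaviour can be modelled in a few lines. This is a toy model of an ideal uniform, truncating converter; the names, reference voltage, and resolution are hypothetical choices for the example:

```python
def adc_quantize(v, v_ref=1.0, bits=8):
    """Ideal uniform ADC model: map an input voltage v in [0, v_ref)
    onto one of 2**bits integer output codes."""
    levels = 2 ** bits
    code = int(v / v_ref * levels)        # truncating quantizer
    return max(0, min(levels - 1, code))  # clip to the valid code range

print(adc_quantize(0.5))    # -> 128 (mid-scale of an 8-bit converter)
```

Inputs outside the reference range saturate at code 0 or code 255, mimicking the clipping of a real converter.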

Related courses (64)

EE-205: Signals and systems (for EL&IC)

This class teaches the theory of linear time-invariant (LTI) systems. These systems serve both as models of physical reality (such as the wireless channel) and as engineered systems (such as electrical circuits, filters and control strategies).
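As a small illustration of the LTI idea in the description above (an assumed NumPy sketch, not course material): a discrete LTI system is fully described by its impulse response, and its output is the convolution of the input with that response.

```python
import numpy as np

h = np.array([0.5, 0.5])           # impulse response: two-tap moving average
x = np.array([1.0, 2.0, 3.0, 4.0])  # input signal
y = np.convolve(x, h)               # LTI output = input convolved with h
print(y)                            # -> [0.5, 1.5, 2.5, 3.5, 2.0]
```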

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce physiological basis, signal acquisition solutions (sensors) and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital sign monitoring and diagnosis purposes.

EE-350: Signal processing

In this course, we present the basic methods of signal processing.

Related lectures (139)

Related publications (100)



New advances in the field of image sensors (especially in CMOS technology) tend to call into question the conventional methods used to acquire images. Compressive sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters that generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: it consistently reduces the amount of data to be converted, and it also suppresses digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption of these components (the ADC and DSP represent 99% of the total power consumption). More broadly, the CS paradigm allows the Nyquist-Shannon sampling theory to be questioned, or at least extended.

This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Advances are presented in hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been deepened, aiming to design suitable architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of compressed sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS push back the physical limitations (i.e. spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?

A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisitions in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling was developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of liquid crystal devices to enable hyperspectral imaging combined with spatial super-resolution.

The conclusion of this study can be summarized as follows: CS should now be considered a toolbox for more easily defining trade-offs between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some material constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared with so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder needs to be defined.
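As a toy illustration of the compressed-sensing idea discussed in this abstract (not the thesis's actual method): a sparse signal can be recovered from fewer measurements than unknowns. The sketch below recovers a 1-sparse vector with a single matching-pursuit step; the measurement matrix and signal are made up for the example.

```python
import numpy as np

def omp_1sparse(A, y):
    """One step of orthogonal matching pursuit, enough to recover a
    1-sparse vector x from the underdetermined system y = A @ x."""
    j = int(np.argmax(np.abs(A.T @ y)))            # most correlated column
    coef = (A[:, j] @ y) / (A[:, j] @ A[:, j])     # least-squares coefficient
    x = np.zeros(A.shape[1])
    x[j] = coef
    return x

# 3 measurements of a length-4 signal with a single nonzero entry.
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])
x_true = np.array([0., 0., 0., 2.])
y = A @ x_true                       # only 3 numbers measured, 4 unknowns

x_hat = omp_1sparse(A, y)
print(x_hat)                         # -> [0. 0. 0. 2.]
```

In real compressive imaging the matrix is far larger and the recovery uses many such greedy steps (or convex optimization), but the principle is the same: sparsity substitutes for the missing measurements.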

Despite the high number of investments in data-based models in the expansion of Industry 4.0, too little effort has been made to ensure the maintenance of those models. In a data-streaming environment, data-based models are subject to concept drift. A concept drift is a change in data distribution which will, at some point, decrease the accuracy of the model. To address this problem, various frameworks are presented in the literature, but there is no optimal methodology for implementing them. This paper presents a methodology for implementing a problem-oriented, complete solution to ensure the maintenance of an industrial data-based model. The final drift-handling solution is composed of a sampling decision system and an update system. The methodology begins with a concept-drift identification phase. Solutions are then pre-selected based on the identified concept drifts. Next, an optimization problem is designed to select the solution that optimizes the costs and respects the constraints. To better link concept-drift characteristics to drift-handling solutions, a causal concept-drift classification system is proposed. The industrial implementation of such a solution is discussed and several questions are raised. This paper presents an original and detailed methodology that shows encouraging results in addressing the model-maintenance challenge; however, concept-drift identification, and the links between concept-drift characteristics and drift detection, require further research.
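As a deliberately simplified illustration of the drift idea in this abstract (not the paper's method; the function name and threshold are invented): a drift can be flagged when a summary statistic of recent data moves away from its historical value.

```python
def mean_shift_drift(window_old, window_new, threshold=0.5):
    """Toy drift check: flag a concept drift when the mean of the new
    data window moves away from the old window by more than threshold."""
    old_mean = sum(window_old) / len(window_old)
    new_mean = sum(window_new) / len(window_new)
    return abs(new_mean - old_mean) > threshold

print(mean_shift_drift([0.0] * 10, [1.0] * 10))   # -> True (distribution shifted)
print(mean_shift_drift([0.0, 0.2], [0.1, 0.1]))   # -> False (no shift)
```

Real drift detectors use statistical tests and sequential analysis rather than a fixed mean threshold, but the decision structure (monitor, compare, trigger an update) is the one the paper's sampling and update systems build on.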

Sampling theory has prospered extensively in the last century. Its elegant mathematics and vast number of applications are the reasons for its popularity. The applications involved in this thesis are in signal processing and communications, and call on mathematical notions from Fourier theory, spectral analysis, basic linear algebra, and spline and wavelet theory. This thesis is divided into two parts: Chapters 2 and 3 consider uniform sampling of non-bandlimited signals, and Chapters 4, 5, and 6 treat different irregular sampling problems.

In the first part we address the problem of sampling signals that are not bandlimited but are characterized as having a finite number of degrees of freedom per unit of time. These signals will be called signals with a finite rate of innovation. We show that these signals can be uniquely represented given a sufficient number of samples obtained using an appropriate sampling kernel. The number of samples must be greater than or equal to the number of degrees of freedom of the signal; in other words, the sampling rate must be greater than or equal to the rate of innovation of the signal. In particular, we derive sampling schemes for periodic and finite-length streams of Diracs and piecewise polynomial signals using the sinc, the differentiated sinc, and the Gaussian kernels. Sampling and reconstruction of piecewise bandlimited signals and filtered piecewise polynomials is also considered. We also derive local reconstruction schemes for infinite-length piecewise polynomials with a finite "local" rate of innovation, using compact-support kernels such as splines. Numerical experiments on all of the reconstruction schemes are shown.

The first topic of the second part of this thesis is the irregular sampling of bandlimited signals with unknown sampling instants. The locations of the irregular set of samples are found by treating the problem as a combinatorial optimization problem. Search methods for the locations are described, and numerical simulations on a random set and a jittered set of locations are presented. The second topic is the periodic nonuniform sampling of bandlimited signals. The set of samples involved has a structure which is irregular yet periodic. We develop a fast scheme that reduces the complexity of the problem by exploiting the special pattern of the locations. The motivation for developing a fast scheme originates from the fact that the periodic nonuniform set was also considered in the sampling-with-unknown-locations problem, where a fast search method for the locations was sought. Finally, the last topic is the irregular sampling of signals that are linearly and nonlinearly approximated using Fourier and wavelet bases. We present variants of the Papoulis-Gerchberg algorithm which take into account the information given in the approximation of the signal. Numerical experiments are presented in the context of erasure correction.
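A minimal NumPy sketch of a Papoulis-Gerchberg-style iteration for the erasure-correction setting mentioned above (the signal, band, and erasure positions are illustrative assumptions, not the thesis's experiments). The algorithm alternates two projections: onto the set of bandlimited signals, and onto the set of signals matching the known samples.

```python
import numpy as np

def papoulis_gerchberg(samples, known, band, iters=300):
    """Recover the missing samples of a bandlimited signal by alternately
    zeroing the FFT bins outside `band` (bandlimitation) and re-imposing
    the known sample values."""
    x = np.where(known, samples, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~band] = 0.0                 # enforce the band limitation
        x = np.fft.ifft(X).real
        x[known] = samples[known]      # enforce the known samples
    return x

# Assumed toy setup: a 64-sample signal bandlimited to |f| <= 0.125
# cycles per sample, with three samples erased.
n = 64
k = np.arange(n)
x_true = np.cos(2 * np.pi * 3 * k / n) + 0.5 * np.sin(2 * np.pi * 5 * k / n)
band = np.abs(np.fft.fftfreq(n)) <= 0.125
known = np.ones(n, dtype=bool)
known[[10, 30, 50]] = False            # erase three samples

x_rec = papoulis_gerchberg(np.where(known, x_true, 0.0), known, band)
```

Because the erased positions are few and well separated relative to the bandwidth, the iteration contracts the reconstruction error geometrically and recovers the missing samples to machine precision.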