# Kalman filter

Summary

[Figure: basic concept of the Kalman filter.]
In statistics and control theory, the Kalman filter is an infinite impulse response filter that estimates the states of a dynamic system from a series of incomplete or noisy measurements. The filter is named after the Hungarian-born American engineer and mathematician Rudolf Kálmán.
Example applications
The Kalman filter is used in a wide range of technological fields (radar, computer vision, communications, …). It is a major topic in automatic control and signal processing. One example use is the continuous provision of information such as the position or velocity of an object from a series of observations of its position, possibly including measurement errors.
For example, in radar target tracking, data on the target's position, velocity and acceleration are measured at each instant.
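The radar-tracking use case above can be sketched with a minimal one-dimensional constant-velocity Kalman filter. All matrices, noise levels and the dynamics model below are illustrative assumptions, not taken from any EPFL course or publication:

```python
import numpy as np

# Minimal Kalman filter tracking a target's position and velocity
# from noisy position-only measurements (constant-velocity model).
dt = 1.0                                # time step
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[4.0]])                   # measurement noise covariance (assumed)

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])           # true state: position 0, velocity 1
x_est = np.array([0.0, 0.0])            # filter's initial guess
P = 10.0 * np.eye(2)                    # initial estimate covariance

for _ in range(50):
    # Simulate the target and a noisy radar measurement of its position.
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, 2.0, size=1)

    # Predict step: propagate the estimate through the dynamics.
    x_est = F @ x_est
    P = F @ P @ F.T + Q

    # Update step: correct the prediction with the measurement.
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("true:", x_true, "estimate:", x_est)
```

Even though only noisy positions are observed, the filter recovers an estimate of the unobserved velocity as well, which is the point of the radar example.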

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Be sure to verify the information against official EPFL sources.


Related people (42)

Related concepts (64)

Control theory

In mathematics and engineering, control theory studies the behaviour of parameterized dynamical systems as a function of the trajectories of their parameters.

Time series

[Figure: temperature time series by country (grouped by continent), showing a medium- and long-term warming trend.]

Particle filter

Particle filters, also known as sequential Monte Carlo methods, are sophisticated model-estimation techniques based on simulation.

Related courses (71)

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

ME-422: Multivariable control

This course covers methods for the analysis and control of systems with multiple inputs and outputs, which are ubiquitous in modern technology and industry. Special emphasis will be given to discrete-time systems, due to their relevance for digital and embedded control architectures.

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce physiological basis, signal acquisition solutions (sensors) and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital sign monitoring and diagnosis purposes.

Related publications (100)


Related units (24)

Related lectures (163)

The Global Navigation Satellite System (GNSS) refers to a constellation of satellites emitting signals from space, used to provide Position, Navigation and Timing (PNT) services to receivers on Earth. Nowadays, GNSS is exploited in a wide range of applications, including ground mapping, transportation, machine control, timing, surveying, defense, and aerial photogrammetry [1]. However, as we become more dependent on GNSS technology, we also become more vulnerable to its limitations [2]. GNSS operates with satellites orbiting in Medium Earth Orbits (MEO, approximately 20200 [km] in altitude). On one hand, this great distance gives the satellite signal a large footprint on Earth; on the other hand, it requires the signal to cross a wide path through the atmosphere before reaching the receiver. This results in a decrease in signal strength, which is not an issue in open-sky conditions but becomes a limitation in deep-attenuation environments, with little or no service in urban canyons, dense cities, or indoors [3]. In this contribution, a possible strategy to cope with this issue is proposed and evaluated through a simulation process. It involves the use of Low Earth Orbit (LEO) satellites as a support for the GNSS PNT service. Since no broadband big-LEO constellation broadcasting navigation messages exists today, various LEO constellation setups have been simulated and tested in this project. Because LEO satellites are much closer to Earth (200 to 1500 [km]), they move faster across the sky, passing overhead in minutes instead of hours like MEO satellites. This makes it possible to exploit the Doppler effect to accurately locate the user. Therefore, three different PNT algorithms employing the Kalman filter are implemented, and the localization quality is evaluated at four distinct locations on the globe.
The results show that, in most cases, compared to the standard GPS pseudorange-only PNT, the addition of LEO pseudoranges already improves the localization quality. The performance of the PNT algorithm increases even more when both pseudorange and pseudorange-rate (Doppler) measurements are exploited. Nevertheless, the choice of constellation parameters such as satellite altitude, total number of satellites, orbit inclination, or the frequency of the emitted signal has a great impact on the magnitude of the improvement in localization quality. The factor with the most significant influence appears to be the signal frequency, which must therefore be chosen carefully: the higher the better. Moreover, it has been demonstrated that at high elevation cut-off angles (denied environments), the addition of a LEO constellation is a valid option and allows an accurate position to be obtained even when GNSS alone cannot provide a solution. Finally, a LEO-only system seems to be a promising alternative to cope with GNSS jamming or spoofing. However, the optimal LEO parameters providing an accurate solution for all locations on the globe have not yet been identified. In conclusion, the result of this effort is the awareness that, once the optimal parameters for the LEO system have been found, it is a valid option to augment GNSS, or even serve as a standalone backup, especially when Doppler-based positioning is applied. Nevertheless, we are still far from that end, and much more research needs to be done.

2020

The standard Kalman filter is a powerful and widely used tool to perform prediction, filtering and smoothing in the field of linear Gaussian state-space models. In its standard setting it has a simple recursive form, which implies high computational efficiency. As the latter is essentially a least squares procedure, optimality properties can be derived easily. These characteristics of the standard Kalman filter depend strongly on the distributional and linearity assumptions of the model. If we consider nonlinear non-Gaussian state-space models, all these properties and characteristics are no longer valid. Consequently, there are different approaches to the robustification of the Kalman filter. One is based on the ideas of minimax problems and influence curves. Others use numerical integration and Monte Carlo methods. Herein we propose a new filter by implementing a method of numerical integration, called scrambled net quadrature, which consists of a mixture of Monte Carlo methods and quasi-Monte Carlo methods, providing an integration error of order of magnitude $N^{-3/2}\log(N)^{(r-1)/2}$ in probability, where $r$ denotes the dimension. We show that the point-wise bias of the posterior density estimate is of order of magnitude $N^{-3}\log(N)^{r-1}$ but grows linearly with time.
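The simple recursive form mentioned in the abstract above is the standard predict/update pair. Written for a generic linear Gaussian model $x_k = F x_{k-1} + w_k$, $z_k = H x_k + v_k$ with $w_k \sim \mathcal{N}(0, Q)$ and $v_k \sim \mathcal{N}(0, R)$ (generic textbook notation, not the paper's own symbols):

$$
\begin{aligned}
\text{Predict:}\quad & \hat{x}_{k|k-1} = F\,\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F\,P_{k-1|k-1}\,F^\top + Q,\\
\text{Update:}\quad & K_k = P_{k|k-1} H^\top \left(H\,P_{k|k-1}\,H^\top + R\right)^{-1},\\
& \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - H\,\hat{x}_{k|k-1}\right),\\
& P_{k|k} = \left(I - K_k H\right) P_{k|k-1}.
\end{aligned}
$$

Each step involves only a few small matrix multiplications and one inverse, which is the source of the computational efficiency the abstract refers to.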

2002

In this thesis, methods and models are developed and presented aiming at the estimation, restoration and transformation of the characteristics of human speech. During the first period of the thesis, a concept was developed that allows restoring prosodic voice features and reconstructing more natural-sounding speech from pathological voices using a multi-resolution approach. Inspired by observations made with this approach, the need emerged for a novel method for separating speech into voice source and articulation components, in order to improve the perceptive quality of the restored speech signal. This subsequently became the main part of this work and is therefore presented first in this thesis. The proposed method is evaluated on synthetic, physically modelled, healthy and pathological speech. A robust, separate representation of source and filter characteristics has applications in areas that go far beyond the reconstruction of alaryngeal speech. It is potentially useful for efficient speech coding, voice biometrics, emotional speech synthesis, remote and/or non-invasive voice disorder diagnosis, etc. A key aspect of the voice restoration method is the reliable separation of the speech signal into voice source and articulation, for it is mostly the voice source that requires replacement or enhancement in alaryngeal speech. Observations during the evaluation of the above method highlighted that this separation is insufficient with currently known methods. Therefore, the main part of this thesis is concerned with the modelling of voice and vocal tract and the estimation of the respective model parameters. Most methods for joint source-filter estimation known today represent a compromise between model complexity, estimation feasibility and estimation efficiency.
Typically, single-parametric models are used to represent the source for the sake of tractable optimization, or multi-parametric models are estimated using inefficient grid searches over the entire parameter space. The novel method presented in this work advances the efficient estimation and fitting of multi-parametric source and filter models to healthy and pathological speech signals, resulting in a more reliable estimation of voice source and especially vocal tract coefficients. In particular, the proposed method exhibits a largely reduced bias in the estimated formant frequencies and bandwidths over a large variety of experimental conditions such as environmental noise, glottal jitter, fundamental frequency, voice types and glottal noise. The method appears to be especially robust to environmental noise and improves the separation of deterministic voice source components from the articulation. Alaryngeal speakers often have great difficulty producing intelligible, not to mention prosodic, speech. Despite great efforts and advances in surgical and rehabilitative techniques, currently known methods, devices and modes of speech rehabilitation leave pathological speakers unable to control key aspects of their voice. The multi-resolution approach presented at the end of this thesis provides alaryngeal speakers with an intuitive way to increase prosodic features in their speech by reconstructing a more intelligible, more natural and more prosodic voice. The proposed method is entirely non-invasive. Key prosodic cues are reconstructed and enhanced at different temporal scales by inducing additional volatility estimated from other, still intact, speech features. The restored voice source is thus controllable in an intuitive way by the alaryngeal speaker. Despite the advantages mentioned above, the proposed joint source-filter estimation method also has a weak point: it exhibits a susceptibility to modelling errors of the glottal source. On the other hand, the proposed estimation framework appears to be well suited for future research on exactly this topic. A logical continuation of this work is to leverage the efficiency and reliability of the proposed method for the development of new, more accurate glottal source models.