
# Kalman filter

Summary

In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, which contain statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.
This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow.
Kalman filtering has numerous technological applications, including the guidance, navigation, and control of vehicles such as aircraft and spacecraft.
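The recursion described above can be made concrete with a minimal sketch. The example below is an illustrative one-dimensional filter (not tied to any EPFL course or publication): it estimates a constant signal from noisy measurements using the predict/update cycle, with illustrative noise variances `q` and `r`:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0  # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the state model is "constant", so only uncertainty grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)          # gain: how much to trust the measurement
        x = x + k * (z - x)      # correct the estimate with the innovation
        p = (1.0 - k) * p        # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 1.25
noisy = [true_value + random.gauss(0.0, 0.5) for _ in range(200)]
print(round(kalman_1d(noisy)[-1], 2))  # close to 1.25
```

Because the gain `k` shrinks as the variance `p` decreases, the filter weights early measurements heavily and later ones progressively less, converging on the underlying value while suppressing the noise.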

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.



Related courses (71)


FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

ME-422: Multivariable control

This course covers methods for the analysis and control of systems with multiple inputs and outputs, which are ubiquitous in modern technology and industry. Special emphasis will be given to discrete-time systems, due to their relevance for digital and embedded control architectures.

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce the physiological basis, signal-acquisition solutions (sensors) and state-of-the-art signal-processing techniques, and (2) to propose concrete examples of applications for vital-sign monitoring and diagnosis purposes.

Related concepts (64)

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state.

In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data.

Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions for filtering problems for nonlinear state-space systems, such as those arising in signal processing and Bayesian statistical inference.

Related publications (100)

In this thesis, methods and models are developed and presented aiming at the estimation, restoration and transformation of the characteristics of human speech. During a first period of the thesis, a concept was developed that allows restoring prosodic voice features and reconstructing more natural-sounding speech from pathological voices using a multi-resolution approach. Inspired by observations made with this approach, the necessity of a novel method for the separation of speech into voice source and articulation components emerged, in order to improve the perceptive quality of the restored speech signal. This method subsequently became the main part of this work and is therefore presented first in this thesis. The proposed method is evaluated on synthetic, physically modelled, healthy and pathological speech.

A robust, separate representation of source and filter characteristics has applications in areas that go far beyond the reconstruction of alaryngeal speech. It is potentially useful for efficient speech coding, voice biometrics, emotional speech synthesis, remote and/or non-invasive voice disorder diagnosis, etc. A key aspect of the voice restoration method is the reliable separation of the speech signal into voice source and articulation, for it is mostly the voice source that requires replacement or enhancement in alaryngeal speech. Observations during the evaluation of the above method highlighted that this separation is insufficient with currently known methods. Therefore, the main part of this thesis is concerned with the modelling of voice and vocal tract and the estimation of the respective model parameters. Most methods for joint source-filter estimation known today represent a compromise between model complexity, estimation feasibility and estimation efficiency.
Typically, single-parametric models are used to represent the source for the sake of tractable optimization, or multi-parametric models are estimated using inefficient grid searches over the entire parameter space. The novel method presented in this work advances the efficient estimation and fitting of multi-parametric source and filter models to healthy and pathological speech signals, resulting in a more reliable estimation of voice source and especially vocal tract coefficients. In particular, the proposed method exhibits a largely reduced bias in the estimated formant frequencies and bandwidths over a large variety of experimental conditions such as environmental noise, glottal jitter, fundamental frequency, voice types and glottal noise. The method appears to be especially robust to environmental noise and improves the separation of deterministic voice source components from the articulation.

Alaryngeal speakers often have great difficulty in producing intelligible, not to mention prosodic, speech. Despite great efforts and advances in surgical and rehabilitative techniques, currently known methods, devices and modes of speech rehabilitation leave pathological speakers without the ability to control key aspects of their voice. The proposed multi-resolution approach presented at the end of this thesis provides alaryngeal speakers with an intuitive way to increase prosodic features in their speech by reconstructing a more intelligible, more natural and more prosodic voice. The proposed method is entirely non-invasive. Key prosodic cues are reconstructed and enhanced at different temporal scales by inducing additional volatility estimated from other, still intact, speech features. The restored voice source is thus controllable in an intuitive way by the alaryngeal speaker. Despite the above-mentioned advantages, the proposed joint source-filter estimation method also has a weak point.
The proposed method exhibits a susceptibility to modelling errors of the glottal source. On the other hand, the proposed estimation framework appears to be well suited for future research on exactly this topic. A logical continuation of this work is to leverage the efficiency and reliability of the proposed method for the development of new, more accurate glottal source models.
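As a point of reference for the source-filter separation discussed above, the sketch below shows the classical linear-prediction (LPC) baseline that joint estimation methods of this kind improve on; it is an illustrative NumPy reimplementation, not the method proposed in the thesis. The autocorrelation method with the Levinson-Durbin recursion estimates an all-pole "vocal tract" filter, and inverse filtering with it yields a residual that approximates the voice source:

```python
import numpy as np

def lpc(x, order):
    """All-pole model via the autocorrelation method + Levinson-Durbin.
    Returns a = [1, a1, ..., a_order]; filtering x with a (inverse
    filtering) yields the prediction residual, i.e. the source estimate."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = np.dot(a[:i], r[i:0:-1])     # sum_j a[j] * r[i - j]
        k = -acc / err                     # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]     # order-update of the predictor
        err *= 1.0 - k * k                 # prediction-error energy
    return a

# Demo: synthesize an AR(2) "vocal tract" driven by white noise (the
# synthetic source), then check that LPC recovers the filter.
rng = np.random.default_rng(0)
e = rng.standard_normal(4000)              # synthetic voice source
x = np.zeros(4000)
for t in range(4000):
    x[t] = e[t] + 0.9 * x[t - 1] - 0.4 * x[t - 2]
a = lpc(x, 2)
residual = np.convolve(x, a)[:4000]        # inverse filtering -> source estimate
print(np.round(a, 2))                      # close to [1, -0.9, 0.4]
```

Real speech violates this model in exactly the ways the abstract describes: the glottal source is not white noise, which biases the estimated formants and motivates joint multi-parametric source-filter estimation.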

The standard Kalman filter is a powerful and widely used tool to perform prediction, filtering and smoothing in the field of linear Gaussian state-space models. In its standard setting it has a simple recursive form, which implies high computational efficiency. As the latter is essentially a least-squares procedure, optimality properties can be derived easily. These characteristics of the standard Kalman filter depend strongly on the distributional and linearity assumptions of the model. If we consider nonlinear non-Gaussian state-space models, all these properties and characteristics are no longer valid. Consequently, there are different approaches to the robustification of the Kalman filter. One is based on the ideas of minimax problems and influence curves. Others use numerical integration and Monte Carlo methods. Herein we propose a new filter by implementing a method of numerical integration, called scrambled net quadrature, which consists of a mixture of Monte Carlo methods and quasi-Monte Carlo methods, providing an integration error of order of magnitude $N^{-3/2}\log(N)^{(r-1)/2}$ in probability, where $r$ denotes the dimension. We show that the point-wise bias of the posterior density estimate is of order of magnitude $N^{-3}\log(N)^{r-1}$ but grows linearly with time.
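The error behaviour of scrambled-net quadrature can be illustrated with an off-the-shelf generator. The sketch below uses SciPy's `scipy.stats.qmc.Sobol` with `scramble=True` as a stand-in for the scrambled nets of the abstract, comparing plain Monte Carlo against a randomized quasi-Monte Carlo estimate of a smooth integral whose exact value is 1; the integrand is an illustrative choice, not one from the paper:

```python
import numpy as np
from scipy.stats import qmc

# Integrand on [0,1]^2 with known integral 1: prod_i 3*u_i^2.
def f(u):
    return np.prod(3.0 * u**2, axis=1)

n = 2**12  # Sobol' points should be drawn in powers of two

# Plain Monte Carlo: error decays like N^{-1/2}.
rng = np.random.default_rng(0)
mc_est = f(rng.random((n, 2))).mean()

# Scrambled Sobol' net: randomized QMC; for smooth integrands the error
# decays nearly like N^{-3/2} (up to log factors), as in the abstract.
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
rqmc_est = f(sobol.random(n)).mean()

print(abs(mc_est - 1.0), abs(rqmc_est - 1.0))  # scrambled-net error is typically far smaller
```

Scrambling keeps each point uniformly distributed (so the estimate stays unbiased, like Monte Carlo) while preserving the low-discrepancy structure of the net, which is what makes the $N^{-3/2}$-type rate attainable.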

2002

The Global Navigation Satellite System (GNSS) refers to a constellation of satellites emitting signals from space, used to provide Position, Navigation and Timing (PNT) services to receivers on Earth. Nowadays, GNSS is exploited in a wide range of applications, including ground mapping, transportation, machine control, timing, surveying, defense, and aerial photogrammetry [1]. However, as we become more dependent on GNSS technology, we also become more vulnerable to its limitations [2]. The GNSS operates with satellites orbiting in Medium Earth Orbits (MEO, approximately 20200 km in altitude). On the one hand, this great distance allows for a large footprint of the satellite signal on Earth; on the other hand, it requires the signal to cross a wide path in the atmosphere before reaching the receiver. This results in a decrease in signal strength, which is not an issue in open-sky conditions, but becomes a limitation in deep-attenuation environments, with little or no service in urban canyons, in dense cities or indoors [3]. In this contribution, a possible strategy to cope with this issue is proposed and evaluated through a simulation process. It involves the use of Low Earth Orbit (LEO) satellites as a support for the GNSS PNT service. Since no broadband big-LEO constellation broadcasting navigation messages exists today, various LEO constellation setups have been simulated and tested in this project. Because LEO satellites are much closer to Earth (200 to 1500 km), they move faster in the sky, passing overhead in minutes instead of hours like MEO satellites. This gives rise to the possibility of exploiting the Doppler effect to accurately locate the user. Therefore, three different PNT algorithms employing the Kalman filter are implemented, and the localization quality is evaluated at four distinct locations on the globe.
The results show that, in most cases, compared to the standard GPS pseudorange-only PNT, the addition of LEO pseudoranges already improves the localization quality. The performance of the PNT algorithm increases even more when both pseudorange and pseudorange-rate (Doppler) measurements are exploited. Nevertheless, the choice of constellation parameters, such as satellite altitude, total number of satellites, orbit inclination or the frequency of the emitted signal, has a great impact on the magnitude of the improvement in localization quality. The factor with the most significant influence appears to be the signal frequency, and for that reason it must be chosen carefully: the higher, the better. Moreover, it has been demonstrated that at high elevation cut-off angles (denied environments), the addition of a LEO constellation is a valid option and allows an accurate position to be obtained even when GNSS alone cannot provide a solution. Finally, a LEO-only system seems to be a promising alternative to cope with GNSS jamming or spoofing. However, the optimal LEO parameters providing an accurate solution for all locations on the globe have not yet been identified. In conclusion, the result of this effort is the awareness that, once the optimal parameters for the LEO system have been found, it is a valid option to augment the GNSS, or even to serve as a standalone backup, especially when Doppler-based positioning is applied. Nevertheless, we are still far from that end, and much more research needs to be done.
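The benefit of adding Doppler-derived rate measurements, reported above, can be reproduced in miniature. The sketch below is a generic one-dimensional constant-velocity Kalman filter (an illustration of the idea with made-up noise levels, not the thesis's PNT algorithms): fusing velocity measurements alongside position measurements tightens the position estimate, analogous to using pseudorange rates alongside pseudoranges:

```python
import numpy as np

def kf_track(zs_pos, zs_vel=None, dt=1.0, q=0.01, r_pos=25.0, r_vel=0.04):
    """Constant-velocity Kalman filter on a [position, velocity] state.
    zs_pos: noisy positions (pseudorange-like); zs_vel: optional noisy
    velocities (Doppler-like). Returns the filtered states."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # motion model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                   # process noise
    x, P = np.zeros(2), 100.0 * np.eye(2)
    out = []
    for i, zp in enumerate(zs_pos):
        x, P = F @ x, F @ P @ F.T + Q                     # predict
        if zs_vel is None:                                # position only
            H, z, R = np.array([[1.0, 0.0]]), np.array([zp]), np.array([[r_pos]])
        else:                                             # position + velocity
            H, z, R = np.eye(2), np.array([zp, zs_vel[i]]), np.diag([r_pos, r_vel])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (z - H @ x)                           # update state
        P = (np.eye(2) - K @ H) @ P                       # update covariance
        out.append(x.copy())
    return np.array(out)

# Demo: a target moving at 3 units/s; compare the steady-state position RMS
# error with and without the velocity (Doppler-like) measurements.
rng = np.random.default_rng(1)
t = np.arange(200.0)
zp = 3.0 * t + rng.normal(0.0, 5.0, t.size)   # sigma_pos = sqrt(r_pos)
zv = 3.0 + rng.normal(0.0, 0.2, t.size)       # sigma_vel = sqrt(r_vel)
rms_p = np.sqrt(np.mean((kf_track(zp)[100:, 0] - 3.0 * t[100:]) ** 2))
rms_pv = np.sqrt(np.mean((kf_track(zp, zv)[100:, 0] - 3.0 * t[100:]) ** 2))
print(rms_p, rms_pv)  # the velocity-aided filter is typically tighter
```

The mechanism mirrors the GNSS case: accurate rate measurements pin down the velocity component of the state, so the prediction step injects far less position uncertainty between measurement updates.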

2020

Related lectures (163)