Summary
Particle filters, also known as sequential Monte Carlo methods, are a family of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, which arise in fields such as signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states of a dynamical system when only partial observations are available and random perturbations affect both the sensors and the system itself. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral, in reference to mean-field interacting particle methods that had been used in fluid mechanics since the beginning of the 1960s. The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.

Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given noisy and/or partial observations. The state-space model can be nonlinear, and the initial state and noise distributions can take any required form. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without making assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.

Particle filters update their predictions in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle carries a likelihood weight that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue in these filtering algorithms; it can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution.
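To make the mechanics above concrete, here is a minimal sketch of a bootstrap particle filter in Python. The one-dimensional nonlinear model, the noise levels, the number of particles, and all function names are illustrative assumptions chosen for this example, not a reference implementation from any course or publication listed on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D nonlinear state-space model (illustrative choice):
#   state:       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + process noise
#   observation: y_t = x_t^2 / 20 + measurement noise
def transition(particles):
    return (0.5 * particles + 25.0 * particles / (1.0 + particles**2)
            + rng.normal(0.0, 1.0, size=particles.shape))

def likelihood(y, particles, sigma=1.0):
    # Gaussian observation density p(y | x), up to a constant factor
    return np.exp(-0.5 * ((y - particles**2 / 20.0) / sigma) ** 2)

def bootstrap_particle_filter(observations, n_particles=1000):
    particles = rng.normal(0.0, 2.0, size=n_particles)   # samples from the prior
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the state dynamics (prediction).
        particles = transition(particles)
        # 2. Weight particles by how well they explain the observation.
        weights = likelihood(y, particles)
        weights /= weights.sum()
        # 3. Posterior-mean estimate from the weighted particles.
        estimates.append(np.sum(weights * particles))
        # 4. Resample (multinomial) to counteract weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Simulate observations from the same model and run the filter.
T, x, ys = 50, 0.0, []
for _ in range(T):
    x = 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal()
    ys.append(x**2 / 20.0 + rng.normal())
print(bootstrap_particle_filter(np.array(ys))[:5])
```

The resampling step at the end of each iteration is one way to implement the mitigation mentioned above: low-weight particles are discarded and high-weight particles are duplicated, keeping the weights from collapsing onto a single particle.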
Related courses (21)
MICRO-570: Advanced machine learning
This course will present some of the core advanced methods in the field for structure discovery, classification and non-linear regression. This is an advanced class in Machine Learning; hence, …
MATH-414: Stochastic simulation
The student who follows this course will get acquainted with computational tools used to analyze systems with uncertainty arising in engineering, physics, chemistry, and economics. Focus will be on …
ENV-548: Sensor orientation
Determination of spatial orientation (i.e. position, velocity, attitude) via integration of inertial sensors with satellite positioning. Prerequisite for many applications related to remote sensing, …
Related concepts (9)
Estimation theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
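As a small hedged illustration of the idea, the snippet below estimates an unknown constant from noisy measurements using the sample mean; the true value, noise level, and sample size are made-up numbers for this example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Measurements of an unknown constant corrupted by Gaussian noise
# (true value 4.2 and noise level 0.5 are arbitrary illustrative choices).
measurements = 4.2 + rng.normal(0.0, 0.5, size=200)

# The sample mean is the maximum-likelihood estimator of the unknown value,
# and the sample variance estimates the noise power.
estimate = measurements.mean()
noise_var = measurements.var(ddof=1)
print(f"estimated value: {estimate:.3f}, estimated noise variance: {noise_var:.3f}")
```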
Resampling (statistics)
In statistics, resampling is the creation of new samples based on one observed sample. Resampling methods include permutation tests (also called re-randomization tests), bootstrapping, and cross validation. Permutation tests rely on resampling the original data assuming the null hypothesis. Based on the resampled data, it can be concluded how likely the original data is to occur under the null hypothesis.
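Below is a minimal sketch of a two-sample permutation test, assuming a difference-of-means statistic and synthetic Gaussian data; the sample sizes, effect size, and number of permutations are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(sample_a, sample_b, n_permutations=10_000):
    """Two-sample permutation test on the difference of means."""
    observed = sample_a.mean() - sample_b.mean()
    pooled = np.concatenate([sample_a, sample_b])
    count = 0
    for _ in range(n_permutations):
        # Under the null hypothesis the group labels are exchangeable,
        # so we reshuffle the pooled data and recompute the statistic.
        rng.shuffle(pooled)
        stat = pooled[:len(sample_a)].mean() - pooled[len(sample_a):].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_permutations   # approximate two-sided p-value

a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 1.0, 30)
print(permutation_test(a, b))
```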
Kalman filter
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
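For contrast with the sampling-based particle filter above, here is a minimal scalar Kalman filter predict/update cycle; the random-walk state model and the noise variances are illustrative assumptions rather than a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_filter_1d(observations, q=0.01, r=1.0):
    """Scalar Kalman filter for a random-walk state observed with Gaussian noise.

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for y in observations:
        # Predict: random-walk dynamics leave the mean unchanged, variance grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a slowly drifting signal.
truth = np.cumsum(rng.normal(0.0, 0.1, 100))
ys = truth + rng.normal(0.0, 1.0, 100)
print(kalman_filter_1d(ys)[-5:])
```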
Related MOOCs (2)
Elements of Geomatics
This introductory geomatics course presents the concepts and methods for acquiring, managing, and representing geodata. It covers the basics of topometry, geodesy, and cartography, …