Summary
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, several algorithms are known to solve it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping, and odometry for virtual reality or augmented reality. SLAM algorithms are tailored to the available resources and are aimed not at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots, and even inside the human body.

Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute

$$P(m_{t+1}, x_{t+1} \mid o_{1:t+1}, u_{1:t})$$

Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$:

$$P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) \, P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z$$

Similarly, the map can be updated sequentially by

$$P(m_t \mid x_t, o_{1:t}, u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t, m_{t-1}, o_t, u_{1:t}) \, P(m_{t-1}, x_t \mid o_{1:t-1}, m_{t-1}, u_{1:t})$$

Like many inference problems, a solution for the two variables jointly can be found, to a local optimum, by alternating updates of the two beliefs in a form of expectation–maximization algorithm. Statistical techniques used to approximate the above equations include Kalman filters and particle filters, which provide an estimate of the posterior probability distribution for the pose of the robot and for the parameters of the map. Methods that conservatively approximate the above model using covariance intersection avoid reliance on statistical independence assumptions, reducing algorithmic complexity for large-scale applications.
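The alternating pose/map recursion above can be made concrete with a toy example. The following is a minimal sketch, not any published SLAM implementation: it assumes a hypothetical 1D circular corridor whose cells carry a binary feature, a noisy step-right motion model, and a noisy feature sensor. All names and constants (N_CELLS, P_MOVE, P_HIT, and the helper functions) are invented for illustration. The pose belief is updated by Bayes' rule against the current map belief, and the map belief is then refined weighted by the pose posterior, mirroring the expectation–maximization-style alternation described above.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy world (all constants are illustrative assumptions).
N_CELLS = 20                            # circular corridor length
true_map = rng.random(N_CELLS) < 0.3    # binary feature per cell ("door here")
true_pose = 0

P_MOVE = 0.8   # motion model: an intended +1 step succeeds with prob. 0.8
P_HIT = 0.9    # sensor model: the cell's feature is read correctly with prob. 0.9

pose_belief = np.full(N_CELLS, 1.0 / N_CELLS)  # P(x_t | o_{1:t}, u_{1:t})
map_belief = np.full(N_CELLS, 0.5)             # P(m_i = 1), cells treated as independent

def motion_update(belief):
    # Convolve the pose belief with the noisy step-right motion model P(x_t | x_{t-1}).
    return P_MOVE * np.roll(belief, 1) + (1 - P_MOVE) * belief

def measurement_update(belief, z):
    # Reweight pose hypotheses by the observation likelihood under the current
    # uncertain map: P(z | x) = P(z | m_x = 1) P(m_x = 1) + P(z | m_x = 0) P(m_x = 0).
    if z:
        like = P_HIT * map_belief + (1 - P_HIT) * (1 - map_belief)
    else:
        like = P_HIT * (1 - map_belief) + (1 - P_HIT) * map_belief
    belief = belief * like
    return belief / belief.sum()  # the 1/Z normalization from Bayes' rule

def map_update(z):
    # Per-cell Bayes update of the map, blended by the pose posterior: cells the
    # robot probably occupies are updated strongly, others barely change.
    global map_belief
    l1 = P_HIT if z else 1 - P_HIT   # P(z | m_i = 1)
    l0 = 1 - P_HIT if z else P_HIT   # P(z | m_i = 0)
    post = map_belief * l1 / (map_belief * l1 + (1 - map_belief) * l0)
    map_belief = pose_belief * post + (1 - pose_belief) * map_belief

for t in range(60):
    true_pose = (true_pose + (rng.random() < P_MOVE)) % N_CELLS
    z = bool(true_map[true_pose]) == (rng.random() < P_HIT)  # flip with prob. 1 - P_HIT
    pose_belief = measurement_update(motion_update(pose_belief), z)
    map_update(z)

print("estimated pose:", pose_belief.argmax(), "true pose:", true_pose)
print("mean map error:", np.abs(map_belief - true_map).mean())

Maintaining both beliefs in closed form like this only works because the example keeps the state space tiny and treats map cells as independent; practical systems replace these exact sums with extended Kalman filters, particle filters (e.g. Rao-Blackwellized variants), or graph optimization.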
Related courses (7)
EE-623: Perception and learning from multimodal sensors
The course will cover different aspects of multimodal processing (complementarity vs redundancy; alignment and synchrony; fusion), with an emphasis on the analysis of people, behaviors and interaction ...
ENV-548: Sensor orientation
Determination of spatial orientation (i.e. position, velocity, attitude) via integration of inertial sensors with satellite positioning. Prerequisite for many applications related to remote sensing ...
MICRO-452: Basics of mobile robotics
The course teaches the basics of autonomous mobile robots. Both hardware (energy, locomotion, sensors) and software (signal processing, control, localization, trajectory planning, high-level control) ...
Related lectures (32)
Robot Localization: Positioning Systems and Odometry
Explores robot localization through positioning systems, odometry, sensor fusion, and error modeling.
Gaussian Mixture Models: Applications in Recommender Systems (MOOC: Introduction to optimization on smooth manifolds: first order methods)
Explores the applications of Gaussian Mixture Models in recommender systems and real-world scenarios like the Roomba robot.
Odometry and Sensor Fusion
Explores error sources in odometry, sensor fusion, feature-based localization, and the Kalman Filter algorithm.
Related publications (404)

Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment

Jan Skaloud, Davide Antonio Cucci, Aurélien Arnaud Brun, Kyriaki Mouzakidou

Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination wit ...
2024

TSLAM: a tag-based object-centered monocular navigation system for augmented manual woodworking.

Hong-Bin Yang

TimberSLAM (TSLAM) is an object-centered, tag-based visual self-localization and mapping (SLAM) system for monocular RGB cameras. It was specifically developed to support a robust and augmented reality pipeline for close-range, noisy, and cluttered fabrica ...
2024

Modality-invariant Visual Odometry for Embodied Vision

Amir Roshan Zamir, Roman Christian Bachmann, Marius Reinhard Memmel

Effectively localizing an agent in a realistic, noisy setting is crucial for many embodied vision tasks. Visual Odometry (VO) is a practical substitute for unreliable GPS and compass sensors, especially in indoor environments. While SLAM-based methods show ...
Los Alamitos, 2023
Related concepts (18)
Robotics
Robotics is an interdisciplinary field spanning electronics and communication, computer science, and engineering. Robotics involves the design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrates mechanical engineering, electrical engineering, information engineering, mechatronics, electronics, biomedical engineering, computer engineering, control systems engineering, software engineering, mathematics, and other fields.
Robotic mapping
Robotic mapping is a discipline related to computer vision and cartography. The goal for an autonomous robot is to be able to construct (or use) a map (for outdoor use) or a floor plan (for indoor use) and to localize itself and its recharging bases or beacons within it. Robotic mapping is the branch that studies and applies a robot's ability to localize itself within such a map or plan and, in some cases, to construct the map or floor plan itself. Evolutionarily shaped blind action may suffice to keep some animals alive.
Autonomous robot
An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots were Elmer and Elsie, constructed in the late 1940s by W. Grey Walter. They were the first robots in history programmed to "think" the way biological brains do and were meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis, the movement that occurs in response to a light stimulus.
Related MOOCs (25)
Digital Signal Processing [retired]
The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a ...
Digital Signal Processing I
Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP
Digital Signal Processing II
Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization