Image rectification
Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane. Image rectification is used in computer stereo vision to simplify the problem of finding matching points between images (i.e. the correspondence problem), and in geographic information systems to merge images taken from multiple perspectives into a common map coordinate system.
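As a minimal sketch of how rectification is typically applied in stereo vision, the snippet below uses OpenCV's stereoRectify, initUndistortRectifyMap, and remap functions. The camera matrices, distortion coefficients, and the rotation/translation between the two cameras are assumed to come from a prior calibration; the numeric values and the blank stand-in images are placeholders, not part of the original text.

```python
import cv2
import numpy as np

# Assumed inputs from a prior stereo calibration (placeholder values).
K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
d1 = d2 = np.zeros(5)                                # distortion coefficients
R = cv2.Rodrigues(np.array([0.0, 0.02, 0.0]))[0]     # rotation camera 1 -> camera 2
T = np.array([[-0.1], [0.0], [0.0]])                 # translation camera 1 -> camera 2
size = (640, 480)

# Compute rectifying rotations R1, R2 and new projection matrices P1, P2 so that
# corresponding epipolar lines become horizontal and row-aligned.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Build per-pixel remapping tables and warp each image onto the common plane.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

# Stand-ins for the real left/right captures.
left = np.zeros((480, 640, 3), np.uint8)
right = np.zeros((480, 640, 3), np.uint8)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```

After this step, a matching point in the two rectified images lies on the same image row, which reduces the correspondence search to a one-dimensional scan.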
Visual odometry
In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers. In navigation, odometry is the use of data from the motion of actuators, measured by devices such as rotary encoders counting wheel rotations, to estimate change in position over time.
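A minimal sketch of one common monocular visual-odometry step, assuming OpenCV is available and the camera intrinsic matrix K is known: match features between consecutive frames, estimate the essential matrix, and recover the relative rotation and translation (the translation is only recovered up to scale). The function name and parameter choices are illustrative assumptions.

```python
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate the camera motion between two consecutive grayscale frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match ORB descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decompose into R, t; in monocular
    # visual odometry the translation direction is known but not its magnitude.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Chaining these frame-to-frame poses over time yields the estimated trajectory, in the same way wheel odometry integrates encoder increments.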
Parallax
Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight and is measured by the angle or half-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects, so parallax can be used to determine distances. To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax.
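As a worked example of the distance relation: with a baseline b between the two viewpoints and a full parallax angle p between the two lines of sight, the distance is approximately d = b / (2 tan(p/2)), which reduces to d ≈ b / p for small angles; in astronomy, a star with an annual parallax of p arcseconds (Earth-Sun distance as baseline) lies at 1/p parsecs. The numbers below are purely illustrative.

```python
import math

def distance_from_parallax(baseline, parallax_angle):
    """Distance to a target given the baseline between two viewpoints
    (same units as the result) and the full parallax angle in radians."""
    return baseline / (2.0 * math.tan(parallax_angle / 2.0))

# Terrestrial example: 1 m baseline, 0.5 degree parallax -> roughly 115 m away.
print(distance_from_parallax(1.0, math.radians(0.5)))

# Stellar example: an annual parallax of 0.1 arcsec corresponds to 1/0.1 = 10 parsecs.
parallax_arcsec = 0.1
print(1.0 / parallax_arcsec, "parsecs")
```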
Stereoscopy
Stereoscopy (also called stereoscopics, or stereo imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek stereos, meaning "firm" or "solid", and skopeo, meaning "to look" or "to see". Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope. Most stereoscopic methods present a pair of two-dimensional images to the viewer. The left image is presented to the left eye and the right image is presented to the right eye.
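As a small illustration of routing each image of a stereo pair to the intended eye, the sketch below combines a left/right pair into a red-cyan anaglyph, one common stereoscopic display method; the NumPy usage, channel convention, and stand-in images are assumptions, not part of the original text.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a left/right image pair into a red-cyan anaglyph: the red channel
    comes from the left image, green and blue from the right, so red/cyan glasses
    deliver each view to the intended eye."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]   # red channel from the left view
    return anaglyph

# Stand-in images; in practice these would be photographs taken from two
# horizontally offset viewpoints.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
print(make_anaglyph(left, right).shape)
```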
Simultaneous localization and mapping
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, several algorithms are known to solve it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM.
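To make the idea concrete, below is a toy one-dimensional EKF-SLAM sketch: the state holds the robot position and one landmark position, motion predictions grow the uncertainty, and range measurements to the landmark jointly correct both estimates. It is a deliberately minimal illustration of the extended-Kalman-filter approach named above, with made-up noise values, not a practical SLAM system.

```python
import numpy as np

# State: [robot position, landmark position]; the landmark is initially very uncertain.
x = np.array([0.0, 5.0])
P = np.diag([0.1, 100.0])

Q = 0.05    # motion noise variance
Rm = 0.1    # measurement noise variance

def predict(x, P, u):
    """Motion step: the robot moves by commanded distance u; the landmark stays put."""
    x = x + np.array([u, 0.0])
    P = P + np.diag([Q, 0.0])          # motion Jacobian is the identity in this model
    return x, P

def update(x, P, z):
    """Measurement step: z is the measured range from robot to landmark."""
    H = np.array([[-1.0, 1.0]])        # derivative of (landmark - robot) w.r.t. state
    y = z - (x[1] - x[0])              # innovation
    S = H @ P @ H.T + Rm
    K = P @ H.T / S                    # Kalman gain (2x1)
    x = x + K.flatten() * y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated run: true robot starts at 0, true landmark at 6.0, robot moves 1.0 per step.
true_robot, true_landmark = 0.0, 6.0
rng = np.random.default_rng(0)
for _ in range(20):
    true_robot += 1.0
    x, P = predict(x, P, 1.0)
    z = (true_landmark - true_robot) + rng.normal(0.0, Rm ** 0.5)
    x, P = update(x, P, z)

print("estimated robot, landmark:", x)
```

The same predict/update structure carries over to the full problem, where the state vector contains the robot pose and many landmarks.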
Essential matrix
In computer vision, the essential matrix is a 3x3 matrix that relates corresponding points in stereo images, assuming that the cameras satisfy the pinhole camera model. More specifically, if y and y' are homogeneous normalized image coordinates in image 1 and image 2, respectively, then (y')^T E y = 0 if y and y' correspond to the same 3D point in the scene. The above relation which defines the essential matrix was published in 1981 by H. Christopher Longuet-Higgins, introducing the concept to the computer vision community.
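A small numerical check of that relation, assuming the convention in which a point X in camera-1 coordinates maps to R X + t in camera-2 coordinates, so that E = [t]_x R and y'^T E y = 0 for the normalized image coordinates of the same 3D point; the pose and point values are illustrative.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Relative pose of camera 2 with respect to camera 1: a small rotation about the
# y-axis and a sideways translation (illustrative values).
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.0])

E = skew(t) @ R                      # essential matrix

# A 3D point in camera-1 coordinates and its normalized (homogeneous) image
# coordinates in each camera, obtained by dividing by the depth.
X1 = np.array([1.0, 0.5, 4.0])
X2 = R @ X1 + t
y1 = X1 / X1[2]
y2 = X2 / X2[2]

# The epipolar constraint y2^T E y1 should be numerically zero.
print(y2 @ E @ y1)
```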
Point-set registration
In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose.
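A minimal sketch of the rigid (rotation plus translation) case when point correspondences are already known, using the SVD-based Kabsch/Procrustes solution; practical registration methods such as ICP wrap a step like this in an iterative correspondence search. The function name and the synthetic test data are assumptions for illustration.

```python
import numpy as np

def rigid_align(P, Q):
    """Return R (3x3) and t (3,) such that R @ P[i] + t best matches Q[i]
    in the least-squares sense, for corresponding N x 3 point sets P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate and translate a random cloud, then recover the transform.
rng = np.random.default_rng(1)
P = rng.normal(size=(30, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```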