Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, military automatic target recognition, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements.
Image registration or image alignment algorithms can be classified into intensity-based and feature-based. One of the images is referred to as the moving or source image and the others are referred to as the target, fixed or sensed images. Image registration involves spatially transforming the source/moving image(s) to align with the target image. The reference frame in the target image is stationary, while the other datasets are transformed to match the target. Intensity-based methods compare intensity patterns in images via correlation metrics, while feature-based methods find correspondence between image features such as points, lines, and contours. Intensity-based methods register entire images or sub-images; if sub-images are registered, the centers of corresponding sub-images are treated as corresponding feature points. Feature-based methods establish a correspondence between a number of especially distinct points in the images. Knowing the correspondence between a number of points in the images, a geometrical transformation is then determined to map the target image to the reference image, thereby establishing point-by-point correspondence between the reference and target images. Methods combining intensity-based and feature-based information have also been developed.
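To make the intensity-based approach concrete, the sketch below estimates a pure translation between a fixed and a moving image by exhaustively maximizing normalized cross-correlation. This is a minimal NumPy illustration, not a reference implementation: the function name, the translation-only motion model, the search radius, and the wrap-around shifting via np.roll are simplifying assumptions.

```python
import numpy as np

def register_translation_ncc(fixed, moving, max_shift=20):
    """Estimate the integer (dy, dx) shift aligning `moving` to `fixed`
    by brute-force maximization of normalized cross-correlation (NCC).

    Both inputs are 2-D float arrays of the same shape; the wrap-around
    from np.roll is tolerable only for shifts small relative to the image.
    """
    a = fixed - fixed.mean()
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            b = shifted - shifted.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom == 0:
                continue
            score = float(np.sum(a * b) / denom)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

In practice, larger motions and subpixel accuracy are usually handled with frequency-domain phase correlation or multi-resolution optimization rather than an exhaustive search.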
Image registration algorithms can also be classified according to the transformation models they use to relate the target image space to the reference image space. The first broad category of transformation models includes linear transformations, which include rotation, scaling, translation, and other affine transforms. The second broad category comprises elastic or nonrigid transformations, which can locally warp the target image to align with the reference image.
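As an illustration of the linear transformation models mentioned above, the following sketch composes rotation, uniform scaling, and translation into a single 3x3 homogeneous affine matrix and applies it to reference-space points; the function name and the parameter values are illustrative assumptions.

```python
import numpy as np

def affine_matrix(theta, scale, tx, ty):
    """3x3 homogeneous matrix: rotate by `theta` radians, scale uniformly
    by `scale`, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

# Map reference-space points (x, y, 1) into the target space.
points = np.array([[10.0, 20.0, 1.0],
                   [35.0,  5.0, 1.0]])
A = affine_matrix(np.deg2rad(15.0), 1.2, 4.0, -7.0)
mapped = (A @ points.T).T   # last coordinate remains 1 for affine maps
```

Working in homogeneous coordinates lets rotation, scaling, and translation be chained by plain matrix multiplication, which is one reason affine models are a common first choice in registration pipelines.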
Students get acquainted with the process of mapping from images (orthophoto and DEM), as well as with methods for monitoring the Earth's surface using remotely sensed data. Methods will span from machine ...
This summer school is a hands-on introduction to the fundamentals of image analysis for scientists. A series of lectures provides students with the key concepts in the field and is followed by practical ...
Study of advanced image processing; mathematical imaging. Development of image-processing software and prototyping in Jupyter Notebooks; application to real-world examples in industrial vision and biomedical imaging.
Photometric stereo, a computer vision technique for estimating the 3D shape of objects through images captured under varying illumination conditions, has been a topic of research for nearly four decades. In its general formulation, photometric stereo is an ...
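Purely as background for the abstract above (which addresses a more general formulation), here is a minimal sketch of the classic Lambertian photometric-stereo solve, assuming known, distant light directions and ignoring shadows and specularities; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo under the classic
    Lambertian model, given images taken under known light directions.

    images:     float array of shape (m, h, w), one grayscale image per light.
    light_dirs: float array of shape (m, 3), unit-length light directions.
    Returns normals of shape (h, w, 3) and albedo of shape (h, w).
    """
    m, h, w = images.shape
    I = images.reshape(m, -1)                            # (m, h*w)
    # Least-squares solve of L @ G = I, where G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```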
The correspondence problem refers to the problem of ascertaining which parts of one image correspond to which parts of another image, where differences are due to movement of the camera, the elapse of time, and/or movement of objects in the photos.
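A common practical answer to the correspondence problem is sparse feature matching followed by robust model fitting. The sketch below uses OpenCV's ORB detector, brute-force Hamming matching, and RANSAC homography estimation; the detector choice, the keep ratio, and the homography motion model are assumptions made for brevity rather than the only valid pipeline.

```python
import cv2
import numpy as np

def match_and_estimate_homography(img_a, img_b, keep=0.8):
    """Find point correspondences between two grayscale images with ORB
    features, then fit a homography relating them with RANSAC.
    `img_a` and `img_b` are 2-D uint8 arrays (e.g. from cv2.imread)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming distance suits ORB's binary descriptors; cross-checking keeps
    # only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep)]

    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier correspondences while fitting the homography
    # (at least 4 matches are required).
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, matches, inlier_mask
```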
Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities.
In computer vision and image processing, a feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general neighborhood operation or feature detection applied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
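For instance, point features of the kind described above can be located with a corner detector. The following OpenCV sketch keeps Harris-corner responses above a fraction of the maximum response; the threshold and detector parameters are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def harris_corner_points(gray, thresh_ratio=0.01):
    """Return (row, col) coordinates of Harris corners in a grayscale image.
    `gray` is a 2-D uint8 array, e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)."""
    # cornerHarris(src, blockSize, ksize, k): neighborhood size 2,
    # Sobel aperture 3, Harris free parameter 0.04.
    response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    ys, xs = np.where(response > thresh_ratio * response.max())
    return np.column_stack([ys, xs])
```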
Time-resolved non-line-of-sight (NLOS) imaging based on single-photon avalanche diode (SPAD) detectors has demonstrated impressive results in recent years. To acquire an adequate number of indirect photons from a hidden scene in the presence of overwhelming ...
Neurodegenerative and neuroinflammatory disorders often involve complex pathophysiological mechanisms that are, to this date, only partially understood. A more comprehensive understanding of those microstructural processes and their characterization ...
Related MOOCs (12)
Learn how principles of basic science are integrated into major biomedical imaging modalities and the different techniques used, such as X-ray computed tomography (CT), ultrasounds and positron emission tomography (PET).
Learn about magnetic resonance, from the physical principles of Nuclear Magnetic Resonance (NMR) to the basic concepts of image reconstruction (MRI).
Related lectures (39)
Covers memory management for engineers, focusing on program crashes related to memory access errors.
Focuses on the practical application of Digital Image Correlation for civil engineers, covering the measurement of displacement fields and the computation of strain fields.
Explores unsupervised domain mapping using a novel geometry-consistent GAN approach.