A time-of-flight camera (ToF camera), also known as a time-of-flight sensor (ToF sensor), is a range imaging camera system that measures the distance between the camera and the subject for each point of the image, based on time-of-flight: the round-trip time of an artificial light signal emitted by a laser or an LED. Laser-based time-of-flight cameras belong to a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam as in scanning LIDAR systems.
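As a rough illustration (not from the source), the underlying computation is simply half the product of the speed of light and the measured round-trip time; the function name and example values below are hypothetical.

```python
# Minimal sketch of the basic time-of-flight relation d = c * t / 2.
# All names and example values are hypothetical.

C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the subject given the measured round-trip time."""
    return C * t_seconds / 2.0  # halve it: the light travels out and back

# Example: a 10 ns round trip corresponds to roughly 1.5 m.
print(distance_from_round_trip(10e-9))  # ~1.499 m
```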
Time-of-flight camera products for civil applications began to emerge around 2000, as semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges from a few centimeters up to several kilometers.
Several different technologies for time-of-flight cameras have been developed.
Photonic Mixer Devices (PMD), the Swiss Ranger, and CanestaVision work by modulating the outgoing beam with an RF carrier and then measuring the phase shift of that carrier on the receiver side. This approach has an inherent ambiguity: measured ranges repeat modulo the RF carrier wavelength, although phase unwrapping algorithms can extend the maximum uniqueness range. The Swiss Ranger is a compact, short-range device with ranges of 5 or 10 meters and a resolution of 176 × 144 pixels. The PMD can provide ranges up to 60 m; its illumination is pulsed LEDs rather than a laser. CanestaVision developer Canesta was purchased by Microsoft in 2010, and the second-generation Kinect for Xbox One was based on ToF technology from Canesta.
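A minimal sketch of the phase-shift principle described above, assuming an ideal measurement: distance is proportional to the measured phase of the RF carrier and is only defined modulo the unambiguous range c/(2f). The function and example values are illustrative, not any vendor's API.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by the phase shift of the RF-modulated carrier.

    The result is only defined modulo the unambiguous range c / (2f):
    targets beyond that range wrap around, which is the modulo-ambiguity
    problem that phase unwrapping algorithms address.
    """
    unambiguous_range = C / (2.0 * mod_freq_hz)
    return (phase_rad / (2.0 * math.pi)) * unambiguous_range

# Example: a 30 MHz carrier gives an unambiguous range of ~5 m,
# so a measured phase of pi maps to ~2.5 m.
print(cw_tof_distance(math.pi, 30e6))  # ~2.498 m
```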
Range-gated imagers, in contrast, have a built-in shutter in the image sensor that opens and closes at the same rate as the light pulses are sent out. Most time-of-flight 3D sensors are based on this principle, invented by Medina. Because part of every returning pulse is blocked by the shutter according to its time of arrival, the amount of light received relates to the distance the pulse has traveled.
For an ideal camera, the distance can be calculated as z = R(S2 − S1) / (2(S1 + S2)) + R/2, where R is the maximum range determined by the pulse and shutter timing, and S1 and S2 are the amounts of light collected in the first and the delayed shutter window, respectively.
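A minimal sketch that evaluates this ideal-camera equation, assuming S1 and S2 are the light quantities collected in the two shutter windows and R is the maximum range; the function name and values are hypothetical.

```python
def gated_tof_depth(s1: float, s2: float, r_max: float) -> float:
    """Ideal-camera depth z = R(S2 - S1) / (2(S1 + S2)) + R/2.

    s1, s2 : light collected in the first and the delayed shutter window
    r_max  : maximum range R, set by the pulse and shutter timing
    """
    total = s1 + s2
    if total == 0.0:
        raise ValueError("no returned light; depth is undefined")
    return r_max * (s2 - s1) / (2.0 * total) + r_max / 2.0

# Example with hypothetical values: equal charge in both windows
# places the target at mid-range.
print(gated_tof_depth(1.0, 1.0, 10.0))  # 5.0 m
```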
Related courses (1)
The students will gain theoretical knowledge in computational photography, which allows recording and processing a richer visual experience than traditional digital imaging. They will also execute ...
Kinect is a line of motion sensing input devices produced by Microsoft and first released in 2010. The devices generally contain RGB cameras, and infrared projectors and detectors that map depth through either structured light or time of flight calculations, which can in turn be used to perform real-time gesture recognition and body skeletal detection, among other capabilities. They also contain microphones that can be used for speech recognition and voice control.
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input to the retina in the human analog) into descriptions of the world that make sense to thought processes and can elicit appropriate action.
A camera is an optical instrument used to capture and store images or videos, either digitally via an electronic image sensor, or chemically via a light-sensitive material such as photographic film. As a pivotal technology in the fields of photography and videography, cameras have played a significant role in the progression of visual arts, media, entertainment, surveillance, and scientific research. The camera was invented in the 19th century and has since evolved with advancements in technology, leading to a vast array of types and models in the 21st century.
The space industry has experienced substantial growth in recent years, leading to rapid advancements in space exploration and space-based technologies. Consequently, the study of electronics and sensor performance in extreme environments has become crucial ...
EPFL, 2024
Significance: Fluorescence guidance is used clinically by surgeons to visualize anatomical and/or physiological phenomena in the surgical field that are difficult or impossible to detect by the naked eye. Such phenomena include tissue perfusion or molecular ...
This bachelor project, conducted at the Experimental Museology Laboratory (eM+) at EPFL, focused on immersive technologies and visualization systems. The project aimed to enhance the Panorama+, a 360-degree stereoscopic interactive visualization system, by ...