Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness patterns in an image.
The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.
The term optical flow is also used by roboticists, encompassing related techniques from image processing and control of navigation, including motion detection, object segmentation, time-to-contact information, focus-of-expansion calculations, luminance, motion-compensated encoding, and stereo disparity measurement.
Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements. Fleet and Weiss provide a tutorial introduction to gradient-based optical flow. John L. Barron, David J. Fleet, and Steven Beauchemin provide a performance analysis of a number of optical flow techniques, emphasizing the accuracy and density of the measurements.
The optical flow methods try to calculate the motion between two image frames which are taken at times t and t + Δt at every voxel position. These methods are called differential, since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.
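The reasoning behind these differential methods can be made explicit. Assuming brightness constancy (a moving image point keeps its intensity between the two frames) and taking a first-order Taylor expansion of that assumption yields the standard optical flow constraint equation, sketched below in conventional notation:

```latex
% Brightness constancy between the two frames:
%   I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t)
% A first-order Taylor expansion of the right-hand side, divided by
% \Delta t, gives the optical flow constraint equation:
\[
  I_x V_x + I_y V_y + I_t = 0
\]
% where I_x, I_y, I_t are the partial derivatives of image intensity
% with respect to x, y and t, and (V_x, V_y) is the unknown flow.
% This is one equation in two unknowns (the aperture problem), so each
% differential method adds a further constraint of its own.
```

As one hedged illustration of such a further constraint, the classic Lucas-Kanade method assumes the flow is constant within a small window and solves the stacked constraints by least squares. The following NumPy sketch is a minimal illustration under that assumption, not a reference implementation; the function name is illustrative and the window is assumed to lie fully inside the image.

```python
import numpy as np

def lucas_kanade(frame1, frame2, x, y, win=7):
    """Estimate the flow (Vx, Vy) at pixel (x, y) between two grayscale
    frames, assuming the flow is constant inside a (2*win+1)^2 window
    that lies fully inside the image."""
    I1 = frame1.astype(np.float64)
    I2 = frame2.astype(np.float64)

    # Partial derivatives of the image signal: spatial (Ix, Iy) and temporal (It).
    Iy, Ix = np.gradient(I1)   # np.gradient returns d/d(row), d/d(col)
    It = I2 - I1

    # Stack the constraint Ix*Vx + Iy*Vy = -It for every pixel in the window.
    w = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[w].ravel(), Iy[w].ravel()], axis=1)  # N x 2
    b = -It[w].ravel()                                    # N

    # Least-squares solution of A @ [Vx, Vy] = b.
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy

# Toy check: a bright square shifted one pixel to the right between frames.
f1 = np.zeros((32, 32)); f1[12:20, 12:20] = 1.0
f2 = np.zeros((32, 32)); f2[12:20, 13:21] = 1.0
print(lucas_kanade(f1, f2, 16, 16))   # approximately (1.0, 0.0)
```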
The goal of this lab series is to practice the various theoretical frameworks acquired in the courses on a variety of robots, ranging from industrial robots to autonomous mobile robots, to robotic dev ...
This summer school is a hands-on introduction to the fundamentals of image analysis for scientists. A series of lectures provides students with the key concepts in the field, and is followed by pract ...
The course will discuss classic material as well as recent advances in computer vision and machine learning relevant to processing visual data -- with a primary focus on embodied intelligence and visi ...
In computer vision and image processing, a feature is a piece of information about the content of an image, typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general neighborhood operation or feature detection applied to the image, as in the sketch below. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
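The Harris corner response is one concrete, widely used example of a feature produced by such a neighborhood operation. The following NumPy/SciPy sketch is a minimal illustration, not a production detector; the constant k and the window size are conventional defaults, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.05, win=3):
    """Harris corner response computed by a neighborhood operation:
    large positive values mark corner-like regions, negative values
    mark edge-like regions."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)
    size = 2 * win + 1
    # Elements of the local structure tensor, averaged over the window.
    Sxx = uniform_filter(Ix * Ix, size)
    Syy = uniform_filter(Iy * Iy, size)
    Sxy = uniform_filter(Ix * Iy, size)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Example: the corners of a bright square give the strongest response.
img = np.zeros((32, 32)); img[10:22, 10:22] = 1.0
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))  # near one of the square's corners
```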
Machine vision is the application of computer vision to the industrial domains of production and research. High-rate mass production, the constant drive to improve quality, and the pursuit of economic gains increasingly push manufacturers to automate their means of production. Machine vision answers these concerns for production inspection operations.
Computer vision is a scientific field and a branch of artificial intelligence that deals with how computers can gain high-level understanding from digital images or videos. From an engineering point of view, it seeks to understand and automate the tasks that the human visual system can perform. Computer vision tasks include methods for acquiring, processing, and "understanding" digital images, and for extracting data in order to produce numerical or symbolic information, e.g. ...
Explores stereo vision concepts such as occlusions, the impact of window size, multi-view stereo, dynamic shape reconstruction, and graph-based segmentation.
Explores the basics of simultaneous localization and mapping (SLAM), focusing on map-building techniques and measurement prediction.
Explores the automated picking of reinforcement bars in ground-penetrating radar data using machine learning and signal processing techniques.
We present DepthInSpace, a self-supervised deep-learning method for depth estimation using a structured-light camera. The design of this method is motivated by the commercial use case of embedded depth sensors in today's smartphones. We first propose to u ...
Human-centered scene understanding is the process of perceiving and analysing a dynamic scene observed through a network of sensors with emphasis on human-related activities. It includes the visual perception of human-related activities from either single ...
The rapid development of autonomous driving and mobile mapping calls for off-the-shelf LiDAR SLAM solutions that are adaptive to LiDARs of different specifications in various complex scenarios. To this end, we propose MULLS, an efficient, low-drift, and ve ...