In the field of gesture recognition and image processing, finger tracking is a high-resolution technique, first developed in 1969, that is used to track the successive positions of a user's fingers and hence represent objects in 3D.
The finger tracking technique also serves as a computer input tool, acting as an external device, much like a keyboard or a mouse.
Finger tracking systems focus on user-data interaction: the user interacts with virtual data by manipulating, directly with the fingers, the volumetric representation of the 3D object to be rendered.
These systems arose from the human-computer interaction problem: the objective is to make communication between user and machine more intuitive by relying on gestures and hand movements. To this end, finger tracking systems track in real time the 3D position and orientation of the fingers (or of each marker attached to them) and use these intuitive movements and gestures as the means of interaction.
There are many options for implementing finger tracking, principally distinguished by whether they are used with or without an interface. Most systems rely on inertial or optical motion capture.
Inertial motion capture systems capture finger motion by reading the rotation of each finger segment in 3D space. Applying these rotations to a kinematic chain, the whole human hand can be tracked in real time, wirelessly and without occlusion problems.
Hand inertial motion capture systems, such as Synertial mocap gloves, use tiny IMU-based sensors located on each finger segment. Precise capture requires at least 16 sensors. There are also mocap glove models with fewer sensors (13 or 7), for which the remaining finger segments are interpolated (proximal segments) or extrapolated (distal segments). The sensors are typically inserted into a textile glove, which makes them more comfortable to wear.
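As a minimal sketch of how a reduced-sensor glove might fill in unmeasured segments, consider the following Python fragment. The coupling ratio, weighting, and function names are illustrative assumptions for this page, not Synertial's actual algorithm:

```python
def estimate_distal_flexion(pip_flexion_deg, coupling=2.0 / 3.0):
    """Extrapolate an unsensed distal (DIP) joint angle from the
    measured middle (PIP) joint via a fixed coupling ratio -- a
    common biomechanical simplification, assumed here for illustration."""
    return coupling * pip_flexion_deg

def interpolate_proximal_flexion(wrist_deg, pip_deg, weight=0.5):
    """Interpolate an unsensed proximal segment between two
    neighbouring measured values (hypothetical equal weighting)."""
    return weight * wrist_deg + (1.0 - weight) * pip_deg

print(estimate_distal_flexion(60.0))             # → 40.0
print(interpolate_proximal_flexion(10.0, 60.0))  # → 35.0
```

In a real glove, the vendor's firmware would replace these heuristics with calibrated, per-user models.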
Inertial sensors can capture movement along all three axes, so finger and thumb flexion, extension, and abduction can all be detected.
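To illustrate how per-segment rotations drive a kinematic chain, here is a minimal planar sketch in Python. The segment lengths are hypothetical, and a real system works with full 3D rotations rather than a single flexion angle per segment:

```python
import math

# Hypothetical segment lengths (cm) for one finger's proximal,
# middle, and distal phalanges.
SEGMENT_LENGTHS = [4.0, 2.5, 2.0]

def fingertip_position(flexion_angles_deg):
    """Planar forward kinematics: each IMU reports its segment's
    flexion; accumulating the rotations along the kinematic chain
    yields the fingertip position relative to the knuckle."""
    x, y, total_angle = 0.0, 0.0, 0.0
    for length, angle in zip(SEGMENT_LENGTHS, flexion_angles_deg):
        total_angle += math.radians(angle)
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# Fully extended finger: the tip lies 8.5 cm straight ahead.
print(fingertip_position([0, 0, 0]))  # → (8.5, 0.0)
```

Because each rotation is accumulated along the chain, no line-of-sight to the fingertip is ever needed, which is why inertial capture avoids the occlusion problems of optical systems.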
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. It is a subdiscipline of computer vision. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. Focuses in the field include emotion recognition from the face and hand gesture recognition, since both are forms of expression. Users can make simple gestures to control or interact with devices without physically touching them.
Motion capture (sometimes referred to as mo-cap or mocap, for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for the validation of computer vision and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes the face and fingers or captures subtle expressions, it is often referred to as performance capture.
Virtual reality (VR) is a simulated experience that employs pose tracking and 3D near-eye displays to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment (particularly video games), education (such as medical or military training) and business (such as virtual meetings). Other distinct types of VR-style technology include augmented reality and mixed reality, sometimes referred to as extended reality or XR, although definitions are currently changing due to the nascence of the industry.
This summer school is a hands-on introduction to the fundamentals of image analysis for scientists. A series of lectures provides students with the key concepts in the field and is followed by pract…
The goal of VR is to embed the users in a potentially complex virtual environment while ensuring that they are able to react as if this environment were real. The course provides a human perception-ac…
The lecture presents an overview of the state of the art in the analysis and modeling of human locomotion and the underlying motor circuits. Multiple aspects are considered, including neurophysiology, …
This course offers an introduction to the basic concepts of imperative programming, such as variables, expressions, control structures, and functions/methods, illustrating them in the synta…
This course offers an introduction to the basic concepts of object-oriented programming, such as encapsulation and abstraction, classes/objects, attributes/methods, inheritance, polymorphism, …
This course introduces programming using the C++ language. It assumes no prior knowledge. More advanced aspects (object-oriented programming) are covered in a subsequent…
Explores the background and applications of full-body motion capture techniques, including posture reconstruction and collision avoidance.
Type systems are a device for verifying properties of programs without running them. Many programming languages used in the industry have always had a type system, while others were initially created without a type system and later adopted one, when the ad ...
We propose a test-time adaptation for 6D object pose tracking that learns to adapt a pre-trained model to track the 6D pose of novel objects. We consider the problem of 6D object pose tracking as a 3D keypoint detection and matching task and present a mo ...
Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination wit ...