VR (Virtual Reality) is a real-time simulation that creates the subjective illusion of being in a virtual world. This thesis explores how the user's body and fingers can be integrated into VR and how this integration benefits the user experience. At the advent of VR, the original idea was to completely transpose users' bodies, including hands and fingers, so that users could still see their own movements in the VE (Virtual Environment). However, technological limitations prevented the full integration of the body, impairing the subjective user experience. Hand-tracking devices were replaced by controllers and body integration was, for a time, forsaken. Recently, MoCap (Motion Capture) systems became more reliable and more convenient, and the concern for body integration returned. Body movements can be tracked in real time with trackers strapped onto the body limbs, and IK (Inverse Kinematics) can be used to animate avatars' skeletons from the MoCap data. However, MoCap techniques were still not sufficiently robust to reliably track finger motion, preventing our primary means of interacting with the environment from being visible in the VE.

To address this critical issue, we proposed an approach relying on an active camera-based MoCap system to animate virtual hands and fingers in real time. A first neural network fills the gaps in the input caused by occlusions, while a second one provides an IK solution that handles the animation of the hand and fingers. This method focused on maintaining plausible poses (possibly with slight distortions in the movements) to compensate for tracking errors, rather than seeking a perfect MoCap system free of any drawback. To confirm the usability of our approach, we investigated in a user study whether such errors in finger animation could be tolerated, by evaluating a type of distortion even more distinct than motion amplification in the context of successful interactions: finger swaps. Our results showed that participants mostly took credit for the introduced finger swaps, to the point where they could barely notice when they were helped, allowing us to provide guidelines for animating avatar fingers without disrupting the SoE (Sense of Embodiment).

The insights gained on the cognitive functioning of the SoE at the finger level were then combined with knowledge from the literature on arm/leg reaching movements to provide an approach aiming to avoid BiE (Breaks in Embodiment) when embodying avatars of different shapes and proportions, animated at both the body and finger levels. This approach relies on an active optical MoCap system (to acquire the user's movements), a body calibration procedure (to construct a numerical model of the user's morphology), and an animation pipeline to transfer the original motion from the user onto an avatar of a different shape and size in real time. The proposed approach was evaluated in a subjective study comparing it against a full-body animation using direct forward kinematics, and our results showed that the retargeted approach outperformed the direct forward kinematics one. Despite a smaller effect size than initially expected, the evaluation highlighted the necessity of adapting the motion even when the avatar and the user look similar.
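As an illustration of the two-stage hand animation described above, the following is a minimal sketch of how a gap-filling network could be chained with a learned IK network at inference time. It assumes PyTorch, an arbitrary marker count, and a simple Euler-angle joint parameterisation; none of these choices are taken from the thesis itself.

```python
# Hypothetical sketch of the two-stage pipeline: a first network completes
# occluded hand markers, a second maps the completed markers to finger joint
# rotations (a learned IK solution). Marker count, layer sizes and joint
# parameterisation are illustrative assumptions, not the thesis implementation.
import torch
import torch.nn as nn

N_MARKERS = 19           # assumed number of hand markers
N_JOINT_ANGLES = 20 * 3  # assumed finger joint rotations (Euler angles)

class GapFiller(nn.Module):
    """Predicts positions of occluded markers from the visible ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MARKERS * 4, 256),   # xyz + visibility flag per marker
            nn.ReLU(),
            nn.Linear(256, N_MARKERS * 3),   # completed xyz per marker
        )

    def forward(self, markers_xyz, visible_mask):
        # Zero out occluded markers and append the visibility flags.
        masked = markers_xyz * visible_mask.unsqueeze(-1)
        x = torch.cat([masked, visible_mask.unsqueeze(-1)], dim=-1)
        return self.net(x.flatten(1)).view(-1, N_MARKERS, 3)

class LearnedIK(nn.Module):
    """Maps completed marker positions to finger joint rotations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MARKERS * 3, 256),
            nn.ReLU(),
            nn.Linear(256, N_JOINT_ANGLES),
        )

    def forward(self, markers_xyz):
        return self.net(markers_xyz.flatten(1))

# Per-frame inference: fill the gaps first, then solve the hand pose.
gap_filler, ik_solver = GapFiller(), LearnedIK()
markers = torch.randn(1, N_MARKERS, 3)              # raw MoCap frame
visible = (torch.rand(1, N_MARKERS) > 0.2).float()  # 1 = tracked, 0 = occluded
with torch.no_grad():
    completed = gap_filler(markers, visible)
    joint_angles = ik_solver(completed)             # drives the avatar's hand skeleton
```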
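Similarly, the motion-transfer step of the full-body approach can be pictured as rescaling each captured bone to the target avatar's segment lengths while keeping its direction. The sketch below only conveys this general idea of retargeting under a calibrated user model; the joint hierarchy, function names, and per-bone scaling are assumptions for illustration, not the thesis pipeline.

```python
# Hypothetical illustration of motion retargeting onto an avatar whose segment
# lengths differ from the user's calibrated ones: each bone direction from the
# captured pose is kept, but its length is replaced by the avatar's.
import numpy as np

def retarget_pose(user_joints, parents, avatar_bone_lengths):
    """Transfer a captured pose onto an avatar with different segment lengths.

    user_joints:         (J, 3) joint positions of the user's calibrated model.
    parents:             parent index of each joint (-1 for the root),
                         listed parent-before-child.
    avatar_bone_lengths: (J,) length of the bone from parent to each joint.
    """
    out = np.zeros_like(user_joints)
    for j, p in enumerate(parents):
        if p < 0:
            out[j] = user_joints[j]                 # keep the root in place
            continue
        bone = user_joints[j] - user_joints[p]
        direction = bone / (np.linalg.norm(bone) + 1e-8)
        # Same bone direction as the user, avatar's segment length.
        out[j] = out[p] + direction * avatar_bone_lengths[j]
    return out

# Toy example: a shoulder -> elbow -> wrist chain retargeted onto a
# longer-limbed avatar.
user_pose = np.array([[0.0, 1.5, 0.0],    # shoulder (root of this chain)
                      [0.3, 1.5, 0.0],    # elbow
                      [0.6, 1.5, 0.0]])   # wrist
parents = [-1, 0, 1]
avatar_lengths = np.array([0.0, 0.35, 0.35])  # avatar has longer arm segments
print(retarget_pose(user_pose, parents, avatar_lengths))
```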
Ronan Boulic, Bruno Herbelin, Mathias Guy Delahaye