# Emmanuel Pignat

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Be sure to verify the information against official EPFL sources.

Courses taught by this person

No results

Related research fields (7)

Learning

Learning is a set of mechanisms leading to the acquisition of know-how, skills, or knowledge. The actor of learning is called the learner. One can contrast learning…

Robot

[Image captions: Atlas (2013), android robot by Boston Dynamics; manipulator arms in a laboratory (2009); NAO (2006), educational humanoid robot by Aldebaran Robotics; DER1 (2005)]

Normal distribution

In probability theory and statistics, normal distributions are among the probability distributions most commonly used to model natural phenomena arising from several random events.
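As a minimal illustration of the definition above, the density of N(mu, sigma²) can be evaluated directly (the function name is ours, for illustration only):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of the normal distribution N(mu, sigma^2).
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The standard normal density peaks at x = 0 with value 1/sqrt(2*pi).
print(normal_pdf(0.0))  # → 0.3989422804014327
```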

Associated units (1)

Related publications (13)

People conducting similar research (113)

Sylvain Calinon, Emmanuel Pignat

Probability distributions are key components of many learning from demonstration (LfD) approaches, with the spaces chosen to represent tasks playing a central role. Although the robot configuration is defined by its joint angles, end-effector poses are often best explained within several task spaces. In many approaches, distributions within relevant task spaces are learned independently and only combined at the control level. This simplification implies several problems that are addressed in this work. We show that the fusion of models in different task spaces can be expressed as a product of experts (PoE), where the probabilities of the models are multiplied and renormalized so that the result becomes a proper distribution over joint angles. Multiple experiments are presented to show that learning the different models jointly in the PoE framework significantly improves the quality of the final model. The proposed approach particularly stands out when the robot has to learn hierarchical objectives that arise when a task requires the prioritization of several sub-tasks (e.g. in a humanoid robot, keeping balance has a higher priority than reaching for an object). Since training the model jointly usually relies on contrastive divergence, which requires costly approximations that can affect performance, we propose an alternative strategy using variational inference and mixture model approximations. In particular, we show that the proposed approach can be extended to a PoE with a nullspace structure (PoENS), where the model is able to recover secondary tasks that are masked by the resolution of tasks of higher importance.
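In the simplest case, the fusion described in this abstract has a closed form: multiplying Gaussian experts and renormalizing yields another Gaussian whose precisions add and whose mean is precision-weighted. A toy one-dimensional sketch (the values are illustrative, not taken from the paper):

```python
import numpy as np

# Two Gaussian "experts" over the same 1-D variable (illustrative values).
mu = np.array([0.2, 0.8])    # expert means
var = np.array([0.5, 0.1])   # expert variances

# The product of the two densities, renormalized, is again Gaussian:
# precisions add, and the mean is precision-weighted.
prec = 1.0 / var
var_poe = 1.0 / prec.sum()
mu_poe = var_poe * (prec * mu).sum()

print(mu_poe, var_poe)  # the fused mean is pulled toward the more confident expert
```

With these values the fused mean lands at 0.7, much closer to the low-variance expert at 0.8 than to the uncertain one at 0.2, which is the intuition behind fusing task-space models multiplicatively.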

Adaptability and ease of programming are key features necessary for a wider spread of robotics in factories and everyday assistance. Learning from demonstration (LfD) is an approach to address this problem. It aims to develop algorithms and interfaces such that a non-expert user can teach the robot new tasks by showing examples.
While the configuration of a manipulator is defined by its joint angles, postures and movements are often best explained in several task spaces. These task spaces are exploited by most existing LfD approaches. However, models are often learned independently in the different task spaces and only combined later, at the controller level. This simplification implies several limitations, such as the difficulty of recovering the precision and hierarchy of the different tasks. Such models are also unable to uncover secondary tasks masked by the resolution of primary ones.
In this thesis, we aim to overcome these limitations by proposing a consistent framework for LfD based on product of experts (PoE) models. In a PoE, data is modelled as a fusion of multiple sources or "experts". Each expert gives an "opinion" on a different view or transformation of the data, corresponding to a different task space. Mathematically, the experts are probability density functions that are multiplied together and renormalized. Distributions of two different natures are targeted in this thesis.
In the first part of the thesis, PoEs are proposed to model distributions of robot configurations. These distributions are a key component of many LfD approaches. They are commonly used to define motions by introducing a dependence on time, as observation models in hidden Markov models, or transformed by a time-dependent basis matrix. Through multiple experiments, we show the advantages of learning models in several task spaces jointly in the PoE framework. We also compare PoEs against more general techniques such as variational autoencoders and generative adversarial nets. However, training a PoE requires costly approximations to which performance can be very sensitive. An alternative to contrastive divergence is presented, using variational inference and mixture model approximations. We also propose an extension of the PoE with a nullspace structure (PoENS), where the model is able to recover tasks that are masked by the resolution of higher-level objectives.
In the second part of the thesis, PoEs are used to learn stochastic policies. We propose to learn motion primitives as distributions of trajectories. Instead of approximating complicated normalizing constants, as in maximum entropy inverse optimal control, we propose a generative adversarial approach. The policy is parametrized as a product of Gaussian distributions of velocities, accelerations, or forces acting in different task spaces. Given an approximate, stochastic dynamic model of the system, the policy is trained by stochastic gradient descent such that the distribution of rollouts matches the distribution of demonstrations.

Sylvain Calinon, Julius Maximilian Jankowski, Teguh Santoso Lembono, Emmanuel Pignat

In high-dimensional robotic systems, the manifold of the valid configuration space often has a complex shape, especially under constraints such as end-effector orientation or static stability. We propose a generative adversarial network approach to learn the distribution of valid robot configurations under such constraints; it can generate configurations that are close to the constraint manifold. We present two applications of this method. First, by learning the conditional distribution with respect to the desired end-effector position, we can perform fast inverse kinematics even for systems with very many degrees of freedom (DoF). Second, we use it to generate samples for sampling-based constrained motion planning algorithms, reducing the necessary projection steps and speeding up the computation. We validate the approach in simulation using the 7-DoF Panda manipulator and the 28-DoF humanoid robot Talos.
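The projection step that the learned sampler is meant to reduce can be sketched on a toy planar 2-link arm with unit link lengths (this simplified setup and all function names are our own illustration, not the paper's Panda/Talos experiments): a sampled configuration is driven onto the constraint fk(q) = target by Gauss-Newton steps.

```python
import numpy as np

def fk(q):
    # End-effector position of a planar 2-link arm with unit link lengths.
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

def project(q, target, iters=100, tol=1e-10):
    # Drive q onto the constraint fk(q) == target with
    # pseudoinverse (Gauss-Newton) steps.
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        q = q + np.linalg.pinv(jacobian(q)) @ err
    return q

q0 = np.array([0.3, 0.5])      # an arbitrary seed configuration
target = np.array([1.2, 0.8])  # reachable: its norm is below 2
q = project(q0, target)
```

A learned generator, as described in the abstract, would replace the arbitrary seed with samples already close to the constraint manifold, so that few such projection iterations are needed.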