Probability distributions are key components of many learning from demonstration (LfD) approaches, with the spaces chosen to represent tasks playing a central role. Although the robot configuration is defined by its joint angles, end-effector poses are often best explained within several task spaces. In many approaches, distributions within the relevant task spaces are learned independently and combined only at the control level. This simplification introduces several problems, which we address in this work. We show that the fusion of models in different task spaces can be expressed as a product of experts (PoE), where the probabilities of the models are multiplied and renormalized so that the result is a proper distribution over joint angles. Multiple experiments show that learning the different models jointly in the PoE framework significantly improves the quality of the final model. The proposed approach particularly stands out when the robot has to learn hierarchical objectives, which arise when a task requires the prioritization of several sub-tasks (e.g., a humanoid robot must keep its balance before reaching for an object). Since training the model jointly usually relies on contrastive divergence, which requires costly approximations that can affect performance, we propose an alternative strategy using variational inference and mixture model approximations. In particular, we show that the proposed approach can be extended to a PoE with a nullspace structure (PoENS), where the model is able to recover secondary tasks that are masked by the resolution of higher-importance tasks.
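To make the fusion idea concrete, here is a minimal sketch (not the authors' implementation) of a product of experts over the joint angles q of an illustrative planar 2-DoF arm: each expert is a Gaussian in its own task space (end-effector position and orientation here), the expert log-densities are summed, and the result is renormalized, i.e. p(q) ∝ ∏_k p_k(f_k(q)). The link lengths, the task maps fk_position and fk_orientation, and all means and covariances are assumptions chosen for illustration; the brute-force grid normalization stands in for the generally intractable partition function that motivates contrastive divergence or the variational strategy described above.

```python
# Minimal sketch of a product of experts (PoE) over joint angles q,
# assuming an illustrative planar 2-DoF arm. All constants are made up.
import numpy as np
from scipy.stats import multivariate_normal

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk_position(q):
    """Task space 1: end-effector (x, y) position."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def fk_orientation(q):
    """Task space 2: end-effector orientation (sum of joint angles)."""
    return np.array([q[0] + q[1]])

# Illustrative experts: each is a Gaussian in its own task space.
experts = [
    (fk_position,    multivariate_normal(mean=[1.2, 0.8], cov=0.05 * np.eye(2))),
    (fk_orientation, multivariate_normal(mean=[np.pi / 2], cov=[[0.1]])),
]

def unnormalized_log_poe(q):
    """log of the unnormalized PoE: sum of expert log-densities.
    The normalizer Z (an integral over q) is intractable in general,
    which is why joint training needs contrastive divergence or, as
    proposed in this work, a variational approximation."""
    return sum(expert.logpdf(f(q)) for f, expert in experts)

# Crude grid normalization, feasible only because q is 2-D here.
grid = np.linspace(-np.pi, np.pi, 100)
qs = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
log_p = np.array([unnormalized_log_poe(q) for q in qs])
p = np.exp(log_p - log_p.max())
p /= p.sum()  # discrete approximation of the proper distribution p(q)
print("MAP joint configuration:", qs[np.argmax(p)])
```

Summing log-densities and renormalizing is what distinguishes this joint view from fusing independently learned task-space models at the control level: every expert shapes the same distribution over joint angles, so conflicts between task spaces are resolved probabilistically rather than by the controller.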