In imitation learning, multivariate Gaussians are widely used to encode robot behaviors. However, such approaches cannot properly represent end-effector orientations, because the distance metric in the space of orientations is not Euclidean. In this work we present an extension of common probabilistic learning-from-demonstration techniques to Riemannian manifolds. This generalization enables the encoding of joint distributions that include the robot pose. We show that Gaussian conditioning, the Gaussian product and nonlinear regression can be achieved with this representation. The proposed approach is illustrated with examples on a 2-dimensional sphere, with an example of regression between two robot end-effector poses, and by extending task-parameterized Gaussian mixture models (TP-GMM) and Gaussian mixture regression (GMR) to Riemannian manifolds.
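To give a flavor of what encoding a Gaussian on a Riemannian manifold involves, the sketch below (not from the paper; plain NumPy, with the 2-sphere embedded in R^3) estimates the mean by iterating the logarithmic and exponential maps, and computes the covariance from the residuals in the tangent space at that mean. It ignores parallel transport and other refinements used in the full approach.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on S^2: move from base point p along tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

def log_map(p, q):
    """Logarithmic map on S^2: tangent vector at p pointing towards q."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    u = q - cos_theta * p
    return theta * u / np.linalg.norm(u)

def gaussian_on_sphere(points, n_iter=20):
    """Mean on the manifold (Karcher mean) and covariance in its tangent space."""
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(n_iter):
        # Average the data in the tangent space at the current mean estimate,
        # then map the update back onto the sphere.
        tangent_mean = np.mean([log_map(mu, x) for x in points], axis=0)
        mu = exp_map(mu, tangent_mean)
    residuals = np.array([log_map(mu, x) for x in points])
    sigma = residuals.T @ residuals / len(points)  # 3x3, rank <= 2 (tangent plane)
    return mu, sigma

# Usage: noisy samples around the "north pole" of S^2
rng = np.random.default_rng(0)
raw = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal((50, 3))
data = raw / np.linalg.norm(raw, axis=1, keepdims=True)
mu, sigma = gaussian_on_sphere(data)
print(mu, sigma)
```

The same log/exp machinery underlies the Gaussian conditioning, Gaussian product, and GMR operations mentioned above: each is carried out in tangent spaces and the result is mapped back onto the manifold iteratively.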