Planning multicontact motions in a receding horizon fashion requires a value function to guide the planning with respect to the future, e.g., building momentum to traverse large obstacles. Traditionally, the value function is approximated by computing trajectories in a prediction horizon (never executed) that foresees the future beyond the execution horizon. However, given the nonconvex dynamics of multicontact motions, this approach is computationally expensive. To enable online receding horizon planning (RHP) of multicontact motions, we find efficient approximations of the value function. Specifically, we propose a trajectory-based and a learning-based approach. In the former, namely RHP with multiple levels of model fidelity, we approximate the value function by computing the prediction horizon with a convex relaxed model. In the latter, namely locally guided RHP, we learn an oracle to predict local objectives for locomotion tasks, and we use these local objectives to construct local value functions for guiding a short-horizon RHP. We evaluate both approaches in simulation by planning centroidal trajectories of a humanoid robot walking on moderate slopes, and on large slopes where the robot cannot maintain static balance. Our results show that locally guided RHP achieves the best computational efficiency (95%-98.6% of planning cycles converge online). This computational advantage enables us to demonstrate online RHP of our real-world humanoid robot Talos walking in dynamic environments that change on the fly.
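The locally guided RHP scheme described above can be sketched as a planning loop: each cycle, a learned oracle predicts a local objective, that objective defines a terminal (local value function) cost for a short-horizon trajectory optimization, and only the first step of the solution is executed. The sketch below is a heavily simplified toy illustration, not the paper's method: the "robot" is a 1D point, the oracle is a hand-coded stand-in, and the short-horizon solver is a brute-force search over a few constant controls. All names (`local_objective_oracle`, `solve_short_horizon`, `locally_guided_rhp`) are illustrative assumptions.

```python
# Toy sketch of locally guided receding horizon planning (RHP).
# Assumptions: 1D state, hand-coded oracle, brute-force short-horizon
# solver. None of this reflects the paper's actual implementation.

def local_objective_oracle(x, task_goal):
    # Stand-in for the learned oracle: predict a local target state
    # one horizon ahead (here, simply halfway toward the task goal).
    return x + 0.5 * (task_goal - x)

def solve_short_horizon(x0, x_local, horizon=5,
                        candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    # Brute-force a constant control over the short horizon; the
    # terminal cost (x - x_local)^2 is the local value function
    # constructed from the oracle's prediction.
    best_u, best_cost = None, float("inf")
    for u in candidates:
        x, cost = x0, 0.0
        for _ in range(horizon):
            x = x + 0.1 * u          # toy single-integrator dynamics
            cost += 0.01 * u * u     # running control cost
        cost += (x - x_local) ** 2   # local value function as terminal cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def locally_guided_rhp(x0, task_goal, cycles=40):
    # Receding horizon loop: replan each cycle, execute only the
    # first step of the short-horizon solution.
    x = x0
    for _ in range(cycles):
        x_local = local_objective_oracle(x, task_goal)
        u = solve_short_horizon(x, x_local)
        x = x + 0.1 * u
    return x

print(locally_guided_rhp(0.0, 1.0))
```

Note the design point the abstract relies on: because the terminal cost encodes foresight, the horizon inside `solve_short_horizon` can stay short, which is what makes each planning cycle cheap enough to converge online.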