In this thesis, we address the complex problem of collision avoidance in the joint space of robots. Avoiding collisions with both the robot's own body parts and obstacles in the environment is a critical constraint in motion planning and is crucial for ensuring the safety and stability of the robot. Unlike traditional operational-space control, in which tasks are projected into joint space through the Jacobian, we tackle collisions directly in joint space, where the robot can be treated as a material point. This brings new challenges, such as the computational complexity of high-dimensional spaces and the need for fast and accurate collision models. Nevertheless, this approach can lead to more stable and efficient solutions and reduce the potential for numerical instabilities and local minima that can arise in operational-space control.

In the first part of this thesis, we present a real-time self-collision avoidance method for controlling humanoid robots. This is achieved by learning the feasible regions of control in the robot's joint space and representing these regions as smooth self-collision boundary functions. The learned boundary functions are used as constraints in a real-time quadratic-program-based inverse kinematics solver to generate collision-free motions. The humanoid robot's high-dimensional joint space poses challenges for learning efficient and accurate boundary functions. To resolve this, we partition the robot model into smaller, lower-dimensional submodels. We also evaluate different state-of-the-art machine learning techniques for learning the boundary functions. Our approach has been validated on the 29-degree-of-freedom (DoF) iCub humanoid robot, showing highly accurate real-time self-collision avoidance.

In the second part of this thesis, we extend the method to efficiently compute the distance to collision between a robot in an arbitrary configuration and a point in the robot's workspace. Our approach involves learning a neural implicit signed distance function expressed in joint-space coordinates. The differentiable nature of the function enables us to efficiently compute gradients, generating a continuous repulsive vector field in joint space. With GPU parallelization, complex scenes with multiple obstacles can be processed in real time. This high-resolution collision representation can be used to achieve real-time, reactive, collision-free control by integrating it as a constraint in an inverse kinematics solver or in a model predictive controller. We demonstrate the effectiveness of our approach through experiments with a 7-DoF robot in a reaching task with dynamic obstacles.

Lastly, we present a method for modulating dynamical systems (DS) using sampling-based model predictive control. The approach leverages the properties of the previously learned collision detection models and involves locally perturbing the nominal DS to ensure that a meaningful tangential component is always present near obstacles. The algorithm facilitates obstacle avoidance in the high-dimensional joint space of a robot and has been validated in simulations and real-world tests with a 7-DoF robot, which successfully navigated a cluttered environment with moving concave obstacles while maintaining local stability of the nominal DS.
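To make the first contribution more concrete, the following is a minimal sketch of how a learned self-collision boundary function could enter a QP-based differential inverse kinematics step as a linearized inequality constraint. The function name, its arguments, and the use of cvxpy are illustrative assumptions, not the solver or formulation used in the thesis.

```python
# Illustrative sketch (assumed interface): a learned boundary function
# Gamma(q) >= 0 marks self-collision-free configurations; its value and
# gradient at the current configuration q enter the QP as a linearized
# inequality constraint.
import numpy as np
import cvxpy as cp

def qp_ik_step(J, x_err, gamma_val, gamma_grad, margin=0.0):
    """One differential IK step: track the task-space error x_err with
    Jacobian J while keeping Gamma(q + dq) approximately >= margin."""
    n_dof = J.shape[1]
    dq = cp.Variable(n_dof)
    objective = cp.Minimize(cp.sum_squares(J @ dq - x_err))
    # First-order approximation: Gamma(q + dq) ~ Gamma(q) + grad_Gamma(q) . dq
    constraints = [gamma_val + gamma_grad @ dq >= margin]
    cp.Problem(objective, constraints).solve()
    return dq.value
```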
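Similarly, the sketch below illustrates the idea behind the second contribution: differentiating a learned joint-space signed distance function to obtain a repulsive direction in joint space. The model sdf_net, its call signature, and the safety margin d_safe are hypothetical placeholders rather than the networks trained in the thesis.

```python
# Illustrative sketch (hypothetical model and signature): a learned network
# sdf_net(q_batch, points) returns the signed distance from each workspace
# point to the robot at configuration q; automatic differentiation then
# yields a joint-space repulsive direction.
import torch

def joint_space_repulsion(sdf_net, q, obstacle_points, d_safe=0.1):
    q = q.detach().clone().requires_grad_(True)           # shape (n_dof,)
    q_batch = q.expand(obstacle_points.shape[0], -1)      # one copy per point
    d = sdf_net(q_batch, obstacle_points)                 # shape (n_points,)
    # Penalize only obstacle points closer than the safety margin.
    penalty = torch.clamp(d_safe - d, min=0.0).sum()
    penalty.backward()
    # The negative gradient pushes the configuration away from collision.
    return -q.grad
```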