Many robotics problems are formulated as optimization problems. However, most optimization solvers used in robotics are only locally optimal, and their performance depends heavily on the initial guess. For challenging problems, the solver will often get stuck at poor local optima without a good initialization. In this thesis, we consider various techniques for providing a good initial guess to the solver based on previous experience. We use the term memory of motion to refer collectively to these techniques. The key idea is to use the existing system models, cost functions, and simulation tools to generate a database of solutions, and then to construct a memory of motion model from this database. During online execution, we can then query the memory of motion for an initial guess for a given task. We show that this improves solver performance in terms of solution quality, success rate, and computation time. We consider two different formulations: supervised learning and probability density estimation.

In the first part, we formulate a regression problem to find the mapping between task parameters and solutions. Such a formulation is convenient, as many function approximators are available, but using them as black-box tools may result in poor predictions. This is especially the case for multimodal problems, where several different solutions can exist for a given task and standard function approximators simply average the different modes. We first propose an ensemble of function approximators that can handle multimodal problems to initialize an optimization-based motion planner. We then investigate the problem of initializing an optimal control solver for legged robot locomotion, where the initial guess of the control sequence must also be provided. We evaluate the effect of the different initialization components on the performance of the optimal control solver.
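The mode-averaging failure mentioned above can be illustrated with a minimal sketch. The dataset, cost function, and mode-selection rule below are hypothetical stand-ins, not the ensemble method developed in the thesis; they only show why a single regressor fails on a bimodal task and why keeping one predictor per mode and selecting by task cost avoids the problem.

```python
import numpy as np

# Hypothetical bimodal dataset: for the same task parameter x, two distinct
# solutions y = +1 and y = -1 are both valid (e.g. elbow-up vs elbow-down IK).
rng = np.random.default_rng(0)
x = np.repeat(np.linspace(0.0, 1.0, 50), 2)
y = np.tile([1.0, -1.0], 50) + 0.01 * rng.standard_normal(100)

# A standard least-squares fit averages the two modes: the best constant
# prediction is the mean of y, close to 0, which is not a valid solution.
avg_prediction = y.mean()

# A simple per-mode alternative (illustrative only): split the outputs into
# modes, keep one predictor per mode, and at query time return the candidate
# with the lowest task cost.
mode_up = y[y > 0].mean()      # close to +1
mode_down = y[y < 0].mean()    # close to -1

def task_cost(candidate):
    # Hypothetical task cost: valid solutions lie near |y| = 1.
    return (abs(candidate) - 1.0) ** 2

best = min([avg_prediction, mode_up, mode_down], key=task_cost)
print(avg_prediction, best)
```

The averaged prediction lands near 0, between the modes, while the per-mode selection recovers a valid solution near one of the modes.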
In the second part, we consider another formulation: we first transform the cost function into an unnormalized Probability Density Function (PDF) and then approximate it using various models. This formulation addresses several shortcomings of the supervised learning approaches by using the cost function itself to train or construct the predictive model. It allows us to generate initial guesses that have a high probability of having low cost, instead of simply imitating the dataset. We first show that the trajectory distribution of an iLQR problem can be obtained as a Gaussian distribution, and that tracking this distribution yields a cost-efficient and robust controller. We then propose a generative adversarial framework to learn the distribution of robot configurations under constraints. Finally, we use tensor methods to approximate the unnormalized PDF. Since this method does not rely on gradient information, it is robust in finding the (possibly multiple) global optima, or at least good local optima, of various challenging problems, including benchmark optimization functions, inverse kinematics, and motion planning.
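The cost-to-PDF transformation can be sketched in one dimension. The cost function below is hypothetical, and the discretized sampler is only a stand-in for the tensor-based approximation used in the thesis; the sketch shows how p(x) ∝ exp(-c(x)) concentrates samples in low-cost regions while preserving multiple global optima as separate modes rather than averaging them.

```python
import numpy as np

# Hypothetical 1-D cost with two global minima, at x = -2 and x = +2.
def cost(x):
    return (x**2 - 4.0) ** 2 / 8.0

# Transform the cost into an unnormalized PDF: p(x) proportional to exp(-cost(x)).
# Low-cost regions receive high density, so both minima survive as modes.
xs = np.linspace(-4.0, 4.0, 2001)
p = np.exp(-cost(xs))

# Sample from the discretized density (an illustrative surrogate for the
# model-based approximation of the unnormalized PDF).
rng = np.random.default_rng(1)
samples = rng.choice(xs, size=5000, p=p / p.sum())

# Fraction of samples that land near one of the two low-cost modes x = +/-2.
near_modes = np.mean(np.abs(np.abs(samples) - 2.0) < 1.0)
print(near_modes)
```

Most of the probability mass, and hence most of the samples, ends up around the two minima, so drawing from this distribution yields initial guesses that are likely to have low cost.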