Gaussians on Riemannian Manifolds for Robot Learning and Adaptive Control
Related publications (101)
In this article, motivated by the study of symplectic structures on manifolds with boundary and the systematic study of b-symplectic manifolds started in Guillemin, Miranda, and Pires Adv. Math. 264 (2014), 864-896, we prove a slice theorem for Lie group a ...
We prove equidistribution at shrinking scales for the monochromatic ensemble on a compact Riemannian manifold of any dimension. This ensemble on an arbitrary manifold takes a slowly growing spectral window in order to synthesize a random function. With hig ...
The purpose of this thesis is to provide an intrinsic proof of a Gauss-Bonnet-Chern formula for complete Riemannian manifolds with finitely many conical singularities and asymptotically conical ends. A geometric invariant is associated to the link of both ...
Humans exhibit outstanding learning and adaptation capabilities while performing various types of manipulation tasks. When learning new skills, humans are able to extract important information by observing examples of a task and efficiently refine a priori ...
Bayesian optimization (BO) recently became popular in robotics to optimize control parameters and parametric policies in direct reinforcement learning due to its data efficiency and gradient-free approach. However, its performance may be seriously compromi ...
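A minimal Python sketch of the Bayesian optimization loop referred to above, using a Gaussian process surrogate and an expected-improvement acquisition. The 1-D toy objective, Matern kernel, candidate grid, and iteration counts are illustrative assumptions of this sketch, not the paper's setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # Expected improvement for minimization: how much we expect to beat `best`.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds=(0.0, 1.0), n_init=3, n_iter=15, seed=0):
    # Gradient-free optimization of a black-box cost over a 1-D parameter.
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    grid = np.linspace(*bounds, 200).reshape(-1, 1)   # candidate parameters
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                   # refit the surrogate
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))         # evaluate the true cost
    return X[np.argmin(y)], y.min()

if __name__ == "__main__":
    # Hypothetical control-parameter cost to minimize.
    f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)
    x_best, y_best = bayes_opt(f)
    print("best parameter:", x_best, "cost:", y_best)

The same loop applies to parametric policies in direct reinforcement learning: each call to the objective corresponds to one rollout of the policy with the candidate parameters.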
We consider minimizing a nonconvex, smooth function f on a Riemannian manifold M. We show that a perturbed version of Riemannian gradient descent algorithm converges to a second-order stationary point (and hence is able to escape saddle points on the manif ...
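A minimal Python sketch of perturbed Riemannian gradient descent, the algorithm discussed in the abstract above, restricted for illustration to the unit sphere with a Rayleigh-quotient cost. The manifold, retraction, step size, and perturbation radius are assumptions of this sketch rather than the paper's general analysis.

import numpy as np

def project_to_tangent(x, g):
    # Riemannian gradient on the sphere: remove the radial component.
    return g - np.dot(g, x) * x

def retract(x):
    # Retraction back onto the sphere by normalization.
    return x / np.linalg.norm(x)

def perturbed_rgd(A, x0, step=0.1, eps=1e-3, radius=1e-2, iters=500, seed=0):
    # Minimize f(x) = x^T A x on the unit sphere. When the Riemannian gradient
    # is small (a candidate saddle), inject a random tangent-space perturbation
    # so the iterate can escape instead of stalling.
    rng = np.random.default_rng(seed)
    x = retract(x0)
    for _ in range(iters):
        egrad = 2.0 * A @ x                   # Euclidean gradient
        rgrad = project_to_tangent(x, egrad)  # Riemannian gradient
        if np.linalg.norm(rgrad) < eps:
            xi = project_to_tangent(x, rng.normal(size=x.shape))
            x = retract(x + radius * xi)      # small random tangent step
            continue
        x = retract(x - step * rgrad)         # Riemannian gradient step
    return x

if __name__ == "__main__":
    A = np.diag([3.0, 1.0, -2.0])
    x0 = np.array([1.0, 0.0, 0.0])            # a first-order stationary point
    x = perturbed_rgd(A, x0)
    print("approx. minimizer:", x, "value:", x @ A @ x)

Started at an eigenvector of A, the Riemannian gradient vanishes; the perturbation step lets the iterates leave that stationary point and converge toward the minimizer (the eigenvector of the smallest eigenvalue), which is the behaviour the second-order stationarity guarantee formalizes.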
Oscillators have two main limitations: their synchronization properties are limited (i.e., they have a finite synchronization region) and they have no memory of past interactions (i.e., they return to their intrinsic frequency whenever the entraining signa ...
We consider the singular set in the thin obstacle problem with weight |x_{n+1}|^a for a ∈ (-1, 1), which arises as the local extension of the obstacle problem for the fractional Laplacian (a nonlocal problem). We develop a ref ...
In this paper, we provide a simple pedagogical proof of the existence of covariant renormalizations in Euclidean perturbative quantum field theory on closed Riemannian manifolds, following the Epstein–Glaser philosophy. We rely on a local method that allow ...