Learning Non-Parametric Models with Guarantees: A Smooth Lipschitz Interpolation Approach
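For context on the title's central idea: classical Lipschitz interpolation predicts at a query point with the midpoint of the tightest upper and lower envelopes that are compatible with the data and an assumed Lipschitz constant; the smooth variant with guarantees studied in the paper is not reproduced here. Below is a minimal numpy sketch of the classical (non-smooth) interpolant, assuming Euclidean distances and a known constant L; function names and the toy data are illustrative only.

```python
import numpy as np

def lipschitz_interpolant(X, y, L, x_query):
    """Classical Lipschitz interpolation: midpoint of the upper/lower envelopes.

    X: (n, d) training inputs, y: (n,) training targets,
    L: assumed Lipschitz constant, x_query: (m, d) query points.
    """
    X, y, x_query = np.asarray(X, float), np.asarray(y, float), np.asarray(x_query, float)
    # Pairwise Euclidean distances between queries and training points: shape (m, n)
    dists = np.linalg.norm(x_query[:, None, :] - X[None, :, :], axis=-1)
    upper = np.min(y[None, :] + L * dists, axis=1)   # tightest upper envelope
    lower = np.max(y[None, :] - L * dists, axis=1)   # tightest lower envelope
    return 0.5 * (upper + lower)

# Toy usage: interpolate a 1-D sine (Lipschitz constant 1) from a few samples.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2 * np.pi, size=(20, 1))
y_train = np.sin(X_train[:, 0])
X_test = np.linspace(0, 2 * np.pi, 5)[:, None]
print(lipschitz_interpolant(X_train, y_train, L=1.0, x_query=X_test))
```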
We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We first investigate distributed stochastic gradient methods (SGM), with mini-batches and multi-passes o ...
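As a rough illustration of the kind of algorithm referred to above, and not the authors' exact method, step-size schedule, or analysis, a mini-batch, multi-pass stochastic gradient method for least-squares regression in an RKHS can be written by tracking the iterate's coefficients on the training points. The Gaussian kernel, the constant step size, and all names below are assumptions of this sketch.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * bandwidth**2))

def minibatch_kernel_sgm(X, y, step=0.5, batch=8, passes=3, bandwidth=1.0, seed=0):
    """Mini-batch, multi-pass SGM for least squares in an RKHS.

    The iterate f_t = sum_i alpha_i k(x_i, .) is stored via its coefficients;
    each step applies the gradient of the mini-batch squared loss in the RKHS.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    K = gaussian_kernel(X, X, bandwidth)         # kernel matrix on training inputs
    alpha = np.zeros(n)
    for _ in range(passes):                      # multiple passes over the data
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            residual = K[idx] @ alpha - y[idx]   # f_t(x_i) - y_i on the mini-batch
            alpha[idx] -= (step / len(idx)) * residual
    return alpha

# Toy usage on a 1-D regression problem.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(64)
alpha = minibatch_kernel_sgm(X, y)
print("training RMSE:", np.sqrt(np.mean((gaussian_kernel(X, X) @ alpha - y) ** 2)))
```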
Data-driven modeling and feedback control play a vital role in several application areas, ranging from robotics, control theory, and manufacturing to the management of assets, financial portfolios, and supply chains. Many such problems in one way or another are rela ...
In this paper, we present a multilevel Monte Carlo (MLMC) version of the Stochastic Gradient (SG) method for optimization under uncertainty, in order to tackle Optimal Control Problems (OCP) where the constraints are described in the form of PDEs with rand ...
Optimization-based controllers are advanced control systems whose mechanism of determining control inputs requires the solution of a mathematical optimization problem. In this thesis, several contributions related to the computational effort required for o ...
Many decision problems in science, engineering, and economics are affected by uncertainty, which is typically modeled by a random variable governed by an unknown probability distribution. For many practical applications, the probability distribution is onl ...
This paper develops a methodology for regret minimization with stochastic first-order oracle feedback in online, constrained, non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach for first-order methods, and ...
In the presence of sparse noise, we propose kernel regression for predicting output vectors that are smooth over a given graph. Sparse noise models training outputs that are corrupted either by missing samples or by large perturbations. The presence of sparse ...
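One standard way to handle such sparse output corruption, shown here purely for illustration and not necessarily the formulation used in this work, is to model the observed outputs as a kernel fit plus a sparse term and to alternate a ridge solve in the kernel coefficients with soft-thresholding of the sparse term. The sketch below uses a plain (non-graph) RKHS penalty, and all parameter choices are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the L1 norm: shrinks entries toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def robust_kernel_regression(K, y, ridge=1e-1, sparsity=0.5, iters=50):
    """Alternating minimization of
       0.5*||y - K a - s||^2 + (ridge/2)*a'Ka + sparsity*||s||_1.

    K: (n, n) kernel (Gram) matrix, y: (n,) possibly corrupted outputs.
    Returns the coefficients a and the estimated sparse corruption s.
    """
    n = len(y)
    a = np.zeros(n)
    s = np.zeros(n)
    # a-update solves the kernel ridge normal equations (K + ridge*I) a = y - s.
    M = K + ridge * np.eye(n)
    for _ in range(iters):
        a = np.linalg.solve(M, y - s)              # exact minimization over a
        s = soft_threshold(y - K @ a, sparsity)    # exact minimization over s
    return a, s

# Toy usage: a few training outputs are hit by large, sparse perturbations.
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.cos(X[:, 0])
y[rng.choice(40, size=4, replace=False)] += 5.0    # sparse, large corruption
K = np.exp(-((X - X.T) ** 2) / 0.5)                # Gaussian kernel on 1-D inputs
a, s = robust_kernel_regression(K, y)
print("indices flagged as corrupted:", np.nonzero(np.abs(s) > 1.0)[0])
```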
The present application concerns a computer-implemented method for training a machine learning model in a distributed fashion, using Stochastic Gradient Descent, SGD, wherein the method is performed by a first computer in a distributed computing environmen ...
This paper proposes a parallelizable real-time algorithm for model predictive control (MPC). In contrast to existing distributed and parallel optimization algorithms for linear MPC such as dual decomposition or the alternating direction method of multiplie ...
The nonparametric learning of positive-valued functions appears widely in machine learning, especially in the context of estimating intensity functions of point processes. Yet, existing approaches either require computing expensive projections or semidefin ...
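For intuition on why positivity constraints are awkward for standard kernel models: one simple baseline, which is not the approach proposed in this paper, is to parameterize the function as the square of an RKHS element, so that predictions are nonnegative by construction at the price of a non-convex fit. A minimal sketch under that assumption, with a Gaussian kernel and plain gradient descent; all names and hyperparameters below are illustrative.

```python
import numpy as np

def fit_nonnegative_kernel_model(X, y, bandwidth=0.25, step=0.005, iters=3000, seed=3):
    """Fit f(x) = g(x)^2 with g in an RKHS, by gradient descent on squared loss.

    Squaring an RKHS function keeps predictions nonnegative without any
    projection step; the gradient in the coefficients follows by the chain rule.
    """
    rng = np.random.default_rng(seed)
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-sq / (2 * bandwidth**2))
    alpha = 0.01 * rng.standard_normal(len(y))
    for _ in range(iters):
        g = K @ alpha                 # g(x_i) at the training points
        f = g**2                      # nonnegative prediction
        # Gradient of mean((f - y)^2) w.r.t. alpha: (1/n) * K^T [4 (f - y) * g]
        grad = K.T @ (4.0 * (f - y) * g) / len(y)
        alpha -= step * grad
    return alpha, K

# Toy usage: learn a nonnegative, intensity-like target on [0, 1].
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(50, 1))
y = np.exp(-8 * (X[:, 0] - 0.5) ** 2)
alpha, K = fit_nonnegative_kernel_model(X, y)
pred = (K @ alpha) ** 2
print("min prediction:", pred.min(), "train MSE:", np.mean((pred - y) ** 2))
```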