Convex Optimization using Sparsified Stochastic Gradient Descent with Memory
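The technique named in the title combines top-k gradient sparsification with an error-feedback memory: only the k largest-magnitude coordinates of each update are applied, and the discarded mass is accumulated locally and added back at the next step. A minimal single-worker sketch, assuming a generic gradient oracle (the toy objective, step size, and sparsity level below are illustrative, not the paper's settings):

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sparsified_sgd_with_memory(grad, x0, lr=0.1, k=2, steps=200):
    """SGD where each update is sparsified by top-k and the discarded
    coordinates are kept in a memory term (error feedback)."""
    x = x0.copy()
    memory = np.zeros_like(x)            # accumulated unapplied gradient mass
    for _ in range(steps):
        g = grad(x)                      # (stochastic) gradient oracle
        update = memory + lr * g         # add back what was dropped before
        sparse = top_k(update, k)        # apply/transmit only k coordinates
        memory = update - sparse         # remember the rest for later
        x -= sparse
    return x

# Toy usage: least squares on a random problem (illustrative only).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b) / len(b)
x_hat = sparsified_sgd_with_memory(grad, np.zeros(5))
```

The memory term is what preserves convergence: without it, top-k truncation can permanently discard small but systematic gradient directions.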
We propose a convergence analysis of accelerated forward-backward splitting methods for composite function minimization when the proximity operator is not available in closed form and can only be computed up to a certain precision. We prove that the 1/k( ...
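Schematically, the method under analysis is an accelerated proximal-gradient (FISTA-type) iteration in which the proximity operator is replaced by an approximate inner solve. A hedged sketch, assuming f is smooth with L-Lipschitz gradient and g is accessed only through a subgradient oracle (the inner subgradient solver and all parameters are illustrative stand-ins for whatever inexact prox computation the paper allows):

```python
import numpy as np

def inexact_prox(g_subgrad, v, lam, inner_iters=50):
    """Approximate prox_{lam*g}(v) by subgradient descent on
    z -> g(z) + ||z - v||^2 / (2*lam); only approximately correct."""
    z = v.copy()
    for t in range(1, inner_iters + 1):
        step = lam / t   # diminishing step, scaled for the strongly convex subproblem
        z = z - step * (g_subgrad(z) + (z - v) / lam)
    return z

def inexact_fista(grad_f, g_subgrad, x0, L, lam=1.0, iters=100):
    """Accelerated forward-backward splitting with an inexact prox step."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = inexact_prox(g_subgrad, y - grad_f(y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Toy usage: f = 0.5*||Ax - b||^2 (so L = ||A||_2^2), g = ||x||_1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((15, 8)), rng.standard_normal(15)
L = np.linalg.norm(A, 2) ** 2
x = inexact_fista(lambda x: A.T @ (A @ x - b), np.sign, np.zeros(8), L)
```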
We investigate a compressive sensing system in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on blind calibration, using measurements performed on a few unknown (but sparse) signals. We extend our earlier s ...
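A common convex formulation of this blind-calibration setting, sketched here under assumptions (the variable names, the normalization, and the exact constraint set are illustrative rather than the paper's), jointly estimates inverse gains d and sparse signals X from Y = diag(g) A X via l1 minimization with a linear decalibration constraint:

```python
import cvxpy as cp
import numpy as np

# Illustrative dimensions: m sensors, n-dim signals, L snapshots.
m, n, L = 30, 64, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n)) / np.sqrt(m)
gains = 1 + 0.2 * rng.standard_normal(m)
X_true = np.zeros((n, L))
X_true[rng.choice(n, 4, replace=False)] = rng.standard_normal((4, L))
Y = np.diag(gains) @ (A @ X_true)       # distorted measurements

# Jointly estimate inverse gains d and sparse signals X.
d = cp.Variable(m)
X = cp.Variable((n, L))
constraints = [cp.diag(d) @ Y == A @ X,  # decalibrated data must match A X
               cp.sum(d) == m]          # fix the inherent scale ambiguity
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(X))), constraints)
prob.solve()
# d estimates the inverse gains up to the fixed scale.
print("estimated gains:", np.round(1.0 / d.value[:5], 2))
```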
Vision is a natural tool for human-computer interaction, since it provides visual feedback to the user and mimics some human behaviors. It requires, however, the fast and robust computation of motion primitives, which remains a difficult problem. In this ...
IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA, 2011
Machine learning is most often cast as an optimization problem. Ideally, the objective function is convex, so that one can rely on efficient convex optimizers with nice guarantees such as the absence of local optima. Yet non-convexity is very frequent in practice, and it may ...
We present a framework based on convex optimization and spectral regularization to perform learning when feature observations are multidimensional arrays (tensors). We give a mathematical characterization of spectral penalties for tensors and analyze a uni ...
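One concrete spectral penalty for tensors is the overlapped nuclear norm: the sum of the nuclear norms of the mode-k matricizations, a convex penalty that promotes low multilinear rank. A minimal numpy sketch (this particular penalty is an assumption; the paper characterizes a broader family):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_nuclear_norm(T):
    """Sum of nuclear norms of all matricizations: a convex spectral
    penalty encouraging low multilinear rank."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

T = np.random.default_rng(2).standard_normal((4, 5, 6))
print(overlapped_nuclear_norm(T))
```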
A new decomposition optimization algorithm, called path-following gradient-based decomposition, is proposed to solve separable convex optimization problems. Unlike path-following Newton methods considered in the literature, this algorithm does not require ...
We consider the class of convex minimization problems composed of a self-concordant function, such as the log-det metric, a convex data-fidelity term h, and a regularizing, possibly non-smooth, function g. This type of problem has recently attracted ...
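Concretely, the abstract suggests a problem of the generic form below (the three-term split is read off the abstract; the exact formulation in the paper may differ):

```latex
\min_{x} \; F(x) := f(x) + h(x) + g(x)
```

with f self-concordant (e.g. f(X) = -\log\det X on positive-definite X), h a smooth convex data-fidelity term, and g convex but possibly non-smooth (e.g. \lambda \|x\|_1).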
This paper proposes a smoothing technique for nonsmooth convex minimization using self-concordant barriers. To illustrate the main ideas, we compare our technique with the proximity-smoothing approach (Nesterov, 2005) via the classical gradient method on both ...
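For contrast, the proximity-smoothing baseline replaces a nonsmooth function by its Moreau envelope; for the absolute value this yields the Huber function, whose gradient is Lipschitz with constant 1/mu, so the classical gradient method applies. A small sketch of that baseline (the function |x| and the parameters are illustrative; the paper's barrier-based smoothing is not reproduced here):

```python
import numpy as np

def huber(x, mu):
    """Moreau envelope (proximity smoothing) of |x| with parameter mu."""
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

def huber_grad(x, mu):
    """Gradient of the smoothed |x|; Lipschitz with constant 1/mu."""
    return np.clip(x / mu, -1.0, 1.0)

# Gradient descent on the smoothed objective, step size 1/L = mu.
mu, x = 0.1, 3.0
for _ in range(100):
    x -= mu * huber_grad(x, mu)
print(x)   # approaches the true minimizer 0 up to O(mu)
```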
We propose an algorithmic framework for convex minimization problems of a composite function with two terms: a self-concordant function and a possibly nonsmooth regularization term. Our method is a new proximal Newton algorithm that features a local quadra ...
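The defining step of a proximal Newton method replaces the Euclidean proximal step with a Hessian-scaled subproblem, x+ = argmin_z grad f(x)^T (z - x) + (1/2)(z - x)^T H (z - x) + g(z). A hedged sketch for g = lam*||.||_1, solving the subproblem inexactly by coordinate descent (the subproblem solver and parameters are illustrative, not the paper's algorithm):

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def prox_newton_step(x, grad, H, lam, passes=10):
    """One proximal Newton step for f(x) + lam*||x||_1: approximately solve
    min_d grad.T d + 0.5 d.T H d + lam*||x + d||_1 by coordinate descent."""
    d = np.zeros_like(x)
    Hd = np.zeros_like(x)
    for _ in range(passes):
        for i in range(len(x)):
            a = H[i, i]
            b = grad[i] + Hd[i] - a * d[i]    # smooth partial with d_i removed
            u = soft(x[i] - b / a, lam / a)   # new value of x_i + d_i
            delta = (u - x[i]) - d[i]
            if delta != 0.0:
                Hd += H[:, i] * delta
                d[i] += delta
    return x + d

# One step on a toy quadratic f(x) = 0.5 x.T H x - c.T x plus l1 term.
H = np.array([[2.0, 0.3], [0.3, 1.0]])
c = np.array([1.0, -0.5])
x0 = np.zeros(2)
x1 = prox_newton_step(x0, H @ x0 - c, H, lam=0.1)
```

A full solver wraps this step in a line search or damping scheme to globalize it.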
This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is employed to construct an optimistic policy. Secondly, gradient-based algorithm ...
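The first ingredient rests on a Jensen-type argument: averaging the optimal values of MDPs sampled from the posterior upper-bounds the Bayes-optimal value, because the maximum over policies of an expectation is at most the expectation of the per-sample maxima. A hedged sketch with a Dirichlet posterior over transitions (the tiny MDP, the posterior, and all hyperparameters are illustrative):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, iters=200):
    """Optimal value function of a known MDP.
    P: (S, A, S) transitions, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum('sat,t->sa', P, V)
        V = Q.max(axis=1)
    return V

def mc_upper_bound(counts, R, n_samples=100, gamma=0.9, rng=None):
    """Monte-Carlo upper bound on the Bayes-optimal value: average the
    optimal values of MDPs drawn from a Dirichlet posterior (Jensen)."""
    rng = rng or np.random.default_rng(0)
    S, A, _ = counts.shape
    vals = []
    for _ in range(n_samples):
        P = np.stack([[rng.dirichlet(counts[s, a]) for a in range(A)]
                      for s in range(S)])
        vals.append(value_iteration(P, R, gamma))
    return np.mean(vals, axis=0)   # elementwise upper bound on the Bayes value

# Toy: 3 states, 2 actions, Dirichlet(1 + observed counts) posterior.
rng = np.random.default_rng(3)
counts = 1 + rng.integers(0, 5, size=(3, 2, 3)).astype(float)
R = rng.random((3, 2))
print(mc_upper_bound(counts, R, rng=rng))
```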