Regularized Diffusion Adaptation via Conjugate Smoothing
Related publications (37)
We develop an effective distributed strategy for seeking the Pareto solution of an aggregate cost consisting of regularized risks. The focus is on stochastic optimization problems where each risk function is expressed as the expectation of some loss functi ...
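The abstract above describes a distributed (diffusion) strategy for minimizing an aggregate of regularized stochastic risks, where non-smooth regularizers are handled through smoothing. A minimal sketch of this idea, under assumed details not taken from the paper, is an adapt-then-combine diffusion LMS over a ring of agents, with the ℓ1 regularizer replaced by its Moreau (Huber) smoothing; the problem setup, topology, and all parameter values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents jointly estimate a sparse vector w_star
# from streaming linear measurements d = u @ w_star + noise.
N, M = 5, 10                      # number of agents, parameter dimension
w_star = np.zeros(M)
w_star[:3] = 1.0                  # sparse ground truth (assumption)

# Doubly stochastic combination matrix for a ring topology (assumption).
A = np.eye(N) / 3.0
for k in range(N):
    A[k, (k - 1) % N] = 1.0 / 3.0
    A[k, (k + 1) % N] = 1.0 / 3.0

mu, rho, delta = 0.01, 0.1, 0.05  # step size, l1 weight, smoothing parameter
W = np.zeros((N, M))              # row k holds agent k's current estimate

def smoothed_l1_grad(w, rho, delta):
    """Gradient of the Moreau envelope (Huber smoothing) of rho * ||w||_1."""
    return rho * np.clip(w / delta, -1.0, 1.0)

for _ in range(2000):
    # Adapt step: each agent takes a stochastic gradient step on its
    # smoothed regularized risk using one fresh data sample.
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_star + 0.1 * rng.standard_normal()
        grad = -(d - u @ W[k]) * u + smoothed_l1_grad(W[k], rho, delta)
        psi[k] = W[k] - mu * grad
    # Combine step: convex averaging of the intermediate estimates
    # with each agent's neighbors.
    W = A @ psi

err = np.linalg.norm(W.mean(axis=0) - w_star)
```

Because the smoothed regularizer biases the solution, `err` settles near a small nonzero value controlled by `rho` and `delta` rather than decaying to zero; this sketch only illustrates the adapt-then-combine structure, not the paper's conjugate-smoothing analysis.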
Dynamic optimization problems affected by uncertainty are ubiquitous in many application domains. Decision makers typically model the uncertainty through random variables governed by a probability distribution. If the distribution is precisely known, then ...
EPFL, 2016
The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution. When facing scarce training data, ...
Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal decent ...
Generalized additive models (GAMs) are regression models wherein parameters of probability distributions depend on input variables through a sum of smooth functions, whose degrees of smoothness are selected by L-2 regularization. Such models have become th ...
This paper proposes a tradeoff between computational time, sample complexity, and statistical accuracy that applies to statistical estimators based on convex optimization. When we have a large amount of data, we can exploit excess samples to decrease stati ...
We consider the model selection consistency or sparsistency of a broad set of ℓ1-regularized M-estimators for linear and non-linear statistical models in a unified fashion. For this purpose, we propose the local structured smoothness condition (LSS ...
In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a m ...
We consider the transfer learning scenario, where the learner does not have access to the source domain directly, but rather operates on the basis of hypotheses induced from it - the Hypothesis Transfer Learning (HTL) problem. Particularly, we conduct a th ...
2013
For the radial energy-supercritical nonlinear wave equation □u=−utt+△u=±u7 on R3+1, we prove the existence of a class of global in forward time C∞-smooth solutions with infinite critical Sobolev norm $\dot{H}^{\f ...