Regularized Diffusion Adaptation via Conjugate Smoothing
Related publications (37)
We develop an effective distributed strategy for seeking the Pareto solution of an aggregate cost consisting of regularized risks. The focus is on stochastic optimization problems where each risk function is expressed as the expectation of some loss functi ...
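This truncated abstract belongs to the headline paper. Below is a minimal sketch of the strategy it describes, assuming the conjugate smoothing takes its standard Moreau-envelope form applied to an ℓ1 regularizer, with adapt-then-combine diffusion over a ring of agents holding least-squares losses; the data, combination weights, and step sizes are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents on a ring, least-squares losses, shared l1 regularizer.
N, d = 5, 10
w_true = np.zeros(d)
w_true[:3] = [1.0, -2.0, 0.5]                                  # sparse target
H = [rng.standard_normal((50, d)) for _ in range(N)]           # per-agent features
y = [Hk @ w_true + 0.1 * rng.standard_normal(50) for Hk in H]  # noisy observations

lam, delta, mu = 0.1, 0.01, 0.01   # regularizer weight, smoothing parameter, step size

def grad_moreau_l1(w):
    # Gradient of the Moreau envelope of lam*||w||_1 with parameter delta:
    # (w - soft_threshold(w, lam*delta)) / delta, i.e. clip(w/delta, -lam, lam).
    return np.clip(w / delta, -lam, lam)

# Doubly stochastic combination weights over a ring topology (assumed).
A = 0.5 * np.eye(N)
for k in range(N):
    A[k, (k + 1) % N] = 0.25
    A[k, (k - 1) % N] = 0.25

W = np.zeros((N, d))
for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(N):
        i = rng.integers(50)                      # one random datum: stochastic gradient
        g = (H[k][i] @ W[k] - y[k][i]) * H[k][i]  # gradient of the sampled loss
        psi[k] = W[k] - mu * (g + grad_moreau_l1(W[k]))   # adapt step
    W = A @ psi                                   # combine with neighbors

print(np.round(W.mean(axis=0), 2))  # close to w_true, up to regularization bias
```

The point of the smoothing is that the non-smooth regularizer is replaced by a surrogate whose gradient is cheap (a clipped identity here), so each agent can take plain stochastic-gradient steps before combining with its neighbors.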
Dynamic optimization problems affected by uncertainty are ubiquitous in many application domains. Decision makers typically model the uncertainty through random variables governed by a probability distribution. If the distribution is precisely known, then ...
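The abstract is cut off, but the setup it describes is the usual distributionally robust one. As a hedged sketch of that formulation (the ambiguity set below is an assumption, not taken from the paper), the decision maker solves

$$\min_{x \in \mathcal{X}} \; \sup_{Q \in \mathcal{B}_\varepsilon(\hat{P})} \; \mathbb{E}_{\xi \sim Q}\big[\ell(x, \xi)\big],$$

where $\mathcal{B}_\varepsilon(\hat{P})$ is an ambiguity ball (e.g., in Wasserstein distance) of radius $\varepsilon$ around a nominal distribution $\hat{P}$; when the distribution is precisely known, $\varepsilon = 0$ recovers the classical stochastic program.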
For the radial energy-supercritical nonlinear wave equation $\Box u = -u_{tt} + \Delta u = \pm u^7$ on $\mathbb{R}^{3+1}$, we prove the existence of a class of global in forward time $C^\infty$-smooth solutions with infinite critical Sobolev norm $\dot{H}^{\frac{7}{6}}$ ...
In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a m ...
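A minimal sketch of the sparsity-regularized inverse problem that such a framework builds on, solved with plain ISTA on $\min_x \tfrac12\|y - \Phi x\|^2 + \lambda\|x\|_1$; the Gaussian operator and sparse signal below are synthetic stand-ins for the radio-interferometric measurement setup, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: Phi plays the measurement operator, x_true a sparse sky model.
m, n = 80, 200
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(500):                     # ISTA: gradient step + soft-thresholding
    z = x - Phi.T @ (Phi @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```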
We consider the transfer learning scenario, where the learner does not have access to the source domain directly, but rather operates on the basis of hypotheses induced from it - the Hypothesis Transfer Learning (HTL) problem. Particularly, we conduct a th ...
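One concrete HTL instantiation (an illustrative choice; the truncated abstract does not specify the algorithm analyzed) is regularized least squares biased toward the source hypothesis $w_{\text{src}}$.

```python
import numpy as np

def htl_ridge(X, y, w_src, lam):
    """min_w ||X w - y||^2 / n + lam * ||w - w_src||^2  (closed form)."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d),
                           X.T @ y / n + lam * w_src)

# Usage with a source hypothesis assumed close to the target.
rng = np.random.default_rng(2)
d, n = 5, 20
w_tgt = rng.standard_normal(d)
w_src = w_tgt + 0.1 * rng.standard_normal(d)    # transferred hypothesis
X = rng.standard_normal((n, d))
y = X @ w_tgt + 0.05 * rng.standard_normal(n)
print(htl_ridge(X, y, w_src, lam=1.0) - w_tgt)  # small residual when w_src is good
```

The bias term is the point of the construction: when the source hypothesis is good, far fewer target-domain samples are needed than for learning from scratch.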
The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution. When facing scarce training data, ...
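In symbols, with hypothesis $h$, loss $\ell$, and training sample $(x_i, y_i)_{i=1}^n$ drawn from the data distribution $P$, the empirical risk is the plug-in estimate of the population risk:

$$\hat{R}_n(h) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(h(x_i), y_i\big) \;\approx\; \mathbb{E}_{(x,y)\sim P}\big[\ell(h(x), y)\big] \;=\; R(h),$$

and scarce training data means the gap between $\hat{R}_n$ and $R$ can be large.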
Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal decent ...
Neural Information Processing Systems (NIPS), 2019
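A minimal decentralized proximal-gradient baseline for the composite problem this abstract describes (a generic prox-DGD-style sketch, not necessarily the method the paper proposes): each agent takes a local gradient step on its smooth loss, averages with its neighbors, and applies the proximal operator of the shared non-smooth regularizer.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 4, 8
w_true = np.zeros(d)
w_true[:2] = [1.5, -1.0]
X = [rng.standard_normal((30, d)) for _ in range(N)]
y = [Xk @ w_true + 0.1 * rng.standard_normal(30) for Xk in X]

lam, mu = 0.05, 0.02
A = np.full((N, N), 1.0 / N)             # fully connected averaging (assumed)

def prox_l1(V, t):
    # Proximal operator of t*||.||_1, i.e. elementwise soft-thresholding.
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

W = np.zeros((N, d))
for _ in range(1000):
    G = np.stack([Xk.T @ (Xk @ Wk - yk) / len(yk)    # local smooth gradients
                  for Xk, yk, Wk in zip(X, y, W)])
    W = prox_l1(A @ (W - mu * G), mu * lam)          # average, then prox
print(np.round(W[0], 2))                             # agent 0's sparse estimate
```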
Generalized additive models (GAMs) are regression models wherein parameters of probability distributions depend on input variables through a sum of smooth functions, whose degrees of smoothness are selected by $L_2$ regularization. Such models have become th ...
Microtome Publishing, 2019
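A minimal sketch of the $L_2$-regularized smooth-function fit behind one GAM component; the Gaussian radial basis below is an illustrative stand-in for the penalized spline bases such models typically use.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(100)

# Gaussian radial basis as a stand-in for a spline basis (assumption).
centers = np.linspace(0, 1, 20)
B = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))

lam = 1.0   # the L2 penalty weight controls the degree of smoothness
# Penalized least squares: min_c ||B c - y||^2 + lam * ||c||^2
c = np.linalg.solve(B.T @ B + lam * np.eye(len(centers)), B.T @ y)
f_hat = B @ c            # fitted smooth component f(x)
print(np.round(np.corrcoef(f_hat, np.sin(2 * np.pi * x))[0, 1], 3))
```

In a full GAM, one such penalized fit is run per input variable and the smoothing weights are themselves selected, e.g., by cross-validation or marginal likelihood.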
This paper proposes a tradeoff between computational time, sample complexity, and statistical accuracy that applies to statistical estimators based on convex optimization. When we have a large amount of data, we can exploit excess samples to decrease stati ...
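One hedged way to read the tradeoff (the decomposition is standard; the paper's exact rates are not claimed here): for an estimator computed by an iterative solver,

$$\text{error} \;\lesssim\; \varepsilon_{\text{approx}} \;+\; O\!\big(n^{-1/2}\big) \;+\; \varepsilon_{\text{opt}}(T),$$

with the three terms coming from the model class, the $n$ samples, and the $T$ solver iterations respectively. With excess samples the statistical term is already small, so the solver can be stopped earlier, or the problem relaxed, without hurting overall accuracy.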
We consider the model selection consistency or sparsistency of a broad set of $\ell_1$-regularized M-estimators for linear and non-linear statistical models in a unified fashion. For this purpose, we propose the local structured smoothness condition (LSS ...
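A minimal numerical illustration of sparsistency using the lasso, the simplest member of the family this abstract covers (the regularization level and Gaussian design below are illustrative): the estimator is sparsistent on this instance if its estimated support equals the true one.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, d, k = 200, 50, 5                     # samples, dimension, true sparsity
w_true = np.zeros(d)
w_true[:k] = rng.choice([-1.0, 1.0], size=k)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

# l1-regularized least squares; sparsistency holds when the estimated
# support equals the true support {0, ..., k-1}.
model = Lasso(alpha=0.05).fit(X, y)
support = set(np.flatnonzero(np.abs(model.coef_) > 1e-6))
print("sparsistent:", support == set(range(k)))
```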