Smoothing technique for nonsmooth composite minimization with linear operator
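As context for the title above, a minimal LaTeX sketch of the problem class it names, assuming the standard Nesterov-type smoothing of a nonsmooth term composed with a linear operator (the paper's exact formulation may differ):

\[
\min_{x}\; f(x) + g(Kx), \qquad f \ \text{smooth},\quad g \ \text{nonsmooth},\quad K \ \text{a linear operator},
\]
\[
g_{\mu}(z) \;=\; \max_{y}\ \bigl\{ \langle z, y\rangle - g^{*}(y) - \tfrac{\mu}{2}\|y\|^{2} \bigr\},
\]

where g^{*} is the convex conjugate of g. The surrogate g_{\mu} has a (1/\mu)-Lipschitz gradient, so the smoothed objective f(x) + g_{\mu}(Kx) can be minimized with an accelerated gradient method, trading an O(\mu) smoothing error against the faster rate available for smooth problems.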
Herein, machine learning (ML) models using multiple linear regression (MLR), support vector regression (SVR), random forest (RF) and artificial neural network (ANN) are developed and compared to predict the output features viz. specific capacitance (Csp), ...
Lausanne, 2024
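As an illustration of the model comparison described in the entry above, a minimal scikit-learn sketch on synthetic data; the dataset, features, and hyperparameters are placeholders rather than those used in the paper:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the experimental dataset (features -> specific capacitance).
X, y = make_regression(n_samples=200, n_features=6, noise=10.0, random_state=0)

models = {
    "MLR": LinearRegression(),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0)),
}

for name, model in models.items():
    # 5-fold cross-validated R^2 as a simple common yardstick across the four regressors.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")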
We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width ...
2023
We study the problem of one-dimensional regression of data points with total-variation (TV) regularization (in the sense of measures) on the second derivative, which is known to promote piecewise-linear solutions with few knots. While there are efficient a ...
Elsevier, 2021
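A hedged LaTeX sketch of the objective the entry above refers to, in the form commonly used for second-order total-variation regularization of one-dimensional regression (the paper's exact functional may differ):

\[
\min_{f}\ \sum_{i=1}^{N}\bigl(f(x_i)-y_i\bigr)^{2} \;+\; \lambda\,\|\mathrm{D}^{2} f\|_{\mathcal{M}},
\]

where \|\mathrm{D}^{2} f\|_{\mathcal{M}} denotes the total-variation norm, in the sense of measures, of the second derivative; the extreme points of the solution set are piecewise-linear splines with few knots, which is the structure the snippet mentions.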
Ion-sensors play a major role in physiology and healthcare monitoring since they are capable of continuously collecting biological data from body fluids. Nevertheless, ion interference from background electrolytes present in the sample is a paramount chall ...
We present a framework for performing regression when both covariate and response are probability distributions on a compact and convex subset of R^d. Our regression model is based on the theory of optimal transport and links the conditional Fréchet m ...
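A hedged LaTeX sketch of the distribution-on-distribution regression model this entry describes, assuming the usual conditional Fréchet-mean formulation under the 2-Wasserstein distance (the paper's exact construction may differ):

\[
m_{\oplus}(\mu) \;=\; \operatorname*{arg\,min}_{\nu \in \mathcal{W}_2(\Omega)}\ \mathbb{E}\!\left[\, W_2^{2}(Y, \nu) \mid X = \mu \,\right],
\]

where both the covariate X and the response Y are probability distributions on the compact convex set \Omega \subset \mathbb{R}^d, and W_2 is the 2-Wasserstein distance.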
In this paper we fully describe the trajectory of gradient flow over diagonal linear networks in the limit of vanishing initialisation. We show that the limiting flow successively jumps from one saddle of the training loss to another until reaching the minim ...
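For readers unfamiliar with the term, a short LaTeX sketch of the diagonal linear network parametrization usually meant in this line of work (stated here as an assumption, not quoted from the paper):

\[
f_{u,v}(x) \;=\; \langle u \odot v,\ x\rangle, \qquad
L(u,v) \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(\langle u \odot v,\ x_i\rangle - y_i\bigr)^{2},
\]

so that gradient flow on the redundant parameters (u, v) induces an implicit bias on the effective linear predictor \beta = u \odot v.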
Proximal splitting methods are standard tools for nonsmooth optimization. While primal-dual methods have become very popular in the last decade for their flexibility, primal methods may still be preferred for two reasons: acceleration schemes are more effe ...
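As a generic illustration of the primal proximal methods the entry above discusses, a minimal proximal-gradient (ISTA) sketch for a simple composite problem; this is a textbook baseline, not the specific algorithm developed in the paper:

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal-gradient iteration for min 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(ista(A, b, lam=0.1)[:8], 3))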
The parDE family of toxin-antitoxin (TA) operons is ubiquitous in bacterial genomes and, in Vibrio cholerae, is an essential component to maintain the presence of chromosome II. Here, we show that transcription of the V. cholerae parDE2 (VcparDE) operon is ...
In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks. We prove the convergence of GD and SGD with macroscop ...
We consider stochastic approximation for the least squares regression problem in the non-strongly convex setting. We present the first practical algorithm that achieves the optimal prediction error rates in terms of dependence on the noise of the problem, ...
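As a reading aid for the entry above, a minimal sketch of constant-step SGD with Polyak-Ruppert iterate averaging for least-squares regression; the step size and averaging scheme here are illustrative and are not the paper's algorithm:

import numpy as np

def averaged_sgd_least_squares(X, y, step=0.01, n_epochs=5, seed=0):
    # Constant-step SGD on 0.5*(x_i^T w - y_i)^2 with Polyak-Ruppert averaging.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_bar = np.zeros(d)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]   # stochastic gradient at sample i
            w = w - step * grad
            t += 1
            w_bar += (w - w_bar) / t          # running average of the iterates
    return w_bar

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(1000)
w_hat = averaged_sgd_least_squares(X, y, step=0.05, n_epochs=10)
print(np.round(np.linalg.norm(w_hat - w_true), 3))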