Interest in distributed stochastic optimization has risen with the need to train complex Machine Learning models on more data over distributed systems. Increasing the computation power speeds up training, but it runs into a communication bottleneck between workers ...
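To make the bottleneck concrete, here is a minimal sketch of synchronous distributed SGD on a toy least-squares problem; the problem, worker count, and step size are all assumed for illustration, and the `np.mean` over worker gradients stands in for the all-reduce exchange that dominates communication in real deployments.

```python
# Minimal sketch of synchronous distributed SGD with K simulated workers.
import numpy as np

rng = np.random.default_rng(0)
K, n, d = 4, 1000, 20                      # workers, samples, features (assumed)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
shards = np.array_split(np.arange(n), K)   # each worker holds one data shard

w = np.zeros(d)
lr = 0.1
for step in range(200):
    # Each worker computes a gradient on its local shard (parallel in reality).
    grads = [2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx) for idx in shards]
    # Communication step: every iteration exchanges d floats per worker;
    # this averaging is the bottleneck the abstract refers to.
    w -= lr * np.mean(grads, axis=0)

print("error:", np.linalg.norm(w - w_true))
```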
We study convex optimization problems that feature low-rank matrix solutions. In such scenarios, non-convex methods offer significant advantages over convex methods due to their lower space complexity and faster convergence. Moreover, many of ...
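As a rough illustration of the non-convex route (a Burer-Monteiro-style factorization, which may or may not match this abstract's exact setting), the sketch below fits a rank-r factorization by gradient descent on the factors, storing (n+m)*r numbers instead of the full n*m matrix.

```python
# Factored gradient descent on min_{U,V} ||M - U V^T||_F^2 (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 30, 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))  # ground-truth rank-r matrix

U = 0.1 * rng.normal(size=(n, r))     # the factors need (n + m) * r numbers,
V = 0.1 * rng.normal(size=(m, r))     # not n * m: the space-complexity advantage
lr = 0.01
for _ in range(5000):
    R = U @ V.T - M                   # residual
    U, V = U - lr * R @ V, V - lr * R.T @ U   # simultaneous gradient steps
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```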
Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis and are now used in a wide range of computer vision tasks. In spite of this success, they have gained a reputation for being difficult to train, which results ...
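For context, here is a toy GAN on 1-D Gaussian data, with architecture and hyperparameters assumed for illustration; the two coupled, adversarial updates below form the min-max structure that makes training so delicate.

```python
# Toy GAN: generator learns to produce samples from N(2, 1).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 2.0 + torch.randn(64, 1)          # target distribution: N(2, 1)
    fake = G(torch.randn(64, 1))
    # Discriminator step: push real -> 1, fake -> 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the discriminator (non-saturating loss).
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 1)).mean().item())
```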
We use the nodal discontinuous Galerkin method with a Lax-Friedrichs flux to model wave propagation in transversely isotropic and poroelastic media. Dissipation due to global fluid flow gives rise to a stiff relaxation term, which is incorporated ...
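For reference, the standard (local) Lax-Friedrichs numerical flux for a scalar conservation law u_t + f(u)_x = 0, written here in our own notation rather than the paper's:

```latex
% u^- and u^+ are the interior and exterior traces at an element face;
% C bounds the largest local wave speed.
\[
  F^{*}(u^{-}, u^{+})
    = \frac{1}{2}\bigl(f(u^{-}) + f(u^{+})\bigr)
      - \frac{C}{2}\,(u^{+} - u^{-}),
  \qquad C \ge \max_{u}\,\lvert f'(u) \rvert .
\]
```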
The analysis in Part I [1] revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization. These algorithms are used when the risk functions are non-smooth or involve non-differentiable components. They have b ...
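As a minimal illustration of a stochastic subgradient update on a non-smooth risk (the hinge loss here, plus the data and step size, are assumptions, not the papers' setting):

```python
# Stochastic subgradient descent on the non-smooth hinge risk E[max(0, 1 - y w.x)].
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)                   # linearly separable labels (assumed)

w = np.zeros(d)
mu = 0.05                                 # constant step size, common in adaptation
for k in range(5000):
    i = rng.integers(n)                   # sample one datum (streaming setting)
    margin = y[i] * X[i] @ w
    g = -y[i] * X[i] if margin < 1 else np.zeros(d)  # a subgradient of the hinge loss
    w -= mu * g

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```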
We propose a conditional gradient framework for a composite convex minimization template with broad applications. Our approach combines the notions of smoothing and homotopy under the CGM framework, and provably achieves the optimal O(1/sqrt(k)) convergenc ...
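Below is a bare-bones conditional gradient (Frank-Wolfe) iteration, to illustrate the CGM template this abstract builds on; the smoothing and homotopy components of the proposed framework are not reproduced, and the problem instance is assumed.

```python
# Frank-Wolfe for min ||Ax - b||^2 over an l1-ball of radius tau.
import numpy as np

rng = np.random.default_rng(3)
m, d, tau = 100, 50, 5.0
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

x = np.zeros(d)
for k in range(500):
    grad = 2 * A.T @ (A @ x - b)
    # Linear minimization oracle over {||s||_1 <= tau}: a signed vertex.
    i = np.argmax(np.abs(grad))
    s = np.zeros(d); s[i] = -tau * np.sign(grad[i])
    x += 2.0 / (k + 2) * (s - x)          # classic CGM step size gamma_k = 2/(k+2)

print("objective:", np.sum((A @ x - b) ** 2))
```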
We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximation ...
We consider an adaptive isogeometric method (AIGM) based on (truncated) hierarchical B-splines and continue the study of its numerical properties. We prove that our AIGM is optimal in the sense that it delivers optimal convergence rates as soon as the solutio ...
In this work and the supporting Part II [1], we examine the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature. The new conditions are shown to be automatically satisfied by several ...
Learning a good speaker embedding is critical for many speech processing tasks, including recognition, verification, and diarization. To this end, we propose a complementary optimizing goal called intra-class loss to improve deep speaker embeddings learne ...
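One plausible form of such an intra-class objective (the paper's exact formulation may differ) is a compactness penalty that pulls each embedding toward the centroid of its own class:

```python
# Sketch of an intra-class compactness penalty for embedding learning.
import torch

def intra_class_loss(emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean squared distance from each embedding to its class centroid."""
    loss = emb.new_zeros(())
    for c in labels.unique():
        cls = emb[labels == c]
        loss = loss + ((cls - cls.mean(dim=0)) ** 2).sum(dim=1).mean()
    return loss / labels.unique().numel()

# Usage: add the penalty to the primary objective when training the network.
emb = torch.randn(32, 128, requires_grad=True)   # batch of speaker embeddings
labels = torch.randint(0, 4, (32,))              # speaker identities
intra_class_loss(emb, labels).backward()
```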