Publication

Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization

Related publications (66)

Augmented Lagrangian Methods for Provable and Scalable Machine Learning

Mehmet Fatih Sahin

Non-convex constrained optimization problems have become a powerful framework for modeling a wide range of machine learning problems, with applications in k-means clustering, large-scale semidefinite programs (SDPs), and various other tasks. As the perfor ...
EPFL, 2023

Byzantine Fault-Tolerance in Federated Local SGD Under 2f-Redundancy

Nirupam Gupta

In this article, we study the problem of Byzantine fault-tolerance in a federated optimization setting, where there is a group of agents communicating with a centralized coordinator. We allow up to f Byzantine-faulty agents, which may not follow a prescr ...
Piscataway, 2023
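As background for the robust-aggregation setting this abstract describes, here is a minimal sketch of one generic Byzantine-robust aggregation rule, the coordinate-wise trimmed mean. It illustrates the idea of tolerating up to f faulty agents out of n; the function name is illustrative, and this is not necessarily the aggregation rule analyzed in the article.

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean over agents' updates.

    Sorts the n update vectors coordinate-wise, drops the f largest and
    f smallest values in each coordinate, and averages the rest. A generic
    Byzantine-robust aggregator, shown only for illustration.
    """
    U = np.sort(np.asarray(updates, dtype=float), axis=0)  # sort along agent axis
    return U[f:U.shape[0] - f].mean(axis=0)
```

For example, with five agents reporting 1, 1, 1, 100, and -100 and f = 1, the two extreme values are discarded and the aggregate is 1, so a single faulty agent cannot drag the average arbitrarily far.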

Improving K-means Clustering Using Speculation

Anastasia Ailamaki, Viktor Sanca, Eleni Zapridou, Stefan Igescu

K-means is one of the fundamental unsupervised data clustering and machine learning methods. It has been well studied over the years: parallelized, approximated, and optimized for different cases and applications. With increasingly higher parallelism leadi ...
2023
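The abstract above refers to the standard k-means iteration that parallelized and speculative variants build on. As a point of reference, here is a minimal NumPy sketch of plain Lloyd's algorithm; the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points;
        # keep the old centroid if a cluster becomes empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

Each iteration costs O(nkd) for n points in d dimensions, which is exactly the part that parallel and speculative implementations target.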

Unsupervised Electrofacies Clustering Based on Parameterization of Petrophysical Properties: A Dynamic Programming Approach

François Fleuret, Karthigan Sinnathamby

Electrofacies using well logs play a vital role in reservoir characterization. Often, they are sorted into clusters according to the self-similarity of input logs and do not capture the known underlying physical process. In this paper, we propose an unsupe ...
Soc. Petrophysicists & Well Log Analysts (SPWLA), 2023

Discriminative clustering with representation learning with any ratio of labeled to unlabeled data

We present a discriminative clustering approach in which the feature representation can be learned from data and can, moreover, leverage labeled data. Representation learning can give a similarity-based clustering method the ability to automatically adapt to an ...
2022

On the Double Descent of Random Features Models Trained with SGD

Volkan Cevher, Fanghui Liu

We study generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD) in the under-/overparameterized regimes. In this work, we derive precise non-asymptotic error bounds of RF regression under b ...
2022
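For readers unfamiliar with the setup in the abstract above, here is a minimal sketch of random-features regression trained by one-sample SGD: a fixed random feature map (random Fourier features in the style of Rahimi and Recht) with only the linear head trained. The feature map, step size, and function names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rf_sgd_regression(X, y, n_features=200, lr=0.1, epochs=30, seed=0):
    """Fit random-features regression with one-sample SGD on squared loss.

    The random projection W and phases b are drawn once and kept fixed;
    only the linear coefficients theta are trained.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_features))        # random projection (fixed)
    b = rng.uniform(0, 2 * np.pi, n_features)   # random phases (fixed)
    phi = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ W + b)
    theta = np.zeros(n_features)                # trained linear head
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            f = phi(X[i:i + 1])[0]
            grad = (f @ theta - y[i]) * f       # one-sample squared-loss gradient
            theta -= lr * grad
    return lambda Xnew: phi(Xnew) @ theta
```

The ratio n_features / n_samples controls whether this toy model sits in the under- or overparameterized regime that the bounds in such analyses distinguish.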
