Person

Fabian Ricardo Latorre Gomez

This person is no longer at EPFL

Associated publications (9)

Improving SAM Requires Rethinking its Optimization Formulation

Volkan Cevher, Kimon Antonakopoulos, Thomas Michaelsen Pethick, Wanyun Xie, Fabian Ricardo Latorre Gomez

This paper rethinks Sharpness-Aware Minimization (SAM), which is originally formulated as a zero-sum game where the weights of a network and a bounded perturbation try to minimize/maximize, respectively, the same differentiable loss. We argue that SAM shou ...
2024
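The zero-sum formulation described in this abstract can be sketched in a few lines: the perturbation performs a first-order ascent step within an L2 ball of radius rho, and the weights then descend using the gradient taken at the perturbed point. This is a minimal illustration on a toy quadratic loss, not the paper's proposed reformulation; the matrix `A` and all hyperparameters are illustrative choices.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w with an illustrative
# positive-definite matrix A (so the loss is convex with minimum at 0).
A = np.array([[3.0, 0.5], [0.5, 1.0]])

def loss_grad(w):
    return A @ w  # gradient of 0.5 * w^T A w

def sam_step(w, rho=0.05, lr=0.1):
    g = loss_grad(w)
    # Inner maximization (first-order approximation): ascend within
    # an L2 ball of radius rho around the current weights.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Outer minimization: descend using the gradient at w + eps.
    return w - lr * loss_grad(w + eps)

w = np.array([1.0, -1.0])
for _ in range(100):
    w = sam_step(w)
```

On this convex toy problem the iterates settle near the minimizer, oscillating at a scale set by `rho`; the abstract's point is precisely that this zero-sum structure deserves rethinking.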

Adversarial Training Should Be Cast As a Non-Zero-Sum Game

Volkan Cevher, Seyed Hamed Hassani, Fabian Ricardo Latorre Gomez

One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially-chosen perturbations of data. Despite the promi ...
2023
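The "two-player zero-sum paradigm" the abstract critiques is commonly instantiated with a one-step inner attack. Below is a minimal sketch of that baseline (an FGSM-style L-infinity perturbation on a logistic loss), shown only to make the paradigm concrete; it is not the non-zero-sum method proposed in the paper, and the data, seed, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)
x, y = rng.normal(size=3), 1.0  # one example, label in {-1, +1}

def grad_x(w, x, y):
    # Gradient of log(1 + exp(-y * w.x)) with respect to the input x.
    s = 1.0 / (1.0 + np.exp(y * w @ x))
    return -s * y * w

def grad_w(w, x, y):
    # Gradient of the same logistic loss with respect to the weights w.
    s = 1.0 / (1.0 + np.exp(y * w @ x))
    return -s * y * x

eps, lr = 0.1, 0.5
for _ in range(50):
    # Inner max: one-step L_inf ascent on the input (FGSM-style).
    x_adv = x + eps * np.sign(grad_x(w, x, y))
    # Outer min: gradient descent on the adversarially perturbed input.
    w = w - lr * grad_w(w, x_adv, y)
```

Both players optimize the same loss with opposite signs, which is exactly the zero-sum structure the paper argues should be abandoned.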

Robust Training and Verification of Deep Neural Networks

Fabian Ricardo Latorre Gomez

According to the proposed Artificial Intelligence Act by the European Commission (expected to pass at the end of 2023), the class of High-Risk AI Systems (Title III) comprises several important applications of Deep Learning like autonomous driving vehicles ...
EPFL, 2023

Controlling the Complexity and Lipschitz Constant improves Polynomial Nets

Volkan Cevher, Grigorios Chrysos, Fabian Ricardo Latorre Gomez

While the class of Polynomial Nets demonstrates comparable performance to neural networks (NN), it currently has neither theoretical generalization characterization nor robustness guarantees. To this end, we derive new complexity bounds for the set of Coup ...
2022

A Plug-and-Play Deep Image Prior

Volkan Cevher, Fabian Ricardo Latorre Gomez, Thomas Sanchez, Zhaodong Sun

Deep image priors (DIP) offer a novel approach for the regularization that leverages the inductive bias of a deep convolutional architecture in inverse problems. However, the quality of DIP approaches often degrades when the number of iterations exceeds a ...
IEEE, 2021

Lipschitz constant estimation for Neural Networks via sparse polynomial optimization

Volkan Cevher, Paul Thierry Yves Rolland, Fabian Ricardo Latorre Gomez

We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. W ...
2020
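The quantity LiPopt bounds can be contrasted with the standard loose baseline: for a network with 1-Lipschitz activations (e.g. ReLU), the product of the layers' spectral norms upper-bounds the Lipschitz constant. The sketch below computes only that trivial baseline, which LP/SDP hierarchies such as LiPopt tighten; the weight matrices are illustrative.

```python
import numpy as np

# Illustrative two-layer weights for f(x) = W2 @ relu(W1 @ x).
W1 = np.array([[1.0, -2.0], [0.5, 1.5]])
W2 = np.array([[2.0, -1.0]])

def naive_lipschitz_bound(weights):
    # ||f(x) - f(y)|| <= (prod_i ||W_i||_2) * ||x - y|| holds whenever
    # every activation between the layers is 1-Lipschitz.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

bound = naive_lipschitz_bound([W1, W2])
```

For the purely linear network `W2 @ W1` the true constant is the spectral norm of the product, which is never larger than this baseline; the gap between the two is what tighter LP/SDP relaxations aim to close.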

Efficient Proximal Mapping of the 1-path-norm of Shallow Networks

Volkan Cevher, Paul Thierry Yves Rolland, Fabian Ricardo Latorre Gomez, Shaul Nadav Hallak

We demonstrate two new important properties of the 1-path-norm of shallow neural networks. First, despite its non-smoothness and non-convexity it allows a closed form proximal operator which can be efficiently computed, allowing the use of stochastic proxi ...
2020
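For context, the regularizer in question is easy to state: for a one-hidden-layer network f(x) = v^T sigma(W x), the 1-path-norm sums, over every input-to-output path, the product of the absolute weights along that path. The sketch below only evaluates this norm on illustrative weights; the paper's closed-form proximal operator is not reproduced here.

```python
import numpy as np

W = np.array([[1.0, -2.0], [0.5, 0.0]])  # hidden-layer weights (rows = units)
v = np.array([3.0, -1.0])                # output-layer weights

def one_path_norm(v, W):
    # sum_{j,k} |v_j| * |W_jk|: each term is one path input k -> unit j -> output.
    return float(np.abs(v) @ np.abs(W).sum(axis=1))

pn = one_path_norm(v, W)  # here: 3*(1+2) + 1*(0.5+0) = 9.5
```

Despite the absolute values making this non-smooth (and non-convex as a function of the factored weights), the abstract's point is that its proximal operator still admits a closed form.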

An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Volkan Cevher, Ahmet Alacaoglu, Armin Eftekhari, Fabian Ricardo Latorre Gomez, Mehmet Fatih Sahin

We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of our method subject to a verifiable geometric condition, which is closely related to t ...
2019

Fast and Provable ADMM for Learning with Generative Priors

Volkan Cevher, Armin Eftekhari, Fabian Ricardo Latorre Gomez

In this work, we propose a (linearized) Alternating Direction Method-of-Multipliers (ADMM) algorithm for minimizing a convex function subject to a nonconvex constraint. We focus on the special case where such constraint arises from the specification that a ...
2019
