We consider the idealized setting of gradient flow on the population risk for infinitely wide two-layer ReLU neural networks (without bias), and study the effect of symmetries on the learned parameters and predictors. We first describe a general class of symmetries which, when satisfied by the target function f* and the input distribution, are preserved by the dynamics. We then study more specific cases. When f* is odd, we show that the dynamics of the predictor reduces to that of a (non-linearly parameterized) linear predictor, and its exponential convergence can be guaranteed. When f* has a low-dimensional structure, we prove that the gradient flow PDE reduces to a lower-dimensional PDE. Furthermore, we present informal and numerical arguments that suggest that the input neurons align with the lower-dimensional structure of the problem.
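A minimal numerical sketch of the odd case, not the paper's construction: full-batch gradient descent on a Monte Carlo estimate of the population risk stands in for the idealized gradient flow, and width and sample size are finite. All names and hyperparameters below are illustrative. It checks that when the input sample is symmetric, the target f*(x) = x_1 is odd, and the initialization is symmetric under the parameter map (w_j, a_j) -> (-w_j, -a_j), the predictor f(x) = (1/m) * sum_j a_j * relu(<w_j, x>) remains odd along training.

```python
# Sketch only: finite-width gradient descent approximating the population
# gradient flow; hyperparameters (d, m0, n0, lr, steps) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, m0, n0 = 5, 256, 2048
lr, steps = 0.1, 2000

# Symmetrize both the sample (x paired with -x) and the neurons
# ((w, a) paired with (-w, -a)) so the symmetry holds exactly, not just
# in distribution; gradient descent then preserves it step by step.
X0 = rng.standard_normal((n0, d))
X = np.concatenate([X0, -X0])            # symmetric input sample
y = X[:, 0]                              # odd target f*(x) = x_1
n = 2 * n0

W0 = rng.standard_normal((m0, d))
a0 = rng.standard_normal(m0)
W = np.concatenate([W0, -W0])            # paired input weights (no bias)
a = np.concatenate([a0, -a0])            # paired output weights
m = 2 * m0

def predict(X, W, a):
    # two-layer ReLU network with mean-field scaling 1/m
    return np.maximum(X @ W.T, 0.0) @ a / m

for _ in range(steps):
    pre = X @ W.T                        # (n, m) pre-activations
    act = np.maximum(pre, 0.0)           # ReLU activations
    r = act @ a / m - y                  # residuals f(x_i) - f*(x_i)
    # per-neuron gradients of the empirical risk (1/2n) sum_i r_i^2,
    # rescaled by m (the usual mean-field time parameterization)
    ga = act.T @ r / n
    gW = ((r[:, None] * (pre > 0.0)) * a[None, :]).T @ X / n
    a -= lr * ga
    W -= lr * gW

# the odd defect f(-x) + f(x) = (1/m) sum_j a_j |<w_j, x>| cancels exactly
# across paired neurons, so it should be at floating-point level
Xt = rng.standard_normal((1000, d))
print("risk:", 0.5 * np.mean((predict(X, W, a) - y) ** 2))
print("odd defect max |f(-x) + f(x)|:",
      np.abs(predict(-Xt, W, a) + predict(Xt, W, a)).max())
```

The pairing (w, a) with (-w, -a) makes f exactly odd at initialization, and because the sample is symmetric and f* is odd, the gradient of one neuron in a pair is exactly the negative of the other's, so the symmetry, and hence the oddness of the predictor, is preserved throughout training, consistent with the reduction to a linear predictor described above.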