
# Mario Geiger



Related research domains (15)

Deep learning

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" in deep learning refers to the use of multiple layers in the network.

Neural network

A neural network can refer to a neural circuit of biological neurons (sometimes also called a biological neural network), or to a network of artificial neurons or nodes in the case of an artificial neural network.

General Dynamics

General Dynamics Corporation (GD) is an American publicly traded aerospace and defense corporation headquartered in Reston, Virginia. As of 2020, it was the fifth-largest defense contractor in the world.

Related publications (16)



A long-standing goal of science is to accurately simulate large molecular systems using quantum mechanics. The poor scaling of current quantum chemistry algorithms on classical computers, however, imposes an effective limit of about a few dozen atoms on traditional electronic structure calculations. We present a machine learning (ML) method to break through this scaling limit for electron densities. We show that Euclidean neural networks can be trained to predict molecular electron densities from limited data. By learning the electron density, the model can be trained on small systems and make accurate predictions on large ones. In the context of water clusters, we show that an ML model trained on clusters of just 12 molecules contains all the information needed to make accurate electron density predictions on cluster sizes of 50 or more, beyond the scaling limit of current quantum chemistry methods.
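The scaling behaviour described above rests on size extensivity: if the total density is a sum of atom-centred contributions, the same learned per-atom map applies to clusters of any size. A minimal NumPy sketch of that idea (an illustration only, not the paper's Euclidean neural network; the Gaussian model and all names here are assumptions):

```python
import numpy as np

# Illustrative sketch: a size-extensive density model. The total density is a
# sum of atom-centred Gaussian contributions, so a per-atom model fit on small
# clusters can be evaluated on arbitrarily large ones without retraining.

def density(points, atom_positions, coeffs, width=1.0):
    """Total density at `points`: a sum of atom-centred Gaussians."""
    rho = np.zeros(len(points))
    for pos, c in zip(atom_positions, coeffs):
        r2 = np.sum((points - pos) ** 2, axis=1)
        rho += c * np.exp(-r2 / (2.0 * width**2))
    return rho

grid = np.random.default_rng(0).normal(size=(100, 3))
# Same per-atom coefficients, evaluated on a 2-atom and a 5-atom system
# (all atoms at the origin so the size scaling is exact).
small = density(grid, np.zeros((2, 3)), np.ones(2))
large = density(grid, np.zeros((5, 3)), np.ones(5))
assert np.allclose(large, 2.5 * small)  # density grows with atom count, no retraining
```

The paper's model replaces the fixed Gaussians with learned, rotation-equivariant atom-centred features, but the extensivity argument is the same.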

Curie's principle states that "when effects show certain asymmetry, this asymmetry must be found in the causes that gave rise to them." We demonstrate that symmetry equivariant neural networks uphold Curie's principle and can be used to articulate many symmetry-relevant scientific questions as simple optimization problems. We prove these properties mathematically and demonstrate them numerically by training a Euclidean symmetry equivariant neural network to learn symmetry breaking input to deform a square into a rectangle and to generate octahedra tilting patterns in perovskites.
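Curie's principle can be checked numerically with any equivariant map: if the map commutes with a transform and the input is invariant under that transform, the output must be too. A toy sketch (the cubic map and the reflection here are our own illustration, not the paper's network):

```python
import numpy as np

# Numerical sketch of Curie's principle with a toy equivariant map:
# f(x) = x * |x|^2 commutes with every orthogonal transform R, so a
# symmetric input cannot produce an asymmetric output.

def f(x):
    return x * np.dot(x, x)  # equivariant: f(R @ x) == R @ f(x)

R = np.diag([1.0, -1.0])                 # reflection across the x-axis
x = np.array([1.0, 2.0])
assert np.allclose(f(R @ x), R @ f(x))   # equivariance holds for a generic input

x_sym = np.array([1.5, 0.0])             # invariant under the reflection
assert np.allclose(f(x_sym), R @ f(x_sym))  # output inherits the input's symmetry
```

In the paper, breaking the symmetry (square to rectangle) therefore requires adding an explicitly asymmetric input, which the network can learn as a simple optimization problem.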

Mario Geiger, Leonardo Petrini, Matthieu Wyart

Deep learning algorithms are responsible for a technological revolution in a variety of tasks, including image recognition and Go playing. Yet why they work is not understood. Ultimately, they manage to classify data lying in high dimension, a feat generically impossible due to the geometry of high-dimensional space and the associated curse of dimensionality. Understanding what kind of structure, symmetry, or invariance makes data such as images learnable is a fundamental challenge. Other puzzles include that (i) learning corresponds to minimizing a loss in high dimension, which is in general not convex and could well get stuck in bad minima; (ii) deep learning's predictive power increases with the number of fitting parameters, even in a regime where data are perfectly fitted. In this manuscript, we review recent results elucidating (i, ii) and the perspective they offer on the (still unexplained) curse of dimensionality paradox. We base our theoretical discussion on the (h, α) plane, where h controls the number of parameters and α the scale of the output of the network at initialization, and provide new systematic measures of performance in that plane for two common image classification datasets. We argue that different learning regimes can be organized into a phase diagram. A line of critical points sharply delimits an under-parametrized phase from an over-parametrized one. In over-parametrized nets, learning can operate in two regimes separated by a smooth crossover. At large initialization, it corresponds to a kernel method, whereas for small initializations features can be learnt, together with invariants in the data. We review the properties of these different phases, of the transition separating them, and some open questions. Our treatment emphasizes analogies with physical systems, scaling arguments, and the development of numerical observables to quantitatively test these results empirically.
Practical implications are also discussed, including the benefit of averaging nets with distinct initial weights, and the choice of parameters (h, α) optimizing performance.

2021
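The role the review assigns to the output scale α can be illustrated with a much simpler model than the networks it studies (everything below is our own toy assumption): fitting a fixed target with a model of the form α · w · x requires a weight change of order 1/α, so a large α keeps weights near initialization (the lazy, kernel-like regime) while a small α forces large weight motion (the feature-learning regime).

```python
# Toy sketch of the lazy vs. feature-learning crossover controlled by the
# output scale alpha: gradient descent on a one-parameter model alpha * w * x.

def weight_change(alpha, x=1.0, y=1.0, steps=50):
    """Distance the weight travels from initialization w = 0 to fit target y."""
    w = 0.0
    lr = 0.5 / alpha**2            # 1/alpha^2 time rescaling, as in lazy training
    for _ in range(steps):
        err = alpha * w * x - y
        w -= lr * err * alpha * x  # gradient step on the loss 0.5 * err**2
    return abs(w)                  # converges to |y / (alpha * x)|

small_alpha = weight_change(alpha=1.0)    # weights move a lot (features learned)
large_alpha = weight_change(alpha=100.0)  # weights barely move (kernel-like)
assert large_alpha < small_alpha / 50
```

This is only the crudest caricature of the (h, α) phase diagram, but it captures why the large-α limit linearizes the network around its initialization.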