
Michaël Defferrard



Courses taught by this person

No results

Related publications (9)


Related units (3)

Related research domains (2)

Convolutional neural network

A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns feature engineering by itself via filter (or kernel) optimization.
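As a concrete illustration of the filter (kernel) operation at the heart of a CNN, here is a minimal sketch of 2D cross-correlation in NumPy; the `conv2d` helper and the edge-detecting kernel are illustrative assumptions, not part of any particular library:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and take the sum of elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)      # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])             # horizontal difference filter
result = conv2d(image, edge)               # responds to horizontal gradients
```

In a trained CNN the kernel values are not hand-picked as here but learned by gradient descent, which is what "learns feature engineering by itself" refers to.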

Data

In common usage and statistics, data (US: /ˈdætə/; UK: /ˈdeɪtə/) is a collection of discrete or continuous values that convey information, describing quantity, quality, facts, statistics, or other basic units of meaning.

People doing similar research (135)

When learning from data, leveraging the symmetries of the domain the data lies on is a principled way to combat the curse of dimensionality: it constrains the set of functions to learn from. It is more data efficient than augmentation and gives a generalization guarantee. Symmetries might however be unknown or expensive to find; domains might not be homogeneous.

From the building blocks of space (vertices, edges, simplices), an incidence structure, and a metric (the domain's topology and geometry), a linear operator naturally emerges that commutes with any known and unknown symmetry action. We call that equivariant operator the generalized convolution operator, and we use it, designed or learned, to transform data and embed domains. In our generalized setting involving unknown and non-transitive symmetry groups, our convolution is an inner product with a kernel that is localized instead of moved around by group actions like translations: a bias-variance tradeoff that paves the way to efficient learning on arbitrary discrete domains.

We develop convolutional neural networks that operate on graphs, meshes, and simplicial complexes. Their implementation amounts to multiplications of data tensors by sparse matrices and pointwise operations, with linear compute, memory, and communication requirements. We demonstrate our method's efficiency by reaching state-of-the-art performance for multiple tasks on large discretizations of the sphere. DeepSphere has been used for studies in cosmology and shall be used for operational weather forecasting, advancing our understanding of the world and impacting billions of individuals.
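The "multiplications of data tensors by sparse matrices" implementation can be sketched as a polynomial in the graph Laplacian, evaluated by repeated matrix-vector products. The sketch below uses a dense NumPy matrix for readability (in practice L is sparse); `graph_conv`, the path graph, and the coefficients `theta` are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Path graph on 4 vertices: adjacency A and combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def graph_conv(x, L, theta):
    """Localized generalized convolution: y = sum_k theta[k] * L^k @ x.
    A degree-K polynomial in L only mixes values within K hops, so the
    kernel is localized; the operator commutes with any symmetry of L."""
    y = np.zeros_like(x)
    Lx = x.copy()
    for t in theta:
        y += t * Lx
        Lx = L @ Lx       # one sparse matrix-vector product per order
    return y

x = np.array([0.0, 0.0, 1.0, 0.0])        # impulse at vertex 2
y = graph_conv(x, L, theta=[0.5, -0.2])   # filtered signal
```

Because each order costs one matrix-vector product with a sparse L, compute and memory scale linearly in the number of edges, matching the linear requirements stated above.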

Michaël Defferrard, Patric Hagmann, David Pascucci, Gijs Plomp, Sébastien Tourbier

We present an approach for tracking fast spatiotemporal cortical dynamics in which we combine white matter connectivity data with source-projected electroencephalographic (EEG) data. We employ the mathematical framework of graph signal processing in order to derive the Fourier modes of the brain structural connectivity graph, or "network harmonics". These network harmonics are naturally ordered by smoothness. Smoothness in this context can be understood as the amount of variation along the cortex, leading to a multi-scale representation of brain connectivity. We demonstrate that network harmonics provide a sparse representation of the EEG signal, where, at certain times, the smoothest 15 network harmonics capture 90% of the signal power. This suggests that network harmonics are functionally meaningful, which we demonstrate by using them as a basis for the functional EEG data recorded from a face detection task. There, only 13 network harmonics are sufficient to track the large-scale cortical activity during the processing of the stimuli with a 50 ms resolution, reproducing well-known activity in the fusiform face area as well as revealing co-activation patterns in somatosensory/motor and frontal cortices that an unconstrained ROI-by-ROI analysis fails to capture. The proposed approach is simple and fast, provides a means of integration of multimodal datasets, and is tied to a theoretical framework in mathematics and physics. Thus, network harmonics point towards promising research directions both theoretically - for example in exploring the relationship between structure and function in the brain - and practically - for example for network tracking in different tasks and groups of individuals, such as patients.
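Concretely, the network harmonics described above are the eigenvectors of a graph Laplacian ordered by eigenvalue, where a small eigenvalue means smooth variation along the graph. A minimal sketch, assuming a toy ring graph in place of a real structural connectivity graph:

```python
import numpy as np

# Toy "connectome": ring graph on 8 nodes standing in for cortical regions.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Network harmonics: Laplacian eigenvectors, returned by eigh in
# ascending eigenvalue order, i.e. from smoothest to most oscillatory.
eigvals, harmonics = np.linalg.eigh(L)

# A smooth signal is sparse in this basis: its power concentrates
# on the first few harmonics, as with the EEG signal in the paper.
signal = harmonics[:, 0] + 0.5 * harmonics[:, 1]
coeffs = harmonics.T @ signal     # graph Fourier transform
power = coeffs ** 2
```

Truncating `coeffs` to the first few entries and transforming back gives the low-dimensional tracking of cortical activity used in the study.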

Michaël Defferrard, Nathanaël Perraudin (2020)

Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning toolbox and have led to many breakthroughs in Artificial Intelligence. So far, these neural networks (NNs) have mostly been developed for regular Euclidean domains such as those supporting images, audio, or video. Because of their success, CNN-based methods are becoming increasingly popular in Cosmology. Cosmological data often comes as spherical maps, which makes the use of traditional CNNs more complicated. The commonly used pixelization scheme for spherical maps is the Hierarchical Equal Area isoLatitude Pixelisation (HEALPix).

We present a spherical CNN for the analysis of full and partial HEALPix maps, which we call DeepSphere. The spherical CNN is constructed by representing the sphere as a graph. Graphs are versatile data structures that can represent pairwise relationships between objects or act as a discrete representation of a continuous manifold. Using the graph-based representation, we define many of the standard CNN operations, such as convolution and pooling. With filters restricted to being radial, our convolutions are equivariant to rotation on the sphere, and DeepSphere can be made invariant or equivariant to rotation. In this way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix sampling of the sphere. This approach is computationally more efficient than using spherical harmonics to perform convolutions.

We demonstrate the method on a classification problem of weak-lensing mass maps from two cosmological models and compare its performance with that of three baseline classifiers: two based on the power spectrum and pixel density histogram, and a classical 2D CNN. Our experimental results show that the performance of DeepSphere is always superior or equal to the baselines. For high noise levels and for data covering only a smaller fraction of the sphere, DeepSphere achieves typically 10% better classification accuracy than the baselines. Finally, we show how learned filters can be visualized to introspect the NN. Code and examples are available at https://github.com/SwissDataScienceCenter/DeepSphere. (C) 2019 Elsevier B.V. All rights reserved.
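The pooling operation mentioned above exploits the hierarchy of the HEALPix sampling: in NESTED ordering, the four children of each coarse pixel occupy contiguous indices, so average pooling reduces to a reshape and a mean. A minimal sketch, with `healpix_pool` as a hypothetical helper (not the DeepSphere API):

```python
import numpy as np

def healpix_pool(x, factor=4):
    """Average-pool a HEALPix map in NESTED ordering: each coarse pixel
    is the mean of its `factor` children, which are contiguous in memory."""
    return x.reshape(-1, factor).mean(axis=1)

npix = 48                          # nside = 2 gives 12 * nside**2 pixels
x = np.arange(npix, dtype=float)   # toy map values
coarse = healpix_pool(x)           # 12 pixels, i.e. the map at nside = 1
```

Because no pixel interpolation is needed, pooling (and its transpose, unpooling) stays a cheap memory operation at every resolution of the hierarchy.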