Graphs offer a simple yet meaningful representation of relationships between data. This representation is often used in machine learning algorithms in order to incorporate structural or geometric information about data. However, it can also be used in an inverted fashion: instead of modelling data through graphs, we model graphs through data distributions. In this thesis, we explore several applications of this new modelling framework.

Starting with the graph learning problem, we exploit the probabilistic model of data given through graphs to propose a multi-graph learning method for structured data mixtures. We explore various relations that data can have with the underlying graphs through the notion of graph filters. We propose an algorithm to jointly cluster a set of data and learn a graph for each of the clusters. Experiments demonstrate promising performance in data clustering and multiple graph inference, and show desirable properties in terms of interpretability and proper handling of high dimensionality on synthetic and real data. The model has further been applied to fMRI data, where the method is used to successfully identify different functional brain networks and their activation times.

This probabilistic model of data defined through graphs can be very meaningful even when no data is available. Thus, in the second part of this thesis, we use such models to represent each graph through the probabilistic distribution of data, which varies smoothly on the graph. Optimal transport allows for a comparison of two such distributions, which in turn gives a structurally meaningful measure for graph comparison.

We then use this distance to formulate a new graph alignment problem based on the optimal transport framework, and propose an efficient stochastic algorithm based on Bayesian exploration to accommodate the nonconvexity of the graph alignment problem. We demonstrate the performance of our novel framework on different tasks such as graph alignment, graph classification and graph signal prediction, and we show that our method leads to significant improvements over state-of-the-art algorithms.

Furthermore, we cast a new formulation for the one-to-many graph alignment problem, allowing for the comparison of graphs of different sizes. The resulting alignment problem is solved with stochastic gradient descent, where a novel Dykstra operator ensures that the solution is a one-to-many (soft) assignment matrix. Experiments on graph alignment and graph classification problems show that our method for one-to-many alignment leads to meaningful improvements over the state-of-the-art algorithms for each of these tasks.

Finally, we explore a family of probabilistic distributions for data based on graph filters. Distances defined through a graph filter give a high level of flexibility in choosing which graph properties we want to emphasize. In addition, in order to make the above graph alignment problem more scalable, we formulate an approximation to our filter Wasserstein graph distance that allows for the exploitation of faster algorithms, without grossly sacrificing performance. We propose two algorithms: a simple one based on mirror gradient descent, and another built on its stochastic version, which offers a trade-off between speed and accuracy. Our experiments show the performance benefits of our novel stochastic algorithm, as well as the strong value of the flexibility offered by filter-based distances.
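As one concrete illustration of the optimal-transport graph distance sketched above (notation introduced here only for illustration), consider a smooth-signal model in which a graph with Laplacian $L_i$ is represented by the zero-mean Gaussian distribution $\nu_i = \mathcal{N}\big(0, L_i^{\dagger}\big)$, where $L_i^{\dagger}$ denotes the Moore-Penrose pseudoinverse. For two graphs of equal size, the standard closed form for the Wasserstein-2 distance between zero-mean Gaussians gives
\[
W_2^2(\nu_1, \nu_2) \;=\; \operatorname{tr}\!\big(L_1^{\dagger}\big) + \operatorname{tr}\!\big(L_2^{\dagger}\big) - 2\,\operatorname{tr}\!\Big(\big((L_2^{\dagger})^{1/2}\, L_1^{\dagger}\, (L_2^{\dagger})^{1/2}\big)^{1/2}\Big),
\]
and graph alignment under this model amounts to minimising such a quantity over relabelings (permutations) of one of the graphs.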
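In the same illustrative notation, a filter-based variant can be sketched by assuming signals generated as $x = g(L)\,w$ with white noise $w \sim \mathcal{N}(0, I)$ and a graph filter $g$, so that the graph is represented by $\mathcal{N}\big(0, g(L)^2\big)$ and the closed form above applies with $L_i^{\dagger}$ replaced by $g(L_i)^2$; a heat-kernel filter $g(L) = e^{-\tau L}$ is one common choice of such a filter.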