
# Isabela Cunha Maia Nobre


This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to every other page on this site. Please verify the information against official EPFL sources.

Related units (1)

Courses taught by this person

No results

Related research fields

People conducting similar research

No results

Related publications (4)

Isabela Cunha Maia Nobre, Mireille El Gheche, Pascal Frossard

Graph learning is often a necessary step in processing or representing structured data when the underlying graph is not given explicitly. It is generally performed centrally, with full knowledge of the graph signals, namely the data that lives on the graph nodes. However, there are settings where data cannot be collected easily, or only at a non-negligible communication cost. In such cases, distributed processing appears as a natural solution, where the data stays mostly local and all processing is performed among neighbouring nodes on the communication graph. We propose here a novel distributed graph learning algorithm that infers a graph from signal observations on the nodes, under the assumption that the data is smooth on the target graph. We solve a distributed optimization problem with local projection constraints to infer a valid graph while limiting the communication costs. Our results show that the distributed approach has a lower communication cost than a centralised algorithm without compromising the accuracy of the inferred graph. It also scales better in communication cost as the network size grows, especially for sparse networks.
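The smoothness prior behind this abstract can be illustrated with a small centralized sketch. This is not the paper's distributed optimization with projection constraints; it is a simple kernel-based instance of the same idea, in which nodes whose signals are similar receive strong edge weights (the function name and the `theta` bandwidth parameter are illustrative):

```python
import numpy as np

def smooth_graph_from_signals(X, theta=1.0):
    """Infer a weighted adjacency matrix from node signals X (n_nodes x n_signals).

    Smoothness prior: nodes with similar signal vectors get strong edges.
    A simple Gaussian-kernel instance of the idea, not the paper's
    distributed algorithm.
    """
    # Squared pairwise distances ||x_i - x_j||^2 between node signal vectors
    sq_norms = np.sum(X**2, axis=1)
    Z = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    W = np.exp(-Z / theta)       # similar signals -> weight near 1
    np.fill_diagonal(W, 0.0)     # no self-loops: a valid adjacency matrix
    return W

# Signals that are smooth on a two-cluster graph: nodes 0,1 vs. nodes 2,3
X = np.array([[1.0, 1.1], [1.05, 1.0], [5.0, 5.2], [5.1, 5.0]])
W = smooth_graph_from_signals(X)
```

On this toy input, the learned weights within each cluster are much larger than the weights across clusters, which is exactly what "the data is smooth on the target graph" asks for.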

In the domains of machine learning, data science and signal processing, graph or network data is becoming increasingly popular. It represents a large portion of the data in computer and transportation systems, energy networks, and social, biological, and other scientific applications. Often, such data is physically distributed over different network nodes, and there is a communication cost involved in bringing it to a central unit for processing and analysis. Decentralized algorithms offer solutions for dealing with network data and relaxing communication costs, with nodes sharing messages over communication channels in order to jointly implement data processing or learning tasks. However, messages are typically quantized in practice and represented by a finite number of bits in digital communication channels. As a result, imperfections in the received signals may accumulate and eventually degrade the algorithm's overall performance. This thesis focuses on designing new methods to efficiently allocate bits in the different steps of message exchanges between network nodes when implementing distributed graph signal tasks.

First, we consider graph filters, which can decompose and shape graph signal frequency components in order to realize a desired response. Distributed graph filters can be used in applications such as smoothing, denoising and semi-supervised learning. We propose an optimal bit allocation technique that adapts to the network topology and to the message importance, such that it minimizes the quantization error.

Second, we consider distributed graph neural networks, which can be used in applications such as anomaly detection, decentralized control and traffic prediction. We study the effect of quantization in the GNN inference stage and propose an analytical solution to an optimized bit allocation problem by solving the corresponding Karush-Kuhn-Tucker (KKT) system of equations. Our method is shown to reduce the error due to quantization, compared to other baselines, on the tasks of distributed denoising and distributed source localization. The optimized bit allocation gives higher relevance to messages in the middle layers of the neural network model.

Finally, we consider the distributed graph learning problem, whose objective is to infer an unknown data graph from network observations in order to enable further processing tasks or interpretability. We propose a novel distributed graph learning algorithm under the assumption that the data is smooth on the data graph. Using local projection constraints, we solve the distributed optimization problem and infer a valid graph. For the same accuracy, our distributed algorithm has a lower communication cost than a centralized version, especially for sparse networks. Additionally, we propose a bit allocation scheme for the distributed graph learning algorithm and show that it offers a better accuracy-versus-bit-cost trade-off than a baseline uniform bit allocation scheme.

Overall, this thesis proposes novel bit allocation techniques for signal quantization in distributed implementations of signal processing and machine learning tasks. We believe that our research efforts will hasten the development of intelligent distributed processing algorithms for network data that balance performance, communication bandwidth, and computational complexity, in a wide range of potential applications in social, sensor, energy, transportation, and other fields.
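KKT-based bit allocation of the kind the thesis describes follows a classic pattern: minimizing a total distortion of the form Σᵢ vᵢ·4^(−bᵢ) subject to a total budget Σᵢ bᵢ = B has a closed-form solution that gives each message the average budget plus a correction for its variance relative to the geometric mean. The sketch below shows that standard textbook result with real-valued bits and no non-negativity clipping; the thesis's actual objective and constraint set may differ:

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Closed-form KKT solution of:
        minimize   sum_i v_i * 4**(-b_i)
        subject to sum_i b_i = total_bits
    over real-valued b_i (no clipping to non-negative integers).
    Stationarity forces v_i * 4**(-b_i) to be equal across i, so
    higher-variance messages receive more bits.
    """
    v = np.asarray(variances, dtype=float)
    n = len(v)
    geo_mean = np.exp(np.mean(np.log(v)))          # geometric mean of variances
    return total_bits / n + 0.5 * np.log2(v / geo_mean)

# Variances growing by a factor of 4 shift one bit per step around the mean
b = allocate_bits([1.0, 4.0, 16.0], total_bits=12)  # b == [3., 4., 5.]
```

In practice such an allocation is rounded to integers and clipped at zero, which is where a numerical KKT solve (rather than the closed form) becomes necessary.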


Distributed graph signal processing algorithms require the network nodes to communicate by exchanging messages in order to achieve a common objective. These messages have finite precision in realistic networks, which may necessitate message quantization. Quantization, in turn, may introduce distortion and a performance penalty in the distributed processing tasks. This paper proposes a novel method for distributed graph filtering that minimizes the error due to message quantization without increasing the communication costs. It first bounds the exchanged messages and then allocates a limited bit budget in an optimized way to the different messages and network nodes. In particular, our novel quantization algorithm adapts to both the network topology and the message importance in a distributed processing task. Our results show that the proposed method is effective in minimizing the error due to quantization and that it outperforms baseline distributed algorithms when the bit budget is limited. They further show that errors produced at nodes with high eccentricity, or in the first steps of the distributed algorithm, contribute more to the global error. Also, sparse and irregular graphs require more irregular bit distributions. Our method provides one of the first quantization solutions for distributed graph processing that is able to adapt to the target task, the graph properties and the communication constraints.
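The distributed filtering setting above can be sketched concretely: a polynomial graph filter y = Σₖ hₖ Sᵏ x only ever requires each node to combine its neighbours' current values, so each power of the shift operator S is one round of message exchange, and each exchanged message can be quantized. The sketch below uses a plain uniform quantizer with the same bit depth everywhere; the paper's contribution, the optimized per-message bit allocation, is deliberately omitted, and the bound `x_max` is an assumed parameter:

```python
import numpy as np

def quantize(x, bits, x_max):
    """Uniform mid-rise quantization of values clipped to [-x_max, x_max]."""
    step = 2 * x_max / (2 ** bits)
    x = np.clip(x, -x_max, x_max - 1e-12)
    return (np.floor(x / step) + 0.5) * step

def distributed_graph_filter(S, x, coeffs, bits=8, x_max=10.0):
    """Apply y = sum_k coeffs[k] * S^k @ x, where each multiplication by the
    shift operator S (e.g. adjacency or Laplacian) is one round of neighbour
    exchanges whose messages are quantized to `bits` bits."""
    z = x.copy()
    y = coeffs[0] * z
    for h_k in coeffs[1:]:
        z = S @ quantize(z, bits, x_max)  # exchange quantized messages, then mix
        y = y + h_k * z
    return y

# Second-order filter on a 3-node path graph
S = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, -1.0])
y = distributed_graph_filter(S, x, coeffs=[0.5, 0.3, 0.2], bits=12)
```

Lowering `bits` makes the quantization error injected at each exchange round larger, and because later rounds reuse already-quantized values, errors accumulate across rounds, which is the effect the paper's optimized allocation targets.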

2019