Graph neural networks (GNNs) take node features and graph structure as input to build representations for nodes and graphs. While much of the focus has been on GNN models themselves, understanding the impact of node features and graph structure on GNN performance has received less attention. In this paper, we propose an explanation for the connection between features and structure: graphs can be constructed by connecting node features according to a latent function. While this hypothesis seems trivial, it has several important implications. First, it allows us to define graph families, which we use to explain the transferability of GNN models. Second, it enables the application of GNNs to featureless graphs by reconstructing node features from graph structure. Third, it predicts the existence of a latent function that can create graphs which, when used with the original features in a GNN, outperform the original graphs on a specific task. We propose a graph generative model to learn such a function. Finally, our experiments confirm the hypothesis and these implications. (C) 2022 Elsevier B.V. All rights reserved.
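The core hypothesis, that edges arise from applying a latent function to pairs of node features, can be illustrated with a minimal sketch. The pairwise function and the threshold below are illustrative assumptions (the paper learns the latent function with a generative model rather than fixing it); cosine similarity simply stands in as one possible latent function.

```python
import numpy as np

def build_graph(X, f, threshold=0.5):
    """Connect node pairs (i, j) whenever the latent score f(x_i, x_j)
    exceeds a threshold; returns a symmetric 0/1 adjacency matrix.

    X: (n, d) node-feature matrix.
    f: a pairwise latent function (a stand-in here; the paper proposes
       learning it with a graph generative model).
    """
    n = X.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if f(X[i], X[j]) > threshold:
                A[i, j] = A[j, i] = 1
    return A

def cosine(x, y):
    # One possible latent function: cosine similarity of feature vectors.
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

# Three toy nodes: the first two have similar features, the third does not.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
A = build_graph(X, cosine, threshold=0.8)
# Only nodes 0 and 1 are connected under this latent function.
```

Under this view, a "graph family" is the set of graphs generated by the same latent function over different feature matrices, and a featureless graph can be handled by inverting the process, i.e., recovering features consistent with the observed structure.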