
# Integration of scientific and social networks

Abstract

In this paper, we address the problem of scientific-social network integration: finding a matching relationship between members of two such networks (here, the DBLP publication network and the Twitter social network). This task is a crucial step toward building a multi-environment expert-finding system, a topic that has recently attracted much attention in the information retrieval community. We divide the integration problem into two subproblems. The first is to find profiles in one network that presumably have a corresponding profile in the other network; the second is name disambiguation, i.e., identifying the true matching profiles among the candidate pairs. Utilizing several name-similarity patterns and contextual properties of these networks, we design a focused crawler to find highly probable matching pairs, which reduces name disambiguation to predicting the label of each candidate pair as either a true or a false match. Because the labels of these candidate pairs are not independent, state-of-the-art classification methods such as logistic regression and decision trees, which classify each instance separately, are unsuitable for this task. By defining a matching dependency graph, we propose a joint label prediction model that determines the labels of all candidate pairs simultaneously. The model accounts for two main types of dependencies among candidate pairs, both of which are quite intuitive and general. Using discriminative approaches, we utilize various feature sets to train the proposed classifiers. An extensive set of experiments conducted on six test collections gathered from the DBLP and Twitter networks demonstrates the effectiveness of the proposed joint label prediction model.
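The candidate-generation step described above can be illustrated with a minimal sketch. The paper's actual name-similarity patterns and crawler logic are not reproduced here; the `initials_match` pattern below (matching an abbreviated first name against a full one, e.g. "J. Smith" vs. "John Smith") and the similarity threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher


def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Generic string similarity, a stand-in for richer name patterns."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def initials_match(dblp_name: str, twitter_name: str) -> bool:
    """One example pattern: 'J. Smith' on Twitter vs. 'John Smith' on DBLP."""
    d_parts, t_parts = dblp_name.lower().split(), twitter_name.lower().split()
    if len(d_parts) != len(t_parts):
        return False
    return all(t == d or (t.rstrip('.') == d[0] and len(t) <= 2)
               for d, t in zip(d_parts, t_parts))


def candidate_pairs(dblp_profiles, twitter_profiles):
    """Emit (dblp, twitter) pairs that pass at least one similarity pattern."""
    for d in dblp_profiles:
        for t in twitter_profiles:
            if similar(d, t) or initials_match(d, t):
                yield d, t


pairs = list(candidate_pairs(["John Smith", "Alice Liu"],
                             ["J. Smith", "Bob Stone", "Alice Liu"]))
print(pairs)  # both true matches survive as candidate pairs
```

In the paper, pairs surviving this filter then go to the joint label prediction model, which decides true vs. false matches jointly rather than per pair.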


Related publications (19)

This thesis is devoted to information-theoretic aspects of community detection. Community detection is important because of the massive amount of scientific data available today that describes relationships between items in a network, e.g., a social network. The items in such a network can be inherently partitioned into a known number of communities, but the partition can only be inferred from the data.
To estimate the underlying partition, data scientists can apply any type of advanced statistical technique; but the data could be very noisy, or the amount of data could be inadequate. A fundamental question here concerns the possibility of weak recovery: does the data contain enough information to enable a non-trivial estimate of the partition?
For the purpose of mathematical analysis, the above problem can be formulated as Bayesian inference on generative models. These models, including the stochastic block model (SBM) and the censored block model (CBM), consider a random graph generated from a hidden partition that divides the nodes of the graph into labelled groups. In the SBM, nodes are connected with a probability that depends on the labels of the endpoints. In the CBM, by contrast, hidden variables are measured through a noisy channel, and the measurement outcomes form a weighted graph. In both models, inference is the task of recovering the hidden partition from the observed graph. The criterion for weak recovery can be studied via an information-theoretic quantity called mutual information. Once the asymptotic mutual information is computed, phase transitions for weak recovery can be located.
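The SBM just described can be made concrete with a minimal generator. The two-group setup and the particular edge probabilities below are chosen arbitrarily for the sketch:

```python
import random


def sample_sbm(n, p_in, p_out, seed=0):
    """Sample a two-group stochastic block model on n nodes.

    Each node gets a hidden label in {0, 1}; an edge appears with
    probability p_in if the endpoints share a label, p_out otherwise.
    """
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return labels, edges


labels, edges = sample_sbm(200, p_in=0.10, p_out=0.02)
within = sum(labels[i] == labels[j] for i, j in edges)
print(f"{len(edges)} edges, {within} within-group")  # within-group edges dominate
```

Inference, in the sense of the thesis, is the reverse direction: observing only `edges` and trying to produce an estimate of `labels` that correlates non-trivially with the truth.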
This thesis pertains to rigorous derivations of single-letter variational expressions for the asymptotic mutual information for models in community detection. These variational expressions, known as the replica predictions, come from heuristic methods of statistical physics. We present our development of new rigorous methods for confirming the replica predictions. These methods are based on extending the recently introduced adaptive interpolation method.
We prove the replica prediction for the SBM in the dense-graph regime with two groups of asymmetric size. The existing proofs in the literature are indirect, as they involve mapping the model to an external problem whose mutual information is determined by a combination of methods. Here, on the contrary, we provide a self-contained and direct proof.
Next, we extend this method to sparse models. Before this thesis, adaptive interpolation was known to provide a conceptually simple proof of replica predictions for dense graphs. For a sparse graph, by contrast, the replica prediction involves a more complicated variational expression, and rigorous confirmations are often lacking or obtained by rather complicated methods. We therefore focus on a simple version of the CBM on sparse graphs, where hidden variables are measured through a binary erasure channel, for which we fully prove the replica prediction by adaptive interpolation.
The key for extending the adaptive interpolation to a broader class of sparse models is a concentration result for the so-called "multi-overlaps". This concentration forms the basis of the replica "symmetric" prediction. We prove this concentration result for a related sparse model in the context of physics. This provides inspiration for further development of the adaptive interpolation.

In our daily lives, our mobile phones sense our movements and interactions via a rich set of embedded sensors such as GPS, Bluetooth, accelerometers, and microphones. This enables us to use mobile phones as agents for collecting spatio-temporal data. The idea of mining these spatio-temporal data is currently being explored for many applications, including environmental pollution monitoring, health care, and social networking. When used as sensing devices, a particular feature of mobile phones, in contrast to static sensors, is their mobility. Furthermore, despite these useful applications, collecting data from mobile phones introduces privacy concerns, as the collected data might reveal sensitive information about the users, especially if the collector has access to auxiliary information. In the first part of this thesis, we use spatio-temporal data collected by mobile phones to evaluate different features of a population related to their mobility patterns. In particular, we consider the problems of population-size and population-density estimation, which have applications in crowd monitoring, activity-hotspot detection, and urban analysis, among others. We first conduct an experiment where ten attendees of an open-air music festival act as Bluetooth probes. Next, we construct parametric statistical models to estimate the total number of visible Bluetooth devices and their density in the festival area. We further test our proposed models against Wi-Fi traces obtained on the EPFL campus. We conclude that mobile phones can be effectively used as sensing devices to evaluate mobility-related parameters of a population. For the specific problem of population-density estimation, we investigate the mobility aspect of sensing: We quantitatively analyze the performance of mobile sensors compared to static sensors.
Under an independent and identically distributed mobility model for the population, we derive the optimal random-movement strategy for mobile sensors in order to yield the best estimate of population density (in the mean-squared error sense). This enables us to plan an adaptive trajectory for the mobile sensors. In particular, we demonstrate that mobility brings an added value to the sensors; these sensors outperform static sensors for long observation intervals. In the second part of this thesis, we analyze the vulnerability of anonymized mobility statistics stored in the form of histograms. We consider an attacker who has access to an anonymized set of histograms of a set of users’ mobility traces and to an independent set of non-anonymized histograms of traces belonging to the same users. We study the hypothesis-testing problem of identifying the correct matching between the anonymized histograms and the non-anonymized histograms. We show that the solution can be obtained by using a minimum-weight matching algorithm on a complete weighted bipartite graph. By applying the algorithm to Wi-Fi traces obtained on the EPFL campus, we demonstrate that in fact anonymized histograms contain a significant amount of information that could be used to uniquely identify users by an attacker with access to auxiliary information about the users. Finally, we demonstrate how trust relationships between users can be exploited to enhance their privacy. We consider the specific problem of the privacy-preserving computation of functions of data that belong to users in a social network. An example of an application is a poll or a survey on a private issue. Most of the known non-cryptographic solutions to this problem can be viewed as belonging to one of the following two extreme regimes. The first regime is when every user trusts only herself and she is responsible for protecting her own privacy. In other words, the circle of trust of a user has a single member: herself. 
In the second regime, every user trusts herself and the server, but not any of the other users. In other words, the circle of trust of a user comprises herself and the server. We investigate this problem under the assumption that users are willing to share their private data with trusted friends in the network, hence we consider a regime in which the circle of trust of a user consists of herself and her friends. Thus, our approach falls in-between the two mentioned regimes. Our algorithm consists of first partitioning users into circles of trust and then computing the global function by using results of local computations within each circle. We demonstrate that such trust relationships can be exploited to significantly improve the tradeoff between the privacy of users' data and the accuracy of the computation.
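The histogram de-anonymization attack described above (matching anonymized histograms to non-anonymized ones via minimum-weight matching on a complete bipartite graph) can be sketched on a toy instance. The L1 distance as edge weight is an illustrative assumption, and the tiny instance is solved by brute force rather than by a proper assignment algorithm:

```python
from itertools import permutations


def l1(h1, h2):
    """L1 distance between two histograms over the same bins."""
    return sum(abs(a - b) for a, b in zip(h1, h2))


def match_histograms(anon, known):
    """Minimum-weight perfect matching on the complete bipartite graph.

    Brute force over permutations; at scale an attacker would use the
    Hungarian algorithm instead.
    """
    n = len(anon)
    best = min(permutations(range(n)),
               key=lambda perm: sum(l1(anon[i], known[perm[i]])
                                    for i in range(n)))
    return list(best)


# Toy mobility histograms (visit counts per location bin).
known = [[9, 1, 0], [2, 2, 6], [0, 5, 5]]   # users A, B, C
anon = [[1, 5, 4], [8, 2, 0], [3, 1, 6]]    # shuffled, slightly noisy copies
print(match_histograms(anon, known))  # -> [2, 0, 1]: anon[0]=C, anon[1]=A, anon[2]=B
```

Even with noise, the minimum-weight matching recovers the correct user for every anonymized histogram, which is the vulnerability the thesis quantifies on real Wi-Fi traces.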

Networks are data structures that are fundamental for capturing and analyzing complex interactions between objects. While they have been used for decades to solve problems in virtually all scientific fields, their usage for data analysis in real-world practical applications deserves to be further investigated. In this thesis, we explore multiple aspects of network science and show how the design of new graph-based approaches offers an unprecedented depth for analyzing complex datasets. Through the study of practical applications, we demonstrate how to extract key findings in several domains such as digital humanities, recommender systems, social behavior, neuroscience, and knowledge discovery. First, we define the data-science workflow in a concise manner. We present the tools, techniques, and questions that the practitioner needs to have in mind when addressing a new large-scale problem, as they are of tremendous importance if one wants to apply network science concepts to real applications. Building on this foundation chapter, we demonstrate the worth of networks for music recommendation with Genezik, our smart playlist application that adapts to user taste. Using signal processing, machine learning, and graphs, we show how to improve the performance of recommender systems and propose a radically different user experience that has yet to be found in competing systems. We then introduce the causal multilayer graph of activity, a novel graph method dedicated to the analysis of dynamical processes over networks. More than a data structure, we present a data analysis approach that tracks the spreading or propagation of events through time in a scalable manner by efficiently combining a network with values associated with its vertices.
Used in four different applications, the analysis of spatio-temporal patterns of activity extracted from the causal multilayer graph helps us better understand, for instance, how rumors spread in social networks or how brain regions interact in resting state. Finally, we study the browsing behavior of millions of people on Wikipedia and show how to extract contextual patterns of activity that reflect what is collectively remembered from past events. Based on their analysis, we confirm social studies on human behavior and conclude by revealing some of the rules that define human curiosity.
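The abstract describes the causal multilayer graph only at a high level; a hedged sketch of one plausible construction is the following, where we assume a causal edge links an active vertex at time t to each of its active neighbors at time t + 1 (the exact construction in the thesis may differ):

```python
def causal_multilayer_edges(adjacency, activity):
    """Build causal edges across time layers.

    adjacency: {vertex: set of neighbors} for the static network.
    activity:  {time: set of vertices active at that time}.
    Returns edges ((u, t), (v, t + 1)) tracing possible propagation paths.
    """
    times = sorted(activity)
    edges = []
    for t, t_next in zip(times, times[1:]):
        for u in activity[t]:
            # Only active neighbors in the next layer receive a causal edge.
            for v in adjacency.get(u, set()) & activity[t_next]:
                edges.append(((u, t), (v, t_next)))
    return edges


adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
act = {0: {"a"}, 1: {"b"}, 2: {"c"}}
print(causal_multilayer_edges(adj, act))  # [(('a', 0), ('b', 1)), (('b', 1), ('c', 2))]
```

Paths in this layered graph then correspond to candidate spreading chains (e.g., a rumor passing from user to user), which is the kind of spatio-temporal pattern the thesis extracts and analyzes.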