Graph Neural Networks (GNNs) have emerged as a powerful tool for learning on graphs, demonstrating exceptional performance in various domains. However, as GNNs become increasingly popular, new challenges arise. One of the most pressing is the need to ensur ...
With the pervasive digitalization of modern life, we benefit from efficient access to information and services. Yet, this digitalization poses severe privacy challenges, especially for special-needs individuals. Beyond being a fundamental human right, priv ...
Security system designers favor worst-case security metrics, such as those derived from differential privacy (DP), due to the strong guarantees they provide. On the downside, these guarantees result in a high penalty on the system's performance. In this pa ...
Lawful Interception (LI) is a legal obligation of Communication Service Providers (CSPs) to provide interception capabilities to Law Enforcement Agencies (LEAs) in order to gain insightful data from network communications for criminal proceedings, e.g., ne ...
We develop an exchange rate target zone model with finite exit time and non-Gaussian tails. We show how the tails are a consequence of time-varying investor risk aversion, which generates mean-preserving spreads in the fundamental distribution. We solve ex ...
Federated Learning is by nature susceptible to low-quality, corrupted, or even malicious data that can severely degrade the quality of the learned model. Traditional techniques for data valuation cannot be applied, as the data is never revealed. We present ...
Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is an attacker perturbing a confidently classified input to produce a confident misclassification. In this paper, we consider the attack in which a small number ...
The increasing diffusion of novel digital and online sociotechnical systems for arational behavioral influence based on Artificial Intelligence (AI), such as social media, microtargeting advertising, and personalized search algorithms, has brought about ne ...
Mechanisms used in privacy-preserving machine learning often aim to guarantee differential privacy (DP) during model training. Practical DP-ensuring training methods use randomization when fitting model parameters to privacy-sensitive data (e.g., adding Ga ...
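The snippet above alludes to the randomization that practical DP training injects while fitting model parameters. The following is a minimal illustrative sketch of that idea in the DP-SGD style (per-example gradient clipping plus Gaussian noise); the toy linear model and the values of clip_norm and sigma are assumptions for illustration, not details taken from the referenced work.

```python
# Minimal sketch of Gaussian-noise randomization as used in DP-SGD-style training.
# Illustrative only: the toy squared-loss model, clip_norm, and sigma are assumed here,
# not drawn from the paper referenced in the snippet above.
import numpy as np

rng = np.random.default_rng(0)

def dp_gradient_step(w, X, y, lr=0.1, clip_norm=1.0, sigma=1.0):
    """One gradient step on squared loss with per-example clipping and Gaussian noise."""
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w
    residuals = X @ w - y                      # shape (n,)
    grads = residuals[:, None] * X             # shape (n, d)

    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)

    # Average the clipped gradients and add Gaussian noise scaled to the clipping bound
    noisy_grad = grads.mean(axis=0) + rng.normal(
        scale=sigma * clip_norm / len(X), size=w.shape
    )
    return w - lr * noisy_grad

# Toy usage on synthetic data
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=64)
w = np.zeros(3)
for _ in range(200):
    w = dp_gradient_step(w, X, y)
print(w)
```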
One major challenge in distributed learning is to learn efficiently for each client when the data across clients is heterogeneous, or non-IID (not independent and identically distributed). This poses a significant challenge, as the data of the other client ...