In recent years, we have seen a marked increase in disinformation, including as part of a strategy of so-called hybrid warfare. Adversaries not only directly spread misleading content but also manipulate social media by employing sophisticated techniques that exploit platform vulnerabilities and evade detection. It is becoming increasingly important to analyze social media manipulation in order to understand it, detect it, and defend public dialogue against it.

In this thesis, we contribute to the research on social media manipulation by describing and analyzing how adversaries employ compromised social media accounts. We begin by providing background on social media: we describe the mechanisms and the influence of the platforms to better understand why adversaries target them. We then give a detailed overview of social media manipulation and of the techniques to detect and counter it. Next, we present our contributions: 1) an extensive analysis of an attack on social media algorithms using compromised accounts, 2) a study of the implications of compromised bots for bot research through the characterization of retweet bots, and 3) a detection method for compromised accounts that are later repurposed.

First, we uncover and analyze a previously unknown, ongoing astroturfing attack on the popularity mechanisms of social media platforms: ephemeral astroturfing attacks. In this attack, a chosen keyword or topic is artificially promoted by coordinated and inauthentic activity so that it appears popular. Crucially, this activity is deleted as part of the attack, which makes it practical to use compromised accounts that are still managed by their original owners. We detected over 19,000 unique fake trends promoted by over 108,000 accounts. Trends astroturfed by these attacks account for at least 20% of the top 10 global trends. We created a Twitter bot to detect the attacks in real time and inform the public.

Second, we study the implications of compromised accounts for bot research. We do this by characterizing retweet bots that we uncovered by purchasing retweets on the black market. We determine that these accounts were compromised: they exhibit anomalous behavior, or self-state that they have been hacked. We then analyze how they differ from human-controlled accounts. From our findings on the nature and life cycle of retweet bots, we point out several inconsistencies between the retweet bots used in this work and the bots studied in prior work. Our findings challenge some of the fundamental assumptions about bots, in particular about how to detect them.

Third, we define, describe, and provide a detection method for misleading repurposing, in which an adversary changes the identity of a potentially compromised social media account, among other things via changes to its profile attributes, in order to use it for a new purpose while retaining its followers. We propose a methodology to flag repurposed accounts and use it to detect over 100,000 such accounts. We also characterize repurposed accounts and find that repurposing is more likely to follow a period of inactivity and the deletion of old tweets. We additionally present a tool to flag accounts that first became popular and were later repurposed.

Our work is significant in showing how breaches of user security jeopardize platform security and public dialogue. Furthermore, it deepens the understanding of how bots and troll accounts work, and aids platforms and researchers in building new defenses.
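To make the signature of ephemeral astroturfing concrete, the following is a minimal sketch of the kind of heuristic the abstract describes: a trend is suspicious when most of the tweets that promoted it are deleted shortly after being posted. The data model, function names, and thresholds below are illustrative assumptions, not the detection system built in the thesis.

    # Illustrative heuristic for spotting ephemeral astroturfing, assuming we
    # already hold, for each trending keyword, the tweets observed while the
    # keyword was rising. All names (Tweet, trend_tweets, thresholds) are
    # hypothetical.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Tweet:
        posted_at: datetime
        deleted_at: Optional[datetime]  # None if the tweet still exists

    def is_ephemeral_astroturfed(trend_tweets: list[Tweet],
                                 deletion_window: timedelta = timedelta(minutes=10),
                                 deleted_ratio: float = 0.8) -> bool:
        """Flag a trend whose promoting tweets vanish almost immediately.

        The attack deletes its coordinated tweets as part of the attack, so a
        high share of quickly-deleted promoting tweets is suspicious.
        """
        if not trend_tweets:
            return False
        quickly_deleted = sum(
            1 for t in trend_tweets
            if t.deleted_at is not None
            and t.deleted_at - t.posted_at <= deletion_window
        )
        return quickly_deleted / len(trend_tweets) >= deleted_ratio

A real-time detector such as the Twitter bot mentioned above would presumably combine this deletion signal with richer evidence, for example lexical patterns of the promoting tweets and coordination among the posting accounts.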
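One compromise signal mentioned in the second contribution, anomalous behavior, can likewise be illustrated with a small sketch: an account that is dormant for months and then abruptly produces a high-volume burst of activity. The day-count representation and the thresholds are assumptions made for this sketch only.

    # Illustrative check for one compromise signal: a long silent gap in an
    # account's timeline immediately followed by a high-volume burst.
    from datetime import date

    def dormancy_then_burst(activity: dict[date, int],
                            min_gap_days: int = 180,
                            burst_per_day: int = 50) -> bool:
        """Return True if the per-day activity series contains a long gap
        immediately followed by a high-volume day."""
        days = sorted(activity)
        for prev, nxt in zip(days, days[1:]):
            gap = (nxt - prev).days
            if gap >= min_gap_days and activity[nxt] >= burst_per_day:
                return True
        return False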
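Similarly, the repurposing pattern of the third contribution can be sketched as a heuristic over periodic profile snapshots: the account's identity attributes change wholesale while its follower base is retained. Field names and cutoffs below are hypothetical; the methodology proposed in the thesis is more elaborate.

    # Hypothetical sketch of flagging a repurposed account from two profile
    # snapshots taken at different times.
    from dataclasses import dataclass
    from difflib import SequenceMatcher

    @dataclass
    class ProfileSnapshot:
        screen_name: str
        display_name: str
        bio: str
        followers: int

    def looks_repurposed(before: ProfileSnapshot, after: ProfileSnapshot,
                         similarity_cutoff: float = 0.3) -> bool:
        """Flag an account whose identity attributes changed wholesale while
        its audience was retained, a pattern consistent with misleading
        repurposing as defined above."""
        def sim(a: str, b: str) -> float:
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        identity_changed = (
            sim(before.screen_name, after.screen_name) < similarity_cutoff
            and sim(before.display_name, after.display_name) < similarity_cutoff
            and sim(before.bio, after.bio) < similarity_cutoff
        )
        followers_retained = after.followers >= 0.8 * before.followers
        return identity_changed and followers_retained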