Ranking systems, such as those in product review sites, and various recommender systems usually employ user ratings to rank items by their quality and/or popularity. Since higher-ranked items are more likely to be selected by customers and thus yield more revenue for their providers, providers of unpopular and low-quality items have strong incentives to strategically manipulate their ranking. However, rank manipulation attempts are also costly, especially in the presence of trust management mechanisms. This paper analyzes the cost-effectiveness of adversarial strategies for manipulating these rankings from various perspectives and identifies the optimal adversary strategies. In particular, we analyze the probability of a successful attack by an adversary in relation to (1) its computational and financial power to generate a certain number of fake identities and insert dishonest ratings into the system, and (2) the capability of the employed trust mechanism to detect and eliminate malicious ratings. By means of extensive simulation experiments, we also estimate the impact of the trust management mechanism on the attack costs in a variety of settings, and we compare the success probabilities of different adversary attack strategies to confirm our theoretical results. Additionally, we provide a preliminary analysis and experimental evaluation of the effectiveness of adversary attacks in the case of multiple rival adversaries. Our research provides new insight into the objective-oriented design of trust-based ranking systems and their attack resilience from an economic perspective.
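The abstract describes the analysis only at a high level; the sketch below is a toy Monte Carlo illustration of the kind of quantity being estimated, not the paper's actual model. It assumes an item whose honest ratings cluster around a true quality, an adversary that injects a chosen number of maximal ratings from fake identities, and a trust mechanism that filters each dishonest rating independently with some detection probability. All names and parameter values (`attack_success_prob`, `honest_mean`, `target_rank_score`, `detect_prob`, and so on) are assumptions introduced here for illustration.

```python
import random

def attack_success_prob(n_fake, detect_prob, honest_mean=3.5, honest_n=50,
                        fake_rating=5.0, target_rank_score=4.0, trials=10_000):
    """Toy Monte Carlo estimate of the probability that an adversary who injects
    `n_fake` maximal ratings pushes its item's average score above
    `target_rank_score`, when the trust mechanism removes each fake rating
    independently with probability `detect_prob`. All modelling choices here are
    illustrative assumptions, not the paper's formal model."""
    successes = 0
    for _ in range(trials):
        # Honest ratings drawn around the item's true quality (toy assumption).
        honest = [random.gauss(honest_mean, 0.5) for _ in range(honest_n)]
        # Each dishonest rating survives filtering with probability 1 - detect_prob.
        surviving_fakes = sum(random.random() > detect_prob for _ in range(n_fake))
        score = (sum(honest) + surviving_fakes * fake_rating) / (honest_n + surviving_fakes)
        successes += score >= target_rank_score
    return successes / trials

if __name__ == "__main__":
    # Sweep the adversary's budget of fake identities against detection strength.
    for n_fake in (10, 25, 50, 100):
        for d in (0.0, 0.5, 0.9):
            print(f"fakes={n_fake:3d}  detect_prob={d:.1f}  "
                  f"P(success)~{attack_success_prob(n_fake, d):.3f}")
```

Sweeping the number of fake identities against the detection probability mirrors the trade-off the abstract describes: a stronger filtering mechanism forces the adversary to create more fake identities to reach the same rank, which raises the cost of a successful manipulation.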