Reputation-based trust models using statistical learning have been studied intensively for distributed systems in which peers behave maliciously. However, practical applications of such models in environments with both malicious and rational behaviors remain poorly understood. This paper studies the relation between the accuracy of a computational trust model and its ability to effectively enforce cooperation among rational agents. We provide theoretical results showing under which conditions cooperation emerges when using a trust learning algorithm of given accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of such algorithms. We then verify and extend these theoretical results in a variety of settings involving honest, malicious, and strategic players through extensive simulation. These results enable a much more targeted, cost-effective, and realistic design of decentralized trust management systems, such as those needed for peer-to-peer systems and electronic commerce.
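To make the accuracy-versus-cooperation relation concrete, here is a minimal simulation sketch. It is a hypothetical illustration, not the paper's model or experimental setup: it assumes a trust model that flags a defector with probability equal to its accuracy, and a rational agent that defects only when the one-shot gain (`gain_defect`) exceeds the expected sanction (`accuracy * loss_flagged`); honest agents always cooperate and malicious agents always defect. All function names, population mix, and payoff values are illustrative assumptions.

```python
import random

# Hypothetical sketch, NOT the paper's model: shows how the accuracy of a
# trust model can induce cooperation among rational agents in a population
# that also contains honest and malicious players.
COOPERATE, DEFECT = "C", "D"

def choose_action(agent_type: str, accuracy: float,
                  gain_defect: float, loss_flagged: float) -> str:
    """Pick an action for one interaction, by behavioral type."""
    if agent_type == "honest":
        return COOPERATE          # honest players always cooperate
    if agent_type == "malicious":
        return DEFECT             # malicious players always defect
    # Rational players defect only when the one-shot gain beats the
    # expected sanction: the trust model flags them with prob. `accuracy`.
    return DEFECT if gain_defect > accuracy * loss_flagged else COOPERATE

def cooperation_rate(accuracy: float, n_rounds: int = 10_000,
                     mix=(("honest", 0.3), ("malicious", 0.2), ("rational", 0.5)),
                     gain_defect: float = 1.0, loss_flagged: float = 2.5) -> float:
    """Fraction of interactions ending in cooperation (illustrative payoffs)."""
    types, weights = zip(*mix)
    coop = 0
    for _ in range(n_rounds):
        agent = random.choices(types, weights)[0]
        coop += choose_action(agent, accuracy, gain_defect, loss_flagged) == COOPERATE
    return coop / n_rounds

if __name__ == "__main__":
    # Cooperation jumps once accuracy reaches gain_defect / loss_flagged = 0.4.
    for acc in (0.2, 0.4, 0.6, 0.8, 1.0):
        print(f"accuracy={acc:.1f}  cooperation rate={cooperation_rate(acc):.2f}")
```

Under these illustrative payoffs, rational agents switch from defection to cooperation once the trust model's accuracy reaches the ratio gain_defect / loss_flagged (0.4 here), which conveys the threshold flavor of the conditions the abstract refers to: accuracy need only be high enough to make defection unprofitable, so a cheaper, less accurate learning algorithm can still sustain cooperation.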