Reputation-based trust models using statistical learning have been intensively studied for distributed systems in which peers behave maliciously. However, practical applications of such models in environments with both malicious and rational behaviors remain poorly understood. This paper studies the relation between the accuracy of a computational trust model and its ability to effectively enforce cooperation among rational agents. We provide theoretical results showing under which conditions cooperation emerges when using a trust learning algorithm of given accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of such algorithms. We then verify and extend these theoretical results to a variety of settings involving honest, malicious and strategic players through extensive simulations. These results enable a much more targeted, cost-effective and realistic design of decentralized trust management systems, such as those needed for peer-to-peer systems and electronic commerce.
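The accuracy-cooperation trade-off described above can be illustrated with a toy model. Note that all names, payoffs and population shares below are illustrative assumptions, not the paper's actual formulation: a rational player defects for a one-shot gain only when the expected penalty, i.e. the trust model's detection accuracy times the loss from being flagged, does not outweigh that gain, while honest players always cooperate and malicious players always defect.

```python
import random

def rational_action(accuracy, gain, loss):
    """Hypothetical rational-agent rule: cooperate when the expected
    detection penalty (accuracy * loss) outweighs the one-shot gain."""
    return "cooperate" if accuracy * loss >= gain else "defect"

def simulate(accuracy, n_rounds=1000, shares=(0.4, 0.3, 0.3),
             gain=1.0, loss=2.0, seed=42):
    """Toy population mix of honest, malicious and rational players
    (shares are assumptions); returns the overall cooperation rate."""
    rng = random.Random(seed)
    honest, malicious, _rational = shares
    coop = 0
    for _ in range(n_rounds):
        r = rng.random()
        if r < honest:
            coop += 1                       # honest: always cooperates
        elif r < honest + malicious:
            pass                            # malicious: always defects
        else:                               # rational: best-responds
            coop += rational_action(accuracy, gain, loss) == "cooperate"
    return coop / n_rounds
```

In this sketch, cooperation by rational players emerges exactly when `accuracy >= gain / loss`, so accuracy (and hence cost) can be lowered down to that threshold without losing their cooperation, which is the qualitative effect the abstract's theoretical results describe.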