Publication

Expressing Motivations By Facilitating Other’s Inverse Reinforcement Learning

Abstract

It is often necessary to understand each other's motivations in order to cooperate. Reaching such a mutual understanding requires two abilities: building models of others' motivations in order to understand them, and building a model of "my" motivations as perceived by others in order to be understood. Maintaining a self-image as seen by others requires two recursive orders of modeling, known in psychology as the first and second orders of theory of mind. In this paper, we present a second-order theory of mind cognitive architecture that aims to facilitate mutual understanding in multi-agent scenarios. We study different conditions of empathy and gratitude leading to irrational cooperation in the iterated prisoner's dilemma.
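
To make the game-theoretic setting concrete, here is a minimal sketch (not taken from the paper; the payoff values, the `empathy` weight, and the helper functions are illustrative assumptions) showing how weighting a partner's payoff into one's own utility can flip the best response in a one-shot prisoner's dilemma from defection to cooperation, which is one simple way empathy-like terms can produce "irrational" cooperation.

```python
# Standard prisoner's dilemma payoffs: (my_payoff, other_payoff),
# indexed by (my_action, other_action); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def empathic_utility(my_action, other_action, empathy):
    """Mix my payoff with the partner's payoff.

    `empathy` in [0, 1] is a hypothetical weight on the partner's payoff;
    empathy = 0 recovers the purely selfish game.
    """
    mine, theirs = PAYOFFS[(my_action, other_action)]
    return (1 - empathy) * mine + empathy * theirs

def best_response(other_action, empathy):
    """Action maximising empathic utility against a fixed partner move."""
    return max("CD", key=lambda a: empathic_utility(a, other_action, empathy))

if __name__ == "__main__":
    for empathy in (0.0, 0.25, 0.5):
        vs_c = best_response("C", empathy)
        vs_d = best_response("D", empathy)
        print(f"empathy={empathy:.2f}: best response vs C = {vs_c}, vs D = {vs_d}")
```

With these payoffs, defection dominates for empathy below roughly 0.4, while at empathy = 0.5 cooperating against a cooperator becomes the best response. The paper's architecture is of course richer (second-order theory of mind and facilitating the partner's inverse reinforcement learning); this sketch only illustrates the underlying dilemma and the effect of an other-regarding utility term.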
