By using other agents' experiences and knowledge, a learning agent can learn faster, make fewer mistakes, and derive rules for unseen situations. These benefits are gained only if the learning agent can extract, from the other agents' knowledge, rules suited to its own requirements. One possible approach is for the learner to assign an expertness value (a measure of intelligence level) to each of the other agents and to use their knowledge accordingly. In this paper, some criteria for measuring the expertness of reinforcement learning agents are introduced. Also, a new cooperative learning method, called Weighted Strategy Sharing, is presented. In this method, each agent measures the expertness of its teammates, assigns a weight to their knowledge, and learns from them accordingly. The presented methods are tested on two hunter-prey systems. Agents that all learn from one another are compared with agents that cooperate only with the more expert ones. Also, the effect of communication noise, as a source of uncertainty, on the cooperative learning method is studied. Moreover, the Q-table of one of the cooperative agents is changed randomly and its effect on the presented methods is examined.
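The abstract describes expertness-weighted blending of teammates' Q-tables. Below is a minimal sketch of that idea, assuming each agent holds a Q-table of the same shape and a non-negative expertness score; the function name, the `impressibility` parameter, and the normalization scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_strategy_sharing(q_tables, expertness, impressibility=0.5):
    """Blend each agent's Q-table with its teammates' Q-tables,
    weighting every teammate in proportion to its expertness.

    q_tables       : list of np.ndarray, one Q-table per agent (same shape)
    expertness     : list of non-negative expertness values, one per agent
    impressibility : fraction of the new table taken from teammates;
                     the rest is retained from the agent's own table
    """
    q_tables = [np.asarray(q, dtype=float) for q in q_tables]
    expertness = np.asarray(expertness, dtype=float)
    new_tables = []
    for i, q_own in enumerate(q_tables):
        # Weights over teammates, proportional to their expertness values.
        weights = expertness.copy()
        weights[i] = 0.0
        total = weights.sum()
        if total == 0.0:
            # No expert teammates to learn from: keep the agent's own table.
            new_tables.append(q_own.copy())
            continue
        weights /= total
        teammates = sum(w * q for w, q in zip(weights, q_tables))
        new_tables.append((1.0 - impressibility) * q_own + impressibility * teammates)
    return new_tables
```

Setting `impressibility` to 0 recovers independent learning, while larger values make an agent rely more heavily on its more expert teammates, which is the trade-off the experiments in the paper examine.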