We revisit the problem of solving two-player zero-sum games in the decentralized setting. We propose a simple algorithmic framework that simultaneously achieves the best rates for honest regret and adversarial regret, and in addition resolves the open problem of removing the logarithmic terms in convergence to the value of the game. We achieve this goal in three steps. First, we provide a novel analysis of optimistic mirror descent (OMD), showing that it can be modified to guarantee fast convergence for both honest regret and the value of the game when the players play collaboratively. Second, we propose a new algorithm, dubbed robust optimistic mirror descent (ROMD), which attains optimal adversarial regret without knowing the time horizon in advance. Finally, we propose a simple signaling scheme that bridges OMD and ROMD to achieve the best of both worlds. Numerical examples support our theoretical claims and show that our non-adaptive ROMD algorithm is competitive with OMD under adaptive step-size selection.
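To make the setting concrete, the following is a minimal sketch of optimistic mirror descent with the entropy regularizer (optimistic multiplicative weights) on a zero-sum matrix game; it is illustrative only and not the paper's exact method. The game matrix, step size `eta`, and horizon `T` are assumptions chosen for the example.

```python
import numpy as np

def optimistic_mwu(A, T=2000, eta=0.1):
    """Optimistic multiplicative weights (entropy-regularized OMD) for the
    zero-sum matrix game min_x max_y x^T A y.  Sketch: eta and T are
    illustrative, not tuned."""
    n, m = A.shape
    x = np.ones(n) / n            # row player's mixed strategy
    y = np.ones(m) / m            # column player's mixed strategy
    gx_prev = np.zeros(n)         # previous loss vectors, used by the
    gy_prev = np.zeros(m)         # optimistic "prediction" term
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        gx, gy = A @ y, -A.T @ x  # each player's loss vector this round
        # optimistic step: descend on 2*g_t - g_{t-1} instead of g_t
        x = x * np.exp(-eta * (2 * gx - gx_prev)); x /= x.sum()
        y = y * np.exp(-eta * (2 * gy - gy_prev)); y /= y.sum()
        gx_prev, gy_prev = gx, gy
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T

# Example game with value 0.2 and equilibrium (0.4, 0.6) for both players.
A = np.array([[2.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = optimistic_mwu(A)
# duality gap of the average iterates; shrinks toward 0 as T grows
gap = float(np.max(x_bar @ A) - np.min(A @ y_bar))
```

The optimistic gradient `2*g_t - g_{t-1}` is what distinguishes OMD from vanilla mirror descent: when the opponent's play changes slowly, it acts as a one-step prediction and yields the fast convergence the abstract alludes to.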