We prove that every online learnable class of functions of Littlestone dimension d admits a learning algorithm with finite information complexity. To this end, we use the notion of a globally stable algorithm. In general, the information complexity of such a globally stable algorithm is finite but large, roughly exponential in d. We also show that there is room for improvement: for a canonical online learnable class, indicator functions of affine subspaces of dimension d, the information complexity can be upper bounded logarithmically in d.
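For concreteness, the following is a minimal sketch of the quantity being bounded, stated under the standard definition of information complexity as the mutual information between the training sample and the learner's output; the symbol $\mathrm{IC}_n$ and the supremum over distributions are our notational choices, not taken from the abstract itself.

% Sketch (notation ours): information complexity of a learner A.
% S = (Z_1, ..., Z_n) is an i.i.d. training sample drawn from a
% distribution D over labeled examples, and A(S) is the (possibly
% randomized) output hypothesis of the learner. I(.;.) denotes mutual
% information, taken over the randomness of both S and A.
\[
  \mathrm{IC}_n(A) \;=\; \sup_{D}\; I\bigl(S;\, A(S)\bigr),
  \qquad S \sim D^n .
\]
% Under this reading, the results above say: every class of Littlestone
% dimension d admits a learner A with finite IC_n(A), generically of
% order exponential in d, and only logarithmic in d for indicators of
% d-dimensional affine subspaces.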