Meta-learning is the core capability that enables intelligent systems to rapidly generalize prior experience to new tasks. Optimization-based methods formalize meta-learning as a bi-level optimization problem, i.e. a nested optimization framework in which meta-parameters are optimized (learned) at the outer level, while task-specific parameters are optimized at the inner level. In this paper, we introduce RMAML, a meta-learning method that enforces orthogonality constraints on the bi-level optimization problem. We develop a geometry-aware framework that generalizes bi-level optimization to the Riemannian (constrained) setting. Using Riemannian operations such as orthogonal projection, retraction, and parallel transport, the bi-level optimization is reformulated so that it respects the Riemannian geometry. Moreover, we observe that more stable optimization and improved generalization are achieved when the parameters and meta-parameters of the method are modeled on a Stiefel manifold. We show empirically that RMAML reaches competitive performance against several state-of-the-art algorithms for few-shot classification and consistently outperforms its Euclidean counterpart, MAML. For example, by using the geometry of the Stiefel manifold to structure the fully connected layers of a deep neural network, we achieve a 7% increase in single-domain few-shot classification accuracy. For cross-domain few-shot learning, RMAML outperforms MAML by up to 9% accuracy. Our ablation study also demonstrates the effectiveness of RMAML over MAML, achieving higher accuracy with fewer tasks and/or inner-level updates.
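As a rough illustration of the Riemannian operations the abstract mentions, the sketch below implements one Riemannian gradient step on the Stiefel manifold St(n, p) of orthonormal n-by-p matrices: the Euclidean gradient is first orthogonally projected onto the tangent space at the current point, and the update is then mapped back onto the manifold with a QR-based retraction. This is a minimal NumPy sketch of the general technique, not the paper's implementation; the function names and the toy quadratic loss are hypothetical.

```python
import numpy as np

def project_tangent(X, G):
    """Orthogonally project a Euclidean gradient G onto the tangent space
    of the Stiefel manifold St(n, p) at X (where X^T X = I_p):
    P_X(G) = G - X sym(X^T G)."""
    XtG = X.T @ G
    return G - X @ ((XtG + XtG.T) / 2.0)

def retract_qr(X, V):
    """QR-based retraction: map the tangent vector V at X back onto the
    manifold via the Q factor of a thin QR decomposition."""
    Q, R = np.linalg.qr(X + V)
    # Flip column signs so diag(R) >= 0, making the retraction well defined.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def riemannian_sgd_step(X, euclid_grad, lr=0.1):
    """One Riemannian gradient step: project, then retract."""
    xi = project_tangent(X, euclid_grad)
    return retract_qr(X, -lr * xi)

# Toy usage (hypothetical): keep a 5x3 weight matrix orthonormal while
# descending f(X) = 0.5 * ||X - A||_F^2, whose Euclidean gradient is X - A.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((5, 3)))
A = rng.standard_normal((5, 3))
for _ in range(100):
    X = riemannian_sgd_step(X, X - A, lr=0.1)
print(np.allclose(X.T @ X, np.eye(3), atol=1e-8))  # True: X stays on St(5, 3)
```

In an RMAML-style bi-level scheme, a project-and-retract update of this kind would stand in for the plain Euclidean gradient step of MAML's inner loop, so the task-adapted weights remain orthonormal throughout adaptation.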