Meta-learning is the core capability that enables intelligent systems to rapidly generalize from prior experience to learn new tasks. Optimization-based methods typically formalize meta-learning as a bi-level optimization problem, i.e., a nested optimization framework in which meta-parameters are optimized (or learned) at the outer level while the inner level optimizes the task-specific parameters. In this paper, we introduce RMAML, a meta-learning method that imposes orthogonality constraints on the bi-level optimization problem. We develop a geometry-aware framework that generalizes the bi-level optimization problem to the Riemannian (constrained) setting. Using Riemannian operations such as orthogonal projection, retraction, and parallel transport, the bi-level optimization is reformulated so that it respects the Riemannian geometry. Moreover, we observe that more stable optimization and improved generalization can be achieved when the parameters and meta-parameters of the method are modeled on a Stiefel manifold. We empirically show that RMAML reaches competitive performance against several state-of-the-art algorithms for few-shot classification and consistently outperforms its Euclidean counterpart, MAML. For example, by using the geometry of the Stiefel manifold to structure the fully connected layers in a deep neural network, a 7% increase in single-domain few-shot classification accuracy is achieved. For cross-domain few-shot learning, RMAML outperforms MAML by up to 9% in accuracy. Our ablation study also demonstrates the effectiveness of RMAML over MAML in terms of higher accuracy with fewer tasks and/or inner-level updates.
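To make the geometric machinery concrete, the following is a minimal NumPy sketch, not the paper's implementation, of the two Stiefel-manifold operations the abstract relies on: orthogonally projecting a Euclidean gradient onto the tangent space at a point X with orthonormal columns, and retracting the resulting step back onto the manifold via a QR decomposition. The helper names project_tangent, retract_qr, and riemannian_sgd_step are illustrative assumptions.

import numpy as np

def project_tangent(X, G):
    # Orthogonal projection of an ambient gradient G onto the tangent
    # space of the Stiefel manifold St(n, p) at X (where X^T X = I_p):
    # P_X(G) = G - X sym(X^T G), with sym(A) = (A + A^T) / 2.
    XtG = X.T @ G
    sym = (XtG + XtG.T) / 2.0
    return G - X @ sym

def retract_qr(X, V):
    # QR-based retraction: map the tangent step V at X back onto the
    # manifold. The sign correction makes the Q factor unique and keeps
    # the retraction smooth.
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def riemannian_sgd_step(X, euclidean_grad, lr=0.1):
    # One gradient update that respects the Stiefel geometry:
    # project the Euclidean gradient, then retract the step.
    xi = project_tangent(X, euclidean_grad)
    return retract_qr(X, -lr * xi)

# Toy usage: a random point on St(8, 4) and a random ambient gradient.
rng = np.random.default_rng(0)
X0, _ = np.linalg.qr(rng.standard_normal((8, 4)))
X1 = riemannian_sgd_step(X0, rng.standard_normal((8, 4)))
assert np.allclose(X1.T @ X1, np.eye(4), atol=1e-8)  # iterate stays on the manifold

In a MAML-style inner loop, one would presumably replace each ordinary gradient step on an orthogonally constrained layer with a step of this form, so that task-specific iterates never leave the manifold.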