Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning

From MaRDI portal
Publication: 6401490

arXiv: 2206.03996 · MaRDI QID: Q6401490 · FDO: Q6401490


Authors: Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, Tianyi Chen


Publication date: 8 June 2022

Abstract: Model-agnostic meta learning (MAML) is currently one of the dominant approaches for few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to its innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minimizers, than its empirical risk minimization counterpart. To address this challenge, we leverage the recently proposed sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform the plain-vanilla MAML baseline (e.g., +3% accuracy on Mini-Imagenet). We complement the empirical study with a convergence rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study on sharpness-aware minimization in the context of bilevel learning. The code is available at https://github.com/mominabbass/Sharp-MAML.




Has companion code repository: https://github.com/mominabbass/sharp-maml
