DRESS: Disentangled Representation-based Self-Supervised Meta-Learning for Diverse Tasks
Published in NeurIPS 2024 Workshop on Self-Supervised Learning - Theory and Practice, 2024
Recommended citation: Wei Cui, Yi Sui, Jesse C. Cresswell, Keyvan Golestan. DRESS: Disentangled Representation-based Self-Supervised Meta-Learning for Diverse Tasks. NeurIPS 2024 Workshop on Self-Supervised Learning - Theory and Practice
Meta-learning represents a strong class of approaches for solving few-shot learning tasks. Nonetheless, recent research suggests that simply pre-training a generic encoder can surpass meta-learning algorithms. In this paper, we first discuss why meta-learning fails to stand out in these few-shot learning comparisons, and hypothesize that the cause is a lack of diversity among the few-shot learning tasks. We then propose DRESS, a task-agnostic Disentangled REpresentation-based Self-Supervised meta-learning approach that enables fast model adaptation on highly diversified few-shot learning tasks. Specifically, DRESS uses disentangled representation learning to construct self-supervised tasks that fuel the meta-training process. We validate the effectiveness of DRESS through experiments on few-shot classification tasks over datasets with multiple factors of variation. With this paper, we advocate a re-examination of proper experimental setups for task-adaptation studies, and aim to reignite interest in the potential of meta-learning, powered by disentangled representations, for solving few-shot learning tasks.
[Paper] [PDF]
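To give a rough flavor of the task-construction idea described in the abstract, the snippet below is a minimal sketch (not the authors' implementation): it assumes a set of disentangled latent codes is already available, buckets each latent dimension into quantile bins to form pseudo-labels, and samples few-shot episodes from the resulting self-supervised tasks. All function names, the quantile-based labeling, and the episode-sampling details are illustrative assumptions.

```python
import numpy as np

def make_pseudo_tasks(latents, n_classes=3):
    """Assumed sketch: bucket each disentangled latent dimension into quantile
    bins, yielding one self-supervised classification task per factor of
    variation (one pseudo-label array per dimension)."""
    tasks = []
    for d in range(latents.shape[1]):
        z = latents[:, d]
        # interior quantile edges split the dimension into roughly equal-sized classes
        edges = np.quantile(z, np.linspace(0.0, 1.0, n_classes + 1)[1:-1])
        tasks.append(np.digitize(z, edges))
    return tasks

def sample_episode(features, labels, n_way=3, k_shot=5, seed=None):
    """Sample a toy few-shot episode (support/query split) from one pseudo-task."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.append(idx[:k_shot])
        query.append(idx[k_shot:2 * k_shot])
    return features[np.concatenate(support)], features[np.concatenate(query)]

# Toy usage: 1000 samples with 6 hypothetical disentangled latent dimensions.
latents = np.random.randn(1000, 6)
tasks = make_pseudo_tasks(latents, n_classes=3)
support, query = sample_episode(latents, tasks[0], n_way=3, k_shot=5, seed=0)
```

Each latent dimension then defines its own classification task, so the pool of meta-training tasks is as diverse as the factors of variation captured by the disentangled representation; the actual labeling and episode construction in DRESS are described in the paper.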