Boosting Few-Shot Learning with Task-Adaptive Multi-level Mixed Supervision

Duo Wang, Qianxia Ma, Ming Zhang, Tao Zhang

Research output: Chapter in Book / Published conference output › Conference publication


In this paper, we propose a novel task-adaptive few-shot learning (FSL) method called Multi-Level Mixed Supervision (MLMS), which adapts a classifier to each task through mixed supervision. Our method complements supervised training with a multi-level unsupervised loss comprising an instance-level certainty term, a set-level divergence term, and a group-level consistency term. We further modify the set-level divergence term for the unbalanced-prior setting, in which different classes of the unlabeled set contain different numbers of samples. In addition, we propose an approximate solution for minimizing the MLMS loss that is faster than gradient-based optimization. Extensive experiments on multiple FSL datasets demonstrate that our method outperforms several recent models by a clear margin on both transductive FSL and semi-supervised FSL tasks. Codes and trained models are available at
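The abstract names three unsupervised terms but does not give their formulas in this listing. The sketch below is a hypothetical illustration, not the authors' implementation: it assumes the certainty term is a mean prediction entropy, the set-level divergence is a KL term between an assumed class prior and the predicted class marginal (which naturally accommodates an unbalanced prior), and the group-level consistency term penalizes disagreement between predictions on two views of the same unlabeled samples. All function and argument names are invented for illustration.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of a probability matrix."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def mlms_unsupervised_loss(probs, probs_view, prior, eps=1e-12):
    """Hypothetical three-term unsupervised loss in the spirit of MLMS.

    probs:      (N, C) predicted class probabilities for unlabeled samples
    probs_view: (N, C) predictions for a second view of the same samples
    prior:      (C,)   assumed class proportions (need not be uniform)
    """
    # Instance-level certainty: low entropy -> confident per-sample predictions
    certainty = entropy(probs).mean()
    # Set-level divergence: KL(prior || predicted marginal); an unbalanced
    # prior simply changes the target distribution here
    marginal = probs.mean(axis=0)
    divergence = np.sum(prior * np.log((prior + eps) / (marginal + eps)))
    # Group-level consistency: predictions should agree across views
    consistency = np.mean((probs - probs_view) ** 2)
    return certainty + divergence + consistency
```

Under these assumptions, confident, view-consistent predictions whose class marginal matches the prior yield a lower loss than uniform (maximally uncertain) predictions, which is the qualitative behavior the abstract describes.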
Original language: English
Title of host publication: Artificial Intelligence - 1st CAAI International Conference, CICAI 2021, Proceedings
Editors: Lu Fang, Yiran Chen, Guangtao Zhai, Jane Wang, Ruiping Wang, Weisheng Dong
Number of pages: 12
ISBN (Electronic): 978-3-030-93049-3
ISBN (Print): 978-3-030-93048-6
Publication status: Published - 1 Jan 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13070 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Keywords

  • Few-shot learning
  • Multi-level mixed supervision
  • Semi-supervised FSL
  • Task-adaptive
  • Transductive FSL
  • Unbalanced prior


