Weakly-Supervised Self-Ensembling Vision Transformer for MRI Cardiac Segmentation

Ziyang Wang*, Haodong Zhang, Yang Liu

*Corresponding author for this work

Research output: Chapter in Book/Published conference output › Conference publication

3 Citations (Scopus)

Abstract

Deep learning techniques are crucial in medical image segmentation, but their effectiveness heavily relies on vast amounts of fully annotated data, which is costly in labour and time. To address this challenge, this paper introduces a framework for scribble-supervised learning with a self-ensembling approach. A transformation-consistency scheme is further developed to boost performance. Inspired by the recent achievements of the Vision Transformer (ViT) in modeling long-range dependencies, and to enable a fair comparison with convolutional operations, we employ a U-shaped segmentation network composed of pure self-attention-based blocks. Our proposed scribble-supervised segmentation ViT is validated on a public benchmark dataset against classical methods with various evaluation metrics. The code, trained model, and preprocessed scribble-annotated sets are publicly available at https://github.com/ziyangwang007/CV-WSL-MIS.
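The abstract combines three ingredients: partial supervision from scribble annotations, a self-ensembling (teacher-student) scheme, and a transformation-consistency term. The sketch below is a minimal illustration of how such a training step could look, not the authors' implementation (which is at the repository linked above); the student/teacher models, the flip transformation, the `ignore_index` convention for unannotated pixels, and the loss weighting are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' code): mean-teacher style
# self-ensembling with scribble supervision and transformation consistency.
import torch
import torch.nn.functional as F


def ema_update(teacher, student, alpha=0.99):
    """Update the teacher as an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)


def training_step(student, teacher, images, scribbles, lam=0.1):
    # Supervised partial cross-entropy: only scribble-annotated pixels
    # contribute; unannotated pixels are assumed to be marked with 255.
    logits = student(images)
    sup_loss = F.cross_entropy(logits, scribbles, ignore_index=255)

    # Transformation consistency: the teacher's prediction on a transformed
    # input should agree with the equally transformed student prediction.
    # A horizontal flip is used here as an example transformation.
    flipped = torch.flip(images, dims=[-1])
    with torch.no_grad():
        teacher_prob = torch.softmax(teacher(flipped), dim=1)
    student_prob = torch.softmax(torch.flip(logits, dims=[-1]), dim=1)
    cons_loss = F.mse_loss(student_prob, teacher_prob)

    return sup_loss + lam * cons_loss
```

In a typical loop one would call `training_step`, backpropagate through the student only, and then call `ema_update` once per iteration so the teacher provides a smoothed ensemble of past student weights.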
Original language: English
Title of host publication: 2023 IEEE Conference on Artificial Intelligence (CAI)
Publisher: IEEE
Pages: 101-102
Number of pages: 2
ISBN (Electronic): 9798350339840
DOIs
Publication status: Published - 2 Aug 2023
Event: 2023 IEEE Conference on Artificial Intelligence, CAI 2023 - Santa Clara, United States
Duration: 5 Jun 2023 - 6 Jun 2023

Conference

Conference: 2023 IEEE Conference on Artificial Intelligence, CAI 2023
Country/Territory: United States
City: Santa Clara
Period: 5/06/23 - 6/06/23

Keywords

  • Medical Image Segmentation
  • Vision Transformer
  • Weakly-Supervised Learning
