Abstract
Deep learning techniques are crucial in medical image segmentation, but their effectiveness relies heavily on large amounts of fully annotated data, which is costly in labour and time. To address this challenge, this paper introduces a framework for scribble-supervised learning with a self-ensembling approach. A transformation-consistency scheme is further developed to boost performance. Inspired by the recent achievements of the Vision Transformer (ViT) in modeling long-range dependencies, and to enable a fair comparison with convolutional operations, we employ a U-shaped segmentation network composed of pure self-attention-based blocks. Our proposed scribble-supervised segmentation ViT is validated on a public benchmark dataset against classical methods with various evaluation metrics. The code, trained model, and preprocessed scribble-annotated sets are publicly available at https://github.com/ziyangwang007/CV-WSL-MIS.
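The two core ingredients the abstract names — supervision computed only on scribble-annotated pixels, and a consistency term that ties together predictions on an image and its transformed copy — can be sketched in plain Python. This is a minimal illustration, not the paper's exact formulation: the `IGNORE` label convention, the function names, and the choice of a horizontal flip as the transformation are all assumptions for the sketch.

```python
import math

IGNORE = -1  # assumed convention: pixels without a scribble annotation

def partial_cross_entropy(probs, labels):
    """Cross-entropy averaged only over scribble-annotated pixels.

    probs:  list of per-pixel class-probability lists (rows sum to 1)
    labels: list of per-pixel scribble labels, IGNORE where unannotated
    """
    total, count = 0.0, 0
    for p, y in zip(probs, labels):
        if y == IGNORE:
            continue  # unannotated pixels contribute no supervised loss
        total += -math.log(p[y])
        count += 1
    return total / max(count, 1)

def flip_consistency(pred_orig, pred_flipped):
    """Transformation-consistency penalty for a 1-D strip of pixels.

    pred_flipped is the network's prediction on the flipped input;
    flipping it back should match pred_orig (mean squared error).
    """
    sq_sum, n = 0.0, 0
    for pa, pb in zip(pred_orig, reversed(pred_flipped)):
        for ca, cb in zip(pa, pb):
            sq_sum += (ca - cb) ** 2
            n += 1
    return sq_sum / n
```

In a self-ensembling setup, the consistency term would typically compare a student network's prediction against a teacher's (e.g. an exponential-moving-average copy) on the transformed input, and the total loss would be the partial cross-entropy plus a weighted consistency penalty.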
| Original language | English |
|---|---|
| Title of host publication | 2023 IEEE Conference on Artificial Intelligence (CAI) |
| Publisher | IEEE |
| Pages | 101-102 |
| Number of pages | 2 |
| ISBN (Electronic) | 9798350339840 |
| DOIs | |
| Publication status | Published - 2 Aug 2023 |
| Event | 2023 IEEE Conference on Artificial Intelligence, CAI 2023 - Santa Clara, United States (5 Jun 2023 → 6 Jun 2023) |
Conference
| Conference | 2023 IEEE Conference on Artificial Intelligence, CAI 2023 |
|---|---|
| Country/Territory | United States |
| City | Santa Clara |
| Period | 5/06/23 → 6/06/23 |
Keywords
- Medical Image Segmentation
- Vision Transformer
- Weakly-Supervised Learning