Abstract
The limited quantity of training data can hamper supervised machine learning methods, which generally need large amounts of data to avoid overfitting. Data augmentation has long been used alongside machine learning algorithms and is a straightforward way to mitigate overfitting and improve model generalisation. However, data augmentation schemes are typically designed by hand and demand substantial domain knowledge to create suitable data transformations. This dissertation introduces a new deep learning method based on Generative Adversarial Networks (GANs) for image synthesis that automatically learns an augmentation strategy appropriate for sparse datasets and can be used to improve pixel-level semantic segmentation accuracy by filling the gaps in the training set. The contributions of this thesis are summarised as follows. (1) In the image synthesis domain, we propose two new GAN-based generative methods that can synthesise arbitrary-sized, high-resolution images from a single source image. (2) Using a loss function constrained by semantic segmentation, we introduce, for the first time, a new GAN-based model that performs label-to-image translation and delivers state-of-the-art results as an augmentation strategy. (3) Finally, this thesis presents the first strong evidence that data density correlates with the improvement brought about by a GAN-based augmentation algorithm.
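The abstract does not spell out the segmentation-constrained loss mentioned in contribution (2). As an illustrative sketch only, objectives of this kind are often written as an adversarial term plus a pixel-wise segmentation-consistency term; the segmentation network S, the cross-entropy term CE, and the weighting factor λ below are assumptions for illustration, not the thesis's actual formulation:

```latex
% Illustrative sketch only; the dissertation's actual loss is not given in this abstract.
% G: label-to-image generator, D: discriminator, S: an assumed semantic segmentation
% network, (x, y): real image and its label map, \lambda: an assumed weighting factor.
\mathcal{L}(G, D) =
  \underbrace{\mathbb{E}_{x}\!\left[\log D(x)\right]
  + \mathbb{E}_{y}\!\left[\log\bigl(1 - D(G(y))\bigr)\right]}_{\text{adversarial term}}
  \;+\; \lambda\,
  \underbrace{\mathbb{E}_{y}\!\left[\mathrm{CE}\bigl(S(G(y)),\, y\bigr)\right]}_{\text{segmentation-consistency term}}
```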
Date of Award | 2023
---|---
Original language | English
Awarding Institution |
Supervisor | Luis J. Manso (Supervisor)
Keywords
- GAN
- Data augmentation
- Semantic segmentation