Abstract

Deep learning has made impressive progress in a number of data-processing domains. Much of this progress comes from building large, complex models that are costly to train and, when data is limited, prone to over-fitting. We explore solutions to this problem through the domain adaptation paradigm.
Domain adaptation assumes that although data may be visually dissimilar, drawn from differing distributions such as photographs versus paintings, it still contains the same content and so shares a representation space. We propose a model for domain adaptation that builds on the recent concept of generative adversarial networks, and we show how this model can be used for domain adaptation with applications to reinforcement learning. We further investigate the model's utility for explicit extrapolation problems.
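The abstract gives no implementation detail, but the adversarial domain adaptation idea it refers to can be illustrated with a minimal sketch. In the version below, which is our own assumption rather than the thesis's model, a binary logistic discriminator tries to tell source features from target features, while the feature extractor is trained on the opposing objective so that the two feature distributions become indistinguishable (the shared representation space described above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_adversarial_losses(feats_src, feats_tgt, w, b):
    """Compute the two opposing losses of adversarial domain adaptation.

    d(f) = sigmoid(w . f + b) is a (hypothetical) linear domain
    discriminator. The discriminator minimises cross-entropy on the
    domain labels (source = 1, target = 0); the feature extractor
    minimises the flipped objective, pushing target features to be
    classified as source.
    """
    p_src = sigmoid(feats_src @ w + b)  # P(domain = source) for source batch
    p_tgt = sigmoid(feats_tgt @ w + b)  # P(domain = source) for target batch

    # Discriminator loss: be confident on both domains.
    disc_loss = -np.mean(np.log(p_src + 1e-8)) - np.mean(np.log(1.0 - p_tgt + 1e-8))

    # Feature-extractor loss: make target features look like source.
    feat_loss = -np.mean(np.log(p_tgt + 1e-8))
    return disc_loss, feat_loss
```

In a full model the two losses would be minimised by alternating gradient steps (or via a gradient-reversal layer) on a learned feature extractor; here the features and discriminator weights are plain arrays purely for illustration.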
Having proposed a model, we develop a deeper understanding of the conditions that lead to better shared embedding spaces. With this understanding, we propose a penalty term for the adversarial domain adaptation problem, which we demonstrate achieves state-of-the-art performance on a number of benchmark domain adaptation datasets.
We also consider the problem of adaptation from the perspective of extrapolation along the sampling space. Domain adaptation research does not explicitly consider this case, so we propose two new datasets to examine the concept.
Further to this, we take a broader view of the potential applications of transfer learning techniques and apply one-shot learning to a recently proposed extrapolation task. Through our extrapolation experiments we demonstrate the need for new datasets and testing protocols to properly verify model generalisation performance, which may otherwise be difficult to judge.
Date of Award: Dec 2020
Supervisors: Maria Chli and George Vogiatzis
- Domain Adaptation
- Deep Learning
- Transfer Learning