Abstract
This paper presents team TransQuest's participation in the Sentence-Level Direct Assessment shared task at WMT 2020. We introduce a simple QE framework based on cross-lingual transformers and use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results, surpassing those obtained by OpenKiwi, the baseline used in the shared task. We further improve the QE framework by performing ensembling and data augmentation. Our approach is the winning solution in all language pairs according to the WMT 2020 official results.
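The ensembling step mentioned in the abstract can be sketched minimally: each model produces one quality score per sentence pair, and the ensemble prediction is a (optionally weighted) mean of those scores. The function name, uniform-weight default, and example values below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of score ensembling for sentence-level QE.
# Each model outputs one quality score per sentence pair; the ensemble
# prediction averages those scores, optionally with per-model weights.

def ensemble_scores(model_outputs, weights=None):
    """Average per-sentence QE scores across models.

    model_outputs: list of equal-length score lists, one per model.
    weights: optional per-model weights (defaults to uniform).
    """
    n_models = len(model_outputs)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    total = sum(weights)
    n_sentences = len(model_outputs[0])
    return [
        sum(w * scores[i] for w, scores in zip(weights, model_outputs)) / total
        for i in range(n_sentences)
    ]

# Two hypothetical models scoring three sentence pairs:
preds_a = [0.80, 0.10, 0.55]
preds_b = [0.70, 0.30, 0.45]
ensemble = ensemble_scores([preds_a, preds_b])
```

In practice an ensemble of this kind averages out model-specific noise, which is why it tends to improve correlation with human Direct Assessment scores.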
Original language | English
---|---
Title of host publication | Fifth Conference on Machine Translation
Publisher | Association for Computational Linguistics (ACL)
Pages | 1049–1055
Publication status | Published - Nov 2020
Event | 5th Conference on Machine Translation, 19 Nov 2020 → 20 Nov 2020
Conference
Conference | 5th Conference on Machine Translation
---|---
Abbreviated title | WMT 20
Period | 19/11/20 → 20/11/20