Abstract
Most studies on word-level Quality Estimation (QE) of machine translation focus on language-specific models. The obvious disadvantages of these approaches are the need for labelled data for each language pair and the high cost required to maintain several language-specific models. To overcome these problems, we explore different approaches to multilingual, word-level QE. We show that multilingual QE models perform on par with the current language-specific models. In the cases of zero-shot and few-shot QE, we demonstrate that it is possible to accurately predict word-level quality for any given new language pair from models trained on other language pairs. Our findings suggest that the word-level QE models based on powerful pre-trained transformers that we propose in this paper generalise well across languages, making them more useful in real-world scenarios.
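Word-level QE is commonly framed as tagging each target word as OK or BAD, and systems in the WMT word-level QE shared tasks are typically scored with the Matthews correlation coefficient (MCC) over those tags. The following is a minimal illustrative sketch of that evaluation metric (not the paper's code; the example tag sequences are invented for demonstration):

```python
from math import sqrt

def mcc(gold, pred):
    """Matthews correlation coefficient over flat lists of OK/BAD tags."""
    tp = sum(g == p == "BAD" for g, p in zip(gold, pred))  # BAD correctly flagged
    tn = sum(g == p == "OK" for g, p in zip(gold, pred))   # OK correctly passed
    fp = sum(g == "OK" and p == "BAD" for g, p in zip(gold, pred))
    fn = sum(g == "BAD" and p == "OK" for g, p in zip(gold, pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Hypothetical gold annotations and model predictions for one sentence.
gold = ["OK", "OK", "BAD", "OK", "BAD", "OK"]
pred = ["OK", "BAD", "BAD", "OK", "OK", "OK"]
print(mcc(gold, pred))  # → 0.25
```

MCC is preferred over plain accuracy here because the OK class usually dominates, and a degenerate all-OK tagger scores 0 under MCC rather than a misleadingly high accuracy.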
Original language | English
---|---
Title of host publication | Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
Publisher | Association for Computational Linguistics (ACL)
Pages | 434–440
Volume | 2
ISBN (Electronic) | 9781954085527
DOIs |
Publication status | Published - Aug 2021
Event | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021 - Virtual, Online, 1 Aug 2021 → 6 Aug 2021, https://2021.aclweb.org/
Conference

Conference | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021
---|---
City | Virtual, Online
Period | 1/08/21 → 6/08/21
Internet address | https://2021.aclweb.org/