An End-to-End Scalable Iterative Sequence Tagging with Multi-Task Learning

Lin Gui, Jiachen Du, Zhishan Zhao, Yulan He, Ruifeng Xu, Chuang Fan

Research output: Chapter in Book / Published conference output › Chapter


Multi-task learning (MTL) models, which pool examples drawn from several tasks, have achieved remarkable results in language processing. However, multi-task learning is not always more effective than single-task methods in sequence tagging. One possible reason is that existing approaches to multi-task sequence tagging often rely on sharing lower-layer parameters to connect different tasks. The lack of interaction between tasks results in limited performance improvement. In this paper, we propose a novel multi-task learning architecture that iteratively and explicitly exploits the prediction results of each task. We train our model on part-of-speech (POS) tagging, chunking and named entity recognition (NER) simultaneously. Experimental results show that, without any task-specific features, our model achieves state-of-the-art performance on both chunking and NER.
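The core idea of the abstract — feeding each task's predictions back as explicit input to all tasks, rather than relying only on shared lower layers — can be illustrated with a minimal sketch. This is an assumption-laden toy (random linear maps stand in for the paper's actual encoder, and all dimensions and label-set sizes are made up), not the authors' architecture:

```python
# Toy sketch of iterative multi-task sequence tagging (illustrative only):
# each task head's predicted label distributions are concatenated back into
# the input features for the next iteration, so the POS, chunking and NER
# tasks interact explicitly instead of only sharing encoder parameters.
import numpy as np

rng = np.random.default_rng(0)

SEQ_LEN, EMB = 5, 8                       # toy sequence length / feature size
TASKS = {"pos": 4, "chunk": 3, "ner": 5}  # hypothetical label-set sizes

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared "encoder": a random linear map standing in for a real sequence model.
W_shared = rng.normal(size=(EMB, EMB))

# One linear head per task; its input is the shared features plus the
# current predictions of *all* tasks.
feat_dim = EMB + sum(TASKS.values())
heads = {t: rng.normal(size=(feat_dim, k)) for t, k in TASKS.items()}

def iterative_tagging(tokens, n_iter=3):
    h = np.tanh(tokens @ W_shared)                   # shared representation
    preds = {t: np.full((SEQ_LEN, k), 1.0 / k)       # uniform initial guesses
             for t, k in TASKS.items()}
    for _ in range(n_iter):
        # Explicit inter-task interaction: append every task's current
        # label distribution to the shared features before re-predicting.
        extra = np.concatenate([preds[t] for t in TASKS], axis=-1)
        x = np.concatenate([h, extra], axis=-1)
        preds = {t: softmax(x @ heads[t]) for t in TASKS}
    return {t: p.argmax(axis=-1) for t, p in preds.items()}

tags = iterative_tagging(rng.normal(size=(SEQ_LEN, EMB)))
```

The contrast with conventional MTL is that here the tasks exchange information through their outputs at every iteration, whereas parameter-sharing schemes let tasks interact only implicitly through the shared encoder weights.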
Original language: English
Title of host publication: CCF International Conference on Natural Language Processing and Chinese Computing
ISBN (Electronic): 978-3-319-99501-4
ISBN (Print): 978-3-319-99500-7
Publication status: E-pub ahead of print - 14 Aug 2018
Event: NLPCC 2018: The Seventh CCF International Conference on Natural Language Processing and Chinese Computing - Hohhot, China
Duration: 26 Aug 2018 - 30 Aug 2018

Publication series

Name: Natural Language Processing and Chinese Computing
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: NLPCC 2018
