An End-to-End Scalable Iterative Sequence Tagging with Multi-Task Learning

Lin Gui, Jiachen Du, Zhishan Zhao, Yulan He, Ruifeng Xu, Chuang Fan

Research output: Chapter in Book / Published conference output › Chapter

Abstract

Multi-task learning (MTL) models, which pool examples drawn from several tasks, have achieved remarkable results in language processing. However, multi-task learning is not always effective compared with single-task methods in sequence tagging. One possible reason is that existing approaches to multi-task sequence tagging often rely on lower-layer parameter sharing to connect the tasks. The lack of interaction between tasks results in limited performance improvement. In this paper, we propose a novel multi-task learning architecture that iteratively and explicitly utilizes the prediction results of each task. We train our model on part-of-speech (POS) tagging, chunking and named entity recognition (NER) simultaneously. Experimental results show that, without any task-specific features, our model achieves state-of-the-art performance on both chunking and NER.
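
To make the iterative idea concrete, the following is a minimal PyTorch sketch of one way such an architecture could look; it is an illustration, not the authors' implementation. A shared BiLSTM encoder feeds one classifier per task, and each refinement pass re-encodes the tokens together with every task's previous label distributions, so the tasks interact through their predictions rather than only through shared parameters. The class name IterativeMTLTagger, the BiLSTM encoder, the uniform initial distributions, the number of iterations, and all dimensions are illustrative assumptions.

    # Minimal sketch of iterative multi-task sequence tagging (assumed design,
    # not the paper's code): each pass feeds every task's previous label
    # distributions back into a shared encoder.
    import torch
    import torch.nn as nn

    class IterativeMTLTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim, hidden_dim,
                     task_label_sizes, n_iters=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Previous predictions for all tasks are concatenated to the input.
            label_dim = sum(task_label_sizes)
            self.encoder = nn.LSTM(emb_dim + label_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            # One linear classifier head per task (e.g. POS, chunking, NER).
            self.heads = nn.ModuleList(
                [nn.Linear(2 * hidden_dim, n) for n in task_label_sizes])
            self.task_label_sizes = task_label_sizes
            self.n_iters = n_iters

        def forward(self, tokens):
            batch, seq_len = tokens.shape
            x = self.embed(tokens)
            # Start from uniform label distributions for every task (assumption).
            preds = [torch.full((batch, seq_len, n), 1.0 / n,
                                device=tokens.device)
                     for n in self.task_label_sizes]
            for _ in range(self.n_iters):
                # Make each task's current predictions visible to all tasks.
                feedback = torch.cat(preds, dim=-1)
                h, _ = self.encoder(torch.cat([x, feedback], dim=-1))
                preds = [head(h).softmax(dim=-1) for head in self.heads]
            return preds  # one label distribution per task

    # Usage with hypothetical label-set sizes for POS, chunking and NER:
    tagger = IterativeMTLTagger(vocab_size=10000, emb_dim=100, hidden_dim=128,
                                task_label_sizes=[45, 23, 9])
    pos, chunk, ner = tagger(torch.randint(0, 10000, (2, 12)))

The key design point the sketch shows is that feeding predictions back as input gives the tasks an explicit channel to influence one another, which plain lower-layer parameter sharing does not provide.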
Original language: English
Title of host publication: CCF International Conference on Natural Language Processing and Chinese Computing
Publisher: Springer
Chapter: 25
Pages: 288-298
Volume: 11109
ISBN (Electronic): 978-3-319-99501-4
ISBN (Print): 978-3-319-99500-7
DOIs
Publication status: E-pub ahead of print - 14 Aug 2018
Event: NLPCC 2018: The Seventh CCF International Conference on Natural Language Processing and Chinese Computing - Hohhot, China
Duration: 26 Aug 2018 – 30 Aug 2018

Publication series

Name: Natural Language Processing and Chinese Computing
Volume: 11109
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: NLPCC 2018
Country/Territory: China
City: Hohhot
Period: 26/08/18 – 30/08/18
