An End-to-End Scalable Iterative Sequence Tagging with Multi-Task Learning

Lin Gui, Jiachen Du, Zhishan Zhao, Yulan He, Ruifeng Xu, Chuang Fan

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Multi-task learning (MTL) models, which pool examples arising from several tasks, have achieved remarkable results in language processing. However, multi-task learning is not always effective compared with single-task methods in sequence tagging. One possible reason is that existing approaches to multi-task sequence tagging often rely on lower-layer parameter sharing to connect different tasks. The lack of interaction between tasks results in limited performance improvement. In this paper, we propose a novel multi-task learning architecture which iteratively and explicitly utilizes the prediction results of each task. We train our model on part-of-speech (POS) tagging, chunking and named entity recognition (NER) simultaneously. Experimental results show that, without any task-specific features, our model obtains state-of-the-art performance on both chunking and NER.
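The abstract's core idea — each task's predictions are fed back as explicit input to all tasks in the next iteration, rather than coupling tasks only through shared lower layers — can be illustrated with a toy sketch. This is not the authors' implementation: the shared encoder here is a fixed random projection standing in for a learned sentence encoder, the label sets and dimensions are invented, and the heads are untrained linear maps. Only the feedback loop itself reflects the mechanism the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three tagging tasks over a 5-token sentence.
# All sizes and label counts are illustrative assumptions.
n_tokens, d_embed, d_hidden = 5, 8, 6
n_labels = {"pos": 4, "chunk": 3, "ner": 5}

embeddings = rng.normal(size=(n_tokens, d_embed))
W_enc = rng.normal(size=(d_embed, d_hidden))  # stand-in shared encoder

# One linear head per task. Its input is the shared hidden state
# concatenated with one-hot predictions of ALL tasks from the
# previous iteration, so tasks interact through their outputs.
total_labels = sum(n_labels.values())
heads = {t: rng.normal(size=(d_hidden + total_labels, k))
         for t, k in n_labels.items()}

def one_hot(idx, k):
    """Per-token one-hot encoding of predicted label indices."""
    out = np.zeros((len(idx), k))
    out[np.arange(len(idx)), idx] = 1.0
    return out

def iterative_tag(n_iters=3):
    hidden = np.tanh(embeddings @ W_enc)                 # (n_tokens, d_hidden)
    # Iteration 0 starts from all-zero "previous predictions".
    prev = {t: np.zeros((n_tokens, k)) for t, k in n_labels.items()}
    for _ in range(n_iters):
        feedback = np.concatenate([prev[t] for t in n_labels], axis=1)
        inputs = np.concatenate([hidden, feedback], axis=1)
        # Every task re-predicts, conditioned on every task's last output.
        prev = {t: one_hot((inputs @ heads[t]).argmax(axis=1), k)
                for t, k in n_labels.items()}
    return {t: prev[t].argmax(axis=1) for t in n_labels}

tags = iterative_tag()
```

In a trained version of this loop, the heads and encoder would be learned jointly, which is where the cross-task interaction the abstract credits for the improvement would come from.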
Original language: English
Title of host publication: CCF International Conference on Natural Language Processing and Chinese Computing
Publisher: Springer
Chapter: 25
Pages: 288-298
Volume: 11109
ISBN (Electronic): 978-3-319-99501-4
ISBN (Print): 978-3-319-99500-7
DOIs: 10.1007/978-3-319-99501-4_25
Publication status: E-pub ahead of print - 14 Aug 2018
Event: NLPCC 2018: The Seventh CCF International Conference on Natural Language Processing and Chinese Computing - Hohhot, China
Duration: 26 Aug 2018 - 30 Aug 2018

Publication series

Name: Natural Language Processing and Chinese Computing
Volume: 11109
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: NLPCC 2018
Country: China
City: Hohhot
Period: 26/08/18 - 30/08/18


Cite this

Gui, L., Du, J., Zhao, Z., He, Y., Xu, R., & Fan, C. (2018). An End-to-End Scalable Iterative Sequence Tagging with Multi-Task Learning. In CCF International Conference on Natural Language Processing and Chinese Computing (Vol. 11109, pp. 288-298). (Natural Language Processing and Chinese Computing; Vol. 11109). Springer. https://doi.org/10.1007/978-3-319-99501-4_25