Abstract
Natural language understanding (NLU) aims to map sentences to their semantic meaning representations. Statistical approaches to NLU normally require fully-annotated training data in which each sentence is paired with its word-level semantic annotations. In this paper, we propose a novel learning framework which trains Hidden Markov Support Vector Machines (HM-SVMs) without the use of expensive fully-annotated data. In particular, our learning approach takes as input a training set of sentences labeled with abstract semantic annotations encoding underlying embedded structural relations, and automatically induces derivation rules that map sentences to their semantic meaning representations. The proposed approach has been tested on the DARPA Communicator data and achieved an F-measure of 93.18%, outperforming the previously proposed approaches that train the hidden vector state model or conditional random fields from unaligned data, with relative error reductions of 43.3% and 10.6%, respectively.
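The paper's training procedure is not reproduced here, but the weakly supervised setting it describes can be sketched. The Python below is a minimal, illustrative sketch only: it assumes each sentence is paired with a flat set of semantic concepts (the "abstract annotation") rather than word-level labels, decodes the best tag sequence consistent with that set, and uses a structured-perceptron update as a simplified stand-in for the HM-SVM's margin-based training. All function names and the toy data are hypothetical and not taken from the paper.

```python
from collections import defaultdict
from itertools import product


def score(weights, words, tags):
    """Sum of emission and transition feature weights for one tagging."""
    total, prev = 0.0, "<s>"
    for word, tag in zip(words, tags):
        total += weights[("emit", tag, word)] + weights[("trans", prev, tag)]
        prev = tag
    return total


def decode(weights, words, candidate_tags, required_tags):
    """Best tagging over candidate_tags that realises every required tag.

    Brute force for clarity; a real system would use constrained Viterbi
    dynamic programming instead of enumerating all tag sequences.
    """
    candidates = sorted(candidate_tags) + ["DUMMY"]  # DUMMY = unaligned word
    best, best_score = None, float("-inf")
    for tags in product(candidates, repeat=len(words)):
        if not required_tags.issubset(tags):
            continue  # the abstract annotation must be fully covered
        s = score(weights, words, tags)
        if s > best_score:
            best, best_score = list(tags), s
    return best


def update(weights, words, tags, direction):
    """Add (or subtract) the features of a tagging to the weight vector."""
    prev = "<s>"
    for word, tag in zip(words, tags):
        weights[("emit", tag, word)] += direction
        weights[("trans", prev, tag)] += direction
        prev = tag


def train(data, epochs=10):
    """data: (words, concept_set) pairs with no word-level alignment."""
    weights = defaultdict(float)
    all_tags = set().union(*(concepts for _, concepts in data))
    for _ in range(epochs):
        for words, concepts in data:
            # Target: best tagging consistent with the abstract annotation.
            target = decode(weights, words, concepts, concepts)
            # Prediction: best tagging with no annotation constraint.
            predicted = decode(weights, words, all_tags, set())
            if target is not None and predicted != target:
                update(weights, words, target, +1.0)
                update(weights, words, predicted, -1.0)
    return weights


if __name__ == "__main__":
    # Toy flight-query examples in the spirit of the DARPA Communicator domain.
    data = [
        ("i want to fly to boston".split(), {"TOLOC.CITY"}),
        ("show flights from denver".split(), {"FROMLOC.CITY"}),
    ]
    weights = train(data)
    print(decode(weights, "fly to boston".split(),
                 {"TOLOC.CITY", "FROMLOC.CITY"}, set()))
```

A full implementation would replace the exhaustive search with constrained Viterbi decoding and the perceptron update with the HM-SVM's margin-based (quadratic programming) optimisation.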
Original language | English |
---|---|
Title of host publication | CIKM '11: Proceedings of the 20th ACM international conference on Information and knowledge management |
Editors | Bettina Berendt, Arjen de Vries, Wenfei Fan, Craig Macdonald, Iadh Ounis, Ian Ruthven |
Place of Publication | New York (US) |
Publisher | ACM |
Pages | 2025-2028 |
Number of pages | 4 |
ISBN (Print) | 978-1-4503-0717-8 |
DOIs | |
Publication status | Published - 2011 |
Event | 20th ACM international conference on Information and knowledge management, CIKM '11 - Glasgow, United Kingdom; duration: 24 Oct 2011 → 28 Oct 2011 |
Conference
Conference | 20th ACM international conference on Information and knowledge management, CIKM '11 |
---|---|
Country/Territory | United Kingdom |
City | Glasgow |
Period | 24/10/11 → 28/10/11 |