Neural Models for Large-scale Semantic Role Labelling

Author: Nicholas FitzGerald
Publisher:
Total Pages: 91
Release: 2018
OCLC: 1084664698

Book Synopsis: Neural Models for Large-scale Semantic Role Labelling, by Nicholas FitzGerald

Download or read book Neural Models for Large-scale Semantic Role Labelling, written by Nicholas FitzGerald and released in 2018 (91 pages). Available in PDF, EPUB and Kindle. Book excerpt: Recovering predicate-argument structures from natural language sentences is an important task in natural language processing (NLP), where the goal is to identify "who did what to whom" with respect to the events described in a sentence. A key challenge in this task is the sparsity of labeled data: a given predicate-role instance may occur only a handful of times in the training set. While attempts have been made to collect large, diverse datasets that could help mitigate this sparsity, these efforts are hampered by the difficulty inherent in annotating with traditional SRL formalisms such as PropBank and FrameNet.

We take a two-pronged approach to addressing these issues. First, we develop models that can jointly represent multiple SRL annotation schemes, allowing us to pool annotations across datasets. We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.

Next, we demonstrate that crowdsourcing techniques can be used to collect a large, high-quality SRL dataset at much lower cost than previous methods, and that this data can be used to learn a high-quality SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across three domains, and was gathered with a new crowdsourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship.

Finally, we combine these two approaches, investigating whether QA-SRL annotations can be used to improve performance on PropBank in a multitask learning setup. We find that using the QA-SRL data improves performance in regimes with small amounts of in-domain PropBank data, but that these improvements are overshadowed by those obtained by using deep contextual word representations trained on large amounts of unlabeled text, raising important questions for future work as to the utility of multitask training relative to these unsupervised approaches.
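The joint embedding idea described in the excerpt can be illustrated with a minimal sketch: argument spans and role labels are mapped into the same vector space and scored for compatibility. Everything below (module names, dimensions, the dot-product scorer) is an illustrative assumption, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class JointRoleScorer(nn.Module):
    """Minimal sketch: embed argument spans and semantic roles in a
    shared vector space and score their compatibility by dot product.
    Dimensions and architecture are illustrative assumptions only."""

    def __init__(self, num_roles: int, span_dim: int, embed_dim: int):
        super().__init__()
        # Learned embedding for each semantic role label.
        self.role_embeddings = nn.Embedding(num_roles, embed_dim)
        # Project an argument-span representation into the shared space.
        self.span_projection = nn.Linear(span_dim, embed_dim)

    def forward(self, span_repr: torch.Tensor) -> torch.Tensor:
        # span_repr: (batch, span_dim) span representations from some encoder.
        spans = self.span_projection(span_repr)   # (batch, embed_dim)
        roles = self.role_embeddings.weight       # (num_roles, embed_dim)
        # Compatibility score for every (span, role) pair.
        return spans @ roles.t()                  # (batch, num_roles)

# Usage: score 4 candidate spans against 20 PropBank-style roles.
scorer = JointRoleScorer(num_roles=20, span_dim=128, embed_dim=64)
scores = scorer(torch.randn(4, 128))
print(scores.shape)  # torch.Size([4, 20])
```

Because roles live in an embedding table rather than a fixed softmax layer, annotations from different schemes (e.g. PropBank and FrameNet role inventories) can in principle share the same span encoder, which is what makes pooling across datasets possible.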
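QA-SRL replaces fixed role labels with natural-language question-answer pairs. A hedged illustration of what a single annotation might look like follows; the field names and question wording here are assumptions for exposition, not the published corpus schema.

```python
# Illustrative QA-SRL-style record: one predicate in a sentence,
# annotated with wh-questions whose answers are argument spans.
qa_srl_example = {
    "sentence": "The company sold its shares to investors last year.",
    "predicate": "sold",
    "qa_pairs": [
        {"question": "Who sold something?", "answers": ["The company"]},
        {"question": "What did someone sell?", "answers": ["its shares"]},
        {"question": "Who was something sold to?", "answers": ["investors"]},
        {"question": "When did someone sell something?", "answers": ["last year"]},
    ],
}
```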

