2020
Identifying Annotator Bias: A new IRT-based method for bias identification
Jacopo Amidei | Paul Piwek | Alistair Willis
Proceedings of the 28th International Conference on Computational Linguistics
A basic step in any annotation effort is the measurement of Inter-Annotator Agreement (IAA). An important factor that can affect the IAA is the presence of annotator bias. In this paper we introduce a new interpretation and application of Item Response Theory (IRT) to detect annotator bias. Our interpretation of IRT offers an original bias identification method that can be used to compare annotators’ bias and characterise annotation disagreement. Our method can be used to spot outlier annotators, improve annotation guidelines and provide a better picture of annotation reliability. Additionally, because scales for IAA interpretation are not generally agreed upon, our bias identification method is a valuable complement to the IAA value and can help with understanding annotation disagreement.
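To make the idea concrete, here is a minimal, self-contained sketch of an IRT-style analysis: a Rasch-type model in which each binary judgement depends on an item parameter and an annotator bias (severity) parameter, fitted by gradient ascent. The synthetic data, the model’s exact form and the outlier threshold are all illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy annotation matrix: rows = annotators, columns = items,
# entries = binary judgements. Purely synthetic, for illustration.
n_annotators, n_items = 6, 200
true_bias = rng.normal(0.0, 1.0, n_annotators)   # annotator severity
true_item = rng.normal(0.0, 1.0, n_items)        # item parameter
prob = 1.0 / (1.0 + np.exp(-(true_item[None, :] - true_bias[:, None])))
Y = rng.binomial(1, prob)

# Fit P(y=1) = sigmoid(item_j - bias_i) by gradient ascent on the
# log-likelihood; the mean item parameter is pinned for identifiability.
bias = np.zeros(n_annotators)
item = np.zeros(n_items)
lr = 0.05
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(item[None, :] - bias[:, None])))
    resid = Y - pred                                # d logL/d item_j =  sum_i resid
    bias -= lr * resid.sum(axis=1) / n_items        # d logL/d bias_i = -sum_j resid
    item += lr * resid.sum(axis=0) / n_annotators
    item -= item.mean()

# Flag annotators whose estimated bias sits far from the group mean.
z = (bias - bias.mean()) / bias.std()
print("estimated biases:", np.round(bias, 2))
print("possible outliers:", np.where(np.abs(z) > 2.0)[0])
```

Comparing the fitted bias estimates across annotators gives the kind of picture the abstract describes: outlier annotators stand out even when a single IAA coefficient would hide where the disagreement comes from.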
2019
Agreement is overrated: A plea for correlation to assess human evaluation reliability
Jacopo Amidei | Paul Piwek | Alistair Willis
Proceedings of the 12th International Conference on Natural Language Generation
Inter-Annotator Agreement (IAA) is used as a means of assessing the quality of NLG evaluation data, in particular, its reliability. According to existing scales of IAA interpretation – see, for example, Lommel et al. (2014), Liu et al. (2016), Sedoc et al. (2018) and Amidei et al. (2018a) – most data collected for NLG evaluation fail the reliability test. We confirmed this trend by analysing papers published over the last 10 years in NLG-specific conferences (in total 135 papers that included some sort of human evaluation study). Following Sampson and Babarczy (2008), Lommel et al. (2014), Joshi et al. (2016) and Amidei et al. (2018b), such phenomena can be explained in terms of irreducible human language variability. Using three case studies, we show the limits of considering IAA as the only criterion for checking evaluation reliability. Given human language variability, we propose that for human evaluation of NLG, correlation coefficients and agreement coefficients should be used together to obtain a better assessment of the evaluation data reliability. This is illustrated using the three case studies.
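The paper’s proposal is straightforward to act on: report an agreement coefficient and a correlation coefficient side by side. A minimal sketch using standard scipy/scikit-learn functions, assuming two judges rating the same outputs on a 1-5 scale (the ratings below are made up):

```python
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 quality ratings from two judges on the same ten outputs.
judge_a = [4, 5, 3, 4, 2, 5, 3, 4, 1, 4]
judge_b = [5, 4, 3, 5, 2, 4, 4, 5, 2, 3]

# Agreement: category matches, with near-misses discounted quadratically.
kappa = cohen_kappa_score(judge_a, judge_b, weights="quadratic")

# Correlation: do the judges rank the outputs the same way?
rho, p = spearmanr(judge_a, judge_b)

print(f"weighted kappa = {kappa:.2f}, Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Judges who consistently differ by a scale point produce low exact agreement but high correlation; seeing both numbers makes that pattern, and hence the reliability of the evaluation data, easier to assess.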
The use of rating and Likert scales in Natural Language Generation human evaluation tasks: A review and some recommendations
Jacopo Amidei | Paul Piwek | Alistair Willis
Proceedings of the 12th International Conference on Natural Language Generation
Rating and Likert scales are widely used in evaluation experiments to measure the quality of Natural Language Generation (NLG) systems. We review the use of rating and Likert scales in NLG evaluation tasks published in NLG-specific conferences over the last ten years (135 papers in total). Our analysis brings to light a number of deviations from good practice in their use. We conclude with some recommendations about the use of such scales. Our aim is to encourage the appropriate use of evaluation methodologies in the NLG community.
2018
Evaluation methodologies in Automatic Question Generation 2013-2018
Jacopo Amidei | Paul Piwek | Alistair Willis
Proceedings of the 11th International Conference on Natural Language Generation
In the last few years, Automatic Question Generation (AQG) has attracted increasing interest. In this paper we survey the evaluation methodologies used in AQG. Based on a sample of 37 papers, our research shows that the development of AQG systems has not been accompanied by similar developments in the methodologies used to evaluate them. Indeed, in the papers we examine here, we find a wide variety of both intrinsic and extrinsic evaluation methodologies. Such diverse evaluation practices make it difficult to reliably compare the quality of different generation systems. Our study suggests that, given the rapidly increasing level of research in the area, a common framework is urgently needed to compare the performance of AQG systems and NLG systems more generally.
Rethinking the Agreement in Human Evaluation Tasks
Jacopo Amidei | Paul Piwek | Alistair Willis
Proceedings of the 27th International Conference on Computational Linguistics
Human evaluations are broadly thought to be more valuable the higher the inter-annotator agreement. In this paper we examine this idea, describing our experiments and analysis within the area of Automatic Question Generation. Our experiments show how annotators diverge in language annotation tasks due to a range of ineliminable factors. For this reason, we believe that annotation schemes for natural language generation tasks aimed at evaluating language quality need to be treated with great care. In particular, an unchecked focus on reducing disagreement among annotators runs the danger of creating generation goals that reward output that is more distant from, rather than closer to, natural human-like language. We conclude the paper by suggesting a new approach to the use of agreement metrics in natural language generation evaluation tasks.
2016
Adverse Drug Reaction Classification With Deep Neural Networks
Trung Huynh | Yulan He | Alistair Willis | Stefan Rueger
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
We study the problem of detecting sentences describing adverse drug reactions (ADRs) and frame the problem as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: the Convolutional Recurrent Neural Network (CRNN), which concatenates convolutional neural networks with recurrent neural networks, and the Convolutional Neural Network with Attention (CNNA), which adds attention weights to convolutional neural networks. We evaluate the NN architectures on a Twitter dataset containing informal language and an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that on both datasets all the NN architectures considerably outperform traditional maximum entropy classifiers trained on n-grams with different weighting strategies. On the Twitter dataset, all the NN architectures perform similarly; on the ADE dataset, however, the basic CNN performs better than the more complex CNN variants. Nevertheless, CNNA allows the visualisation of the attention weights of words when making classification decisions and is hence more appropriate for extracting word subsequences that describe ADRs.
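As an illustration of the CNNA idea (attention weights over token positions in a convolutional classifier), here is a rough PyTorch sketch. The layer sizes, pooling scheme and exact attention formulation are assumptions made for the sake of a runnable example, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class CNNWithAttention(nn.Module):
    """Convolutional classifier with attention over token positions.
    A hypothetical sketch inspired by the CNNA idea, not the paper's model."""

    def __init__(self, vocab_size, emb_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=kernel // 2)
        self.attn = nn.Linear(n_filters, 1)  # one relevance score per position
        self.out = nn.Linear(n_filters, 1)   # single logit: ADR vs. non-ADR

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)        # (batch, emb, seq)
        h = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, filters)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
        ctx = (a.unsqueeze(-1) * h).sum(dim=1)        # weighted sum over seq
        return self.out(ctx).squeeze(-1), a           # logit + weights

model = CNNWithAttention(vocab_size=10_000)
logits, weights = model(torch.randint(0, 10_000, (2, 40)))
print(logits.shape, weights.shape)  # torch.Size([2]) torch.Size([2, 40])
```

The returned weights are what makes the attention variant useful here: high-weight positions can be read off directly as candidate ADR-describing subsequences.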
2015
Using NLP to Support Scalable Assessment of Short Free Text Responses
Alistair Willis
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications
2013
Literature-driven Curation for Taxonomic Name Databases
Hui Yang | Alistair Willis | David Morse | Anne de Roeck
Proceedings of the Joint Workshop on NLP&LOD and SWAIE: Semantic Web, Linked Open Data and Information Extraction
2012
A Generalised Hybrid Architecture for NLP
Alistair Willis | Hui Yang | Anne de Roeck
Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data
2010
From XML to XML: The Why and How of Making the Biodiversity Literature Accessible to Researchers
Alistair Willis | David King | David Morse | Anton Dil | Chris Lyal | Dave Roberts
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
We present the ABLE document collection, which consists of a set of annotated volumes of the Bulletin of the British Museum (Natural History). These were developed during our ongoing work on automating the markup of scanned copies of the biodiversity literature. Such automation is required if historic literature is to be used to inform contemporary issues in biodiversity research. We consider an enhanced TEI XML markup language, which is used as an intermediate stage in translating from the initial XML obtained from Optical Character Recognition to taXMLit, the target annotation schema. The intermediate representation allows additional information from external sources, such as a taxonomic thesaurus, to be incorporated before the final translation into taXMLit. We give an overview of the project workflow in automating the markup process, and consider what extensions to existing markup schemas will be required to best support working taxonomists. Finally, we discuss some of the particular issues encountered in converting between different XML formats.
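As a flavour of the XML-to-XML translation involved, the sketch below parses a toy OCR-style fragment, enriches it from a stub lookup, and serialises into a made-up target vocabulary. The element names, attributes and thesaurus are all hypothetical, standing in for the real TEI and taXMLit schemas and the taxonomic thesaurus.

```python
from lxml import etree

# A toy fragment standing in for OCR output; element names are invented.
ocr = etree.fromstring(
    "<page><line>Agrilus viridis LINNAEUS, 1758</line></page>"
)

# Stub lookup in place of a real taxonomic thesaurus.
thesaurus = {"Agrilus viridis": "urn:lsid:example:12345"}

# Translate into a richer target tree, attaching the external identifier.
target = etree.Element("treatment")
for line in ocr.iter("line"):
    binomial = " ".join(line.text.split()[:2])
    taxon = etree.SubElement(target, "taxonName",
                             lsid=thesaurus.get(binomial, "unknown"))
    taxon.text = line.text

print(etree.tostring(target, pretty_print=True).decode())
```

The real pipeline applies the same pattern at scale: parse the source markup, consult external resources, then serialise into the richer target schema.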
A Methodology for Automatic Identification of Nocuous Ambiguity
Hui Yang | Anne de Roeck | Alistair Willis | Bashar Nuseibeh
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
1999
Two Accounts of Scope Availability and Semantic Underspecification
Alistair Willis | Suresh Manandhar
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics