2024
Graph Guided Question Answer Generation for Procedural Question-Answering
Hai Pham | Isma Hadji | Xinnuo Xu | Ziedune Degutyte | Jay Rainey | Evangelos Kazakos | Afsaneh Fazly | Georgios Tzimiropoulos | Brais Martinez
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper, we focus on task-specific question answering (QA). To this end, we introduce a method for generating exhaustive and high-quality training data, which allows us to train compact (e.g., able to run on a mobile device), task-specific QA models that are competitive against GPT variants. The key technological enabler is a novel mechanism for automatic question-answer generation from procedural text, which can ingest large amounts of textual instructions and produce exhaustive in-domain QA training data. While current QA data generation methods can produce well-formed and varied data, their non-exhaustive nature is sub-optimal for training a QA model. In contrast, we leverage the highly structured nature of procedural text and represent each step and the overall flow of the procedure as graphs. We then condition on graph nodes to automatically generate QA pairs in an exhaustive and controllable manner. Comprehensive evaluations of our method show that: 1) small models trained with our data achieve excellent performance on the target QA task, even exceeding that of GPT-3 and ChatGPT despite being several orders of magnitude smaller; and 2) semantic coverage is the key indicator of downstream QA performance. Crucially, while large language models excel at syntactic diversity, this does not necessarily result in improvements in the end QA model. In contrast, the higher semantic coverage provided by our method is critical for QA performance.
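To make the graph-conditioned generation idea concrete, here is a minimal sketch. The `StepNode` structure and the question templates are illustrative assumptions, not the paper's actual graph schema or generator; the point is that enumerating every node attribute is what yields exhaustive coverage.

```python
# Illustrative sketch only: the paper's step/flow graphs and generator are
# richer than this toy version.
from dataclasses import dataclass, field

@dataclass
class StepNode:
    """One step of a procedure, with the entities it mentions."""
    step_id: int
    action: str                                      # e.g., "whisk"
    objects: list = field(default_factory=list)      # e.g., ["eggs", "sugar"]
    next_steps: list = field(default_factory=list)   # ids of following steps

def generate_qa_pairs(nodes):
    """Emit QA pairs by conditioning on every node attribute.

    Enumerating all actions, objects, and flow edges (rather than sampling
    free-form questions) is what makes the coverage exhaustive.
    """
    by_id = {n.step_id: n for n in nodes}
    for n in nodes:
        yield (f"What do you do in step {n.step_id}?", n.action)
        for obj in n.objects:
            yield (f"Which ingredient is used in step {n.step_id}?", obj)
        for nxt in n.next_steps:
            yield (f"What comes after step {n.step_id}?", by_id[nxt].action)

recipe = [StepNode(1, "whisk", ["eggs", "sugar"], [2]),
          StepNode(2, "fold in the flour", ["flour"], [])]
for q, a in generate_qa_pairs(recipe):
    print(q, "->", a)
```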
End-to-end Parsing of Procedural Text into Flow Graphs
Dhaivat J. Bhatt | Seyed Ahmad Abdollahpouri Hosseini | Federico Fancellu | Afsaneh Fazly
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
We focus on the problem of parsing procedural text into fine-grained flow graphs that encode actions and entities, as well as their interactions. Specifically, we focus on parsing cooking recipes, and address a few limitations of existing parsers. Unlike SOTA approaches to flow graph parsing, which work in two separate stages, first identifying actions and entities (tagging) and then encoding their interactions via connecting edges (graph generation), we propose an end-to-end multi-task framework that simultaneously performs tagging and graph generation. In addition, due to the end-to-end nature of our proposed model, we can unify the input representation and, moreover, use compact encoders, resulting in small models with significantly fewer parameters than SOTA models. Another key challenge in training flow graph parsers is the lack of sufficient annotated data, due to the costly nature of the fine-grained annotations. We address this problem by taking advantage of the abundant unlabelled recipes, and show that pre-training on automatically-generated noisy silver annotations (from unlabelled recipes) results in a large improvement in flow graph parsing.
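A hedged sketch of what a multi-task tagger plus edge scorer over a shared compact encoder could look like; the specific architecture below (BiLSTM encoder, bilinear edge scores) is an assumption for illustration, not the paper's exact model.

```python
# Shared encoder feeding two heads trained jointly: token tagging and
# edge scoring. Illustrative architecture, not the paper's model.
import torch
import torch.nn as nn

class FlowGraphParser(nn.Module):
    def __init__(self, vocab_size, num_tags, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * dim, num_tags)   # tagging task
        self.head_mlp = nn.Linear(2 * dim, dim)        # edge-scoring task
        self.dep_mlp = nn.Linear(2 * dim, dim)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))     # (B, T, 2*dim)
        tag_logits = self.tag_head(h)                  # (B, T, num_tags)
        # Bilinear-style edge scores: score[b, i, j] = head_i . dep_j
        edge_scores = self.head_mlp(h) @ self.dep_mlp(h).transpose(1, 2)
        return tag_logits, edge_scores                 # losses summed in training

model = FlowGraphParser(vocab_size=1000, num_tags=7)
tags, edges = model(torch.randint(0, 1000, (2, 12)))   # toy batch
print(tags.shape, edges.shape)                          # (2,12,7) (2,12,12)
```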
2022
Visual Semantic Parsing: From Images to Abstract Meaning Representation
Mohamed Ashraf Abdelsalam | Zhan Shi | Federico Fancellu | Kalliopi Basioti | Dhaivat Bhatt | Vladimir Pavlovic | Afsaneh Fazly
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
The success of scene graphs for visual scene understanding has brought attention to the benefits of abstracting a visual input (e.g., an image) into a structured representation, where entities (people and objects) are nodes connected by edges specifying their relations. Building these representations, however, requires expensive manual annotation in the form of images paired with their scene graphs or frames. Moreover, these formalisms remain limited in the nature of the entities and relations they can capture. In this paper, we propose to leverage a widely-used meaning representation from the field of natural language processing, the Abstract Meaning Representation (AMR), to address these shortcomings. Compared to scene graphs, which largely emphasize spatial relationships, our visual AMR graphs are more linguistically informed, with a focus on higher-level semantic concepts extrapolated from the visual input. Moreover, they allow us to generate meta-AMR graphs that unify the information contained in multiple image descriptions under one representation. Through extensive experimentation and analysis, we demonstrate that we can re-purpose an existing text-to-AMR parser to parse images into AMRs. Our findings point to important future research directions for improved scene understanding.
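A toy illustration of the meta-AMR idea: pool the triples of AMR parses of multiple captions of the same image. It uses the `penman` package to read AMRs; the plain triple union is a stand-in for the paper's actual graph-merging procedure.

```python
# Pool AMR triples across captions of one image. A set union is a crude
# stand-in for real meta-AMR construction; it only shows the data involved.
import penman

captions_amr = [
    "(p / play-01 :ARG0 (c / child) :location (p2 / park))",
    "(r / run-02 :ARG0 (c / child :mod (y / young)))",
]
triples = set()
for s in captions_amr:
    triples.update(penman.decode(s).triples)

for t in sorted(triples):
    print(t)   # the shared concept (c / child) appears once across captions
```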
2021
Dependency parsing with structure preserving embeddings
Ákos Kádár | Lan Xiao | Mete Kemertas | Federico Fancellu | Allan Jepson | Afsaneh Fazly
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Modern neural approaches to dependency parsing are trained to predict a tree structure by jointly learning a contextual representation for tokens in a sentence, as well as a head–dependent scoring function. While this strategy results in high performance, it is difficult to interpret these representations in relation to the geometry of the underlying tree structure. Our work seeks instead to learn interpretable representations by training a parser to explicitly preserve structural properties of a tree. We do so by casting dependency parsing as a tree embedding problem, where we incorporate geometric properties of dependency trees in the form of training losses within a graph-based parser. We provide a thorough evaluation of these geometric losses, showing that a majority of them yield strong tree distance preservation as well as parsing performance on par with a competitive graph-based parser (Qi et al., 2018). Finally, we show where parsing errors lie in terms of tree relationships in order to guide future work.
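One example of the kind of geometric loss studied here is distance preservation: push distances between token embeddings to match path distances in the gold tree. The sketch below is one such loss under assumed conventions (heads given as a parent array, root marked -1); the paper evaluates several variants, of which this is only one.

```python
# A minimal tree-distance-preservation loss: squared embedding distances
# are trained to match path lengths in the gold dependency tree.
import collections
import torch

def tree_distances(heads):
    """All-pairs path lengths in a tree, given each token's head (root = -1)."""
    n = len(heads)
    adj = collections.defaultdict(list)
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = torch.full((n, n), float("inf"))
    for s in range(n):                      # BFS from every node
        dist[s, s] = 0.0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if dist[s, v] == float("inf"):
                        dist[s, v] = dist[s, u] + 1
                        nxt.append(v)
            frontier = nxt
    return dist

def distance_preservation_loss(emb, heads):
    d_tree = tree_distances(heads)            # gold tree path distances
    d_emb = torch.cdist(emb, emb) ** 2        # squared embedding distances
    return ((d_emb - d_tree) ** 2).mean()     # push the two to agree

emb = torch.randn(5, 64, requires_grad=True)  # toy contextual token embeddings
heads = [1, -1, 1, 1, 3]                      # token 1 is the root
print(distance_preservation_loss(emb, heads))
```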
An in-depth look at Euclidean disk embeddings for structure preserving parsing
Federico Fancellu | Lan Xiao | Allan Jepson | Afsaneh Fazly
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Preserving the structural properties of trees or graphs when embedding them into a metric space allows for a high degree of interpretability, and has been shown to be beneficial for downstream tasks (e.g., hypernym detection, natural language inference, multimodal retrieval). However, whereas the majority of prior work looks at using structure-preserving embeddings when encoding a structure given as input, e.g., WordNet (Fellbaum, 1998), there is little exploration of how to use such embeddings when predicting one. We address this gap for two structure generation tasks, namely dependency and semantic parsing. We test the applicability of disk embeddings (Suzuki et al., 2019), which have been proposed for embedding Directed Acyclic Graphs (DAGs) but have not been tested on tasks that generate such structures. Our experimental results show that for both tasks the original disk embedding formulation leads to much worse performance when compared to non-structure-preserving baselines. We propose enhancements to this formulation and show that they almost close the performance gap for dependency parsing. However, the gap remains notable for semantic parsing due to the complexity of meaning representation graphs, suggesting a challenge for generating interpretable semantic parse representations.
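For intuition, disk embeddings represent each node as a disk (center, radius), and a DAG edge u → v is encoded as containment of v's disk in u's. A minimal sketch of the containment test and a hinge-style violation penalty follows; the penalty as a training objective is an illustrative assumption.

```python
# Euclidean disk embeddings in miniature: edges become disk containment.
import numpy as np

def contains(center_u, radius_u, center_v, radius_v):
    """D(v) lies inside D(u) iff ||c_u - c_v|| + r_v <= r_u."""
    return np.linalg.norm(center_u - center_v) + radius_v <= radius_u

def containment_violation(center_u, radius_u, center_v, radius_v):
    """Hinge penalty usable as a training loss: 0 when containment holds."""
    return max(0.0, np.linalg.norm(center_u - center_v) + radius_v - radius_u)

root = (np.zeros(2), 3.0)              # big disk for the DAG root
child = (np.array([1.0, 0.5]), 1.0)    # smaller disk nested inside it
print(contains(*root, *child))                 # True: the edge is encoded
print(containment_violation(*child, *root))    # > 0: the reverse edge is penalized
```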
2020
Accurate polyglot semantic parsing with DAG grammars
Federico Fancellu | Ákos Kádár | Ran Zhang | Afsaneh Fazly
Findings of the Association for Computational Linguistics: EMNLP 2020
Semantic parses are directed acyclic graphs (DAGs), but in practice most parsers treat them as strings or trees, mainly because models that predict graphs are far less understood. This simplification, however, comes at a cost: there is no guarantee that the output is a well-formed graph. Recent work by Fancellu et al. (2019) addressed this problem by proposing a graph-aware sequence model that utilizes a DAG grammar to guide graph generation. We significantly improve upon this work by proposing a simpler architecture as well as more efficient training and inference algorithms that can always guarantee the well-formedness of the generated graphs. Importantly, unlike Fancellu et al., our model does not require language-specific features, and hence can harness the inherent ability of DAG-grammar parsing in multilingual settings. We perform monolingual as well as multilingual experiments on the Parallel Meaning Bank (Abzianidze et al., 2017). Our parser outperforms previous graph-aware models by a large margin, and closes the performance gap between string-based and DAG-grammar parsing.
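The well-formedness guarantee comes from constraining generation with the grammar: at every step, only rules that expand the current frontier nonterminal are legal, so no finished derivation can be a malformed graph. The toy grammar below is invented for illustration and is far simpler than a real DAG grammar.

```python
# Grammar-guided generation in miniature: legality is checked per step,
# so every derivation is well-formed by construction.
GRAMMAR = {
    "S": [("pred", ["ARG"]), ("pred", ["ARG", "ARG"])],  # concept + child slots
    "ARG": [("entity", []), ("pred", ["ARG"])],
}

def legal_rules(frontier_symbol):
    """Mask: only expansions of the current frontier symbol are allowed."""
    return GRAMMAR[frontier_symbol]

def generate(symbol="S", depth=0, max_depth=2):
    # Past max_depth, pick a terminating rule; otherwise a recursive one.
    label, children = legal_rules(symbol)[0 if depth >= max_depth else -1]
    return (label, [generate(c, depth + 1, max_depth) for c in children])

print(generate())   # always a well-formed derivation, never a broken graph
```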
How coherent are neural models of coherence?
Leila Pishdad | Federico Fancellu | Ran Zhang | Afsaneh Fazly
Proceedings of the 28th International Conference on Computational Linguistics
Despite the recent advances in coherence modelling, most such models, including state-of-the-art neural ones, are evaluated on either contrived proxy tasks, such as the standard order discrimination benchmark, or tasks that require special expert annotation. Moreover, most evaluations are conducted on small newswire corpora. To address these shortcomings, in this paper we propose four generic evaluation tasks that draw on different aspects of coherence at both the lexical and document levels, and can be applied to any corpora. In designing these tasks, we aim at capturing coherence-specific properties, such as the correct use of discourse connectives, lexical cohesion, as well as the overall temporal and causal consistency among events and participants in a story. Importantly, our proposed tasks either rely on automatically-generated data, or on data annotated for other purposes, hence alleviating the need for annotation specifically targeted to the task of coherence modelling. We perform experiments with several existing state-of-the-art neural models of coherence on these tasks, across large corpora from different domains, including newswire, dialogue, as well as narrative and instructional text. Our findings point to a strong need for revisiting the common practices in the development and evaluation of coherence models.
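As an example of how such tasks can rely on automatically-generated data, consider a connective-substitution probe: perturb a text by swapping its discourse connective and check whether a coherence model ranks the original higher. The connective list and scorer below are illustrative stand-ins, not the paper's task design.

```python
# Auto-generated coherence probe: original text vs. connective-swapped text.
import random

CONNECTIVES = ["however", "therefore", "meanwhile", "because"]

def connective_perturbation(text, rng=random.Random(0)):
    """Replace the first known connective with a randomly chosen wrong one."""
    for c in CONNECTIVES:
        if c in text:
            wrong = rng.choice([x for x in CONNECTIVES if x != c])
            return text.replace(c, wrong, 1)
    return None  # no connective found: the text yields no test item

def pairwise_accuracy(coherence_score, texts):
    """Fraction of items where the original outranks its perturbation."""
    pairs = [(t, connective_perturbation(t)) for t in texts]
    pairs = [(a, b) for a, b in pairs if b is not None]
    return sum(coherence_score(a) > coherence_score(b) for a, b in pairs) / len(pairs)

texts = ["It rained heavily; therefore the match was cancelled."]
print(pairwise_accuracy(lambda t: t.count("therefore"), texts))  # dummy scorer -> 1.0
```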
2017
Investigating the Opacity of Verb-Noun Multiword Expression Usages in Context
Shiva Taslimipoor | Omid Rohanian | Ruslan Mitkov | Afsaneh Fazly
Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017)
This study investigates the supervised token-based identification of Multiword Expressions (MWEs). This is ongoing research that exploits the information contained in the contexts in which different instances of an expression can occur. This information is used to determine whether a given usage of an expression is literal or idiomatic (i.e., an MWE). Lexical and syntactic context features derived from vector representations are shown to be more effective than traditional statistical measures at identifying tokens of MWEs.
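A hedged sketch of the general recipe with toy data: build context features for a candidate token by averaging word vectors in a window, then train a standard classifier to separate literal from idiomatic usages. The feature set and examples are assumptions for illustration, not the study's exact setup.

```python
# Token-based MWE identification from context-window embedding features.
import numpy as np
from sklearn.linear_model import LogisticRegression

DIM = 4

def context_features(vectors, sent, idx, window=2):
    """Average word vectors in a window around the candidate token."""
    ctx = sent[max(0, idx - window):idx] + sent[idx + 1:idx + 1 + window]
    vecs = [vectors[w] for w in ctx if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=DIM)
           for w in "she took a walk in the park he the book".split()}

# (sentence, index of the candidate verb, label: 1 = idiomatic/MWE, 0 = literal)
data = [("she took a walk in the park".split(), 1, 1),
        ("he took the book".split(), 1, 0)]
X = np.stack([context_features(vectors, s, i) for s, i, _ in data])
y = [label for _, _, label in data]
print(LogisticRegression().fit(X, y).predict(X))
```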
2016
Classifying Out-of-vocabulary Terms in a Domain-Specific Social Media Corpus
SoHyun Park | Afsaneh Fazly | Annie Lee | Brandon Seibel | Wenjie Zi | Paul Cook
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
In this paper we consider the problem of out-of-vocabulary term classification in web forum text from the automotive domain. We develop a set of nine domain- and application-specific categories for out-of-vocabulary terms. We then propose a supervised approach to classify out-of-vocabulary terms according to these categories, drawing on features based on word embeddings, and linguistic knowledge of common properties of out-of-vocabulary terms. We show that the features based on word embeddings are particularly informative for this task. The categories that we predict could serve as a preliminary, automatically-generated source of lexical knowledge about out-of-vocabulary terms. Furthermore, we show that this approach can be adapted to give a semi-automated method for identifying out-of-vocabulary terms of a particular category, automotive named entities, that is of particular interest to us.
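A minimal sketch of this kind of OOV classifier: embedding features plus simple surface cues feed a standard supervised model. The categories, features, and examples below are invented for illustration and do not reproduce the paper's nine-category scheme.

```python
# OOV term classification from embedding + surface features (toy example).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def features(term, vectors, dim=4):
    emb = vectors.get(term, np.zeros(dim))          # embedding of the OOV term
    surface = [term.isupper(), any(c.isdigit() for c in term)]
    return np.concatenate([emb, np.array(surface, dtype=float)])

rng = np.random.default_rng(1)
vectors = {t: rng.normal(size=4) for t in ["sploiler", "abs", "f150"]}

terms = ["sploiler", "ABS", "f150"]                  # automotive-forum flavour
labels = ["misspelling", "abbreviation", "named_entity"]
X = np.stack([features(t, vectors) for t in terms])
clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict([features("F250", vectors)]))      # classify a new OOV term
```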
2014
A Cognitive Model of Semantic Network Learning
Aida Nematzadeh | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Learning Verb Classes in an Incremental Model
Libby Barak | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics
A Usage-Based Model of Early Grammatical Development
Barend Beekhuizen | Rens Bod | Afsaneh Fazly | Suzanne Stevenson | Arie Verhagen
Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics
2013
Acquisition of Desires before Beliefs: A Computational Investigation
Libby Barak | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the Seventeenth Conference on Computational Natural Language Learning
2012
Modeling the Acquisition of Mental State Verbs
Libby Barak | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2012)
A Computational Model of Memory, Attention, and Word Learning
Aida Nematzadeh | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2012)
Unsupervised Disambiguation of Image Captions
Wesley May | Sanja Fidler | Afsaneh Fazly | Sven Dickinson | Suzanne Stevenson
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)
Using Noun Similarity to Adapt an Acceptability Measure for Persian Light Verb Constructions
Shiva Taslimipoor | Afsaneh Fazly | Ali Hamzeh
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Light verb constructions (LVCs), such as take a walk and make a decision, are a common subclass of multiword expressions (MWEs), whose distinct syntactic and semantic properties call for special treatment within a computational system. In particular, LVCs are formed semi-productively: often a semantically-general verb (such as take) combines with a number of semantically-similar nouns to form semantically-related LVCs, as in make a decision/choice/commitment. Nonetheless, there are restrictions as to which verbs combine with which classes of nouns. A proper computational account of LVCs is even more important for languages such as Persian, in which most verbs are of the form of LVCs. Recently, there has been some work on the automatic identification of MWEs (including LVCs) in resource-rich languages, such as English and Dutch. We adapt such existing techniques for the automatic identification of LVCs in Persian, an under-resourced language. Specifically, we extend an existing statistical measure of the acceptability of English LVCs (Fazly et al., 2007) to make explicit use of semantic classes of nouns, and show that such classes are particularly useful for determining the LVC acceptability of new combinations.
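A simplified, PMI-flavored stand-in for a class-based acceptability measure: score a verb+noun combination by pooling association counts over the noun's whole semantic class, so unseen combinations inherit evidence from similar nouns. The counts, classes, and exact formula are illustrative assumptions, not the measure from Fazly et al. (2007).

```python
# Class-based acceptability in miniature: unseen "make a commitment" inherits
# evidence from the CHOICE class, while "take a commitment" does not.
import math
from collections import Counter

# Toy corpus statistics (invented numbers).
pair_counts = Counter({("make", "decision"): 50, ("make", "choice"): 30,
                       ("take", "walk"): 40, ("take", "decision"): 2})
verb_counts = Counter({"make": 200, "take": 180})
noun_class = {"decision": "CHOICE", "choice": "CHOICE",
              "commitment": "CHOICE", "walk": "MOTION"}
TOTAL = 1000  # total verb+noun pair tokens in the toy corpus

def class_pmi(verb, noun):
    """Association of the verb with the noun's whole semantic class."""
    cls_nouns = [n for n, c in noun_class.items() if c == noun_class[noun]]
    joint = sum(pair_counts[(verb, n)] for n in cls_nouns)
    cls_total = sum(f for (v, n), f in pair_counts.items() if n in cls_nouns)
    return math.log((joint / TOTAL) /
                    ((verb_counts[verb] / TOTAL) * (cls_total / TOTAL)))

# "commitment" never occurs in the counts, yet its class supplies evidence:
print(class_pmi("make", "commitment"))  # positive: acceptable LVC
print(class_pmi("take", "commitment"))  # negative: unacceptable combination
```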
2009
Unsupervised Type and Token Identification of Idiomatic Expressions
Afsaneh Fazly | Paul Cook | Suzanne Stevenson
Computational Linguistics, Volume 35, Number 1, March 2009
2008
Fast Mapping in Word Learning: What Probabilities Tell Us
Afra Alishahi | Afsaneh Fazly | Suzanne Stevenson
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning
An Incremental Bayesian Model for Learning Syntactic Categories
Christopher Parisien | Afsaneh Fazly | Suzanne Stevenson
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning
2007
Distinguishing Subtypes of Multiword Expressions Using Linguistically-Motivated Statistical Measures
Afsaneh Fazly | Suzanne Stevenson
Proceedings of the Workshop on A Broader Perspective on Multiword Expressions
Pulling their Weight: Exploiting Syntactic Forms for the Automatic Identification of Idiomatic Expressions in Context
Paul Cook | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the Workshop on A Broader Perspective on Multiword Expressions
2006
Automatically Constructing a Lexicon of Verb Phrase Idiomatic Combinations
Afsaneh Fazly | Suzanne Stevenson
11th Conference of the European Chapter of the Association for Computational Linguistics
2005
Automatically Distinguishing Literal and Figurative Usages of Highly Polysemous Verbs
Afsaneh Fazly | Ryan North | Suzanne Stevenson
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition
Automatic Acquisition of Knowledge About Multiword Predicates
Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 19th Pacific Asia Conference on Language, Information and Computation
2004
Statistical Measures of the Semi-Productivity of Light Verb Constructions
Suzanne Stevenson | Afsaneh Fazly | Ryan North
Proceedings of the Workshop on Multiword Expressions: Integrating Processing
2003
Testing the Efficacy of Part-of-Speech Information in Word Completion
Afsaneh Fazly | Graeme Hirst
Proceedings of the 2003 EACL Workshop on Language Modeling for Text Entry Methods