Yaser Al-Onaizan

Also published as: Yaser Al-onaizan


2022

Label Semantics for Few Shot Named Entity Recognition
Jie Ma | Miguel Ballesteros | Srikanth Doss | Rishita Anubhai | Sunil Mallya | Yaser Al-Onaizan | Dan Roth
Findings of the Association for Computational Linguistics: ACL 2022

We study the problem of few-shot learning for named entity recognition. Specifically, we leverage the semantic information in the names of the labels as a way of giving the model additional signal and enriched priors. We propose a neural architecture that consists of two BERT encoders, one to encode the document and its tokens and another to encode each of the labels in natural language format. Our model learns to match the representations of named entities computed by the first encoder with label representations computed by the second encoder. The label semantics signal is shown to support improved state-of-the-art results on multiple few-shot NER benchmarks and on-par performance on standard benchmarks. Our model is especially effective in low-resource settings.
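
As a rough illustration of the dual-encoder matching described above, the sketch below scores each token representation against every label representation and tags each token with its best match. The toy linear encoders and names such as token_encoder and label_encoder are stand-ins for the paper's two BERT encoders, not its actual code.

    import torch
    import torch.nn as nn

    dim = 16
    num_labels = 3                      # e.g. labels described in natural language
    token_encoder = nn.Linear(8, dim)   # stand-in for the document/token BERT encoder
    label_encoder = nn.Linear(8, dim)   # stand-in for the label BERT encoder

    tokens = torch.randn(5, 8)                       # toy features for 5 tokens
    label_descriptions = torch.randn(num_labels, 8)  # toy features for the label names

    token_reps = token_encoder(tokens)               # (5, dim)
    label_reps = label_encoder(label_descriptions)   # (num_labels, dim)

    # Each token is tagged with the label whose representation it matches best.
    scores = token_reps @ label_reps.T               # (5, num_labels) similarities
    print(scores.argmax(dim=-1))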

2021

Multi-Task Learning and Adapted Knowledge Models for Emotion-Cause Extraction
Elsbeth Turcan | Shuai Wang | Rishita Anubhai | Kasturi Bhattacharjee | Yaser Al-Onaizan | Smaranda Muresan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Exploring Content Selection in Summarization of Novel Chapters
Faisal Ladhak | Bryan Li | Yaser Al-Onaizan | Kathleen McKeown
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries. We present a new metric for aligning reference summary sentences with chapter sentences to create gold extracts and also experiment with different alignment methods. Our experiments demonstrate significant improvement over prior alignment approaches for our task as shown through automatic metrics and a crowd-sourced pyramid analysis.
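
A minimal sketch of gold-extract creation by sentence alignment, assuming a simple token-overlap similarity; the paper's actual metric and alignment methods differ, so the scoring function and example sentences here are illustrative only.

    import re

    def overlap(a, b):
        ta = set(re.findall(r"\w+", a.lower()))
        tb = set(re.findall(r"\w+", b.lower()))
        return len(ta & tb) / max(1, len(ta | tb))

    def align(summary_sents, chapter_sents):
        # For each reference summary sentence, pick the best-matching chapter sentence.
        return [max(chapter_sents, key=lambda c: overlap(s, c)) for s in summary_sents]

    chapter = ["Pip visits Miss Havisham at Satis House.",
               "He meets Estella and plays cards with her."]
    summary = ["Pip goes to Satis House and meets Estella."]
    print(align(summary, chapter))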

Words Aren’t Enough, Their Order Matters: On the Robustness of Grounding Visual Referring Expressions
Arjun Akula | Spandana Gella | Yaser Al-Onaizan | Song-Chun Zhu | Siva Reddy
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image. We critically examine RefCOCOg, a standard benchmark for this task, using a human study and show that 83.7% of test instances do not require reasoning on linguistic structure, i.e., words alone are enough to identify the target object and word order does not matter. To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and one which does not. Additionally, we create an out-of-distribution dataset, Ref-Adv, by asking crowdworkers to perturb in-domain examples such that the target object changes. Using these datasets, we empirically show that existing methods fail to exploit linguistic structure and perform 12% to 23% below the established progress for this task. We also propose two methods, one based on contrastive learning and the other based on multi-task learning, to increase the robustness of ViLBERT, the current state-of-the-art model for this task. Our datasets are publicly available at https://github.com/aws/aws-refcocog-adv.
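
A minimal sketch of the contrastive idea, assuming a margin objective: the grounding score of the original expression for the true region should exceed that of a meaning-changing perturbation. The dot-product scorer below is a toy stand-in for ViLBERT.

    import torch
    import torch.nn.functional as F

    def score(region_feat, expr_feat):
        return (region_feat * expr_feat).sum(-1)   # toy grounding score

    region = torch.randn(8)                        # encoding of the target region
    original = torch.randn(8, requires_grad=True)  # encoding of the original expression
    perturbed = torch.randn(8)                     # encoding of the reordered expression

    margin = 1.0
    loss = F.relu(margin - (score(region, original) - score(region, perturbed)))
    loss.backward()                                # gradients flow to the expression encoding
    print(float(loss))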

Evaluating Robustness to Input Perturbations for Neural Machine Translation
Xing Niu | Prashant Mathur | Georgiana Dinu | Yaser Al-Onaizan
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Neural Machine Translation (NMT) models are sensitive to small perturbations in the input. Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input. This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input. We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the proposed robustness measures. Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used.
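
As a hedged illustration, one way such a measure could be framed is relative degradation: the fractional drop in translation quality when the input is perturbed. The formulation and numbers below are illustrative, not the paper's exact definitions.

    def relative_degradation(quality_clean, quality_noisy):
        # 0.0 means fully robust; larger values mean more quality is lost to noise.
        return 1.0 - quality_noisy / quality_clean

    bleu_clean, bleu_noisy = 34.2, 29.8   # illustrative BLEU scores
    print(f"relative drop: {relative_degradation(bleu_clean, bleu_noisy):.1%}")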

Joint Translation and Unit Conversion for End-to-end Localization
Georgiana Dinu | Prashant Mathur | Marcello Federico | Stanislas Lauly | Yaser Al-Onaizan
Proceedings of the 17th International Conference on Spoken Language Translation

A variety of natural language tasks require processing of textual data which contains a mix of natural language and formal languages such as mathematical expressions. In this paper, we take unit conversions as an example and propose a data augmentation technique which leads to models learning both translation and conversion tasks, as well as how to adequately switch between them, for end-to-end localization.
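
A minimal sketch of the augmentation idea, assuming a toy miles-to-kilometres rule: measurements are rewritten on the target side so that a single model can learn both translation and unit conversion. The regex and conversion rate are illustrative, not the paper's recipe.

    import re

    MILES_TO_KM = 1.60934

    def convert_units(text):
        def repl(match):
            km = float(match.group(1)) * MILES_TO_KM
            return f"{km:.1f} km"
        return re.sub(r"([0-9.]+)\s*miles?", repl, text)

    src = "The town is 5 miles away."
    print((src, convert_units(src)))   # augmented pair teaching the conversion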

Resource-Enhanced Neural Model for Event Argument Extraction
Jie Ma | Shuai Wang | Rishita Anubhai | Miguel Ballesteros | Yaser Al-Onaizan
Findings of the Association for Computational Linguistics: EMNLP 2020

Event argument extraction (EAE) aims to identify the arguments of an event and classify the roles that those arguments play. Despite great efforts made in prior work, there remain many challenges: (1) data scarcity, (2) capturing the long-range dependency, specifically, the connection between an event trigger and a distant event argument, and (3) integrating event trigger information into candidate argument representations. For (1), we explore using unlabeled data. For (2), we use a Transformer that uses dependency parses to guide the attention mechanism. For (3), we propose a trigger-aware sequence encoder with several types of trigger-dependent sequence representations. We also support argument extraction either from text annotated with gold entities or from plain text. Experiments on the English ACE 2005 benchmark show that our approach achieves a new state-of-the-art.
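
For intuition on (3), one simple trigger-dependent sequence representation is to mark the event trigger with special tokens before encoding, so candidate-argument representations are conditioned on the trigger; the marker names below are hypothetical, not the paper's scheme.

    def mark_trigger(tokens, trig_start, trig_end):
        return (tokens[:trig_start] + ["<trig>"]
                + tokens[trig_start:trig_end] + ["</trig>"]
                + tokens[trig_end:])

    sentence = "He was arrested in Houston yesterday".split()
    print(mark_trigger(sentence, 2, 3))   # ['He', 'was', '<trig>', 'arrested', '</trig>', ...]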

Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
Miguel Ballesteros | Rishita Anubhai | Shuai Wang | Nima Pourdamghani | Yogarshi Vyas | Jie Ma | Parminder Bhatia | Kathleen McKeown | Yaser Al-Onaizan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and identify temporal relations (Before, After, Equal, Vague) between them. Given that a key challenge with this task is the scarcity of annotated data, our models rely on pretrained representations (i.e., RoBERTa, BERT, or ELMo), transfer and multi-task learning (leveraging complementary datasets), and self-training techniques. Experiments on the MATRES dataset of English documents establish a new state-of-the-art on this task.
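
A minimal sketch of the pairwise setup, assuming pooled span representations: the two event representations are concatenated and classified into the four relations. The toy linear classifier stands in for models built on the pretrained encoders named above.

    import torch
    import torch.nn as nn

    RELATIONS = ["Before", "After", "Equal", "Vague"]
    dim = 16
    classifier = nn.Linear(2 * dim, len(RELATIONS))

    event1, event2 = torch.randn(dim), torch.randn(dim)   # pooled span representations
    logits = classifier(torch.cat([event1, event2]))
    print(RELATIONS[logits.argmax().item()])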

To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging
Kasturi Bhattacharjee | Miguel Ballesteros | Rishita Anubhai | Smaranda Muresan | Jie Ma | Faisal Ladhak | Yaser Al-Onaizan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Leveraging large amounts of unlabeled data using Transformer-like architectures such as BERT has gained popularity in recent times, owing to their effectiveness in learning general representations that can then be further fine-tuned for downstream tasks with much success. However, training these models can be costly both from an economic and an environmental standpoint. In this work, we investigate how to effectively use unlabeled data by exploring the task-specific semi-supervised approach Cross-View Training (CVT) and comparing it with the task-agnostic BERT in multiple settings that include domain- and task-relevant English data. CVT uses a much lighter model architecture, and we show that it achieves similar performance to BERT on a set of sequence tagging tasks, with a smaller financial and environmental impact.

2019

Training Neural Machine Translation to Apply Terminology Constraints
Georgiana Dinu | Prashant Mathur | Marcello Federico | Yaser Al-Onaizan
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

This paper proposes a novel method to inject custom terminology into neural machine translation at run time. Previous work has mainly proposed modifications to the decoding algorithm in order to constrain the output to include run-time-provided target terms. While effective, these constrained decoding methods add significant computational overhead to the inference step and, as we show in this paper, can be brittle when tested in realistic conditions. Instead, we approach the problem by training a neural MT system to learn how to use custom terminology when provided with the input. Comparative experiments show that our method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding.
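
A minimal sketch of train-time terminology injection, assuming a simple inline tagging scheme: each provided target term is placed next to its source term so the model can learn to copy it at run time. The <term> markers are illustrative.

    def inject_terms(src_tokens, terminology):
        out = []
        for tok in src_tokens:
            out.append(tok)
            if tok in terminology:
                out += ["<term>", terminology[tok], "</term>"]
        return out

    src = "the patient shows symptoms of pneumonia".split()
    print(" ".join(inject_terms(src, {"pneumonia": "Lungenentzündung"})))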

Span-Level Model for Relation Extraction
Kalpit Dixit | Yaser Al-Onaizan
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions. Recent approaches for this span-level task have been token-level models which have inherent limitations. They cannot easily define and implement span-level features, cannot model overlapping entity mentions and have cascading errors due to the use of sequential decoding. To address these concerns, we present a model which directly models all possible spans and performs joint entity mention detection and relation extraction. We report a new state-of-the-art performance of 62.83 F1 (prev best was 60.49) on the ACE2005 dataset.
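
To make the span enumeration concrete, the sketch below lists every candidate span up to a maximum width, which is what lets a span-level model represent overlapping mentions without sequential decoding; the width limit is an assumed hyperparameter.

    def enumerate_spans(n_tokens, max_width=4):
        return [(i, j) for i in range(n_tokens)
                       for j in range(i + 1, min(i + 1 + max_width, n_tokens + 1))]

    spans = enumerate_spans(6)
    print(len(spans), spans[:5])   # all candidate (start, end) mention spans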

Robustness to Capitalization Errors in Named Entity Recognition
Sravan Bodapati | Hyokun Yun | Yaser Al-Onaizan
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise. Existing methods to improve robustness to the noise completely discard given orthographic information, which significantly degrades their performance on well-formed text. We propose a simple alternative approach based on data augmentation, which allows the model to learn to utilize or ignore orthographic information depending on its usefulness in the context. It achieves competitive robustness to capitalization errors while making negligible compromise to its performance on well-formed text and significantly improving generalization power on noisy user-generated text. Our experiments clearly and consistently validate our claim across different types of machine learning models, languages, and dataset sizes.
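
A minimal sketch of the data-augmentation recipe, assuming a simple lowercasing noise model: training on a mix of original and case-noised copies lets the tagger learn when capitalization is trustworthy. The noising probability and noise type are illustrative.

    import random

    def augment_case(tokens, p_lower=0.5):
        if random.random() < p_lower:
            return [t.lower() for t in tokens]   # simulate an all-lowercase input
        return tokens                            # keep the well-formed original

    random.seed(0)
    print(augment_case("Barack Obama visited Paris".split()))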

Neural Word Decomposition Models for Abusive Language Detection
Sravan Bodapati | Spandana Gella | Kasturi Bhattacharjee | Yaser Al-Onaizan
Proceedings of the Third Workshop on Abusive Language Online

The text we see in social media suffers from many undesired characteristics, such as hate speech, abusive language, and insults. The nature of this text is also very different from the traditional text we see in news, with many obfuscated words and intentional typos. This poses several robustness challenges to many natural language processing (NLP) techniques developed for traditional text. Many recently proposed techniques, such as character encoding models, subword models, and byte-pair encoding to extract subwords, can aid in dealing with a few of these nuances. In our work, we analyze the effectiveness of each of the above techniques and compare and contrast various word decomposition techniques when used in combination with others. We experiment with recent advances in fine-tuning pretrained language models and demonstrate their robustness to domain shift. We also show that our approaches achieve state-of-the-art performance on the Wikipedia attack and toxicity datasets and on a Twitter hate speech dataset.
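
As a toy illustration of word decomposition, the sketch below breaks a word into character n-grams so that an obfuscated spelling still shares subword units with its clean form; byte-pair encoding, by contrast, learns its merge operations from data.

    def char_ngrams(word, n=3):
        padded = f"<{word}>"
        return [padded[i:i + n] for i in range(len(padded) - n + 1)]

    # The obfuscated "id!ot" still shares the units '<id' and 'ot>' with "idiot".
    print(char_ngrams("id!ot"))
    print(char_ngrams("idiot"))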

2017

AMR Parsing using Stack-LSTMs
Miguel Ballesteros | Yaser Al-Onaizan
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We present a transition-based AMR parser that directly generates AMR parses from plain text. We use Stack-LSTMs to represent our parser state and make decisions greedily. In our experiments, we show that our parser achieves very competitive scores on English using only AMR training data. Adding additional information, such as POS tags and dependency trees, improves the results further.
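
For intuition, greedy transition-based parsing reduces to a loop in which a learned policy picks one state-updating action per step; in this minimal sketch a stub policy replaces the Stack-LSTM, and the SHIFT/REDUCE actions are a simplified stand-in for the full AMR transition set.

    def parse(tokens, choose_action):
        stack, buffer, arcs = [], list(tokens), []
        while buffer or len(stack) > 1:
            action = choose_action(stack, buffer)
            if action == "SHIFT" and buffer:
                stack.append(buffer.pop(0))
            elif action == "REDUCE" and len(stack) > 1:
                child = stack.pop()
                arcs.append((stack[-1], child))   # attach popped node to new stack top
        return arcs

    # Stub policy: shift everything, then reduce; a trained model decides instead.
    policy = lambda stack, buffer: "SHIFT" if buffer else "REDUCE"
    print(parse(["want", "to", "go"], policy))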

Beam Search Strategies for Neural Machine Translation
Markus Freitag | Yaser Al-Onaizan
Proceedings of the First Workshop on Neural Machine Translation

The basic concept in Neural Machine Translation (NMT) is to train a large neural network that maximizes the translation performance on a given parallel corpus. NMT then uses a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left to right while keeping a fixed number of active candidates at each time step. First, this simple search is less adaptive, as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses that are not within the best-scoring candidates, even if their scores are close to the best one. The latter can be avoided by increasing the beam size until no performance improvement can be observed. While this can reach better performance, it has the drawback of slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
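
A minimal sketch of score-dependent pruning in this spirit: hypotheses whose log-probability falls too far below the current best are dropped rather than expanded, so the effective beam size varies per time step. The threshold and beam size are illustrative, not the paper's settings.

    import math

    def prune(candidates, max_beam=5, rel_threshold=math.log(0.3)):
        # candidates: (hypothesis, log-probability) pairs at one time step.
        candidates = sorted(candidates, key=lambda c: c[1], reverse=True)[:max_beam]
        best = candidates[0][1]
        return [c for c in candidates if c[1] >= best + rel_threshold]

    hyps = [("cat", -0.1), ("dog", -0.3), ("the", -2.5), ("a", -4.0)]
    print(prune(hyps))   # low-scoring hypotheses are not expanded further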

2016

Zero-Resource Translation with Multi-Lingual Neural Machine Translation
Orhan Firat | Baskaran Sankaran | Yaser Al-onaizan | Fatos T. Yarman Vural | Kyunghyun Cho
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2014

Improved Sentence-Level Arabic Dialect Classification
Christoph Tillmann | Saab Mansour | Yaser Al-Onaizan
Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects

Automatic dialect classification for statistical machine translation
Saab Mansour | Yaser Al-Onaizan | Graeme Blackwood | Christoph Tillmann
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

The training data for statistical machine translation are gathered from various sources representing a mixture of domains. In this work, we argue that when translating dialects representing varieties of the same language, a manually assigned data source is not a reliable indicator of the dialect. We resort to automatic dialect classification to refine the training corpora according to the different dialects and to build improved dialect-specific systems. A fairly standard classifier for Arabic developed within this work achieves state-of-the-art performance, with classification precision above 90%, making it usefully accurate for our application. The classification of the data is then used to distinguish between the different dialects, split the data accordingly, and utilize the new splits for several adaptation techniques. Performing translation experiments on a large-scale dialectal Arabic-to-English translation task, our results show that the classifier generates better contrast between the dialects and achieves superior translation quality compared to using the original manual corpus splits.
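
A minimal sketch of the corpus-splitting step, with a one-keyword toy classifier standing in for the paper's model: each sentence is routed into a dialect-specific training set.

    def classify_dialect(sentence):
        return "Egyptian" if "مش" in sentence else "MSA"   # toy lexical cue

    corpus = ["هذا الكتاب مفيد", "انا مش فاهم"]
    splits = {}
    for sent in corpus:
        splits.setdefault(classify_dialect(sent), []).append(sent)
    print({dialect: len(sents) for dialect, sents in splits.items()})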

2012

Proceedings of the Demonstration Session at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Aria Haghighi | Yaser Al-Onaizan

2011

Goodness: A Method for Measuring Machine Translation Confidence
Nguyen Bach | Fei Huang | Yaser Al-Onaizan
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2008

Generalizing Local and Non-Local Word-Reordering Patterns for Syntax-Based Machine Translation
Bing Zhao | Yaser Al-onaizan
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2006

Distortion Models for Statistical Machine Translation
Yaser Al-Onaizan | Kishore Papineni
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

2003

TIPS: A Translingual Information Processing System
Yaser Al-Onaizan | Radu Florian | Martin Franz | Hany Hassan | Young-Suk Lee | J. Scott McCarley | Kishore Papineni | Salim Roukos | Jeffrey Sorensen | Christoph Tillmann | Todd Ward | Fei Xia
Companion Volume of the Proceedings of HLT-NAACL 2003 - Demonstrations

2002

Machine Transliteration of Names in Arabic Texts
Yaser Al-Onaizan | Kevin Knight
Proceedings of the ACL-02 Workshop on Computational Approaches to Semitic Languages

Translating Named Entities Using Monolingual and Bilingual Resources
Yaser Al-Onaizan | Kevin Knight
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

1998

Translation with finite-state devices
Kevin Knight | Yaser Al-Onaizan
Proceedings of the Third Conference of the Association for Machine Translation in the Americas: Technical Papers

Statistical models have recently been applied to machine translation with interesting results. Algorithms for processing these models have not received wide circulation, however. By contrast, general finite-state transduction algorithms have been applied in a variety of tasks. This paper gives a finite-state reconstruction of statistical translation and demonstrates the use of standard tools to compute statistically likely translations. Ours is the first translation algorithm for “fertility/permutation” statistical models to be described in replicable detail.
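
As a toy illustration of the cascade view, the sketch below pipelines a dictionary substitution and a trivial permutation, each of which could be realized as a finite-state transducer and composed with standard tools; both steps are illustrative stand-ins for trained models.

    def substitute(tokens, lexicon):
        return [lexicon.get(t, t) for t in tokens]   # word-for-word transduction

    def reorder(tokens):
        return tokens[::-1]   # toy permutation step standing in for a real model

    lexicon = {"ich": "I", "sehe": "see", "dich": "you"}
    print(reorder(substitute("dich sehe ich".split(), lexicon)))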

1996

JAPANGLOSS: using statistics to fill knowledge gaps
Kevin Knight | Yaser Al-Onaizan | Ishwar Chander | Eduard Hovy | Irene Langkilde | Richard Whitney | Kenji Yamada
Conference of the Association for Machine Translation in the Americas