2024
Characterizing Human and Zero-Shot GPT-3.5 Object-Similarity Judgments
D McKnight | Alona Fyshe
Findings of the Association for Computational Linguistics: NAACL 2024
Recent advancements in large language models’ (LLMs) capabilities have yielded few-shot, human-comparable performance on a range of tasks. At the same time, researchers expend significant effort and resources gathering human annotations. At some point, LLMs may be able to perform some simple annotation tasks, but studies of LLM annotation accuracy and behavior are sparse. In this paper, we characterize OpenAI’s GPT-3.5’s judgments on a behavioral task for implicit object categorization. We characterize the embedding spaces of models trained on human vs. GPT responses, identifying similarities and differences between them and finding many similar dimensions. We also find that, despite these similar dimensions, augmenting humans’ responses with GPT ones drives model divergence across the dataset sizes tested.
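As a rough illustration of the comparison described above, the sketch below matches dimensions of two embedding spaces by correlation over a shared item set; the random arrays and the correlation-based matching are stand-in assumptions, not the paper’s exact analysis.

```python
# A minimal sketch: dimension-wise comparison of two embedding spaces.
# The arrays below are random placeholders for embeddings of the same
# items from a human-trained and a GPT-trained model.
import numpy as np

rng = np.random.default_rng(0)
human_emb = rng.normal(size=(300, 64))  # items x dimensions (human-trained)
gpt_emb = rng.normal(size=(300, 64))    # items x dimensions (GPT-trained)

# Correlate every human dimension with every GPT dimension across items,
# then report each human dimension's best-matching GPT dimension.
corr = np.corrcoef(human_emb.T, gpt_emb.T)[:64, 64:]
best_r = np.abs(corr).max(axis=1)
print(f"mean best-match |r| across dimensions: {best_r.mean():.3f}")
```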
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models
Saeed Najafi | Alona Fyshe
Findings of the Association for Computational Linguistics: ACL 2024
Pre-trained Language Models (PLMs) can be accurately fine-tuned for downstream text processing tasks. Recently, researchers have introduced several parameter-efficient fine-tuning methods that optimize input prompts or adjust a small number of model parameters (e.g., LoRA). In this study, we explore the impact of altering the input text of the original task in conjunction with parameter-efficient fine-tuning methods. To most effectively rewrite the input text, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood objective. Using six few-shot text classification datasets, we show that enriching data with paraphrases at train and test time enhances performance beyond what can be achieved with parameter-efficient fine-tuning alone. The code used for our experiments can be found at https://github.com/SaeedNajafi/RIFF.
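As a sketch of the training signal mentioned above, the snippet below implements a generic maximum-marginal-likelihood loss over a set of sampled paraphrases; the tensor shapes and setup are illustrative assumptions rather than the RIFF code.

```python
# A minimal sketch of a Maximum-Marginal Likelihood (MML) objective over
# sampled paraphrases; shapes and names are placeholders.
import torch

def mml_loss(downstream_logp: torch.Tensor) -> torch.Tensor:
    """downstream_logp: (batch, num_paraphrases) = log p(label | paraphrase_i).

    Marginalizes over paraphrases: loss = -log mean_i p(label | z_i).
    """
    num_z = downstream_logp.size(1)
    marginal = torch.logsumexp(downstream_logp, dim=1) - torch.log(
        torch.tensor(float(num_z))
    )
    return -marginal.mean()

# Example: 2 inputs, 4 sampled paraphrases each.
logp = torch.log(torch.rand(2, 4))
print(mml_loss(logp))
```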
2023
Language and Mental Health: Measures of Emotion Dynamics from Text as Linguistic Biosocial Markers
Daniela Teodorescu | Tiffany Cheng | Alona Fyshe | Saif Mohammad
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Research in psychopathology has shown that, at an aggregate level, the patterns of emotional change over time—emotion dynamics—are indicators of one’s mental health. One’s patterns of emotion change have traditionally been determined through self-reports of emotions; however, there are known issues with accuracy, bias, and convenience. Recent approaches to determining emotion dynamics from one’s everyday utterances address many of these concerns, but it is not yet known whether these measures of utterance emotion dynamics (UED) correlate with mental health diagnoses. Here, for the first time, we study the relationship between tweet emotion dynamics and mental health disorders. We find that each of the UED metrics studied varied by the user’s self-disclosed diagnosis. For example, average valence was significantly higher (i.e., more positive text) in the control group compared to users with ADHD, MDD, and PTSD. Valence variability was significantly lower in the control group compared to ADHD, depression, bipolar disorder, MDD, PTSD, and OCD, but not PPD. Rise and recovery rates of valence also exhibited significant differences from the control. This work provides important early evidence for how linguistic cues pertaining to emotion dynamics can play a crucial role as biosocial markers for mental illnesses and aid in the understanding, diagnosis, and management of mental health disorders.
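To make the UED metrics above concrete, here is a simplified sketch computing average valence, variability, and rise/recovery-style rates from a per-utterance valence series; these definitions are simplified assumptions, not the exact UED formulation.

```python
# A minimal sketch of UED-style metrics over a valence time series.
# The series and the rise/recovery definitions are simplified placeholders.
import numpy as np

valence = np.array([0.1, 0.3, 0.6, 0.4, 0.2, 0.5, 0.7, 0.3])  # per-utterance
home_base = valence.mean()

avg_valence = valence.mean()
variability = valence.std()

steps = np.diff(valence)
# Steps that move away from the home base count toward the rise rate;
# steps that move back toward it count toward the recovery rate.
moving_away = np.abs(valence[1:] - home_base) > np.abs(valence[:-1] - home_base)
rise_rate = np.abs(steps[moving_away]).mean()
recovery_rate = np.abs(steps[~moving_away]).mean()
print(f"mean={avg_valence:.2f} sd={variability:.2f} "
      f"rise={rise_rate:.2f} recovery={recovery_rate:.2f}")
```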
Weakly-Supervised Questions for Zero-Shot Relation Extraction
Saeed Najafi | Alona Fyshe
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Zero-Shot Relation Extraction (ZRE) is the task of Relation Extraction where the training and test sets have no shared relation types. This very challenging domain is a good test of a model’s ability to generalize. Previous approaches to ZRE reframed relation extraction as Question Answering (QA), allowing for the use of pre-trained QA models. However, this method required manually creating gold question templates for each new relation. Here, we do away with these gold templates and instead learn a model that can generate questions for unseen relations. Our technique can successfully translate relation descriptions into relevant questions, which are then leveraged to generate the correct tail entity. On tail entity extraction, we outperform the previous state-of-the-art by more than 16 F1 points without using gold question templates. On the RE-QA dataset where no previous baseline for relation extraction exists, our proposed algorithm comes within 0.7 F1 points of a system that uses gold question templates. Our model also outperforms the state-of-the-art ZRE baselines on the FewRel and WikiZSL datasets, showing that QA models no longer need template questions to match the performance of models specifically tailored to the ZRE task. Our implementation is available at https://github.com/fyshelab/QA-ZRE.
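The sketch below illustrates the QA reframing in its simplest form: generate a question from a relation description, then extract the tail entity with an off-the-shelf QA model. The model choices and prompt are placeholder assumptions, not the trained question generator from the paper.

```python
# A minimal sketch of QA-reframed relation extraction: relation description
# -> generated question -> tail entity from a QA model.
from transformers import pipeline

question_gen = pipeline("text2text-generation", model="google/flan-t5-base")
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Ada Lovelace was born in London in 1815."
relation_desc = "place of birth: the city where the person was born"
head = "Ada Lovelace"

prompt = f"Write a question about {head} for the relation: {relation_desc}"
question = question_gen(prompt, max_new_tokens=32)[0]["generated_text"]
answer = qa(question=question, context=context)
print(question, "->", answer["answer"])
```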
Utterance Emotion Dynamics in Children’s Poems: Emotional Changes Across Age
Daniela Teodorescu | Alona Fyshe | Saif Mohammad
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Emerging psychopathology studies are showing that patterns of changes in emotional state — emotion dynamics — are associated with overall well-being and mental health. More recently, there has been some work in tracking emotion dynamics through one’s utterances, allowing for data to be collected on a larger scale across time and people. However, several questions about how emotion dynamics change with age, especially in children and when determined through children’s writing, remain unanswered. In this work, we use both a lexicon-based and a machine learning-based approach to quantify characteristics of emotion dynamics determined from poems written by children of various ages. We show that both approaches point to similar trends: consistently increasing intensities for some emotions (e.g., anger, fear, joy, sadness, arousal, and dominance) with age and consistently decreasing valence with age. We also find increasing emotional variability, rise rates (i.e., emotional reactivity), and recovery rates (i.e., emotional regulation) with age. These results serve as useful baselines for further research into how the patterns of emotions expressed by children change with age, and their association with mental health.
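For the lexicon-based side of the analysis, a minimal sketch of scoring a poem’s valence with a word-to-valence lookup is below; the tiny lexicon is a made-up placeholder for resources like the NRC VAD lexicon.

```python
# A minimal sketch of lexicon-based valence scoring for a short text.
toy_vad = {"happy": 0.9, "sun": 0.8, "dark": 0.2, "alone": 0.15}  # word -> valence

def poem_valence(text: str) -> float:
    tokens = text.lower().split()
    scores = [toy_vad[t] for t in tokens if t in toy_vad]
    return sum(scores) / len(scores) if scores else float("nan")

print(poem_valence("The sun was happy but I walked alone"))
```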
Better Handling Coreference Resolution in Aspect Level Sentiment Classification by Fine-Tuning Language Models
Dhruv Mullick | Bilal Ghanem | Alona Fyshe
Proceedings of The Sixth Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2023)
2022
Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask
Bilal Ghanem | Lauren Lutz Coleman | Julia Rivard Dexter | Spencer von der Ohe | Alona Fyshe
Findings of the Association for Computational Linguistics: ACL 2022
Reading is integral to everyday life, and yet learning to read is a struggle for many young learners. During lessons, teachers can use comprehension questions to increase engagement, test reading skills, and improve retention. Historically such questions were written by skilled teachers, but recently language models have been used to generate comprehension questions. However, many existing Question Generation (QG) systems focus on generating extractive questions from the text, and have no way to control the type of the generated question. In this paper, we study QG for reading comprehension where inferential questions are critical and extractive techniques cannot be used. We propose a two-step model (HTA-WTA) that takes advantage of previous datasets and can generate questions for a specific targeted comprehension skill. We propose a new reading comprehension dataset that contains questions annotated with story-based reading comprehension skills (SBRCS), allowing for a more complete reader assessment. Across several experiments, our results show that HTA-WTA outperforms multiple strong baselines on this new dataset. We show that the HTA-WTA model tests for strong SBRCS by asking deep inferential questions.
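The sketch below shows one simple way to condition a seq2seq question generator on a target comprehension skill via a control token; the token scheme and model are hypothetical illustrations, not the HTA-WTA architecture.

```python
# A minimal sketch of skill-controlled question generation: prepend a
# target-skill control token to the passage before generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

story = "Maya hid the letter before her brother came home."
skill = "<skill=character_motivation>"  # hypothetical control token
inputs = tok(f"{skill} generate question: {story}", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```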
2020
From Language to Language-ish: How Brain-Like is an LSTM’s Representation of Nonsensical Language Stimuli?
Maryam Hashemzadeh | Greta Kaufeld | Martha White | Andrea E. Martin | Alona Fyshe
Findings of the Association for Computational Linguistics: EMNLP 2020
The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read. However, these decoding results are usually based on the brain’s reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short-term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain’s reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain’s activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
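A minimal sketch of the kind of analysis involved, relating model representations to brain recordings with cross-validated ridge regression, appears below; the arrays are random placeholders for LSTM hidden states and brain features.

```python
# A minimal sketch of mapping LSTM representations to brain activity with
# cross-validated ridge regression; the data here are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
lstm_states = rng.normal(size=(200, 128))  # stimuli x hidden units
brain = rng.normal(size=200)               # stimuli x one sensor/feature

scores = cross_val_score(Ridge(alpha=1.0), lstm_states, brain, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```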
2018
The Emergence of Semantics in Neural Network Representations of Visual Information
Dhanush Dharmaretnam | Alona Fyshe
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Word vector models learn about semantics through corpora. Convolutional Neural Networks (CNNs) can learn about semantics through images. At the most abstract level, some of the information in these models must be shared, as they model the same real-world phenomena. Here we employ techniques previously used to detect semantic representations in the human brain to detect semantic representations in CNNs. We show the accumulation of semantic information in the layers of the CNN, and discover that, for misclassified images, the correct class can be recovered in intermediate layers of a CNN.
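Below is a rough sketch of one such detection technique: fitting a linear probe from a CNN layer’s activations to word vectors and scoring held-out predictions by cosine similarity. The data and shapes are placeholder assumptions, not the paper’s setup.

```python
# A minimal sketch of probing a CNN layer for semantics with a linear map
# from layer activations to class word vectors; data are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
layer_acts = rng.normal(size=(500, 256))  # images x CNN layer units
word_vecs = rng.normal(size=(500, 300))   # matching class word embeddings

probe = Ridge(alpha=10.0).fit(layer_acts[:400], word_vecs[:400])
pred = probe.predict(layer_acts[400:])
# Score: cosine similarity between predicted and true class vectors.
cos = (pred * word_vecs[400:]).sum(1) / (
    np.linalg.norm(pred, axis=1) * np.linalg.norm(word_vecs[400:], axis=1)
)
print(f"mean cosine similarity on held-out images: {cos.mean():.3f}")
```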
Social and Emotional Correlates of Capitalization on Twitter
Sophia Chan | Alona Fyshe
Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media
Social media text is replete with unusual capitalization patterns. We posit that capitalizing a token like THIS performs two expressive functions: it marks a person socially, and marks certain parts of an utterance as more salient than others. Focusing on gender and sentiment, we illustrate using a corpus of tweets that capitalization appears in more negative than positive contexts, and is used more by females compared to males. Yet we find that both genders use capitalization in a similar way when expressing sentiment.
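A minimal sketch of the underlying measurement, flagging fully capitalized tokens so their rates can be aggregated by sentiment or author group, is shown below; the regex and toy tweets are illustrative.

```python
# A minimal sketch of detecting all-caps tokens in tweets.
import re

CAPS = re.compile(r"\b[A-Z]{2,}\b")  # tokens of 2+ uppercase letters

tweets = ["I can NOT believe this", "what a lovely day", "THIS is the worst"]
for t in tweets:
    print(t, "->", CAPS.findall(t))
```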
Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models
Avery Hiebert | Cole Peterson | Alona Fyshe | Nishant Mehta
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
While Long Short-Term Memory networks (LSTMs) and other forms of recurrent neural network have been successfully applied to language modeling on a character level, the hidden state dynamics of these models can be difficult to interpret. We investigate the hidden states of such a model by using the HDBSCAN clustering algorithm to identify points in the text at which the hidden state is similar. Focusing on whitespace characters prior to the beginning of a word reveals interpretable clusters that offer insight into how the LSTM may combine contextual and character-level information to identify parts of speech. We also introduce a method for deriving word vectors from the hidden state representation in order to investigate the word-level knowledge of the model. These word vectors encode meaningful semantic information even for words that appear only once in the training text.
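The clustering step can be sketched directly with the hdbscan package, as below; the hidden states here are random placeholders rather than states from a trained character-level LSTM.

```python
# A minimal sketch of clustering RNN hidden states with HDBSCAN to find
# recurring states (e.g., at whitespace positions); data are placeholders.
import numpy as np
import hdbscan

rng = np.random.default_rng(2)
hidden_states = rng.normal(size=(1000, 64))  # timesteps x hidden units

clusterer = hdbscan.HDBSCAN(min_cluster_size=15)
labels = clusterer.fit_predict(hidden_states)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"found {n_clusters} clusters ({(labels == -1).sum()} noise points)")
```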
2017
Ensemble Methods for Native Language Identification
Sophia Chan | Maryam Honari Jahromi | Benjamin Benetti | Aazim Lakhani | Alona Fyshe
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Our team—Uvic-NLP—explored and evaluated a variety of lexical features for Native Language Identification (NLI) within the framework of ensemble methods. Using a subset of the highest performing features, we train Support Vector Machines (SVM) and Fully Connected Neural Networks (FCNN) as base classifiers, and test different methods for combining their outputs. Restricting our scope to the closed essay track in the NLI Shared Task 2017, we find that our best SVM ensemble achieves an F1 score of 0.8730 on the test set.
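As an illustration of one simple combination scheme, the sketch below averages class probabilities from an SVM and a small feed-forward network; the data and models are placeholders, not the shared-task features.

```python
# A minimal sketch of a probability-averaging ensemble of an SVM and a
# fully connected network; data and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
svm = SVC(probability=True).fit(X[:200], y[:200])
fcnn = MLPClassifier(max_iter=500, random_state=0).fit(X[:200], y[:200])

avg_proba = (svm.predict_proba(X[200:]) + fcnn.predict_proba(X[200:])) / 2
preds = avg_proba.argmax(axis=1)
print(f"ensemble accuracy: {(preds == y[200:]).mean():.3f}")
```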
2016
Poet Admits // Mute Cypher: Beam Search to find Mutually Enciphering Poetic Texts
Cole Peterson | Alona Fyshe
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
BrainBench: A Brain-Image Test Suite for Distributional Semantic Models
Haoyan Xu | Brian Murphy | Alona Fyshe
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
2015
A Compositional and Interpretable Semantic Space
Alona Fyshe | Leila Wehbe | Partha P. Talukdar | Brian Murphy | Tom M. Mitchell
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2014
Interpretable Semantic Vectors from a Joint Model of Brain- and Text- Based Meaning
Alona Fyshe | Partha P. Talukdar | Brian Murphy | Tom M. Mitchell
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2013
Documents and Dependencies: an Exploration of Vector Space Models for Semantic Composition
Alona Fyshe | Brian Murphy | Partha Talukdar | Tom Mitchell
Proceedings of the Seventeenth Conference on Computational Natural Language Learning
2006
Term Generalization and Synonym Resolution for Biological Abstracts: Using the Gene Ontology for Subcellular Localization Prediction
Alona Fyshe | Duane Szafron
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology