Zixiaofan Yang
2021
CHoRaL: Collecting Humor Reaction Labels from Millions of Social Media Users
Zixiaofan Yang | Shayan Hooshmand | Julia Hirschberg
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Humor detection has gained attention in recent years due to the desire to understand user-generated content with figurative language. However, substantial individual and cultural differences in humor perception make it very difficult to collect a large-scale humor dataset with reliable humor labels. We propose CHoRaL, a framework to generate perceived humor labels on Facebook posts, using the naturally available user reactions to these posts with no manual annotation needed. CHoRaL provides both binary labels and continuous scores of humor and non-humor. We present the largest dataset to date with labeled humor on 785K posts related to COVID-19. Additionally, we analyze the expression of COVID-related humor in social media by extracting lexico-semantic and affective features from the posts, and build humor detection models with performance similar to humans. CHoRaL enables the development of large-scale humor detection models on any topic and opens a new path to the study of humor on social media.
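The abstract states that CHoRaL derives humor labels and continuous scores from naturally occurring Facebook reactions, but does not spell out the scoring formula. A minimal illustrative sketch, assuming (hypothetically) that the score is the share of "Haha" reactions and that binary labels come from thresholds on that score; the function names and thresholds are assumptions, not the paper's method:

```python
# Illustrative sketch only: the exact CHoRaL scoring and labeling procedure is
# not given in the abstract; the ratio-based score and thresholds are assumptions.

def humor_score(reactions: dict) -> float:
    """Map Facebook reaction counts to a continuous humor score in [0, 1]."""
    total = sum(reactions.values())
    if total == 0:
        return 0.0
    return reactions.get("haha", 0) / total

def humor_label(reactions: dict,
                humor_threshold: float = 0.5,
                nonhumor_threshold: float = 0.05):
    """Assign a binary humor / non-humor label, leaving ambiguous posts unlabeled."""
    score = humor_score(reactions)
    if score >= humor_threshold:
        return 1          # perceived as humorous
    if score <= nonhumor_threshold:
        return 0          # perceived as non-humorous
    return None           # ambiguous: no reliable label

# Example usage on a hypothetical post's reaction counts
post_reactions = {"like": 120, "haha": 240, "love": 10, "sad": 2}
print(humor_score(post_reactions), humor_label(post_reactions))
```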
2020
Improving Text-to-Text Pre-trained Models for the Graph-to-Text Task
Zixiaofan Yang | Arash Einolghozati | Hakan Inan | Keith Diedrick | Angela Fan | Pinar Donmez | Sonal Gupta
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
Converting a knowledge graph or sub-graph to natural text is useful when answering questions based on a knowledge base. High-capacity language models pre-trained on large-scale text corpora have recently been shown to be powerful when fine-tuned for the knowledge-graph-to-text (KG-to-text) task. In this paper, we propose two classes of methods to improve such pre-trained models for this task. First, we improve the structure awareness of the model by organizing the input as well as learning optimal ordering via multitask learning. Second, we bridge the domain gap between text-to-text and KG-to-text tasks via a second-phase KG-to-text pre-training on similar datasets and extra lexicalization supervision to make the input more similar to natural text. We demonstrate the efficacy of our methods on the popular WebNLG dataset. Our best model achieves an almost 3 point BLEU improvement on a strong baseline while lowering the relative slot-error-rate by around 35%. We also validate our results via human evaluation.
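The abstract describes fine-tuning a pre-trained text-to-text model on linearized knowledge sub-graphs, with input organization as one lever for improvement. A minimal sketch of what such a linearization can look like; the `<H>`/`<R>`/`<T>` markers and the fixed triple ordering below are assumptions for illustration, not the exact input format used in the paper:

```python
# Illustrative sketch: flattening a KG sub-graph into a single string that a
# text-to-text model could take as input. Special tokens and ordering are assumed.

triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
    ("Apollo_12", "operator", "NASA"),
]

def linearize(triples):
    """Serialize (subject, relation, object) triples into one input string."""
    parts = []
    for subj, rel, obj in triples:
        parts.append(f"<H> {subj.replace('_', ' ')} "
                     f"<R> {rel.replace('_', ' ')} "
                     f"<T> {obj.replace('_', ' ')}")
    return " ".join(parts)

print(linearize(triples))
# <H> Alan Bean <R> occupation <T> Test pilot <H> Alan Bean <R> mission <T> Apollo 12 ...
```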
Co-authors
- Shayan Hooshmand 1
- Julia Hirschberg 1
- Arash Einolghozati 1
- Hakan Inan 1
- Keith Diedrick 1