2024
MultiPICo: Multilingual Perspectivist Irony Corpus
Silvia Casola | Simona Frenda | Soda Lo | Erhan Sezerer | Antonio Uva | Valerio Basile | Cristina Bosco | Alessandro Pedrani | Chiara Rubagotti | Viviana Patti | Davide Bernardi
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recently, several scholars have contributed to the growth of a new theoretical framework in NLP called perspectivism. This approach aims to leverage data annotated by different individuals to model the diverse perspectives that affect their opinions on subjective phenomena such as irony. In this context, we propose MultiPICo, a multilingual perspectivist corpus of ironic short conversations in different languages and linguistic varieties extracted from Twitter and Reddit. The corpus includes sociodemographic information about its annotators. Our analysis of the annotated corpus shows how different demographic cohorts may significantly disagree in their annotation of irony and how certain cultural factors influence the perception of the phenomenon and the agreement on the annotation. Moreover, we show how disaggregated annotations and rich annotator metadata can be exploited to benchmark the ability of large language models to recognize irony, their positionality with respect to sociodemographic groups, and the efficacy of perspective-taking prompting for irony detection in multiple languages.
2023
Confidence-based Ensembling of Perspective-aware Models
Silvia Casola | Soda Marem Lo | Valerio Basile | Simona Frenda | Alessandra Cignarella | Viviana Patti | Cristina Bosco
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Research in the field of NLP has recently focused on the variability that people show in selecting labels when performing an annotation task. Exploiting disagreements in annotations has been shown to offer advantages for accurate modelling and fair evaluation. In this paper, we propose a strongly perspectivist model for supervised classification of natural language utterances. Our approach combines the predictions of several perspective-aware models, using key information about their individual confidence to capture the subjectivity encoded in the annotation of linguistic phenomena. We validate our method through experiments on two case studies, irony and hate speech detection, in in-domain and cross-domain settings. The results show that confidence-based ensembling of perspective-aware models is beneficial for classification performance in all scenarios. In addition, we demonstrate the effectiveness of our method with perspectives automatically extracted from annotations when the annotators’ metadata are not available.
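The abstract does not spell out the exact combination scheme, but the core idea of confidence-based ensembling can be sketched as a confidence-weighted average of per-perspective predictions. The function below is a minimal illustration under that assumption; the three model outputs and the binary irony labels are invented for the example.

```python
import numpy as np

def confidence_ensemble(probs):
    """Combine predictions from several perspective-aware models,
    weighting each model's vote by its own confidence
    (here taken as its maximum class probability).

    probs: list of per-model probability vectors over the labels.
    Returns the ensemble label and the combined distribution."""
    probs = np.asarray(probs)            # shape: (n_models, n_labels)
    confidence = probs.max(axis=1)       # each model's confidence
    weights = confidence / confidence.sum()
    combined = weights @ probs           # confidence-weighted average
    return int(combined.argmax()), combined

# Three hypothetical perspective-aware models scoring one utterance
# for the labels {0: not ironic, 1: ironic}:
label, combined = confidence_ensemble([
    [0.9, 0.1],   # model A: confidently "not ironic"
    [0.4, 0.6],   # model B: weakly "ironic"
    [0.3, 0.7],   # model C: moderately "ironic"
])
# The single confident model outweighs the two uncertain ones,
# so the ensemble predicts label 0 despite the 2-vs-1 majority.
```

Note that a plain majority vote would predict "ironic" here; weighting by confidence is what lets a highly certain perspective override two uncertain ones.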
2022
What’s in a (dataset’s) name? The case of BigPatent
Silvia Casola | Alberto Lavelli | Horacio Saggion
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Sharing datasets and benchmarks has been crucial for rapidly improving Natural Language Processing models and systems. Documenting datasets’ characteristics (and any modification introduced over time) is equally important to avoid confusion and make comparisons reliable. Here, we describe the case of BigPatent, a dataset for patent summarization that exists in at least two rather different versions under the same name. While previous literature has not clearly distinguished between the versions, their differences do not lie only at a surface level but also change the dataset’s core nature and, thus, the complexity of the summarization task. While this paper describes a specific case, we aim to shed light on new challenges that might emerge in resource sharing and advocate for comprehensive documentation of datasets and models.
Exploring the limits of a base BART for multi-document summarization in the medical domain
Ishmael Obonyo | Silvia Casola | Horacio Saggion
Proceedings of the Third Workshop on Scholarly Document Processing
This paper describes our participation in the Multi-document Summarization for Literature Review (MSLR) Shared Task, in which we explore summarization models to create an automatic review of scientific results. Rather than maximizing the metrics using computationally expensive models, we placed ourselves in a situation of scarce computational resources and explored the limits of a base sequence-to-sequence model (thus with a limited input length) on the task. Although we explore methods to feed the abstractive model with salient sentences only (using a first extractive step), we find that the results still need improvement.
2020
FBK@SMM4H2020: RoBERTa for Detecting Medications on Twitter
Silvia Casola | Alberto Lavelli
Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task
This paper describes a classifier for tweets that mention medications or supplements, based on a pretrained transformer. We developed the system for our participation in Subtask 1 of the Social Media Mining for Health Applications workshop, which featured an extremely unbalanced dataset. The model showed promising results, with an F1 of 0.8 (task mean: 0.66).
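For reference, the F1 reported above is the harmonic mean of precision and recall, which is why it is the metric of choice for an extremely unbalanced dataset: accuracy would be inflated by the majority class. A quick sketch, with the precision/recall values purely illustrative:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall; penalizes a large
    # imbalance between the two more than an arithmetic mean would.
    return 2 * precision * recall / (precision + recall)

# Illustrative values only (the paper reports F1, not P/R separately):
print(f1(0.8, 0.8))    # → 0.8
print(f1(0.95, 0.69))  # ≈ 0.8 with a precision/recall trade-off
```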