Are LLMs Good Annotators for Discourse-level Event Relation Extraction?

Kangda Wei, Aayush Gautam, Ruihong Huang
Abstract
Large Language Models (LLMs) have demonstrated proficiency in a wide array of natural language processing tasks. However, their effectiveness on discourse-level event relation extraction (ERE) tasks remains unexplored. In this paper, we assess the effectiveness of LLMs in addressing discourse-level ERE tasks characterized by lengthy documents and intricate relations encompassing coreference, temporal, causal, and subevent types. Evaluation is conducted using a commercial model, GPT-3.5, and an open-source model, LLaMA-2. Our study reveals a notable underperformance of LLMs compared to the baseline established through supervised learning. Although Supervised Fine-Tuning (SFT) can improve LLMs' performance, it does not scale well compared to the smaller supervised baseline model. Our quantitative and qualitative analyses show that LLMs have several weaknesses when applied to extracting event relations, including a tendency to fabricate event mentions, and failures to capture transitivity rules among relations, detect long-distance relations, or comprehend contexts with dense event mentions.
Anthology ID:
2024.findings-emnlp.1
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1–19
URL:
https://aclanthology.org/2024.findings-emnlp.1/
DOI:
10.18653/v1/2024.findings-emnlp.1
Cite (ACL):
Kangda Wei, Aayush Gautam, and Ruihong Huang. 2024. Are LLMs Good Annotators for Discourse-level Event Relation Extraction?. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1–19, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Are LLMs Good Annotators for Discourse-level Event Relation Extraction? (Wei et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.1.pdf