2024
Data Contamination Report from the 2024 CONDA Shared Task
Oscar Sainz | Iker García-Ferrero | Alon Jacovi | Jon Ander Campos | Yanai Elazar | Eneko Agirre | Yoav Goldberg | Wei-Lin Chen | Jenny Chim | Leshem Choshen | Luca D’Amico-Wong | Melissa Dell | Run-Ze Fan | Shahriar Golchin | Yucheng Li | Pengfei Liu | Bhavish Pahwa | Ameya Prabhu | Suryansh Sharma | Emily Silcock | Kateryna Solonko | David Stap | Mihai Surdeanu | Yu-Min Tseng | Vishaal Udandarao | Zengzhi Wang | Ruijie Xu | Jinglin Yang
Proceedings of the 1st Workshop on Data Contamination (CONDA)
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora used to train large-scale models, compromising evaluation results. The workshop fostered a shared task to collect evidence on data contamination in currently available datasets and models. The goal of the shared task and associated database is to assist the community in understanding the extent of the problem and to help researchers avoid reporting evaluation results on known contaminated resources. The shared task provides a structured, centralized public database for the collection of contamination evidence, open to contributions from the community via GitHub pull requests. This first compilation paper is based on 566 reported entries over 91 contaminated sources from a total of 23 contributors. Details of the individual contamination events are available on the platform, which remains online and open to contributions from the community.
Multi-Lingual ESG Impact Duration Inference
Chung-Chi Chen | Yu-Min Tseng | Juyeon Kang | Anaïs Lhuissier | Yohei Seki | Hanwool Lee | Min-Yuh Day | Teng-Tsai Tu | Hsin-Hsi Chen
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing
To accurately assess the dynamic impact of a company’s activities on its Environmental, Social, and Governance (ESG) scores, we have initiated a series of shared tasks, named ML-ESG. These tasks adhere to the MSCI guidelines for annotating news articles across various languages. This paper details the third iteration of our series, ML-ESG-3, with a focus on impact duration inference—a task that poses significant challenges in estimating the enduring influence of events, even for human analysts. In ML-ESG-3, we provide datasets in five languages (Chinese, English, French, Korean, and Japanese) and share insights from our experience in compiling such subjective datasets. Additionally, this paper reviews the methodologies proposed by ML-ESG-3 participants and offers a comparative analysis of the models’ performances. Concluding the paper, we introduce the concept for the forthcoming series of shared tasks, namely multi-lingual ESG promise verification, and discuss its potential contributions to the field.
2023
Multi-Lingual ESG Issue Identification
Chung-Chi Chen | Yu-Min Tseng | Juyeon Kang | Anaïs Lhuissier | Min-Yuh Day | Teng-Tsai Tu | Hsin-Hsi Chen
Proceedings of the Fifth Workshop on Financial Technology and Natural Language Processing and the Second Multimodal AI For Financial Forecasting
Multi-Lingual ESG Impact Type Identification
Chung-Chi Chen | Yu-Min Tseng | Juyeon Kang | Anaïs Lhuissier | Yohei Seki | Min-Yuh Day | Teng-Tsai Tu | Hsin-Hsi Chen
Proceedings of the Sixth Workshop on Financial Technology and Natural Language Processing
Assessing a company’s sustainable development goes beyond financial metrics alone; the inclusion of environmental, social, and governance (ESG) factors is becoming increasingly vital. The ML-ESG shared task series seeks to pioneer discussions on news-driven ESG ratings, drawing inspiration from the MSCI ESG rating guidelines. In its second edition, ML-ESG-2 emphasizes impact type identification, offering datasets in four languages: Chinese, English, French, and Japanese. Of the 28 teams registered, 8 participated in the official evaluation. This paper presents a comprehensive overview of ML-ESG-2, detailing the dataset specifics and summarizing the performance outcomes of the participating teams.