Label Confidence Weighted Learning for Target-level Sentence Simplification

Xin Ying Qiu, Jingshen Zhang


Abstract
Multi-level sentence simplification generates simplified sentences at varying language proficiency levels. We propose Label Confidence Weighted Learning (LCWL), a novel approach that incorporates a label confidence weighting scheme into the training loss of an encoder-decoder model, setting it apart from existing confidence-weighting methods, which are primarily designed for classification. Experiments on an English grade-level simplification dataset show that LCWL outperforms state-of-the-art unsupervised baselines. Fine-tuning the LCWL model on in-domain data and combining it with Symmetric Cross Entropy (SCE) consistently delivers better simplifications than strong supervised methods. Our results highlight the effectiveness of label confidence weighting techniques for text simplification tasks with encoder-decoder architectures.
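The paper's exact loss formulation is not reproduced on this page, so the following is a minimal PyTorch sketch of the general idea the abstract describes: per-token cross entropy combined with the reverse term of Symmetric Cross Entropy (Wang et al., 2019), scaled by a per-example label confidence weight. The function name lcwl_sce_loss, the confidence argument, and the hyperparameters alpha, beta, and rce_clamp are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lcwl_sce_loss(logits, targets, confidence, alpha=1.0, beta=1.0,
                  ignore_index=-100, rce_clamp=-4.0):
    """Confidence-weighted Symmetric Cross Entropy for seq2seq training (sketch).

    logits:     (batch, seq_len, vocab) decoder output scores
    targets:    (batch, seq_len) gold token ids
    confidence: (batch,) per-example label confidence weights in [0, 1]
    """
    vocab = logits.size(-1)
    logits_flat = logits.view(-1, vocab)
    targets_flat = targets.view(-1)

    # Standard token-level cross entropy, kept per-token so it can be weighted.
    ce = F.cross_entropy(logits_flat, targets_flat,
                         ignore_index=ignore_index, reduction="none")

    # Reverse cross entropy term of SCE: -sum_k q(k) * log p(k) with the roles
    # of prediction and label swapped; log 0 is clamped to a finite constant.
    pred = F.softmax(logits_flat, dim=-1)
    mask = targets_flat != ignore_index
    safe_targets = targets_flat.clone()
    safe_targets[~mask] = 0  # any valid id; masked out below
    one_hot = F.one_hot(safe_targets, vocab).float()
    rce = -(pred * torch.clamp(torch.log(one_hot), min=rce_clamp)).sum(dim=-1)

    token_loss = (alpha * ce + beta * rce) * mask.float()

    # Broadcast each example's confidence weight over all of its tokens.
    weights = confidence.unsqueeze(1).expand_as(targets).reshape(-1)
    return (token_loss * weights).sum() / mask.float().sum().clamp(min=1)
```

With beta set to 0 this reduces to plain confidence-weighted cross entropy, which corresponds to the LCWL-only setting the abstract contrasts with the LCWL + SCE combination.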
Anthology ID:
2024.emnlp-main.999
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18004–18019
URL:
https://aclanthology.org/2024.emnlp-main.999/
DOI:
10.18653/v1/2024.emnlp-main.999
Cite (ACL):
Xin Ying Qiu and Jingshen Zhang. 2024. Label Confidence Weighted Learning for Target-level Sentence Simplification. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18004–18019, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Label Confidence Weighted Learning for Target-level Sentence Simplification (Qiu & Zhang, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.999.pdf