Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?

Fırat Öncel, Matthias Bethge, Beyza Ermis, Mirco Ravanelli, Cem Subakan, Çağatay Yıldız


Abstract
In the last decade, the generalization and adaptation abilities of deep learning models were typically evaluated on fixed training and test distributions. In contrast to traditional deep learning models, large language models (LLMs) are (i) even more overparameterized, (ii) trained on unlabeled text corpora curated from the Internet with minimal human intervention, and (iii) trained in an online fashion. These stark contrasts prevent researchers from transferring lessons on model generalization and adaptation learned in deep learning contexts to LLMs. To this end, our short paper presents empirical observations that aim to shed light on further training of already pretrained language models. Specifically, we demonstrate that training a model on a text domain can degrade its perplexity on the test portion of the same domain. Our subsequent analysis shows that the performance degradation is positively correlated with the similarity between the additional training data and the original pretraining dataset of the LLM. A further token-level perplexity analysis reveals that the degradation is driven by a handful of tokens that are not informative about the domain. We hope these findings will guide us in determining when to adapt a model versus when to rely on its foundational capabilities.
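The token-level analysis mentioned above rests on the fact that corpus perplexity decomposes into an average of per-token negative log-likelihoods, so a few very unlikely tokens can dominate the aggregate score. A minimal sketch of this decomposition, using made-up per-token probabilities (illustrative values only, not taken from the paper):

```python
import math

# Hypothetical per-token probabilities a model might assign to a test
# sequence (assumed values for illustration; not from the paper).
token_probs = [0.9, 0.85, 0.8, 0.05, 0.9, 0.02, 0.88]

def perplexity(probs):
    """Perplexity = exp(mean negative log-likelihood over tokens)."""
    nll = [-math.log(p) for p in probs]
    return math.exp(sum(nll) / len(nll))

full = perplexity(token_probs)

# Drop the two lowest-probability tokens and recompute: the remaining
# tokens yield a far lower perplexity, showing how a handful of
# outlier tokens can account for most of the aggregate degradation.
without_outliers = perplexity(sorted(token_probs)[2:])

print(f"all tokens: {full:.2f}, without 2 outliers: {without_outliers:.2f}")
```

Here roughly two of seven tokens push the sequence perplexity from near 1.2 to near 3.0, mirroring (in toy form) the paper's observation that degradation concentrates on a few uninformative tokens.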
Anthology ID:
2024.emnlp-main.1108
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
19834–19843
URL:
https://aclanthology.org/2024.emnlp-main.1108
DOI:
10.18653/v1/2024.emnlp-main.1108
Cite (ACL):
Fırat Öncel, Matthias Bethge, Beyza Ermis, Mirco Ravanelli, Cem Subakan, and Çağatay Yıldız. 2024. Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 19834–19843, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve? (Öncel et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1108.pdf