Tending Towards Stability: Convergence Challenges in Small Language Models

Richard Diehl Martinez, Pietro Lesci, Paula Buttery


Abstract
Increasing the number of parameters in language models is a common strategy to enhance their performance. However, smaller language models remain valuable due to their lower operational costs. Despite their advantages, smaller models frequently underperform compared to their larger counterparts, even when provided with equivalent data and computational resources. Specifically, their performance tends to degrade in the late pretraining phase. This degradation is anecdotally attributed to their reduced representational capacity, yet its exact causes remain unclear. We use the Pythia model suite to analyse the training dynamics that underlie this phenomenon. Across different model sizes, we investigate the convergence of the Attention and MLP activations to their final state and examine how the effective rank of their parameters influences this process. We find that nearly all layers in larger models stabilise early in training, within the first 20% of training, whereas layers in smaller models exhibit slower and less stable convergence, especially when their parameters have lower effective rank. By linking the convergence of layers’ activations to their parameters’ effective rank, our analyses can guide future work to address inefficiencies in the learning dynamics of small models.
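
The abstract's central quantity is the effective rank of a layer's parameters. As a minimal, non-authoritative sketch, the snippet below computes the entropy-based effective rank (Roy & Vetterli, 2007) of a weight matrix with NumPy; the paper's exact definition may differ, and the helper name effective_rank is introduced here purely for illustration.

```python
# Illustrative sketch only: entropy-based effective rank of a weight matrix,
# following Roy & Vetterli (2007). The paper's precise measure is not given
# in the abstract, so treat this as an assumption, not the authors' method.
import numpy as np

def effective_rank(weight: np.ndarray, eps: float = 1e-12) -> float:
    """exp(Shannon entropy of the normalised singular value distribution)."""
    singular_values = np.linalg.svd(weight, compute_uv=False)
    p = singular_values / (singular_values.sum() + eps)   # normalise to a distribution
    entropy = -np.sum(p * np.log(p + eps))                # Shannon entropy
    return float(np.exp(entropy))

# A dense random matrix has a high effective rank, whereas a product of thin
# matrices (algebraic rank 8) has an effective rank close to 8.
rng = np.random.default_rng(0)
print(effective_rank(rng.standard_normal((512, 512))))
print(effective_rank(rng.standard_normal((512, 8)) @ rng.standard_normal((8, 512))))
```

In practice, this kind of measure would be applied to the Attention and MLP weight matrices of each layer across pretraining checkpoints (e.g. the publicly released Pythia checkpoints) and related to how quickly the corresponding activations converge to their final state.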
Anthology ID:
2024.findings-emnlp.187
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3275–3286
URL:
https://aclanthology.org/2024.findings-emnlp.187
DOI:
10.18653/v1/2024.findings-emnlp.187
Cite (ACL):
Richard Diehl Martinez, Pietro Lesci, and Paula Buttery. 2024. Tending Towards Stability: Convergence Challenges in Small Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3275–3286, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Tending Towards Stability: Convergence Challenges in Small Language Models (Diehl Martinez et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.187.pdf