Tokenization Falling Short: On Subword Robustness in Large Language Models

Yekun Chai, Yewei Fang, Qiwei Peng, Xuhong Li


Abstract
Language models typically tokenize raw text into sequences of subword identifiers drawn from a predefined vocabulary, a process that is inherently sensitive to typographical errors and length variations, and that is largely oblivious to the internal structure of tokens, issues we collectively term *the curse of tokenization*. In this study, we systematically investigate these drawbacks and their impact on large language models (LLMs) through three critical research questions: (1) complex problem solving, (2) token structure probing, and (3) resilience to typographical variation. Our findings reveal that scaling model parameters can alleviate tokenization-related issues; however, LLMs still suffer from biases induced by typos and other text-format variations. Our experiments show that subword regularization methods such as BPE-dropout can mitigate these problems. We release our evaluation code and data at https://github.com/FloatAI/TKEval.
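The sensitivity described above is easy to observe directly: a small character-level edit can change the subword segmentation an LLM actually receives as input. Below is a minimal sketch, assuming the Hugging Face `transformers` package and the public `gpt2` tokenizer (not the paper's evaluation code); the token pieces shown in the comments are illustrative, since the exact segmentation depends on the tokenizer's merge table.

```python
# Minimal illustration of tokenization fragility under typos,
# using the public GPT-2 byte-level BPE tokenizer (assumed setup).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

clean = "The restaurant serves delicious food."
typo = "The restuarant serves delicios food."  # two character-level typos

# Clean text tends to map onto whole-word subword tokens,
# e.g. ['The', 'Ġrestaurant', 'Ġserves', 'Ġdelicious', 'Ġfood', '.']
print(tokenizer.tokenize(clean))

# The typo'd text fragments into unrelated subword pieces,
# e.g. ['The', 'Ġrest', 'uar', 'ant', 'Ġserves', 'Ġdel', 'icios', 'Ġfood', '.']
print(tokenizer.tokenize(typo))
```

Because the two inputs share almost no subword identifiers despite being nearly identical strings, the model sees them as very different sequences, which is the kind of brittleness the paper evaluates and that stochastic segmentation schemes such as BPE-dropout are designed to reduce.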
Anthology ID:
2024.findings-emnlp.86
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1582–1599
URL:
https://aclanthology.org/2024.findings-emnlp.86/
DOI:
10.18653/v1/2024.findings-emnlp.86
Cite (ACL):
Yekun Chai, Yewei Fang, Qiwei Peng, and Xuhong Li. 2024. Tokenization Falling Short: On Subword Robustness in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1582–1599, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Tokenization Falling Short: On Subword Robustness in Large Language Models (Chai et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.86.pdf