Neurons in Large Language Models: Dead, N-gram, Positional

Elena Voita, Javier Ferrando, Christoforos Nalmpantis


Abstract
We analyze a family of large language models in a manner lightweight enough to run on a single GPU. Specifically, we focus on the OPT family of models ranging from 125m to 66b parameters and rely only on whether an FFN neuron is activated or not. First, we find that the early part of the network is sparse and represents many discrete features. Here, many neurons (more than 70% in some layers of the 66b model) are “dead”, i.e. they never activate on a large collection of diverse data. At the same time, many of the alive neurons are reserved for discrete features and act as token and n-gram detectors. Interestingly, their corresponding FFN updates not only promote next token candidates, as could be expected, but also explicitly focus on removing information about the tokens that triggered them, i.e., the current input. To the best of our knowledge, this is the first example of mechanisms specialized in removing (rather than adding) information from the residual stream. With scale, models become sparser in the sense that they have more dead neurons and token detectors. Finally, some neurons are positional: whether they are activated or not depends largely (or solely) on position and less so (or not at all) on textual data. We find that smaller models have sets of neurons acting as position range indicators, while larger models operate in a less explicit manner.
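As a concrete illustration of the paper's core measurement, the sketch below tracks whether each FFN neuron in an OPT model ever fires, using forward hooks; a neuron that never produces a positive ReLU activation over a large, diverse text collection is "dead" in the paper's sense. This is a minimal sketch under assumed settings, not the authors' released code: the checkpoint choice (`facebook/opt-125m`) and the toy two-sentence corpus are illustrative placeholders.

```python
# Minimal sketch (assumed setup, not the authors' released code): flag
# "dead" FFN neurons in OPT, i.e. neurons that never activate on the
# data fed through the model. OPT uses ReLU, so "activated" means the
# fc1 (first FFN layer) output is positive.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "facebook/opt-125m"  # smallest member of the family studied (125m-66b)
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layers = model.model.decoder.layers
# One boolean per FFN neuron per layer: has it ever activated?
ever_active = [torch.zeros(l.fc1.out_features, dtype=torch.bool) for l in layers]

def make_hook(layer_idx):
    # fc1 computes the FFN pre-activation; ReLU is applied right after,
    # so a neuron "fires" on a token iff its fc1 output is > 0 there.
    def hook(module, inputs, output):
        fired = (output > 0).reshape(-1, output.shape[-1]).any(dim=0)
        ever_active[layer_idx] |= fired.cpu()
    return hook

for i, layer in enumerate(layers):
    layer.fc1.register_forward_hook(make_hook(i))

# Toy stand-in corpus; the paper uses a large, diverse data collection.
texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Neurons in the early layers often act as token and n-gram detectors.",
]
with torch.no_grad():
    for text in texts:
        model(**tok(text, return_tensors="pt"))

for i, flags in enumerate(ever_active):
    dead = (~flags).sum().item()
    print(f"layer {i:2d}: {dead}/{flags.numel()} neurons never activated")
```

On a toy corpus like this, most neurons will trivially look dead; the paper's counts come from passing a large diverse dataset through the model, and dead neurons are those that stay silent even then.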
Anthology ID:
2024.findings-acl.75
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1288–1301
URL:
https://aclanthology.org/2024.findings-acl.75
DOI:
10.18653/v1/2024.findings-acl.75
Cite (ACL):
Elena Voita, Javier Ferrando, and Christoforos Nalmpantis. 2024. Neurons in Large Language Models: Dead, N-gram, Positional. In Findings of the Association for Computational Linguistics: ACL 2024, pages 1288–1301, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Neurons in Large Language Models: Dead, N-gram, Positional (Voita et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.75.pdf