Natasha Seelam
2022
Dataset Debt in Biomedical Language Modeling
Jason Fries | Natasha Seelam | Gabriel Altay | Leon Weber | Myungsun Kang | Debajyoti Datta | Ruisi Su | Samuele Garda | Bo Wang | Simon Ott | Matthias Samwald | Wojciech Kusa
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few- and zero-shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP’s significant dataset debt – the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced the curation of datasheets for 167 biomedical datasets. We find that only 13% of datasets are available via programmatic access and 30% lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: https://tinyurl.com/bigbio22.
Emergent Structures and Training Dynamics in Large Language Models
Ryan Teehan | Miruna Clinciu | Oleg Serikov | Eliza Szczechla | Natasha Seelam | Shachar Mirkin | Aaron Gokaslan
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Large language models have achieved success on a number of downstream tasks, particularly in few- and zero-shot settings. As a consequence, researchers have been investigating both the kind of information these networks learn and how such information can be encoded in the parameters of the model. We survey the literature on changes in the network during training, drawing from work outside of NLP when necessary, and on learned representations of linguistic features in large language models. We note in particular the lack of sufficient research on the emergence of functional units, subsections of the network where related functions are grouped or organised, within large language models, and we motivate future work that grounds the study of language models in an analysis of their changing internal structure during training.