A Comparison of Fine-Tuning and In-Context Learning for Clause-Level Morphosyntactic Alternation

Jim Su, Justin Ho, George Broadwell, Sarah Moeller, Bonnie Dorr


Abstract
This paper presents our submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. We frame this task as one of morphological inflection generation, treating each sentence as a single word. We investigate and compare two distinct approaches: fine-tuning neural encoder-decoder models such as NLLB-200, and in-context learning with proprietary large language models (LLMs). Our findings show that, for this task, no single approach performs best across both languages. Anthropic’s Claude 3 Opus, when supplied with grammatical description entries, achieves the highest performance on Bribri among the evaluated models. This outcome corroborates and extends previous research on the efficacy of in-context learning in low-resource settings. For Maya, fine-tuning NLLB-200-3.3B on StemCorrupt-augmented data yielded the best performance.
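For the Bribri result, the abstract describes in-context learning with grammatical description entries supplied to Claude 3 Opus. The sketch below is a minimal, illustrative prompt construction using the Anthropic Python SDK; the grammar entry, few-shot pairs, tag format, and placeholder sentences are assumptions for illustration, not the authors' actual prompt or data.

```python
# Illustrative sketch only: build an in-context prompt for the sentence-level
# "inflection" task (transform a source sentence according to a grammatical
# change tag), in the spirit of the paper's Claude 3 Opus setup.
# GRAMMAR_ENTRY, FEW_SHOT, and the tag name are hypothetical placeholders.
import anthropic

GRAMMAR_ENTRY = (
    "Negation: <a relevant grammatical description entry would be "
    "retrieved and inserted here>"
)

FEW_SHOT = [
    ("<source sentence 1>", "TYPE:NEG", "<transformed sentence 1>"),
    ("<source sentence 2>", "TYPE:NEG", "<transformed sentence 2>"),
]

def build_prompt(source: str, change_tag: str) -> str:
    """Assemble a prompt: grammar description, worked examples, then the query."""
    examples = "\n".join(
        f"Source: {s}\nChange: {t}\nTarget: {o}" for s, t, o in FEW_SHOT
    )
    return (
        f"Grammatical description:\n{GRAMMAR_ENTRY}\n\n"
        f"Examples:\n{examples}\n\n"
        f"Source: {source}\nChange: {change_tag}\nTarget:"
    )

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=128,
    messages=[{"role": "user",
               "content": build_prompt("<source sentence>", "TYPE:NEG")}],
)
print(response.content[0].text.strip())
```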
Anthology ID:
2024.americasnlp-1.21
Volume:
Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Manuel Mager, Abteen Ebrahimi, Shruti Rijhwani, Arturo Oncevay, Luis Chiruzzo, Robert Pugh, Katharina von der Wense
Venues:
AmericasNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
179–187
URL:
https://aclanthology.org/2024.americasnlp-1.21
DOI:
10.18653/v1/2024.americasnlp-1.21
Cite (ACL):
Jim Su, Justin Ho, George Broadwell, Sarah Moeller, and Bonnie Dorr. 2024. A Comparison of Fine-Tuning and In-Context Learning for Clause-Level Morphosyntactic Alternation. In Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024), pages 179–187, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
A Comparison of Fine-Tuning and In-Context Learning for Clause-Level Morphosyntactic Alternation (Su et al., AmericasNLP-WS 2024)
PDF:
https://aclanthology.org/2024.americasnlp-1.21.pdf