2024
Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)
Oleg Serikov | Ekaterina Voloshina | Anna Postnikova | Saliha Muradoglu | Eric Le Ferrand | Elena Klyachko | Ekaterina Vylomova | Tatiana Shavrina | Francis Tyers
Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP
Michael Hahn | Alexey Sorokin | Ritesh Kumar | Andreas Shcherbakov | Yulia Otmakhova | Jinrui Yang | Oleg Serikov | Priya Rani | Edoardo M. Ponti | Saliha Muradoğlu | Rena Gao | Ryan Cotterell | Ekaterina Vylomova
Resisting the Lure of the Skyline: Grounding Practices in Active Learning for Morphological Inflection
Saliha Muradoglu | Michael Ginn | Miikka Silfverberg | Mans Hulden
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Active learning (AL) aims to reduce the annotation burden by selecting informative unannotated samples for model building. In this paper, we explore the importance of conscious experimental design in the language documentation and description setting, particularly the distribution of the unannotated sample pool. We focus on the task of morphological inflection using a Transformer model. We propose context-motivated benchmarks: a baseline and a skyline. The baseline reflects the frequency-weighted distribution encountered in natural speech, which we simulate using Wikipedia texts. The skyline represents the more common approach, uniform sampling from a large, balanced corpus (UniMorph, in our case), which often yields mixed results; we note the unrealistic nature of this unannotated pool. When these factors are considered, our results show a clear benefit to targeted sampling.
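As a rough illustration of the two pool constructions the abstract contrasts, here is a minimal Python sketch with toy data and an arbitrary Zipf-style weighting (not the paper's actual setup, which uses Wikipedia counts and UniMorph):

```python
# Contrast a frequency-weighted "baseline" pool, as found in natural running
# text, with a uniform "skyline" pool drawn from a balanced paradigm table.
# All data here is toy/hypothetical.
import random
from collections import Counter

random.seed(0)

# Hypothetical paradigm: (lemma, feature bundle, form) triples.
paradigm = [("walk", "V;PRS;3;SG", "walks"),
            ("walk", "V;PST", "walked"),
            ("walk", "V;V.PTCP;PRS", "walking"),
            ("walk", "V;NFIN", "walk")]

# Baseline: frequency-weighted pool, simulating token frequencies in a corpus
# (an arbitrary Zipf-like weighting stands in for Wikipedia counts).
zipf_weights = [1 / (rank + 1) for rank in range(len(paradigm))]
baseline_pool = random.choices(paradigm, weights=zipf_weights, k=1000)

# Skyline: uniform sampling over a large balanced corpus (UniMorph in the paper).
skyline_pool = random.choices(paradigm, k=1000)

for name, pool in [("baseline", baseline_pool), ("skyline", skyline_pool)]:
    counts = Counter(form for _, _, form in pool)
    print(name, counts.most_common())
```

The baseline pool is dominated by a few frequent cells, while the skyline pool covers all cells roughly evenly; the paper's point is that only the former resembles what a documentation project actually has to annotate.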
2023
Do transformer models do phonology like a linguist?
Saliha Muradoglu | Mans Hulden
Findings of the Association for Computational Linguistics: ACL 2023
Neural sequence-to-sequence models have been very successful at tasks in phonology and morphology that seemingly require a capacity for intricate linguistic generalisations. In this paper, we perform a detailed breakdown of the power of such models to capture various phonological generalisations and to benefit from exposure to one phonological rule to infer the behaviour of another, similar rule. We present two types of experiments. The first establishes the efficacy of the transformer model on 29 different processes. The second follows a priming and held-out-case split in which our model is exposed to two (or more) phenomena: one used as a primer to make the model aware of a linguistic category (e.g. voiceless stops), and a second containing a rule with a withheld case that the model is expected to infer (e.g. word-final devoicing with a missing training example such as b→p). Our results show that the transformer model can successfully model all 29 phonological phenomena considered, regardless of perceived process difficulty. We also show that the model can generalise linguistic categories and structures, such as vowels and syllables, through priming processes.
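A small sketch of the priming/held-out design described above, with toy English-like data rather than the paper's actual stimuli:

```python
# The model sees devoicing evidence for d -> t and g -> k (plus a primer
# process grouping b, d, g as a natural class), while every b -> p example
# is withheld; the test is whether it infers the missing case.
DEVOICING = {"b": "p", "d": "t", "g": "k"}

def devoice_final(word: str) -> str:
    """Apply word-final devoicing, e.g. 'grab' -> 'grap'."""
    if word and word[-1] in DEVOICING:
        return word[:-1] + DEVOICING[word[-1]]
    return word

words = ["grab", "bead", "bag", "lab", "mud", "dog"]
pairs = [(w, devoice_final(w)) for w in words]

# Withhold every b -> p training example.
train = [(src, tgt) for src, tgt in pairs if not src.endswith("b")]
held_out = [(src, tgt) for src, tgt in pairs if src.endswith("b")]
print("train:", train)
print("held out:", held_out)
```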
Proceedings of the 5th Workshop on Research in Computational Linguistic Typology and Multilingual NLP
Lisa Beinborn | Koustava Goswami | Saliha Muradoğlu | Alexey Sorokin | Ritesh Kumar | Andreas Shcherbakov | Edoardo M. Ponti | Ryan Cotterell | Ekaterina Vylomova
A Quest for Paradigm Coverage: The Story of Nen
Saliha Muradoglu | Hanna Suominen | Nicholas Evans
Proceedings of the Second Workshop on NLP Applications to Field Linguistics
Language documentation aims to collect a representative corpus of the language. Nevertheless, the question of how to quantify the comprehensiveness of the collection persists. We propose leveraging computational modelling to provide a supplementary metric to address this question in a low-resource language setting. We apply our proposed methods to the Papuan language Nen. Nen is actively in the process of being described and documented. Given the enormity of the task of language documentation, we focus on one subdomain, namely Nen verbal morphology. This study examines four verb types: copula, positional, middle, and transitive. We propose model-based paradigm generation for each verb type as a new way to measure completeness, where accuracy is analogous to the coverage of the paradigm. We contrast the paradigm attestation within the corpus (constructed from fieldwork data) with the accuracy of the paradigms generated by Transformer models trained for inflection. This analysis is extended by extrapolating from the established learning curve to predict the quantity of data required to generate a complete paradigm correctly. We also explore the correlation between high-frequency morphosyntactic features and model accuracy. We see a positive correlation between high-frequency feature combinations and model accuracy, but this does not always hold; we also see high accuracy for low-frequency morphosyntactic features. Our results show that model coverage is significantly higher for the middle and transitive verbs but not the positional verb. This is an interesting finding, as the positional verb paradigm is the smallest of the four.
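The core comparison reduces to two ratios over the same paradigm table; here is a toy Python sketch (invented cells and a stub standing in for the trained Transformer, not real Nen data):

```python
# Compare corpus attestation of paradigm cells with the share of cells a
# trained inflection model generates correctly ("model coverage").
gold_paradigm = {  # feature bundle -> correct form (toy, not real Nen)
    "COP;1SG;PRS": "forma", "COP;2SG;PRS": "formb",
    "COP;3SG;PRS": "formc", "COP;1PL;PRS": "formd"}

attested_in_corpus = {"COP;1SG;PRS", "COP;3SG;PRS"}  # cells seen in fieldwork data

def model_inflect(cell: str) -> str:
    """Stand-in for a trained Transformer; here it gets 3 of 4 cells right."""
    predictions = {"COP;1SG;PRS": "forma", "COP;2SG;PRS": "formb",
                   "COP;3SG;PRS": "formc", "COP;1PL;PRS": "WRONG"}
    return predictions[cell]

attestation = len(attested_in_corpus) / len(gold_paradigm)
accuracy = sum(model_inflect(c) == f for c, f in gold_paradigm.items()) / len(gold_paradigm)
print(f"corpus attestation: {attestation:.0%}, model coverage: {accuracy:.0%}")
```

In this toy case the model "covers" more of the paradigm (75%) than the corpus attests (50%), which is the sense in which generation accuracy supplements raw attestation as a completeness metric.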
2022
Eeny, meeny, miny, moe. How to choose data for morphological inflection.
Saliha Muradoglu | Mans Hulden
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Data scarcity is a widespread problem for numerous natural language processing (NLP) tasks within low-resource languages. Within morphology, the labour-intensive task of tagging/glossing data is a serious bottleneck for both NLP and fieldwork. Active learning (AL) aims to reduce the cost of data annotation by selecting the data that is most informative for the model. In this paper, we explore four sampling strategies for the task of morphological inflection using a Transformer model: oracle selection (a pair of experiments where data is chosen based on correct/incorrect predictions by the model), model confidence, entropy, and random selection. We investigate the robustness of each sampling strategy across 30 typologically diverse languages, as well as a 10-cycle iteration using Natügu as a case study. Our results show a clear benefit to selecting data based on model confidence. Unsurprisingly, the oracle experiment, which is presented as a proxy for linguist/language informant feedback, shows the most improvement. This is followed closely by low-confidence and high-entropy forms. We also show that, despite the conventional wisdom of larger data sets yielding better accuracy, introducing more instances of high-confidence forms, low-entropy forms, or forms that the model can already inflect correctly can reduce model performance.
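The confidence- and entropy-based strategies are straightforward to sketch; the following Python snippet uses hypothetical per-step probabilities (not the paper's code) to show how unannotated items would be ranked for annotation:

```python
# Rank unannotated items by the model's sequence log-probability (confidence)
# or by the mean per-step entropy of its output distributions, then annotate
# the least certain items first.
import math

def sequence_confidence(step_probs):
    """Log-probability of the generated sequence (sum of log token probs)."""
    return sum(math.log(p) for p in step_probs)

def mean_entropy(step_dists):
    """Mean Shannon entropy of the per-step output distributions."""
    ents = [-sum(p * math.log(p) for p in dist if p > 0) for dist in step_dists]
    return sum(ents) / len(ents)

# Toy pool: each item carries the winning-token probability at each step and
# the full per-step distributions (here over a 3-symbol alphabet).
pool = {
    "itemA": ([0.9, 0.95, 0.99], [[0.9, 0.05, 0.05]] * 3),
    "itemB": ([0.5, 0.4, 0.6], [[0.4, 0.35, 0.25]] * 3),
}

by_confidence = sorted(pool, key=lambda k: sequence_confidence(pool[k][0]))
by_entropy = sorted(pool, key=lambda k: -mean_entropy(pool[k][1]))
print("annotate first (low confidence):", by_confidence[0])
print("annotate first (high entropy):  ", by_entropy[0])
```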
2020
Modelling Verbal Morphology in Nen
Saliha Muradoglu | Nicholas Evans | Ekaterina Vylomova
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
Nen verbal morphology is particularly complex; a transitive verb can take up to 1,740 unique forms. The combined effect of having a large combinatoric space and a low-resource setting amplifies the need for NLP tools. Nen morphology utilises distributed exponence, a non-trivial means of mapping form to meaning. In this paper, we attempt to model Nen verbal morphology using state-of-the-art machine learning models for morphological reinflection. We explore and categorise the types of errors these systems generate. Our results show sensitivity to training data composition; different distributions of verb type yield different accuracies (patterning with E-complexity). We also demonstrate the types of patterns that can be inferred from the training data through the case study of syncretism.
Exploring Looping Effects in RNN-based Architectures
Andrei Shcherbakov | Saliha Muradoglu | Ekaterina Vylomova
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
The paper investigates repetitive loops, a common problem in contemporary text generation systems (such as machine translation, language modelling, and morphological inflection). More specifically, we conduct a study on neural models with recurrent units by explicitly altering their decoder internal state. We use the task of morphological reinflection as a proxy to study the effects of these changes. Our results show that the probability of repetitive loops occurring is significantly reduced by the introduction of an extra neural decoder output. This output is specifically trained to produce a gradually increasing value as each character of a given sequence is generated. We also explored variations of the technique and found that feeding the extra output back to the decoder amplifies the positive effects.
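A schematic PyTorch sketch of our reading of this technique (not the authors' code): a recurrent decoder with a second, scalar head trained against a monotonically increasing target, with the feedback variant that routes that scalar back into the next step's input.

```python
import torch
import torch.nn as nn

class CountingDecoder(nn.Module):
    def __init__(self, vocab_size: int, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.hidden = hidden
        self.embed = nn.Embedding(vocab_size, emb)
        # +1 input dim: the previous step's progress value is fed back.
        self.rnn = nn.GRUCell(emb + 1, hidden)
        self.token_head = nn.Linear(hidden, vocab_size)  # next-character logits
        self.count_head = nn.Linear(hidden, 1)           # extra "progress" output

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, seq_len) of input character ids
        batch, seq_len = tokens.shape
        h = tokens.new_zeros(batch, self.hidden, dtype=torch.float)
        progress = tokens.new_zeros(batch, 1, dtype=torch.float)
        logits, counts = [], []
        for t in range(seq_len):
            x = torch.cat([self.embed(tokens[:, t]), progress], dim=-1)
            h = self.rnn(x, h)
            progress = self.count_head(h)
            logits.append(self.token_head(h))
            counts.append(progress)
        return torch.stack(logits, 1), torch.cat(counts, 1)

# Training target for the extra output: a value that grows with each emitted
# character (here the position index t); its MSE term is added to the usual
# cross-entropy loss on the token logits.
dec = CountingDecoder(vocab_size=30)
toks = torch.randint(0, 30, (2, 7))
logits, counts = dec(toks)
count_target = torch.arange(7, dtype=torch.float).expand(2, 7)
count_loss = nn.functional.mse_loss(counts, count_target)
print(logits.shape, counts.shape, float(count_loss))
```

The intuition is that a decoder forced to track "how far along am I?" cannot settle into a fixed point of identical states, which is what sustains a repetitive loop.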
Linguist vs. Machine: Rapid Development of Finite-State Morphological Grammars
Sarah Beemer | Zak Boston | April Bukoski | Daniel Chen | Princess Dickens | Andrew Gerlach | Torin Hopkins | Parth Anand Jawale | Chris Koski | Akanksha Malhotra | Piyush Mishra | Saliha Muradoglu | Lan Sang | Tyler Short | Sagarika Shreevastava | Elizabeth Spaulding | Testumichi Umada | Beilei Xiang | Changbing Yang | Mans Hulden
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Sequence-to-sequence models have proven to be highly successful in learning morphological inflection from examples, as the series of SIGMORPHON/CoNLL shared tasks has shown. It is usually assumed, however, that a linguist working with inflectional examples could in principle develop a gold-standard-level morphological analyzer and generator that would surpass a trained neural network model in accuracy of predictions, but that doing so may require significant amounts of human labor. In this paper, we discuss an experiment where a group of people with some linguistic training develop 25+ grammars as part of the shared task and weigh the cost/benefit ratio of developing grammars by hand. We also present tools that can help linguists triage difficult, complex morphophonological phenomena within a language and hypothesize inflectional class membership. We conclude that a significant development effort by trained linguists to analyze and model morphophonological patterns is required in order to surpass the accuracy of neural models.
To compress or not to compress? A Finite-State approach to Nen verbal morphology
Saliha Muradoglu | Nicholas Evans | Hanna Suominen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
This paper describes the development of a verbal morphological parser for an under-resourced Papuan language, Nen. Nen verbal morphology is particularly complex, with a transitive verb taking up to 1,740 unique forms. The structural properties exhibited by Nen verbs raise interesting choices for analysis. Here we compare two possible methods of analysis: ‘chunking’ and decomposition. ‘Chunking’ refers to the concept of collating morphological segments into one, whereas the decomposition model follows a more classical linguistic approach. Both models are built using the Finite-State Transducer toolkit foma. The resultant architectures show differences in size and structural clarity. While the ‘chunking’ model is under half the size of the fully decomposed counterpart, the decomposition displays higher structural order. In this paper, we describe the challenges encountered when modelling a language exhibiting distributed exponence and present the first morphological analyser for Nen, with an overall accuracy of 80.3%.
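The chunking/decomposition contrast can be illustrated outside foma with a toy Python lexicon (an invented mini-language, not Nen; the morphs and glosses below are hypothetical):

```python
# "Chunking" maps whole affix strings to fused feature bundles in a single
# lookup; decomposition lists one entry per morph, so a surface suffix is
# parsed as a sequence of smaller mappings.
chunked = {  # one entry per fused suffix chunk
    "aran": "1SG.SBJ;PST",
    "atan": "2SG.SBJ;PST",
}

decomposed = {  # one entry per individual morph
    "ar": "1SG.SBJ", "at": "2SG.SBJ",
    "an": "PST",
}

def parse_chunked(suffix: str) -> str:
    return chunked[suffix]

def parse_decomposed(suffix: str) -> str:
    # Greedy left-to-right segmentation over the morph inventory.
    glosses, i = [], 0
    while i < len(suffix):
        for morph, gloss in decomposed.items():
            if suffix.startswith(morph, i):
                glosses.append(gloss)
                i += len(morph)
                break
        else:
            raise ValueError(f"cannot segment {suffix!r} at {i}")
    return ";".join(glosses)

print(parse_chunked("aran"), "|", parse_decomposed("aran"))
```

Both parses yield the same analysis; the trade-off the paper reports is in the compiled transducers, where the chunked model is smaller while the decomposed model keeps each morph's contribution structurally explicit.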