2024
Robust Neural Machine Translation for Abugidas by Glyph Perturbation
Hour Kaing | Chenchen Ding | Hideki Tanaka | Masao Utiyama
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Neural machine translation (NMT) systems are vulnerable when trained on limited data, a common scenario in real-world low-resource tasks. One way to increase robustness is to intentionally add realistic noise during training. Noise simulation by text perturbation has proven effective for writing systems that use Latin letters. In this study, we further explore perturbation techniques for more complex abugida writing systems, exploiting the visual similarity of complex glyphs to capture the essential nature of these scripts. In addition to the generated noise, we propose a training strategy to improve robustness. We conducted experiments on six languages: Bengali, Hindi, Myanmar, Khmer, Lao, and Thai. By overcoming the introduced noise, we obtained non-degenerate NMT systems with improved robustness on low-resource tasks involving abugida glyphs.
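The noise-injection idea behind glyph perturbation can be illustrated with a minimal sketch. The `SIMILAR` map below uses Latin confusable pairs purely for illustration (the abstract notes perturbation was first shown effective for Latin scripts); the paper's actual tables are built from visually similar abugida glyphs, and the function name and substitution rate are assumptions, not the authors' implementation.

```python
import random

# Hypothetical confusion map: each key may be replaced by one of its
# visually similar alternatives. Illustrative Latin pairs only; the
# paper constructs analogous maps over abugida glyphs.
SIMILAR = {
    "l": ["1", "I"],
    "O": ["0"],
    "a": ["@"],
}

def perturb(text, rate=0.1, rng=None):
    """Replace each eligible character with a visually similar glyph
    with probability `rate`, leaving all other characters untouched."""
    rng = rng or random.Random(0)
    out = []
    for ch in text:
        if ch in SIMILAR and rng.random() < rate:
            out.append(rng.choice(SIMILAR[ch]))
        else:
            out.append(ch)
    return "".join(out)
```

Applied to training data at a small rate, such perturbation exposes the NMT model to realistic visually confusable variants without changing sentence meaning.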
Overcoming Early Saturation on Low-Resource Languages in Multilingual Dependency Parsing
Jiannan Mao | Chenchen Ding | Hour Kaing | Hideki Tanaka | Masao Utiyama | Tadahiro Matsumoto
Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024
UDify is a multilingual, multi-task parser fine-tuned on mBERT that achieves remarkable performance on high-resource languages. On low-resource languages, however, its performance saturates early and then gradually decreases as training proceeds. This work applies a data augmentation method and conducts experiments on seven few-shot and four zero-shot languages. Unlabeled attachment scores improved on the zero-shot dependency parsing tasks, with the average score rising from 67.1% to 68.7%, while dependency parsing for high-resource languages and the other tasks were hardly affected. The experimental results indicate that the data augmentation method is effective for low-resource languages in multilingual dependency parsing.
2023
Improving Zero-Shot Dependency Parsing by Unsupervised Learning
Jiannan Mao | Chenchen Ding | Hour Kaing | Hideki Tanaka | Masao Utiyama | Tadahiro Matsumoto
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2021
Multi-Source Cross-Lingual Constituency Parsing
Hour Kaing | Chenchen Ding | Katsuhito Sudoh | Masao Utiyama | Eiichiro Sumita | Satoshi Nakamura
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Pretrained multilingual language models have become a key component of cross-lingual transfer for many natural language processing tasks, even those without bilingual information. This work further investigates the cross-lingual transfer ability of these models for constituency parsing, focusing on multi-source transfer. To address the diversity of structures and label sets across treebanks, we propose integrating typological features into the parsing model and normalizing the treebanks. We trained the model on eight languages with diverse structures and applied transfer parsing to an additional six low-resource languages. The experimental results show that treebank normalization is essential for cross-lingual transfer performance and that the typological features bring further improvement. As a result, our approach improves the baseline F1 of multi-source transfer by 5 points on average.
2020
A Myanmar (Burmese)-English Named Entity Transliteration Dictionary
Aye Myat Mon | Chenchen Ding | Hour Kaing | Khin Mar Soe | Masao Utiyama | Eiichiro Sumita
Proceedings of the Twelfth Language Resources and Evaluation Conference
Transliteration is generally a phonetically based transcription across different writing systems and a crucial task for various downstream natural language processing applications. For the Myanmar (Burmese) language, robust automatic transliteration of borrowed English words is challenging because of the complex Myanmar writing system and the lack of data. In this study, we constructed a Myanmar-English named-entity dictionary containing more than eighty thousand transliteration instances; the data have been released under a CC BY-NC-SA license. We evaluated automatic transliteration performance on the prepared data using statistical and neural network-based approaches. The neural network model significantly outperformed the statistical model in terms of character-level BLEU score. Different processing units of the Myanmar script were also compared and discussed.
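The choice of processing unit mentioned in the abstract can be illustrated with a minimal sketch contrasting raw characters with grapheme-like clusters (a base letter plus its attached signs). This is a simplification under the assumption that combining-mark categories approximate cluster boundaries; real Myanmar syllable segmentation is more involved, and the function names here are illustrative.

```python
import unicodedata

def char_units(text):
    """Split text into individual Unicode code points."""
    return list(text)

def cluster_units(text):
    """Group each base character with the Unicode marks (categories
    Mn/Mc/Me) that follow it, approximating grapheme clusters."""
    clusters = []
    for ch in text:
        if clusters and unicodedata.category(ch).startswith("M"):
            clusters[-1] += ch  # attach the sign to the preceding base
        else:
            clusters.append(ch)
    return clusters
```

For the Myanmar word "\u1019\u103c\u1014\u103a\u1019\u102c" (myanma), `char_units` yields six code points while `cluster_units` yields three base-plus-sign units, the kind of granularity difference the evaluation compares.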
2019
Supervised and Unsupervised Machine Translation for Myanmar-English and Khmer-English
Benjamin Marie | Hour Kaing | Aye Myat Mon | Chenchen Ding | Atsushi Fujita | Masao Utiyama | Eiichiro Sumita
Proceedings of the 6th Workshop on Asian Translation
This paper presents NICT's supervised and unsupervised machine translation systems for the WAT2019 Myanmar-English and Khmer-English translation tasks. For all translation directions, we built state-of-the-art supervised neural (NMT) and statistical (SMT) machine translation systems using cleaned and normalized monolingual data. Our combination of NMT and SMT ranked among the best systems for the four translation directions. We also investigated the feasibility of unsupervised machine translation for low-resource, distant language pairs and confirmed observations of previous work that unsupervised MT is still largely unable to deal with such pairs.