Benchmarking Low-Resource Machine Translation Systems
Ana Silva | Nikit Srivastava | Tatiana Moteu Ngoli | Michael Röder | Diego Moussallem | Axel-Cyrille Ngonga Ngomo
Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)
Assessing the performance of machine translation systems is of critical value, especially for languages with lower resource availability. Due to the large evaluation effort required by the translation task, studies often compare new systems only against single baseline systems or commercial solutions. Consequently, the best-performing system for specific languages often remains unclear. This work benchmarks publicly available translation systems across 4 datasets and 26 languages, including low-resource languages. We consider both effectiveness and efficiency in our evaluation. Our results are made public through BENG, a FAIR benchmarking platform for Natural Language Generation tasks.