2024
FINDINGS OF THE IWSLT 2024 EVALUATION CAMPAIGN
Ibrahim Said Ahmad | Antonios Anastasopoulos | Ondřej Bojar | Claudia Borg | Marine Carpuat | Roldano Cattoni | Mauro Cettolo | William Chen | Qianqian Dong | Marcello Federico | Barry Haddow | Dávid Javorský | Mateusz Krubiński | Tsz Kim Lam | Xutai Ma | Prashant Mathur | Evgeny Matusov | Chandresh Maurya | John McCrae | Kenton Murray | Satoshi Nakamura | Matteo Negri | Jan Niehues | Xing Niu | Atul Kr. Ojha | John Ortega | Sara Papi | Peter Polák | Adam Pospíšil | Pavel Pecina | Elizabeth Salesky | Nivedita Sethiya | Balaram Sarkar | Jiatong Shi | Claytone Sikasote | Matthias Sperber | Sebastian Stüker | Katsuhito Sudoh | Brian Thompson | Alex Waibel | Shinji Watanabe | Patrick Wilken | Petr Zemánek | Rodolfo Zevallos
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper reports on the shared tasks organized by the 21st IWSLT Conference. The shared tasks address 7 scientific challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, speech-to-speech translation, dialect and low-resource speech translation, and Indic languages. The shared tasks attracted 17 teams whose submissions are documented in 27 system papers. The growing interest in spoken language translation is also reflected in the constantly increasing number of shared task organizers and contributors to the overview paper, almost evenly distributed across industry and academia.
2023
FINDINGS OF THE IWSLT 2023 EVALUATION CAMPAIGN
Milind Agarwal | Sweta Agrawal | Antonios Anastasopoulos | Luisa Bentivogli | Ondřej Bojar | Claudia Borg | Marine Carpuat | Roldano Cattoni | Mauro Cettolo | Mingda Chen | William Chen | Khalid Choukri | Alexandra Chronopoulou | Anna Currey | Thierry Declerck | Qianqian Dong | Kevin Duh | Yannick Estève | Marcello Federico | Souhir Gahbiche | Barry Haddow | Benjamin Hsu | Phu Mon Htut | Hirofumi Inaguma | Dávid Javorský | John Judge | Yasumasa Kano | Tom Ko | Rishu Kumar | Pengwei Li | Xutai Ma | Prashant Mathur | Evgeny Matusov | Paul McNamee | John P. McCrae | Kenton Murray | Maria Nadejde | Satoshi Nakamura | Matteo Negri | Ha Nguyen | Jan Niehues | Xing Niu | Atul Kr. Ojha | John E. Ortega | Proyag Pal | Juan Pino | Lonneke van der Plas | Peter Polák | Elijah Rippeth | Elizabeth Salesky | Jiatong Shi | Matthias Sperber | Sebastian Stüker | Katsuhito Sudoh | Yun Tang | Brian Thompson | Kevin Tran | Marco Turchi | Alex Waibel | Mingxuan Wang | Shinji Watanabe | Rodolfo Zevallos
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper reports on the shared tasks organized by the 20th IWSLT Conference. The shared tasks address 9 scientific challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, speech-to-speech translation, multilingual, dialect and low-resource speech translation, and formality control. The shared tasks attracted a total of 38 submissions by 31 teams. The growing interest in spoken language translation is also reflected in the constantly increasing number of shared task organizers and contributors to the overview paper, almost evenly distributed across industry and academia.
Speech Translation with Style: AppTek’s Submissions to the IWSLT Subtitling and Formality Tracks in 2023
Parnia Bahar | Patrick Wilken | Javier Iranzo-Sánchez | Mattia Di Gangi | Evgeny Matusov | Zoltán Tüske
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
AppTek participated in the subtitling and formality tracks of the IWSLT 2023 evaluation. This paper describes the details of our subtitling pipeline - speech segmentation, speech recognition, punctuation prediction and inverse text normalization, text machine translation and direct speech-to-text translation, intelligent line segmentation - and how we make use of the provided subtitling-specific data in training and fine-tuning. The evaluation results show that our final submissions are competitive, in particular outperforming the submissions by other participants by 5% absolute as measured by the SubER subtitle quality metric. For the formality track, we participate with our En-Ru and En-Pt production models, which support formality control via prefix tokens. Except for informal Portuguese, we achieve near perfect formality level accuracy while at the same time offering high general translation quality.
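The formality control mentioned above works by marking inputs with register tags. The paper itself gives no code; a minimal sketch of the general prefix-token technique, with hypothetical token names and a source-side placement chosen purely for illustration, might look like this:

```python
# Minimal sketch of formality control via prefix tokens.
# Token names and the source-side placement are illustrative assumptions,
# not AppTek's actual markup.

FORMALITY_TOKENS = {"formal": "<formal>", "informal": "<informal>"}

def tag_training_pair(source: str, target: str, formality: str) -> tuple[str, str]:
    """Prepend a formality pseudo-token so the model learns to condition its
    output register on it."""
    return f"{FORMALITY_TOKENS[formality]} {source}", target

def tag_inference_input(source: str, formality: str) -> str:
    """At inference time, the requested formality is injected the same way."""
    return f"{FORMALITY_TOKENS[formality]} {source}"

if __name__ == "__main__":
    print(tag_training_pair("How are you?", "Wie geht es Ihnen?", "formal"))
    print(tag_inference_input("See you tomorrow!", "informal"))
```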
2022
SubER - A Metric for Automatic Evaluation of Subtitle Quality
Patrick Wilken | Panayota Georgakopoulou | Evgeny Matusov
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper addresses the problem of evaluating the quality of automatically generated subtitles, which includes not only the quality of the machine-transcribed or translated speech, but also the quality of line segmentation and subtitle timing. We propose SubER - a single novel metric based on edit distance with shifts that takes all of these subtitle properties into account. We compare it to existing metrics for evaluating transcription, translation, and subtitle quality. A careful human evaluation in a post-editing scenario shows that the new metric has a high correlation with the post-editing effort and direct human assessment scores, outperforming baseline metrics considering only the subtitle text, such as WER and BLEU, and existing methods to integrate segmentation and timing features.
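SubER is defined over full subtitle files, including line breaks and timing, and allows shift operations. As a hedged illustration of only the edit-distance core it builds on, and not the actual SubER implementation, a plain word-level Levenshtein distance normalized by reference length can be sketched as:

```python
# Plain word-level edit distance (Levenshtein), normalized by reference length.
# This is only the building block that SubER extends with shift operations and
# with segmentation/timing-aware tokens; it is NOT the official SubER metric.

def word_edit_rate(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = edits to turn the first i hyp words into the first j ref words
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(hyp)][len(ref)] / max(len(ref), 1)

if __name__ == "__main__":
    print(word_edit_rate("the cat sat on mat", "the cat sat on the mat"))  # 1 insertion / 6
```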
AppTek’s Submission to the IWSLT 2022 Isometric Spoken Language Translation Task
Patrick Wilken | Evgeny Matusov
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
To participate in the Isometric Spoken Language Translation Task of the IWSLT 2022 evaluation, constrained condition, AppTek developed neural Transformer-based systems for English-to-German with various mechanisms of length control, ranging from source-side and target-side pseudo-tokens to encoding of remaining length in characters that replaces positional encoding. We further increased translation length compliance by sentence-level selection of length-compliant hypotheses from different system variants, as well as rescoring of N-best candidates from a single system. Length-compliant back-translated and forward-translated synthetic data, as well as other parallel data variants derived from the original MuST-C training corpus were important for a good quality/desired length trade-off. Our experimental results show that length compliance levels above 90% can be reached while minimizing losses in MT quality as measured in BERT and BLEU scores.
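As a rough illustration of one of the length-control mechanisms named above, the sketch below derives source-side length pseudo-tokens from the target/source character ratio; the bucket names and the ±10% thresholds are assumptions for illustration, not the exact scheme of the submission:

```python
# Sketch of source-side length-control pseudo-tokens for isometric MT.
# Bucket names and the +-10% thresholds are illustrative assumptions, not the
# exact scheme used in AppTek's submission.

def length_token(source: str, target: str) -> str:
    """Bucket the character-length ratio target/source into a pseudo-token."""
    ratio = len(target) / max(len(source), 1)
    if ratio < 0.90:
        return "<short>"
    if ratio <= 1.10:
        return "<normal>"
    return "<long>"

def tag_pair(source: str, target: str) -> tuple[str, str]:
    """Training: prepend the observed bucket to the source. At test time the
    desired bucket (typically <normal>) is prepended instead."""
    return f"{length_token(source, target)} {source}", target

if __name__ == "__main__":
    print(tag_pair("Thank you very much for your help!", "Vielen Dank!"))
    # ('<short> Thank you very much for your help!', 'Vielen Dank!')
```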
Automatic Video Dubbing at AppTek
Mattia Di Gangi | Nick Rossenbach | Alejandro Pérez | Parnia Bahar | Eugen Beck | Patrick Wilken | Evgeny Matusov
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
Video dubbing is the activity of revoicing a video while offering a viewing experience equivalent to the original video. The revoicing usually comes with a changed script, mostly in a different language, and should reproduce the original emotions, remain coherent with the body language, and be lip-synchronized. In this project, we aim to build an automatic dubbing (AD) system in three phases: (1) voice-over; (2) emotional voice-over; (3) full dubbing, while enhancing the system with human-in-the-loop capabilities for higher quality.
2021
Without Further Ado: Direct and Simultaneous Speech Translation by AppTek in 2021
Parnia Bahar | Patrick Wilken | Mattia A. Di Gangi | Evgeny Matusov
Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021)
This paper describes the offline and simultaneous speech translation systems developed at AppTek for IWSLT 2021. Our offline ST submission includes the direct end-to-end system and the so-called posterior tight integrated model, which is akin to the cascade system but is trained in an end-to-end fashion, where all the cascaded modules are end-to-end models themselves. For simultaneous ST, we combine hybrid automatic speech recognition with a machine translation approach whose translation policy decisions are learned from statistical word alignments. Compared to last year, we improve general quality and provide a wider range of quality/latency trade-offs, both due to a data augmentation method making the MT model robust to varying chunk sizes. Finally, we present a method for ASR output segmentation into sentences that introduces a minimal additional delay.
2020
Flexible Customization of a Single Neural Machine Translation System with Multi-dimensional Metadata Inputs
Evgeny Matusov | Patrick Wilken | Christian Herold
Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)
Start-Before-End and End-to-End: Neural Speech Translation by AppTek and RWTH Aachen University
Parnia Bahar | Patrick Wilken | Tamer Alkhouli | Andreas Guta | Pavel Golik | Evgeny Matusov | Christian Herold
Proceedings of the 17th International Conference on Spoken Language Translation
AppTek and RWTH Aachen University team together to participate in the offline and simultaneous speech translation tracks of IWSLT 2020. For the offline task, we create both cascaded and end-to-end speech translation systems, paying attention to careful data selection and weighting. In the cascaded approach, we combine high-quality hybrid automatic speech recognition (ASR) with the Transformer-based neural machine translation (NMT). Our end-to-end direct speech translation systems benefit from pretraining of adapted encoder and decoder components, as well as synthetic data and fine-tuning and thus are able to compete with cascaded systems in terms of MT quality. For simultaneous translation, we utilize a novel architecture that makes dynamic decisions, learned from parallel data, to determine when to continue feeding on input or generate output words. Experiments with speech and text input show that even at low latency this architecture leads to superior translation results.
Neural Simultaneous Speech Translation Using Alignment-Based Chunking
Patrick Wilken | Tamer Alkhouli | Evgeny Matusov | Pavel Golik
Proceedings of the 17th International Conference on Spoken Language Translation
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality. We propose a neural machine translation (NMT) model that makes dynamic decisions when to continue feeding on input or generate output words. The model is composed of two main components: one to dynamically decide on ending a source chunk, and another that translates the consumed chunk. We train the components jointly and in a manner consistent with the inference conditions. To generate chunked training data, we propose a method that utilizes word alignment while also preserving enough context. We compare models with bidirectional and unidirectional encoders of different depths, both on real speech and text input. Our results on the IWSLT 2020 English-to-German task outperform a wait-k baseline by 2.6 to 3.7% BLEU absolute.
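The wait-k baseline referenced above follows a fixed read/write schedule: read k source tokens, then alternate writing one target token and reading one more. A minimal, model-free sketch of that policy, with a dummy stand-in for the incremental decoder, is:

```python
# Minimal sketch of the fixed wait-k policy used as the baseline above:
# read k source tokens, then alternate between writing one target token and
# reading one more. "translate_prefix" is a placeholder for a real incremental
# NMT decoder; the toy below also assumes one target token per source token.

from typing import Callable, Iterator, List

def wait_k_policy(source_stream: Iterator[str],
                  k: int,
                  translate_prefix: Callable[[List[str], int], str]) -> Iterator[str]:
    read: List[str] = []
    written = 0
    for token in source_stream:
        read.append(token)
        if len(read) >= k:
            yield translate_prefix(read, written)   # emit one target token
            written += 1
    while written < len(read):                      # source exhausted: flush
        yield translate_prefix(read, written)
        written += 1

if __name__ == "__main__":
    toy_decoder = lambda prefix, i: prefix[i].upper()   # "translate" by uppercasing
    out = wait_k_policy(iter("wir testen das modell jetzt".split()),
                        k=2, translate_prefix=toy_decoder)
    print(list(out))   # ['WIR', 'TESTEN', 'DAS', 'MODELL', 'JETZT']
```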
2019
Customizing Neural Machine Translation for Subtitling
Evgeny Matusov | Patrick Wilken | Yota Georgakopoulou
Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)
In this work, we customized a neural machine translation system for translation of subtitles in the domain of entertainment. The neural translation model was adapted to the subtitling content and style and extended by a simple, yet effective technique for utilizing inter-sentence context for short sentences such as dialog turns. The main contribution of the paper is a novel subtitle segmentation algorithm that predicts the end of a subtitle line given the previous word-level context using a recurrent neural network learned from human segmentation decisions. This model is combined with subtitle length and duration constraints established in the subtitling industry. We conducted a thorough human evaluation with two post-editors (English-to-Spanish translation of a documentary and a sitcom). It showed a notable productivity increase of up to 37% as compared to translating from scratch and significant reductions in human translation edit rate in comparison with the post-editing of the baseline non-adapted system without a learned segmentation model.
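The segmentation model described above is learned from human decisions and then combined with industry constraints. As a hedged illustration of the constraint side only, the sketch below breaks a subtitle into lines under a maximum characters-per-line limit; the 42-character value is a common subtitling guideline assumed here, not a number taken from the paper:

```python
# Greedy word-level line breaking under a maximum characters-per-line constraint.
# The 42-character limit is a common subtitling guideline used here as an
# assumption; the paper's actual system predicts breaks with a learned RNN and
# combines them with constraints like these.

def break_into_lines(text: str, max_chars: int = 42) -> list[str]:
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

if __name__ == "__main__":
    for line in break_into_lines(
            "This subtitle is a bit too long to fit on a single line of text"):
        print(line)
```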
The Challenges of Using Neural Machine Translation for Literature
Evgeny Matusov
Proceedings of the Qualities of Literary Machine Translation
2018
Generating E-Commerce Product Titles and Predicting their Quality
José G. Camargo de Souza | Michael Kozielski | Prashant Mathur | Ernie Chang | Marco Guerini | Matteo Negri | Marco Turchi | Evgeny Matusov
Proceedings of the 11th International Conference on Natural Language Generation
E-commerce platforms present products using titles that summarize product information. These titles cannot be created by hand; therefore, an algorithmic solution is required. The task of automatically generating these titles given noisy user-provided titles is one way to achieve the goal. The setting requires the generation process to be fast and the generated title to be both human-readable and concise. Furthermore, we need to understand if such generated titles are usable. As such, we propose approaches that (i) automatically generate product titles and (ii) predict their quality. Our approach scales to millions of products, and both automatic and human evaluations performed on real-world data indicate our approaches are effective and applicable to existing e-commerce scenarios.
Learning from Chunk-based Feedback in Neural Machine Translation
Pavel Petrushkov | Shahram Khadivi | Evgeny Matusov
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
We empirically investigate learning from partial feedback in neural machine translation (NMT), when partial feedback is collected by asking users to highlight a correct chunk of a translation. We propose a simple and effective way of utilizing such feedback in NMT training. We demonstrate how the common machine translation problem of domain mismatch between training and deployment can be reduced solely based on chunk-level user feedback. We conduct a series of simulation experiments to test the effectiveness of the proposed method. Our results show that chunk-level feedback outperforms sentence-based feedback by up to 2.61% BLEU absolute.
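One simple way to turn such chunk-level feedback into a training signal, sketched here purely for illustration and not as the paper's exact formulation, is to weight the per-token log-likelihood by whether a token falls inside the user-highlighted span:

```python
# Sketch of turning chunk-level user feedback into a weighted objective:
# tokens inside the highlighted (approved) span get full weight, the rest a
# small weight. The weight values and interface are illustrative assumptions,
# not the exact formulation of the paper.

import math

def chunk_weighted_nll(token_logprobs: list[float],
                       chunk_span: tuple[int, int],
                       in_chunk_weight: float = 1.0,
                       out_chunk_weight: float = 0.1) -> float:
    """Negative log-likelihood of a model-produced translation, with per-token
    weights derived from the user-highlighted span [start, end)."""
    start, end = chunk_span
    total = 0.0
    for i, logprob in enumerate(token_logprobs):
        weight = in_chunk_weight if start <= i < end else out_chunk_weight
        total -= weight * logprob
    return total

if __name__ == "__main__":
    # toy log-probabilities for a 6-token output; the user highlighted tokens 2-4
    logprobs = [math.log(p) for p in (0.5, 0.4, 0.9, 0.8, 0.3, 0.6)]
    print(round(chunk_weighted_nll(logprobs, (2, 5)), 3))
```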
Can Neural Machine Translation be Improved with User Feedback?
Julia Kreutzer | Shahram Khadivi | Evgeny Matusov | Stefan Riezler
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)
We present the first real-world application of methods for improving neural machine translation (NMT) with human reinforcement, based on explicit and implicit user feedback collected on the eBay e-commerce platform. Previous work has been confined to simulation experiments, whereas in this paper we work with real logged feedback for offline bandit learning of NMT parameters. We conduct a thorough analysis of the available explicit user judgments—five-star ratings of translation quality—and show that they are not reliable enough to yield significant improvements in bandit learning. In contrast, we successfully utilize implicit task-based feedback collected in a cross-lingual search task to improve task-specific and machine translation quality metrics.
Neural Speech Translation at AppTek
Evgeny Matusov | Patrick Wilken | Parnia Bahar | Julian Schamper | Pavel Golik | Albert Zeyer | Joan Albert Silvestre-Cerda | Adrià Martínez-Villaronga | Hendrik Pesch | Jan-Thorsten Peter
Proceedings of the 15th International Conference on Spoken Language Translation
This work describes AppTek’s speech translation pipeline that includes strong state-of-the-art automatic speech recognition (ASR) and neural machine translation (NMT) components. We show how these components can be tightly coupled by encoding ASR confusion networks, as well as ASR-like noise adaptation, vocabulary normalization, and implicit punctuation prediction during translation. In another experimental setup, we propose a direct speech translation approach that can be scaled to translation tasks with large amounts of text-only parallel training data but a limited number of hours of recorded and human-translated speech.
2017
Human Evaluation of Multi-modal Neural Machine Translation: A Case-Study on E-Commerce Listing Titles
Iacer Calixto | Daniel Stein | Evgeny Matusov | Sheila Castilho | Andy Way
Proceedings of the Sixth Workshop on Vision and Language
In this paper, we study how humans perceive the use of images as an additional knowledge source to machine-translate user-generated product listings in an e-commerce company. We conduct a human evaluation where we assess how a multi-modal neural machine translation (NMT) model compares to two text-only approaches: a conventional state-of-the-art attention-based NMT and a phrase-based statistical machine translation (PBSMT) model. We evaluate translations obtained with different systems and also discuss the data set of user-generated product listings, which in our case comprises both product listings and associated images. We found that humans preferred translations obtained with a PBSMT system to both text-only and multi-modal NMT over 56% of the time. Nonetheless, human evaluators ranked translations from a multi-modal NMT model as better than those of a text-only NMT over 88% of the time, which suggests that images do help NMT in this use-case.
Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search
Leonard Dahlmann | Evgeny Matusov | Pavel Petrushkov | Shahram Khadivi
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
In this paper, we introduce a hybrid search for attention-based neural machine translation (NMT). A target phrase learned with statistical MT models extends a hypothesis in the NMT beam search when the attention of the NMT model focuses on the source words translated by this phrase. Phrases added in this way are scored with the NMT model, but also with SMT features including phrase-level translation probabilities and a target language model. Experimental results on German-to-English news domain and English-to-Russian e-commerce domain translation tasks show that using phrase-based models in NMT search improves MT quality by up to 2.3% BLEU absolute as compared to a strong NMT baseline.
Using Images to Improve Machine-Translating E-Commerce Product Listings.
Iacer Calixto | Daniel Stein | Evgeny Matusov | Pintu Lohar | Sheila Castilho | Andy Way
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
In this paper we study the impact of using images to machine-translate user-generated e-commerce product listings. We study how a multi-modal Neural Machine Translation (NMT) model compares to two text-only approaches: a conventional state-of-the-art attentional NMT and a Statistical Machine Translation (SMT) model. User-generated product listings often do not constitute grammatical or well-formed sentences. More often than not, they consist of the juxtaposition of short phrases or keywords. We train our models end-to-end as well as use text-only and multi-modal NMT models for re-ranking n-best lists generated by an SMT model. We qualitatively evaluate our user-generated training data and also analyse how adding synthetic data impacts the results. We evaluate our models quantitatively using BLEU and TER and find that (i) additional synthetic data has a general positive impact on text-only and multi-modal NMT models, and that (ii) using a multi-modal NMT model for re-ranking n-best lists improves TER significantly across different n-best list sizes.
Neural and Statistical Methods for Leveraging Meta-information in Machine Translation
Shahram Khadivi | Patrick Wilken | Leonard Dahlmann | Evgeny Matusov
Proceedings of Machine Translation Summit XVI: Research Track
2016
Guided Alignment Training for Topic-Aware Neural Machine Translation
Wenhu Chen | Evgeny Matusov | Shahram Khadivi | Jan-Thorsten Peter
Conferences of the Association for Machine Translation in the Americas: MT Researchers' Track
In this paper, we propose an effective way for biasing the attention mechanism of a sequence-to-sequence neural machine translation (NMT) model towards the well-studied statistical word alignment models. We show that our novel guided alignment training approach improves translation quality on real-life e-commerce texts consisting of product titles and descriptions, overcoming the problems posed by many unknown words and a large type/token ratio. We also show that meta-data associated with input texts such as topic or category information can significantly improve translation quality when used as an additional signal to the decoder part of the network. With both novel features, the BLEU score of the NMT system on a product title set improves from 18.6 to 21.3%. Even larger MT quality gains are obtained through domain adaptation of a general domain NMT system to e-commerce data. The developed NMT system also performs well on the IWSLT speech translation task, where an ensemble of four variant systems outperforms the phrase-based baseline by 2.1% BLEU absolute.
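The guided alignment idea above can be pictured as an extra penalty that pulls the attention distribution of each target token towards the statistical word alignment. The following sketch shows such a cross-entropy-style penalty on toy values; the normalization and the way it would be mixed into the NMT loss are assumptions, not the paper's exact recipe:

```python
# Sketch of a guided-alignment penalty: cross-entropy between the model's
# attention distribution over source positions and a reference alignment taken
# from a statistical word aligner. How this term is weighted and combined with
# the regular NMT training loss is an assumption in this sketch.

import math

def guided_alignment_loss(attention: list[list[float]],
                          alignment: list[list[int]],
                          eps: float = 1e-8) -> float:
    """attention[t][s]: attention weight of target position t on source position s.
    alignment[t][s]: 1 if the statistical aligner links t to s, else 0."""
    loss, rows = 0.0, 0
    for att_row, ali_row in zip(attention, alignment):
        linked = sum(ali_row)
        if linked == 0:          # unaligned target token: no guidance signal
            continue
        for att, link in zip(att_row, ali_row):
            if link:
                loss -= (link / linked) * math.log(att + eps)
        rows += 1
    return loss / max(rows, 1)

if __name__ == "__main__":
    att = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]   # toy attention for 2 target tokens
    ali = [[1, 0, 0], [0, 1, 0]]               # word alignment from the aligner
    print(round(guided_alignment_loss(att, ali), 3))  # low: attention matches alignment
```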
2013
Omnifluent English-to-French and Russian-to-English Systems for the 2013 Workshop on Statistical Machine Translation
Evgeny Matusov | Gregor Leusch
Proceedings of the Eighth Workshop on Statistical Machine Translation
Selective Combination of Pivot and Direct Statistical Machine Translation Models
Ahmed El Kholy | Nizar Habash | Gregor Leusch | Evgeny Matusov | Hassan Sawaf
Proceedings of the Sixth International Joint Conference on Natural Language Processing
Language Independent Connectivity Strength Features for Phrase Pivot Statistical Machine Translation
Ahmed El Kholy | Nizar Habash | Gregor Leusch | Evgeny Matusov | Hassan Sawaf
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
2012
Incremental Re-Training of a Hybrid English-French MT System with Customer Translation Memory Data
Evgeny Matusov
Proceedings of the 10th Conference of the Association for Machine Translation in the Americas: Commercial MT User Program
In this paper, we present SAIC’s hybrid machine translation (MT) system and show how it was adapted to the needs of our customer – a major global fashion company. The adaptation was performed in two ways: off-line selection of domain-relevant parallel and monolingual data from a background database, as well as on-line incremental adaptation with customer parallel and translation memory data. The translation memory was integrated into the statistical search using two novel features. We show that these features can be used to produce nearly perfect translations of data that fully matches, or to a large extent partially matches, the TM entries, without sacrificing translation quality on the data without TM matches. We also describe how the human post-editing effort was reduced not only due to significantly better MT quality after adaptation, but also due to improved formatting and readability of the MT output.
2010
Improving Reordering in Statistical Machine Translation from Farsi
Evgeny Matusov | Selçuk Köprü
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers
In this paper, we propose a novel model for scoring reordering in phrase-based statistical machine translation (SMT) and successfully use it for translation from Farsi into English and Arabic. The model replaces the distance-based distortion model that is widely used in most SMT systems. The main idea of the model is to penalize each new deviation from the monotonic translation path. We also propose a way for combining this model with manually created reordering rules for Farsi which try to alleviate the difference in sentence structure between Farsi and English/Arabic by changing the position of the verb. The rules are used in the SMT search as soft constraints. In the experiments on two general-domain translation tasks, the proposed penalty-based model improves the BLEU score by up to 1.5% absolute as compared to the baseline of monotonic translation, and up to 1.2% as compared to using the distance-based distortion model.
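The core intuition of the proposed reordering model is that every departure from a monotonic left-to-right translation path costs something. The toy sketch below only counts such non-monotonic jumps in a hypothetical source coverage order; the actual model scores deviations more richly within the SMT search:

```python
# Toy sketch of penalizing deviations from the monotonic translation path:
# given the order in which source positions are covered during decoding, count
# every step that is not "translate the next source word". The paper's model
# scores deviations inside the SMT search; this only illustrates the intuition.

def count_monotonic_deviations(coverage_order: list[int]) -> int:
    """coverage_order: source positions in the order they were translated."""
    deviations, prev = 0, -1
    for pos in coverage_order:
        if pos != prev + 1:
            deviations += 1     # a new jump away from the monotonic path
        prev = pos
    return deviations

if __name__ == "__main__":
    print(count_monotonic_deviations([0, 1, 2, 3]))   # 0: fully monotonic
    print(count_monotonic_deviations([2, 0, 1, 3]))   # 3 non-monotonic jumps
```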
AppTek’s APT machine translation system for IWSLT 2010
Evgeny Matusov | Selçuk Köprü
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we describe AppTek’s new APT machine translation system that we employed in the IWSLT 2010 evaluation campaign. This year, we participated in the Arabic-to-English and Turkish-to-English BTEC tasks. We discuss the architecture of the system, the preprocessing steps and the experiments carried out during the campaign. We show that competitive translation quality can be obtained with a system that can be turned into a real-life product without much effort.
2009
Are Unaligned Words Important for Machine Translation?
Yuqi Zhang | Evgeny Matusov | Hermann Ney
Proceedings of the 13th Annual Conference of the European Association for Machine Translation
The RWTH System Combination System for WMT 2009
Gregor Leusch | Evgeny Matusov | Hermann Ney
Proceedings of the Fourth Workshop on Statistical Machine Translation
The RWTH Machine Translation System for WMT 2009
Maja Popović | David Vilar | Daniel Stein | Evgeny Matusov | Hermann Ney
Proceedings of the Fourth Workshop on Statistical Machine Translation
2008
The RWTH machine translation system for IWSLT 2008.
David Vilar | Daniel Stein | Yuqi Zhang | Evgeny Matusov | Arne Mauser | Oliver Bender | Saab Mansour | Hermann Ney
Proceedings of the 5th International Workshop on Spoken Language Translation: Evaluation Campaign
RWTH’s system for the 2008 IWSLT evaluation consists of a combination of different phrase-based and hierarchical statistical machine translation systems. We participated in the translation tasks for the Chinese-to-English and Arabic-to-English language pairs. We investigated different preprocessing techniques, reordering methods for the phrase-based system, including reordering of speech lattices, and syntax-based enhancements for the hierarchical systems. We also tried the combination of the Arabic-to-English and Chinese-to-English outputs as an additional submission.
Complexity of Finding the BLEU-optimal Hypothesis in a Confusion Network
Gregor Leusch | Evgeny Matusov | Hermann Ney
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing
Tighter Integration of Rule-Based and Statistical MT in Serial System Combination
Nicola Ueffing | Jens Stephan | Evgeny Matusov | Loïc Dugast | George Foster | Roland Kuhn | Jean Senellart | Jin Yang
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)
2006
The RWTH statistical machine translation system for the IWSLT 2006 evaluation
Arne Mauser | Richard Zens | Evgeny Matusov | Sasa Hasan | Hermann Ney
Proceedings of the Third International Workshop on Spoken Language Translation: Evaluation Campaign
Automatic sentence segmentation and punctuation prediction for spoken language translation
Evgeny Matusov | Arne Mauser | Hermann Ney
Proceedings of the Third International Workshop on Spoken Language Translation: Papers
Computing Consensus Translation for Multiple Machine Translation Systems Using Enhanced Hypothesis Alignment
Evgeny Matusov | Nicola Ueffing | Hermann Ney
11th Conference of the European Chapter of the Association for Computational Linguistics
Training a Statistical Machine Translation System without GIZA++
Arne Mauser | Evgeny Matusov | Hermann Ney
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
The IBM Models (Brown et al., 1993) enjoy great popularity in the machine translation community because they offer high quality word alignments and a free implementation is available with the GIZA++ Toolkit (Och and Ney, 2003). Several methods have been developed to overcome the asymmetry of the alignment generated by the IBM Models. A remaining disadvantage, however, is the high model complexity. This paper describes a word alignment training procedure for statistical machine translation that uses a simple and clear statistical model, different from the IBM models. The main idea of the algorithm is to generate a symmetric and monotonic alignment between the target sentence and a permutation graph representing different reorderings of the words in the source sentence. The quality of the generated alignment is shown to be comparable to the standard GIZA++ training in an SMT setup.
2005
Novel Reordering Approaches in Phrase-Based Statistical Machine Translation
Stephan Kanthak | David Vilar | Evgeny Matusov | Richard Zens | Hermann Ney
Proceedings of the ACL Workshop on Building and Using Parallel Texts
Statistical Machine Translation of European Parliamentary Speeches
David Vilar | Evgeny Matusov | Sasa Hasan | Richard Zens | Hermann Ney
Proceedings of Machine Translation Summit X: Papers
In this paper we present the ongoing work at RWTH Aachen University for building a speech-to-speech translation system within the TC-Star project. The corpus we work on consists of parliamentary speeches held in the European Plenary Sessions. To our knowledge, this is the first project that focuses on speech-to-speech translation applied to a real-life task. We describe the statistical approach used in the development of our system and analyze its performance under different conditions: dealing with syntactically correct input, dealing with the exact transcription of speech and dealing with the (noisy) output of an automatic speech recognition system. Experimental results show that our system is able to perform adequately in each of these conditions.
Efficient statistical machine translation with constrained reordering
Evgeny Matusov | Stephan Kanthak | Hermann Ney
Proceedings of the 10th EAMT Conference: Practical applications of machine translation
Integrated Chinese Word Segmentation in Statistical Machine Translation
Jia Xu | Evgeny Matusov | Richard Zens | Hermann Ney
Proceedings of the Second International Workshop on Spoken Language Translation
Evaluating Machine Translation Output with Automatic Sentence Segmentation
Evgeny Matusov | Gregor Leusch | Oliver Bender | Hermann Ney
Proceedings of the Second International Workshop on Spoken Language Translation
The RWTH Phrase-based Statistical Machine Translation System
Richard Zens | Oliver Bender | Sasa Hasan | Shahram Khadivi | Evgeny Matusov | Jia Xu | Yuqi Zhang | Hermann Ney
Proceedings of the Second International Workshop on Spoken Language Translation
2004
Alignment templates: the RWTH SMT system
Oliver Bender | Richard Zens | Evgeny Matusov | Hermann Ney
Proceedings of the First International Workshop on Spoken Language Translation: Evaluation Campaign
Statistical machine translation of spontaneous speech with scarce resources
Evgeny Matusov | Maja Popovic | Richard Zens | Hermann Ney
Proceedings of the First International Workshop on Spoken Language Translation: Papers
Improved Word Alignment Using a Symmetric Lexicon Model
Richard Zens | Evgeny Matusov | Hermann Ney
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics
Symmetric Word Alignments for Statistical Machine Translation
Evgeny Matusov | Richard Zens | Hermann Ney
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics