Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data?

Qingkai Fang, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng


Abstract
Recently proposed two-pass direct speech-to-speech translation (S2ST) models decompose the task into speech-to-text translation (S2TT) and text-to-speech (TTS) within an end-to-end model, yielding promising results. However, the training of these models still relies on parallel speech data, which is extremely difficult to collect. In contrast, large amounts of data and pretrained models are available for S2TT and TTS, yet they have not been fully utilized in the development of S2ST models. Inspired by this, in this paper we first introduce a composite S2ST model named ComSpeech, which can seamlessly integrate any pretrained S2TT and TTS models into a direct S2ST model. Furthermore, to eliminate the reliance on parallel speech data, we propose a novel training method, ComSpeech-ZS, that solely utilizes S2TT and TTS data. It aligns representations in the latent space through contrastive learning, enabling the speech synthesis capability learned from the TTS data to generalize to S2ST in a zero-shot manner. Experimental results on the CVSS dataset show that when parallel speech data is available, ComSpeech surpasses previous two-pass models such as UnitY and Translatotron 2 in both translation quality and decoding speed. When no parallel speech data is available, ComSpeech-ZS lags behind ComSpeech by only 0.7 ASR-BLEU and outperforms the cascaded models.
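The abstract describes aligning S2TT and TTS representations in a shared latent space via contrastive learning. The snippet below is a minimal sketch of one common way such an objective is implemented (a symmetric InfoNCE-style loss in PyTorch); the function name, the pooled-representation inputs, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(s2tt_hidden: torch.Tensor,
                               tts_hidden: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE-style loss that pulls paired S2TT and TTS
    representations together in a shared latent space and pushes
    unpaired ones apart.

    s2tt_hidden: (batch, dim) pooled hidden states from an S2TT branch
    tts_hidden:  (batch, dim) pooled hidden states from a TTS text encoder
    (Both inputs and the pooling scheme are assumptions for illustration.)
    """
    a = F.normalize(s2tt_hidden, dim=-1)
    b = F.normalize(tts_hidden, dim=-1)
    logits = a @ b.t() / temperature                      # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)    # i-th S2TT vector pairs with i-th TTS vector
    # cross-entropy in both directions so the alignment is symmetric
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```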
Anthology ID: 2024.acl-long.392
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7264–7277
URL: https://aclanthology.org/2024.acl-long.392
DOI: 10.18653/v1/2024.acl-long.392
Cite (ACL): Qingkai Fang, Shaolei Zhang, Zhengrui Ma, Min Zhang, and Yang Feng. 2024. Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7264–7277, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data? (Fang et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.392.pdf