Learning to Decode Collaboratively with Multiple Language Models

Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag


Abstract
We propose a method to teach multiple large language models (LLMs) to collaborate by interleaving their generations at the token level. We model the decision of which LLM generates the next token as a latent variable. By optimizing the marginal likelihood of a training set under our latent variable model, the base LLM automatically learns when to generate a token itself and when to call on one of the “assistant” language models to generate, all without direct supervision. Token-level collaboration during decoding allows for a fusion of each model’s expertise in a manner tailored to the specific task at hand. Our collaborative decoding is especially useful in cross-domain settings where a generalist base LLM learns to invoke domain-expert models. On instruction-following, domain-specific QA, and reasoning tasks, we show that the performance of the joint system exceeds that of the individual models. By visualizing the learned latent decisions, we show qualitatively that models trained with our method exhibit several interesting collaboration patterns, e.g., template-filling.
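As a rough sketch of the latent-variable formulation described in the abstract (the notation here is assumed for illustration, not drawn from the paper): write $P_0$ for the base LLM and $P_1, \dots, P_K$ for the assistant models, and let $z_t \in \{0, \dots, K\}$ be the latent choice of which model emits token $x_t$. Training then maximizes the marginal likelihood

$$\log P(X) = \sum_{t=1}^{T} \log \sum_{z_t = 0}^{K} P(z_t \mid X_{<t}) \, P_{z_t}(x_t \mid X_{<t}),$$

which requires no per-token supervision of $z_t$; at decoding time, the system can select which model generates each token according to the inferred latent decision.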
Anthology ID: 2024.acl-long.701
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 12974–12990
URL: https://aclanthology.org/2024.acl-long.701
DOI: 10.18653/v1/2024.acl-long.701
Cite (ACL): Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, and David Sontag. 2024. Learning to Decode Collaboratively with Multiple Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12974–12990, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Learning to Decode Collaboratively with Multiple Language Models (Shen et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.701.pdf