Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model

Yibo Miao, Hongcheng Gao, Hao Zhang, Zhijie Deng


Abstract
The detection of machine-generated text, especially from large language models (LLMs), is crucial for preventing serious social problems resulting from their misuse. Some methods train dedicated detectors on specific datasets but generalize poorly to unseen test data, while zero-shot methods often yield suboptimal performance. Although the recent DetectGPT shows promising detection performance, it is highly inefficient: detecting a single candidate requires querying the source LLM with hundreds of perturbations of that candidate. This paper aims to bridge this gap. Concretely, we propose to incorporate a Bayesian surrogate model, which selects typical samples based on Bayesian uncertainty and interpolates their scores to the remaining samples, thereby improving query efficiency. Empirical results demonstrate that our method significantly outperforms existing approaches under a low query budget. Notably, when detecting text generated by LLaMA-family models, our method with just 2 or 3 queries can outperform DetectGPT with 200 queries.
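As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below assumes the Bayesian surrogate is a Gaussian-process regressor over perturbation embeddings, fit with scikit-learn. The helpers `perturb`, `embed`, and `llm_log_prob` are hypothetical stand-ins for a perturbation function (e.g., mask-and-fill), a text encoder, and a query to the source LLM's log-probability.

```python
# Minimal sketch of uncertainty-guided score interpolation for
# DetectGPT-style detection. Not the authors' code; `perturb`, `embed`,
# and `llm_log_prob` are hypothetical helpers supplied by the caller.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def detection_score(candidate, perturb, embed, llm_log_prob,
                    n_perturbations=100, query_budget=3):
    """Estimate the perturbation-discrepancy statistic with few LLM queries."""
    perturbations = [perturb(candidate) for _ in range(n_perturbations)]
    X = np.stack([embed(p) for p in perturbations])  # surrogate inputs

    # Query one perturbation to seed the surrogate, then greedily query
    # the perturbation with the highest posterior (Bayesian) uncertainty.
    queried = [0]
    scores = [llm_log_prob(perturbations[0])]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    for _ in range(query_budget - 1):
        gp.fit(X[queried], np.asarray(scores))
        _, std = gp.predict(X, return_std=True)
        std[queried] = -np.inf               # never re-query known points
        nxt = int(np.argmax(std))            # most uncertain perturbation
        queried.append(nxt)
        scores.append(llm_log_prob(perturbations[nxt]))

    # Interpolate log-probabilities for all perturbations from the surrogate.
    gp.fit(X[queried], np.asarray(scores))
    interpolated = gp.predict(X)

    # DetectGPT statistic: candidate log-prob minus mean perturbed log-prob.
    # Larger values suggest the text was machine-generated.
    return llm_log_prob(candidate) - interpolated.mean()
```

The key design point is that only `query_budget` perturbations are actually scored by the source LLM; the scores of the remaining perturbations are interpolated by the surrogate, which is how a budget of 2 or 3 queries can approximate a statistic that DetectGPT estimates with hundreds.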
Anthology ID: 2024.findings-acl.366
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6118–6130
URL: https://aclanthology.org/2024.findings-acl.366
DOI: 10.18653/v1/2024.findings-acl.366
Cite (ACL): Yibo Miao, Hongcheng Gao, Hao Zhang, and Zhijie Deng. 2024. Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model. In Findings of the Association for Computational Linguistics: ACL 2024, pages 6118–6130, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model (Miao et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.366.pdf