Detecting AI-enhanced Opinion Spambots: a study on LLM-generated Hotel Reviews

Vijini Liyanage, Davide Buscaldi, Penelope Forcioli


Abstract
Opinion spamming is the posting of fake opinions or reviews to promote or discredit target products, services, or individuals. Concern about this activity has grown steadily, especially with the development of automated bots ("spambots") for this purpose. Nowadays, Large Language Models (LLMs) have proven able to generate text that is almost indistinguishable from human-written text. There is therefore growing concern about the use of these models for malicious purposes, among them opinion spamming. In this paper, we carry out a study on LLM-generated reviews, in particular hotel reviews, as we chose the well-known Opinion Spam corpus by Myle Ott as the seed for our dataset. We generated a set of fake reviews with various models and applied different classification algorithms to verify how difficult it is to detect this kind of generated content. The results show that, given enough training data, it is not difficult to detect the fake reviews generated by such models, as they tend to associate the aspects in the reviews with the same attributes.
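
As a concrete illustration of the detection setup sketched in the abstract, the snippet below trains a binary classifier to separate human-written hotel reviews from LLM-generated ones. The file names, the TF-IDF features, and the logistic regression model are illustrative assumptions, not the exact pipeline used in the paper.

```python
# Minimal sketch: human-vs-generated review classification.
# File names and the TF-IDF + logistic regression choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def load_reviews(path):
    """Load one review per line from a plain-text file (hypothetical format)."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical inputs: human reviews (e.g. seeded from the Ott Opinion Spam
# corpus) and reviews generated by an LLM for the same hotels.
human = load_reviews("human_reviews.txt")
generated = load_reviews("llm_reviews.txt")

texts = human + generated
labels = [0] * len(human) + [1] * len(generated)  # 1 = LLM-generated

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Word n-gram TF-IDF features; repeated aspect-attribute pairings in
# generated reviews should surface as highly weighted n-grams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)

clf.fit(vectorizer.fit_transform(X_train), y_train)
preds = clf.predict(vectorizer.transform(X_test))
print(classification_report(y_test, preds, target_names=["human", "generated"]))
```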
Anthology ID:
2024.ecnlp-1.8
Volume:
Proceedings of the Seventh Workshop on e-Commerce and NLP @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Shervin Malmasi, Besnik Fetahu, Nicola Ueffing, Oleg Rokhlenko, Eugene Agichtein, Ido Guy
Venues:
ECNLP | WS
Publisher:
ELRA and ICCL
Pages:
74–78
URL:
https://aclanthology.org/2024.ecnlp-1.8
Cite (ACL):
Vijini Liyanage, Davide Buscaldi, and Penelope Forcioli. 2024. Detecting AI-enhanced Opinion Spambots: a study on LLM-generated Hotel Reviews. In Proceedings of the Seventh Workshop on e-Commerce and NLP @ LREC-COLING 2024, pages 74–78, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Detecting AI-enhanced Opinion Spambots: a study on LLM-generated Hotel Reviews (Liyanage et al., ECNLP-WS 2024)
PDF:
https://aclanthology.org/2024.ecnlp-1.8.pdf