Aligning Large Language Models with Diverse Political Viewpoints

Dominik Stammbach, Philine Widmer, Eunjung Cho, Caglar Gulcehre, Elliott Ash


Abstract
Large language models such as ChatGPT exhibit striking political biases. When users query them for political information, these models often take a normative stance. To overcome this, we align LLMs with diverse political viewpoints drawn from 100,000 comments written by candidates running for national parliament in Switzerland. Models aligned with this data generate more accurate political viewpoints from Swiss parties than commercial models such as ChatGPT. We also propose a procedure for generating balanced overviews that summarize multiple viewpoints using such models. The replication package contains all code and data.
Anthology ID:
2024.emnlp-main.412
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7257–7267
URL:
https://aclanthology.org/2024.emnlp-main.412
DOI:
10.18653/v1/2024.emnlp-main.412
Cite (ACL):
Dominik Stammbach, Philine Widmer, Eunjung Cho, Caglar Gulcehre, and Elliott Ash. 2024. Aligning Large Language Models with Diverse Political Viewpoints. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7257–7267, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Aligning Large Language Models with Diverse Political Viewpoints (Stammbach et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.412.pdf