Non-contrastive sentence representations via self-supervision

Duccio Pappadopulo, Marco Farina


Abstract
Sample-contrastive methods, typically referred to simply as contrastive, are the foundation of most unsupervised methods for learning text and sentence embeddings. By contrast, a different class of self-supervised, non-contrastive loss functions and methods has been considered in the computer vision community, referred to as dimension-contrastive. In this paper, we thoroughly compare this class of methods with the standard baseline for contrastive sentence embeddings, SimCSE. We find that self-supervised embeddings trained with dimension-contrastive objectives can outperform SimCSE on downstream tasks without needing auxiliary loss functions.
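
As a rough illustration of the distinction the abstract draws, the sketch below contrasts a SimCSE-style InfoNCE (sample-contrastive) loss with a Barlow Twins-style (dimension-contrastive) loss in PyTorch. The choice of Barlow Twins as the dimension-contrastive example, the hyperparameters, and the variable names are assumptions for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch: sample-contrastive (SimCSE-style InfoNCE) vs.
# dimension-contrastive (Barlow Twins-style) objectives.
# z1, z2 are two embedding views of the same batch of sentences
# (e.g., two dropout passes), shape (batch, dim).
import torch
import torch.nn.functional as F


def simcse_infonce(z1, z2, temperature=0.05):
    """Sample-contrastive: pull each sentence toward its own second view
    and push it away from the other sentences in the batch."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature                 # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def dimension_contrastive(z1, z2, lambda_offdiag=5e-3):
    """Dimension-contrastive: align and decorrelate embedding dimensions
    instead of comparing sentences against one another."""
    n = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)      # standardize each dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.T @ z2 / n                                # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()   # push diagonal toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # suppress off-diagonal
    return on_diag + lambda_offdiag * off_diag
```

The key difference is the axis along which the loss operates: InfoNCE compares rows (sentences) of the batch, whereas the dimension-contrastive loss constrains the correlation structure of the embedding dimensions themselves.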
Anthology ID: 2024.findings-naacl.266
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4274–4284
URL: https://aclanthology.org/2024.findings-naacl.266
DOI: 10.18653/v1/2024.findings-naacl.266
Cite (ACL): Duccio Pappadopulo and Marco Farina. 2024. Non-contrastive sentence representations via self-supervision. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4274–4284, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Non-contrastive sentence representations via self-supervision (Pappadopulo & Farina, Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.266.pdf