Visual-Textual Entailment with Quantities Using Model Checking and Knowledge Injection

Nobuyuki Iokawa, Hitomi Yanaka


Abstract
In recent years, there has been great interest in multimodal inference. We concentrate on visual-textual entailment (VTE), a critical task in multimodal inference: determining entailment relations between an image and a sentence. Several deep learning-based approaches have been proposed for VTE, but current approaches struggle to handle quantities accurately. On the other hand, a promising approach based on logical inference, which can successfully deal with quantities, has also been proposed. However, that approach relies on automated theorem provers, which increases the computational cost for problems involving many entities. In addition, it cannot deal well with lexical differences between the semantic representations of images and sentences. In this paper, we present a logic-based VTE system that overcomes these drawbacks, using model checking for inference to increase efficiency and knowledge injection to perform more robust inference. We create a VTE dataset containing quantities and negation to assess how well VTE systems understand such phenomena. Using this dataset, we demonstrate that our system solves VTE tasks with quantities and negation more robustly than previous approaches.
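The abstract's core idea, evaluating a sentence's meaning directly against a finite model of the image rather than searching for a proof, can be illustrated with a toy sketch. This is not the paper's implementation; the scene representation, predicate names, and counting operators below are all hypothetical, intended only to show why model checking over a finite domain makes quantity and negation checks cheap.

```python
# Toy model checking for VTE with quantities (illustrative only,
# not the system described in the paper).
# The image is abstracted as a finite first-order model: a mapping
# from entity ids to the set of predicates that hold of each entity.
scene = {
    0: {"dog", "running"},
    1: {"dog"},
    2: {"cat"},
    3: {"dog", "running"},
}

def count(model, predicate):
    """Number of entities in the model satisfying a predicate."""
    return sum(1 for props in model.values() if predicate in props)

def at_least(model, n, predicate):
    """Check a counting quantifier: 'there are at least n <predicate>s'."""
    return count(model, predicate) >= n

def no(model, predicate):
    """Check a negated existential: 'there is no <predicate>'."""
    return count(model, predicate) == 0

# "At least three dogs are in the image" is entailed by this scene;
# "there are no cats" is not.
print(at_least(scene, 3, "dog"))  # True
print(no(scene, "cat"))           # False
```

Because the domain is finite and fixed by the image, each check is a single pass over the entities, with no proof search whose cost grows with the number of entities, which is the efficiency contrast with theorem proving that the abstract draws.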
Anthology ID:
2024.lrec-main.1512
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
17397–17408
URL:
https://aclanthology.org/2024.lrec-main.1512
Cite (ACL):
Nobuyuki Iokawa and Hitomi Yanaka. 2024. Visual-Textual Entailment with Quantities Using Model Checking and Knowledge Injection. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 17397–17408, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Visual-Textual Entailment with Quantities Using Model Checking and Knowledge Injection (Iokawa & Yanaka, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1512.pdf