Aikaterini-Lida Kalouli

Also published as: Aikaterini-lida Kalouli


2023

When Truth Matters - Addressing Pragmatic Categories in Natural Language Inference (NLI) by Large Language Models (LLMs)
Reto Gubelmann | Aikaterini-lida Kalouli | Christina Niklaus | Siegfried Handschuh
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)

In this paper, we focus on the ability of large language models (LLMs) to accommodate different pragmatic sentence types, such as questions, commands, and sentence fragments, in natural language inference (NLI). On the commonly used notion of logical inference, nothing can be inferred from a question, an order, or an incomprehensible sentence fragment. We find MNLI, arguably the most important NLI dataset, and hence models fine-tuned on this dataset, insensitive to this fact. Using a symbolic semantic parser, we develop and make publicly available fine-tuning datasets designed specifically to address this issue, with promising results. We also present a first exploration of ChatGPT’s concept of entailment.
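To make the reported insensitivity concrete, here is a minimal sketch of such a probe (the model checkpoint, input format, and example pair are my choices, not the paper's setup): an MNLI-fine-tuned model is queried with a question as premise, from which, on the logical notion of inference, nothing should follow.

```python
# Hypothetical probe, not the paper's code: ask an MNLI-fine-tuned model to
# label a pair whose premise is a question. Logically, nothing follows from
# a question, yet such models typically still return a confident label.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")
result = nli({"text": "Is the door open?", "text_pair": "The door is open."})
print(result)  # e.g. an ENTAILMENT/NEUTRAL/CONTRADICTION label with a score
```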

Curing the SICK and Other NLI Maladies
Aikaterini-Lida Kalouli | Hai Hu | Alexander F. Webb | Lawrence S. Moss | Valeria de Paiva
Computational Linguistics, Volume 49, Issue 1 - March 2023

Against the backdrop of the ever-improving Natural Language Inference (NLI) models, recent efforts have focused on the suitability of the current NLI datasets and on the feasibility of the NLI task as it is currently approached. Many of the recent studies have exposed the inherent human disagreements of the inference task and have proposed a shift from categorical labels to human subjective probability assessments, capturing human uncertainty. In this work, we show that neither the current task formulation nor the proposed uncertainty gradient is entirely suitable for solving the NLI challenges. Instead, we propose an ordered sense space annotation, which distinguishes between logical and common-sense inference. One end of the space captures non-sensical inferences, while the other end represents strictly logical scenarios. In the middle of the space, we find a continuum of common sense, namely, the subjective and graded opinion of a “person on the street.” To arrive at the proposed annotation scheme, we perform a careful investigation of the SICK corpus and create a taxonomy of annotation issues and guidelines. We re-annotate the corpus with the proposed annotation scheme, utilizing four symbolic inference systems, and then perform a thorough evaluation of the scheme by fine-tuning and testing commonly used pre-trained language models on the re-annotated SICK within various settings. We also pioneer a crowd annotation of a small portion of the MultiNLI corpus, showcasing that it is possible to adapt our scheme for annotation by non-experts on another NLI corpus. Our work shows the efficiency and benefits of the proposed mechanism and opens the way for a careful refinement of the NLI task.

2022

Proceedings of the 3rd Natural Logic Meets Machine Learning Workshop (NALOMA III)
Aikaterini-Lida Kalouli | Stergios Chatzikyriakidis
Proceedings of the 3rd Natural Logic Meets Machine Learning Workshop (NALOMA III)

Negation, Coordination, and Quantifiers in Contextualized Language Models
Aikaterini-Lida Kalouli | Rita Sevastjanova | Christin Beck | Maribel Romero
Proceedings of the 29th International Conference on Computational Linguistics

With the success of contextualized language models, much research explores what these models really learn and in which cases they still fail. Most of this work focuses on specific NLP tasks and on the learning outcome. Little research has attempted to decouple the models’ weaknesses from specific tasks and focus on the embeddings per se and their mode of learning. In this paper, we take up this research opportunity: based on theoretical linguistic insights, we explore whether the semantic constraints of function words are learned and how the surrounding context impacts their embeddings. We create suitable datasets, provide new insights into the inner workings of LMs vis-à-vis function words, and implement a supporting visual web interface for qualitative analysis.
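As a rough illustration of this kind of probing (the checkpoint, sentences, and similarity comparison below are assumptions for the sketch, not the paper's exact setup), one can extract the contextualized embedding of a function word such as "not" in different contexts and compare them:

```python
# Illustrative sketch, not the paper's code: measure how context shifts the
# last-layer BERT embedding of a single-wordpiece function word ("not").
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the last-layer embedding of the first occurrence of `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_id = tokenizer.convert_tokens_to_ids(word)
    pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[pos]

e1 = embed_word("the door is not open", "not")
e2 = embed_word("the result is not surprising", "not")
print(torch.cosine_similarity(e1, e2, dim=0).item())
```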

2021

Is that really a question? Going beyond factoid questions in NLP
Aikaterini-Lida Kalouli | Rebecca Kehlbeck | Rita Sevastjanova | Oliver Deussen | Daniel Keim | Miriam Butt
Proceedings of the 14th International Conference on Computational Semantics (IWCS)

Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task, and providing diverse baseline models. With this, we are also able to generate useful insights into the task and open the way for future research in this direction.

KonTra at CMCL 2021 Shared Task: Predicting Eye Movements by Combining BERT with Surface, Linguistic and Behavioral Information
Qi Yu | Aikaterini-Lida Kalouli | Diego Frassinelli
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

This paper describes the submission of the team KonTra to the CMCL 2021 Shared Task on eye-tracking prediction. Our system combines the embeddings extracted from a fine-tuned BERT model with surface, linguistic and behavioral features, resulting in an average mean absolute error of 4.22 across all 5 eye-tracking measures. We show that word length and features representing the expectedness of a word are consistently the strongest predictors across all 5 eye-tracking measures.
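A minimal sketch of the general recipe (the feature names, model, and data below are invented stand-ins, not the team's pipeline): concatenate a word's BERT embedding with hand-crafted features such as word length, then fit one regressor per eye-tracking measure.

```python
# Hedged sketch of combining contextual embeddings with surface features
# for eye-tracking regression; all data here is synthetic for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_words, emb_dim = 200, 768

bert_embeddings = rng.normal(size=(n_words, emb_dim))  # stand-in for BERT output
word_length = rng.integers(1, 15, size=(n_words, 1))   # surface feature
log_frequency = rng.normal(size=(n_words, 1))          # expectedness proxy
X = np.hstack([bert_embeddings, word_length, log_frequency])
y = rng.normal(size=n_words)                           # one eye-tracking measure

model = Ridge().fit(X[:150], y[:150])
print("MAE:", mean_absolute_error(y[150:], model.predict(X[150:])))
```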

Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)
Aikaterini-Lida Kalouli | Lawrence S. Moss
Proceedings of the 1st and 2nd Workshops on Natural Logic Meets Machine Learning (NALOMA)

Explaining Contextualization in Language Models using Visual Analytics
Rita Sevastjanova | Aikaterini-Lida Kalouli | Christin Beck | Hanna Schäfer | Mennatallah El-Assady
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn. In this paper, we contribute to current efforts to explain such models by exploring the continuum between function and content words with respect to contextualization in BERT, based on linguistically informed insights. In particular, we utilize scoring and visual analytics techniques: we use an existing similarity-based score to measure contextualization and integrate it into a novel visual analytics technique, presenting the model’s layers simultaneously and highlighting intra-layer properties and inter-layer differences. We show that contextualization is neither driven by polysemy nor by pure context variation. We also provide insights on why BERT fails to model words in the middle of the functionality continuum.
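For intuition, one family of similarity-based contextualization scores averages the pairwise cosine similarity of a word's embeddings across contexts: low similarity means the representations vary strongly with context. The sketch below is an assumption-laden toy version, not necessarily the exact score used in the paper.

```python
# Toy self-similarity score: average pairwise cosine similarity of one
# word type's contextualized vectors; lower = more contextualization.
import itertools
import numpy as np

def self_similarity(embeddings: np.ndarray) -> float:
    """embeddings: (n_contexts, dim) vectors of the same word type."""
    sims = [
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for a, b in itertools.combinations(embeddings, 2)
    ]
    return float(np.mean(sims))

vectors = np.random.default_rng(1).normal(size=(5, 768))  # toy stand-ins
print(self_similarity(vectors))
```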

2020

Hy-NLI: a Hybrid system for Natural Language Inference
Aikaterini-Lida Kalouli | Richard Crouch | Valeria de Paiva
Proceedings of the 28th International Conference on Computational Linguistics

Despite the advances in Natural Language Inference through the training of massive deep models, recent work has revealed the generalization difficulties of such models, which fail to perform well on adversarial datasets with challenging linguistic phenomena. Such phenomena, however, can be handled well by symbolic systems. Thus, we propose Hy-NLI, a hybrid system that learns to identify an NLI pair as linguistically challenging or not. Based on that, it uses its symbolic or deep learning component, respectively, to make the final inference decision. We show how linguistically less complex cases are best solved by robust state-of-the-art models, like BERT and XLNet, while hard linguistic phenomena are best handled by our implemented symbolic engine. Our thorough evaluation shows that our hybrid system achieves state-of-the-art performance across mainstream and adversarial datasets and opens the way for further research in this hybrid direction.
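Schematically, the hybrid idea can be pictured as a router (all names and the toy heuristic below are placeholders; Hy-NLI learns this decision from data and dispatches to real components):

```python
# Schematic router, not Hy-NLI's actual interfaces: decide whether a
# premise/hypothesis pair is linguistically hard, then dispatch it.
def is_linguistically_hard(premise: str, hypothesis: str) -> bool:
    # Placeholder heuristic; the real system learns this classification.
    hard_markers = {"not", "no", "every", "some", "if"}
    return any(w in hard_markers for w in (premise + " " + hypothesis).split())

def hybrid_nli(premise, hypothesis, symbolic_engine, deep_model):
    if is_linguistically_hard(premise, hypothesis):
        return symbolic_engine(premise, hypothesis)
    return deep_model(premise, hypothesis)

# Toy components for demonstration only.
symbolic = lambda p, h: "contradiction" if "not" in h.split() else "neutral"
neural = lambda p, h: "entailment"
print(hybrid_nli("A man is sleeping", "A man is not awake", symbolic, neural))
```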

XplaiNLI: Explainable Natural Language Inference through Visual Analytics
Aikaterini-Lida Kalouli | Rita Sevastjanova | Valeria de Paiva | Richard Crouch | Mennatallah El-Assady
Proceedings of the 28th International Conference on Computational Linguistics: System Demonstrations

Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is. Recent research has revealed some heuristics and biases of these models. However, to date, there has been no systematic effort to capitalize on those insights through a system that uses them to explain NLI decisions. To this end, we propose XplaiNLI, an eXplainable, interactive visualization interface that computes NLI with different methods and provides explanations for the decisions made by the different approaches.

2019

GKR: Bridging the Gap between Symbolic/structural and Distributional Meaning Representations
Aikaterini-Lida Kalouli | Richard Crouch | Valeria de Paiva
Proceedings of the First International Workshop on Designing Meaning Representations

Three broad approaches have been attempted for combining distributional and structural/symbolic aspects to construct meaning representations: a) injecting linguistic features into distributional representations, b) injecting distributional features into symbolic representations, or c) combining structural and distributional features in the final representation. This work focuses on an example of the third and least studied approach: it extends the Graphical Knowledge Representation (GKR) to include distributional features and proposes a division of semantic labour between the distributional and structural/symbolic features. We propose two extensions of GKR that clearly show this division and empirically test one of the proposals on an NLI dataset with hard compositional pairs.

Explaining Simple Natural Language Inference
Aikaterini-Lida Kalouli | Annebeth Buis | Livy Real | Martha Palmer | Valeria de Paiva
Proceedings of the 13th Linguistic Annotation Workshop

The vast amount of research introducing new corpora and techniques for semi-automatically annotating corpora shows the important role that datasets play in today’s research, especially in the machine learning community. This rapid development raises concerns about the quality of the datasets created and consequently of the models trained, as recently discussed with respect to the Natural Language Inference (NLI) task. In this work we conduct an annotation experiment based on a small subset of the SICK corpus. The experiment reveals several problems in the annotation guidelines, and various challenges of the NLI task itself. Our quantitative evaluation of the experiment allows us to assign our empirical observations to specific linguistic phenomena and leads us to recommendations for future annotation tasks, for NLI and possibly for other tasks.

Composing Noun Phrase Vector Representations
Aikaterini-Lida Kalouli | Valeria de Paiva | Richard Crouch
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)

Vector representations of words have seen increasing success over the past years in a variety of NLP tasks. While there seems to be a consensus about the usefulness of word embeddings and how to learn them, it is still unclear which representations can capture the meaning of phrases or even whole sentences. Recent work has shown that simple operations outperform more complex deep architectures. In this work, we propose two novel constraints for computing noun phrase vector representations. First, we propose that the semantic, and not the syntactic, contribution of each component of a noun phrase should be considered, so that the resulting composed vectors express more of the phrase meaning. Second, the composition of the two phrase vectors should apply a suitable selection of dimensions so that specific semantic features captured by the phrase’s meaning become more salient. Our proposed methods are compared to 11 other approaches, including popular baselines and a neural net architecture, and are evaluated across 6 tasks and 2 datasets. Our results show that these constraints lead to more expressive phrase representations and can be applied to other state-of-the-art methods to improve their performance.
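To illustrate the flavor of the two constraints (the weighting and the dimension-selection rule below are invented for the example; the paper defines its constraints differently), a composed noun phrase vector can emphasize the semantically dominant component and then keep only the most salient dimensions:

```python
# Toy composition sketch: weighted sum of head and modifier vectors,
# followed by keeping only the largest-magnitude dimensions.
import numpy as np

def compose_np(head: np.ndarray, modifier: np.ndarray,
               head_weight: float = 0.7, keep: int = 200) -> np.ndarray:
    """Weighted sum, then zero out all but the `keep` most salient dims."""
    composed = head_weight * head + (1.0 - head_weight) * modifier
    mask = np.zeros_like(composed)
    top = np.argsort(np.abs(composed))[-keep:]
    mask[top] = 1.0
    return composed * mask

rng = np.random.default_rng(2)
phrase = compose_np(rng.normal(size=300), rng.normal(size=300))
print(phrase.nonzero()[0].size)  # 200 dimensions survive the selection
```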

ParHistVis: Visualization of Parallel Multilingual Historical Data
Aikaterini-Lida Kalouli | Rebecca Kehlbeck | Rita Sevastjanova | Katharina Kaiser | Georg A. Kaiser | Miriam Butt
Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change

The study of language change through parallel corpora can be advantageous for the analysis of complex interactions between time, text domain and language. Often, those advantages cannot be fully exploited due to the sparse but high-dimensional nature of such historical data. To tackle this challenge, we introduce ParHistVis: a novel, free, easy-to-use, interactive visualization tool for parallel, multilingual, diachronic and synchronic linguistic data. We illustrate the suitability of the components of the tool based on a use case of word order change in Romance wh-interrogatives.

2018

GKR: the Graphical Knowledge Representation for semantic parsing
Aikaterini-Lida Kalouli | Richard Crouch
Proceedings of the Workshop on Computational Semantics beyond Events and Roles

This paper describes the first version of an open-source semantic parser that creates graphical representations of sentences to be used for further semantic processing, e.g. for natural language inference, reasoning and semantic similarity. The Graphical Knowledge Representation which is output by the parser is inspired by the Abstract Knowledge Representation, which separates out conceptual and contextual levels of representation that deal respectively with the subject matter of a sentence and its existential commitments. Our representation is a layered graph with each sub-graph holding different kinds of information, including one sub-graph for concepts and one for contexts. Our first evaluation of the system shows an F-score of 85% in accurately representing sentences as semantic graphs.
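A minimal sketch of such a layered graph (the node labels and API below are illustrative assumptions, not GKR's actual implementation) keeps separate sub-graphs for concepts and for contexts:

```python
# Toy layered semantic graph: one sub-graph for concepts (subject matter),
# one for contexts (existential commitments such as negation).
from dataclasses import dataclass, field

@dataclass
class LayeredGraph:
    concepts: dict = field(default_factory=dict)  # concept node -> edges
    contexts: dict = field(default_factory=dict)  # context node -> edges

    def add_concept_edge(self, head, dep, label):
        self.concepts.setdefault(head, []).append((label, dep))

    def add_context_edge(self, ctx, target, label):
        self.contexts.setdefault(ctx, []).append((label, target))

g = LayeredGraph()
g.add_concept_edge("bark", "dog", "sem_subj")      # "The dog does not bark"
g.add_context_edge("top", "ctx_bark", "negation")  # commitment: barking is false
print(g.concepts, g.contexts)
```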

A Multilingual Approach to Question Classification
Aikaterini-Lida Kalouli | Katharina Kaiser | Annette Hautli-Janisz | Georg A. Kaiser | Miriam Butt
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Named Graphs for Semantic Representation
Richard Crouch | Aikaterini-Lida Kalouli
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics

A position paper arguing that purely graphical representations for natural language semantics lack a fundamental degree of expressiveness and cannot deal with even basic Boolean operations such as negation or disjunction. Moving from graphs to named graphs leads to representations that stand some chance of having sufficient expressive power. Named ℱℒ0 graphs are of particular interest.
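As a toy illustration of the move from graphs to named graphs (the encoding below is my own, not the paper's formalism), negation can be asserted about a graph name rather than inside the graph itself:

```python
# Toy named-graph encoding: triples live inside a named graph, and negation
# is a meta-level assertion about the graph name, not a triple within it.
triples_in = {
    "g1": [("dog", "agent_of", "bark")],  # the proposition "a dog barks"
}
meta = [("top", "negates", "g1")]         # asserted about g1, not inside it

def holds(graph_name, meta_edges):
    """A graph's content is asserted unless some meta edge negates it."""
    return not any(rel == "negates" and tgt == graph_name
                   for _, rel, tgt in meta_edges)

print(triples_in["g1"], holds("g1", meta))  # False: g1 is under negation
```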

2017

Textual Inference: getting logic from humans
Aikaterini-Lida Kalouli | Livy Real | Valeria de Paiva
Proceedings of the 12th International Conference on Computational Semantics (IWCS) — Short papers

Correcting Contradictions
Aikaterini-Lida Kalouli | Valeria de Paiva | Livy Real
Proceedings of the Computing Natural Language Inference Workshop