Rishabh Jain


2022

Extending Logic Explained Networks to Text Classification
Rishabh Jain | Gabriele Ciravegna | Pietro Barbiero | Francesco Giannini | Davide Buffelli | Pietro Lio
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models providing logic explanations for their predictions. However, these models have only been applied to vision and tabular data, and they mostly favour the generation of global explanations, while local ones tend to be noisy and verbose. For these reasons, we propose LEN^p, improving local explanations by perturbing input words, and we test it on text classification. Our results show that (i) LEN^p provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) its logic explanations are more useful and user-friendly than the feature scoring provided by LIME, as attested by a human survey.
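The general idea behind perturbation-based local explanation (shared by LIME and the word-perturbation step described above) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `predict` and `word_importances` are assumed names, and the scheme shown — drop one word at a time and measure the change in the model's score — is the simplest form of input perturbation.

```python
def word_importances(predict, words):
    """Hedged sketch of perturbation-based word scoring.

    `predict` is any function mapping a list of words to a class score;
    each word's importance is the drop in score when that word is removed.
    """
    base = predict(words)
    scores = []
    for i in range(len(words)):
        perturbed = words[:i] + words[i + 1:]  # leave one word out
        scores.append(base - predict(perturbed))
    return scores
```

For example, with a toy classifier that fires only on the word "good", removing "good" is the only perturbation that changes the score, so it receives the full importance mass.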

2019

On Model Stability as a Function of Random Seed
Pranava Madhyastha | Rishabh Jain
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

In this paper, we focus on quantifying model stability as a function of random seed by investigating the effects of the induced randomness on model performance and the robustness of the model in general. We specifically perform a controlled study on the effect of random seeds on the behaviour of attention-based, gradient-based, and surrogate-model-based (LIME) interpretations. Our analysis suggests that random seeds can adversely affect the consistency of models, resulting in counterfactual interpretations. We propose a technique called Aggressive Stochastic Weight Averaging (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight Averaging (NASWA) which improves the stability of models over random seeds. With our ASWA and NASWA based optimization, we are able to improve the robustness of the original model, on average reducing the standard deviation of the model’s performance by 72%.
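The core idea of "aggressive" weight averaging — updating a running average of the weights after every optimization step, rather than only at the end of training — can be sketched in a one-parameter toy as below. This is a hypothetical simplification for illustration (plain SGD on a scalar, function names `aswa_sgd` and `grad` are assumptions), not the paper's training loop.

```python
def aswa_sgd(grad, w0, lr=0.1, steps=100):
    """Minimal sketch of aggressive stochastic weight averaging:
    after each SGD step, fold the new weight into a running mean.

    `grad` is the gradient function of the loss; the averaged weight
    (not the final iterate) is returned as the model parameter.
    """
    w = w0
    w_avg = w0  # running average, initialized at the starting point
    for t in range(1, steps + 1):
        w = w - lr * grad(w)               # ordinary SGD step
        w_avg = w_avg + (w - w_avg) / (t + 1)  # aggressive: average every step
    return w_avg
```

On a convex toy loss such as (w - 3)^2, the averaged weight approaches the minimizer while smoothing out the step-to-step variation of the raw iterates; the stability gain reported in the paper comes from this smoothing applied across seeds.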

2018

Experiments on Morphological Reinflection: CoNLL-2018 Shared Task
Rishabh Jain | Anil Kumar Singh
Proceedings of the CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection