Katrin Kirchhoff


2024

SpeechGuard: Exploring the Adversarial Robustness of Multi-modal Large Language Models
Raghuveer Peri | Sai Muralidhar Jayanthi | Srikanth Ronanki | Anshu Bhatia | Karel Mundnich | Saket Dingliwal | Nilaksh Das | Zejiang Hou | Goeric Huybrechts | Srikanth Vishnubhotla | Daniel Garcia-Romero | Sundararajan Srinivasan | Kyu Han | Katrin Kirchhoff
Findings of the Association for Computational Linguistics ACL 2024

Integrated Speech and Large Language Models (SLMs) that can follow speech instructions and generate relevant text responses have gained popularity lately. However, the safety and robustness of these models remain largely unclear. In this work, we investigate the potential vulnerabilities of such instruction-following speech-language models to adversarial attacks and jailbreaking. Specifically, we design algorithms that can generate adversarial examples to jailbreak SLMs in both white-box and black-box attack settings without human involvement. Additionally, we propose countermeasures to thwart such jailbreaking attacks. Our models, trained on dialog data with speech instructions, achieve state-of-the-art performance on the spoken question-answering task, scoring over 80% on both safety and helpfulness metrics. Despite safety guardrails, experiments on jailbreaking demonstrate the vulnerability of SLMs to adversarial perturbations and transfer attacks, with average attack success rates of 90% and 10%, respectively, when evaluated on a dataset of carefully designed harmful questions spanning 12 different toxic categories. However, we demonstrate that our proposed countermeasures reduce the attack success rate significantly.
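
As a concrete illustration of the white-box setting described above, the sketch below shows a standard projected-gradient-descent perturbation of a raw waveform toward a target response. It assumes a differentiable PyTorch speech-language model `slm` that returns a cross-entropy loss against a target token sequence; all names and hyperparameters are illustrative, not the paper's actual attack.

```python
import torch

def pgd_audio_attack(slm, waveform, target_ids, eps=0.002, alpha=5e-4, steps=100):
    """Illustrative white-box audio perturbation (not the paper's algorithm).

    `slm(audio, labels=...)` is assumed to return a cross-entropy loss toward
    the target token sequence; lowering it pushes the model toward producing
    that (harmful) target response.
    """
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(steps):
        loss = slm(waveform + delta, labels=target_ids)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # step toward the target response
            delta.clamp_(-eps, eps)             # keep the perturbation small
            delta.grad.zero_()
    return (waveform + delta).detach()
```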

2023

Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Scale
Hritik Bansal | Karthik Gopalakrishnan | Saket Dingliwal | Sravan Bodapati | Katrin Kirchhoff | Dan Roth
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to in-context learn (i.e., perform a task purely from in-context examples) is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: ~70% of the attention heads and ~20% of the feed-forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and numbers of in-context examples. We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, reinforcing arguments by Olsson et al. (2022) regarding induction head generality to more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained for in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.
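
For readers unfamiliar with the prefix-matching operation mentioned above, the following is a minimal sketch of an induction-head-style score computed from a single head's attention matrix on a repeated token sequence. It is an assumed, simplified version of such a metric, not the authors' implementation.

```python
import torch

def prefix_matching_score(attn, tokens):
    """Average attention from each token to the token that followed its most
    recent earlier occurrence (a common induction-head diagnostic).

    attn:   (seq, seq) attention weights of one head on a sequence containing
            repeated tokens (e.g. random tokens concatenated with themselves).
    tokens: (seq,) token ids for the same sequence.
    """
    scores = []
    for t in range(1, tokens.shape[0]):
        prev = (tokens[:t] == tokens[t]).nonzero().flatten()
        if len(prev) == 0:
            continue
        s = prev[-1].item()          # most recent earlier occurrence of tokens[t]
        if s + 1 < t:
            scores.append(attn[t, s + 1].item())
    return sum(scores) / max(len(scores), 1)
```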

2022

Self-supervised Representation Learning for Speech Processing
Hung-yi Lee | Abdelrahman Mohamed | Shinji Watanabe | Tara Sainath | Karen Livescu | Shang-Wen Li | Shu-wen Yang | Katrin Kirchhoff
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts

There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised representation learning (SSL) utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. BERT and GPT in NLP and SimCLR and BYOL in CV are famous examples in this direction. These approaches make it possible to use the tremendous amount of unlabeled data available on the web to train large networks and solve complicated tasks. Thus, SSL has the potential to scale up current machine learning technologies, especially for low-resource, under-represented use cases, and to democratize these technologies. Recently, self-supervised approaches for speech processing have also been gaining popularity. Several workshops on relevant topics have been hosted at ICML 2020 (https://icml-sas.gitlab.io/), NeurIPS 2020 (https://neurips-sas-2020.github.io/), and AAAI 2022 (https://aaai-sas-2022.github.io/). However, to the best of the authors’ knowledge, there has been no previous tutorial on a similar topic. Given the growing popularity of SSL, and the shared mission of the speech and language areas to bring these technologies to more use cases with better quality and to scale them for under-represented languages, we propose this tutorial to systematically survey the latest SSL techniques, tools, datasets, and performance achievements in speech processing. The proposed tutorial is highly relevant to the special theme of ACL on language diversity. One of the main focuses of the tutorial is leveraging SSL to reduce the dependence of speech technologies on labeled data, and to scale up the technologies especially for under-represented languages and use cases.

2021

Align-Refine: Non-Autoregressive Speech Recognition via Iterative Realignment
Ethan A. Chi | Julian Salazar | Katrin Kirchhoff
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Non-autoregressive encoder-decoder models greatly improve decoding speed over autoregressive models, at the expense of generation quality. To mitigate this, iterative decoding models repeatedly infill or refine the proposal of a non-autoregressive model. However, editing at the level of output sequences limits model flexibility. We instead propose *iterative realignment*, which by refining latent alignments allows more flexible edits in fewer steps. Our model, Align-Refine, is an end-to-end Transformer which iteratively realigns connectionist temporal classification (CTC) alignments. On the WSJ dataset, Align-Refine matches an autoregressive baseline with a 14x decoding speedup; on LibriSpeech, we reach an LM-free test-other WER of 9.0% (19% relative improvement on comparable work) in three iterations. We release our code at https://github.com/amazon-research/align-refine.
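
The sketch below illustrates the iterative-realignment decoding loop at a high level for a single utterance. It assumes an `encoder` that emits CTC-style frame logits and a `refiner` that re-predicts frame labels conditioned on the previous alignment; it is a simplified stand-in for the released code, not a reproduction of it.

```python
def align_refine_decode(encoder, refiner, features, num_iters=3, blank=0):
    """Iterative realignment for one utterance (illustrative sketch only).

    encoder(features)           -> (enc_out, frame_logits) over vocab + blank
    refiner(alignment, enc_out) -> refined frame_logits, conditioned on the
                                   previous frame-level alignment
    """
    enc_out, logits = encoder(features)
    alignment = logits.argmax(dim=-1)         # initial CTC-style alignment
    for _ in range(num_iters):
        logits = refiner(alignment, enc_out)  # re-predict every frame label
        alignment = logits.argmax(dim=-1)     # refine the latent alignment
    return ctc_collapse(alignment.tolist(), blank)

def ctc_collapse(frame_labels, blank):
    """Collapse repeats and drop blanks to obtain the output token sequence."""
    out, prev = [], None
    for label in frame_labels:
        if label != blank and label != prev:
            out.append(label)
        prev = label
    return out
```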

ASR Adaptation for E-commerce Chatbots using Cross-Utterance Context and Multi-Task Language Modeling
Ashish Shenoy | Sravan Bodapati | Katrin Kirchhoff
Proceedings of the 4th Workshop on e-Commerce and NLP

Automatic Speech Recognition (ASR) robustness toward slot entities is critical in e-commerce voice assistants that involve monetary transactions and purchases. Along with effective domain adaptation, it is intuitive that cross-utterance contextual cues play an important role in disambiguating domain-specific content words from speech. In this paper, we investigate various techniques to improve contextualization, content word robustness and domain adaptation of a Transformer-XL neural language model (NLM) used to rescore ASR N-best hypotheses. To improve contextualization, we utilize turn-level dialogue acts along with cross-utterance context carry-over. Additionally, to adapt our domain-general NLM towards e-commerce on the fly, we use embeddings derived from a masked LM finetuned on in-domain data. Finally, to improve robustness towards in-domain content words, we propose a multi-task model that can jointly perform content word detection and language modeling tasks. Compared to a non-contextual LSTM LM baseline, our best-performing NLM rescorer yields a content WER reduction of 19.2% on an e-commerce audio test set and a slot labeling F1 improvement of 6.4%.
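
A minimal sketch of the multi-task objective described above: a shared backbone with a language-modeling head and a content-word detection head, trained with a weighted sum of the two losses. Module and parameter names are illustrative assumptions, not the paper's architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRescoringLM(nn.Module):
    """Joint language modeling + content-word detection head (illustrative)."""

    def __init__(self, backbone, hidden, vocab_size, cw_weight=0.5):
        super().__init__()
        self.backbone = backbone                      # e.g. a Transformer-XL encoder
        self.lm_head = nn.Linear(hidden, vocab_size)  # next-word prediction
        self.cw_head = nn.Linear(hidden, 2)           # is-content-word classifier
        self.cw_weight = cw_weight

    def forward(self, tokens, next_tokens, cw_labels):
        h = self.backbone(tokens)                     # (batch, seq, hidden)
        lm_loss = F.cross_entropy(self.lm_head(h).transpose(1, 2), next_tokens)
        cw_loss = F.cross_entropy(self.cw_head(h).transpose(1, 2), cw_labels)
        return lm_loss + self.cw_weight * cw_loss     # joint multi-task objective
```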

2020

Masked Language Model Scoring
Julian Salazar | Davis Liang | Toan Q. Nguyen | Katrin Kirchhoff
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Pretrained masked language models (MLMs) require finetuning for most NLP tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one. We show that PLLs outperform scores from autoregressive language models like GPT-2 in a variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an end-to-end LibriSpeech model’s WER by 30% relative and adds up to +1.7 BLEU on state-of-the-art baselines for low-resource translation pairs, with further gains from domain adaptation. We attribute this success to PLL’s unsupervised expression of linguistic acceptability without a left-to-right bias, greatly improving on scores from GPT-2 (+10 points on island effects, NPI licensing in BLiMP). One can finetune MLMs to give scores without masking, enabling computation in a single inference pass. In all, PLLs and their associated pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of pretrained MLMs; e.g., we use a single cross-lingual model to rescore translations in multiple languages. We release our library for language model scoring at https://github.com/awslabs/mlm-scoring.
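
The PLL computation itself is easy to sketch with a Hugging Face masked LM: mask each token in turn and sum the log-probability the model assigns to the original token. This is a minimal, unoptimized illustration; the authors' released library provides the full implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def pseudo_log_likelihood(text, model_name="roberta-base"):
    """Sum of log-probabilities of each token with that token masked (PLL)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids[0]
    pll = 0.0
    for i in range(1, len(ids) - 1):                  # skip special start/end tokens
        masked = ids.clone()
        masked[i] = tok.mask_token_id                 # mask one token at a time
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        pll += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return pll
```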

Robust Prediction of Punctuation and Truecasing for Medical ASR
Monica Sunkara | Srikanth Ronanki | Kalpit Dixit | Sravan Bodapati | Katrin Kirchhoff
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations

Automatic speech recognition (ASR) systems in the medical domain that focus on transcribing clinical dictations and doctor-patient conversations often face many challenges due to the complexity of the domain. ASR output typically undergoes automatic punctuation to enable users to speak naturally, without having to vocalize awkward and explicit punctuation commands, such as “period”, “add comma” or “exclamation point”, while truecasing enhances readability and improves the performance of downstream NLP tasks. This paper proposes a conditional joint modeling framework for prediction of punctuation and truecasing using pretrained masked language models such as BERT, BioBERT and RoBERTa. We also present techniques for domain- and task-specific adaptation by fine-tuning masked language models with medical domain data. Finally, we improve the robustness of the model against common ASR errors by performing data augmentation. Experiments performed on dictation and conversational style corpora show that our proposed model achieves a 5% absolute improvement on ground-truth text and a 10% improvement on ASR outputs over baseline models under the F1 metric.
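
A minimal sketch of a conditional joint tagging head of the kind described above, with the truecasing decision conditioned on the punctuation prediction. The label sets, conditioning scheme, and model name are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class PunctCaseTagger(nn.Module):
    """Joint punctuation + truecasing prediction over a pretrained encoder
    (illustrative sketch; labels and conditioning are assumptions)."""

    PUNCT = ["NONE", "PERIOD", "COMMA", "QUESTION"]
    CASE = ["LOWER", "UPPER_INIT", "UPPER_ALL"]

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.punct_head = nn.Linear(hidden, len(self.PUNCT))
        # Condition the casing decision on the punctuation distribution (joint model).
        self.case_head = nn.Linear(hidden + len(self.PUNCT), len(self.CASE))

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        punct_logits = self.punct_head(h)
        case_logits = self.case_head(torch.cat([h, punct_logits.softmax(-1)], dim=-1))
        return punct_logits, case_logits
```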

2019

Simple, Fast, Accurate Intent Classification and Slot Labeling for Goal-Oriented Dialogue Systems
Arshit Gupta | John Hewitt | Katrin Kirchhoff
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

With the advent of conversational assistants like Amazon Alexa and Google Now, dialogue systems are gaining a lot of traction, especially in industrial settings. These systems typically consist of a Spoken Language Understanding component which, in turn, comprises two tasks: Intent Classification (IC) and Slot Labeling (SL). Generally, these two tasks are modeled jointly to achieve the best performance. However, this joint modeling adds to model obfuscation. In this work, we first design a framework for modularizing the joint IC-SL task to enhance architecture transparency. Then, we explore a number of self-attention, convolutional, and recurrent models, contributing a large-scale analysis of modeling paradigms for IC+SL across two datasets. Finally, using this framework, we propose a class of ‘label-recurrent’ models that are otherwise non-recurrent but maintain a 10-dimensional representation of the label history, and show that our proposed systems are easy to interpret, highly accurate (achieving over 30% error reduction in SL over the state-of-the-art on the Snips dataset), as well as fast, at 2x the inference speed and 2/3 to 1/2 the training time of comparable recurrent models, thus giving an edge in critical real-world systems.
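
The sketch below illustrates the ‘label-recurrent’ idea: a non-recurrent token encoder, with only a small (here 10-dimensional) embedding of the previously predicted slot label carried across positions, plus a pooled intent head. Sizes and module names are illustrative, not the paper's exact models.

```python
import torch
import torch.nn as nn

class LabelRecurrentTagger(nn.Module):
    """Label-recurrent IC+SL sketch: the encoder is non-recurrent; only a
    10-dim embedding of the previous predicted slot label is carried forward."""

    def __init__(self, encoder, hidden, num_slots, num_intents, label_dim=10):
        super().__init__()
        self.encoder = encoder                          # e.g. conv or self-attention
        self.label_emb = nn.Embedding(num_slots, label_dim)
        self.slot_out = nn.Linear(hidden + label_dim, num_slots)
        self.intent_out = nn.Linear(hidden, num_intents)

    def forward(self, tokens):
        h = self.encoder(tokens)                        # (batch, seq, hidden)
        prev = torch.zeros(h.size(0), dtype=torch.long, device=h.device)  # "O" label
        slot_logits = []
        for t in range(h.size(1)):
            step = torch.cat([h[:, t], self.label_emb(prev)], dim=-1)
            logits = self.slot_out(step)
            slot_logits.append(logits)
            prev = logits.argmax(dim=-1)                # feed predicted label forward
        intent_logits = self.intent_out(h.mean(dim=1))  # pooled intent classification
        return torch.stack(slot_logits, dim=1), intent_logits
```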

2018

Context Models for OOV Word Translation in Low-Resource Languages
Angli Liu | Katrin Kirchhoff
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

2016

Unsupervised Resolution of Acronyms and Abbreviations in Nursing Notes Using Document-Level Context Models
Katrin Kirchhoff | Anne M. Turner
Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis

2015

Morphological Modeling for Machine Translation of English-Iraqi Arabic Spoken Dialogs
Katrin Kirchhoff | Yik-Cheung Tam | Colleen Richey | Wen Wang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Submodularity for Data Selection in Machine Translation
Katrin Kirchhoff | Jeff Bilmes
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Lucy Vanderwende | Hal Daumé III | Katrin Kirchhoff
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Using Document Summarization Techniques for Speech Data Subset Selection
Kai Wei | Yuzong Liu | Katrin Kirchhoff | Jeff Bilmes
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Integrated post-editing and translation management for lay user communities
Adrian Laurenzi | Megumu Brownstein | Anne M. Turner | Katrin Kirchhoff
Proceedings of the 2nd Workshop on Post-editing Technology and Practice

2012

Unsupervised Translation Disambiguation for Cross-Domain Statistical Machine Translation
Mei Yang | Katrin Kirchhoff
Proceedings of the 10th Conference of the Association for Machine Translation in the Americas: Research Papers

Most attempts at integrating word sense disambiguation with statistical machine translation have focused on supervised disambiguation approaches. These approaches are of limited use when the distribution of the test data differs strongly from that of the training data; however, word sense errors tend to be especially common under these conditions. In this paper we present different approaches to unsupervised word translation disambiguation and apply them to the problem of translating conversational speech under resource-poor training conditions. Both human and automatic evaluation metrics demonstrate significant improvements resulting from our technique.

Evaluating User Preferences in Machine Translation Using Conjoint Analysis
Katrin Kirchhoff | Daniel Capurro | Anne Turner
Proceedings of the 16th Annual Conference of the European Association for Machine Translation

2010

Hand Gestures in Disambiguating Types of You Expressions in Multiparty Meetings
Tyler Baldwin | Joyce Chai | Katrin Kirchhoff
Proceedings of the SIGDIAL 2010 Conference

Contextual Modeling for Meeting Translation Using Unsupervised Word Sense Disambiguation
Mei Yang | Katrin Kirchhoff
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2009

Graph-based Learning for Statistical Machine Translation
Andrei Alexandrescu | Katrin Kirchhoff
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

The University of Washington machine translation system for IWSLT 2009
Mei Yang | Amittai Axelrod | Kevin Duh | Katrin Kirchhoff
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper describes the University of Washington’s system for the 2009 International Workshop on Spoken Language Translation (IWSLT) evaluation campaign. Two systems were developed, one each for the BTEC Chinese-to-English and Arabic-to-English tracks. We describe experiments with different preprocessing and alignment combination schemes. Our main focus this year was on exploring a novel semi-supervised approach to N-best list reranking; however, this method yielded inconclusive results.

2008

The University of Washington Machine Translation System for ACL WMT 2008
Amittai Axelrod | Mei Yang | Kevin Duh | Katrin Kirchhoff
Proceedings of the Third Workshop on Statistical Machine Translation

Beyond Log-Linear Models: Boosted Minimum Error Rate Training for N-best Re-ranking
Kevin Duh | Katrin Kirchhoff
Proceedings of ACL-08: HLT, Short Papers

2007

Semi-automatic error analysis for large-scale statistical machine translation
Katrin Kirchhoff | Owen Rambow | Nizar Habash | Mona Diab
Proceedings of Machine Translation Summit XI: Papers

The University of Washington machine translation system for the IWSLT 2007 competition
Katrin Kirchhoff | Mei Yang
Proceedings of the Fourth International Workshop on Spoken Language Translation

This paper presents the University of Washington’s submission to the 2007 IWSLT benchmark evaluation. The UW system participated in two data tracks, Italian-to-English and Arabic-to-English. Our main focus was on incorporating out-of-domain data, which contributed to improvements for both language pairs in both the clean text and ASR output conditions. In addition, we compared supervised and semi-supervised preprocessing schemes for the Arabic-to-English task and found that the semi-supervised scheme performs competitively with the supervised algorithm while using a fraction of the run-time.

Data-Driven Graph Construction for Semi-Supervised Graph-Based Learning in NLP
Andrei Alexandrescu | Katrin Kirchhoff
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

2006

Lexicon Acquisition for Dialectal Arabic Using Transductive Learning
Kevin Duh | Katrin Kirchhoff
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

The University of Washington machine translation system for IWSLT 2006
Katrin Kirchhoff | Kevin Duh | Chris Lim
Proceedings of the Third International Workshop on Spoken Language Translation: Evaluation Campaign

Factored Neural Language Models
Andrei Alexandrescu | Katrin Kirchhoff
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

Phrase-Based Backoff Models for Machine Translation of Highly Inflected Languages
Mei Yang | Katrin Kirchhoff
11th Conference of the European Chapter of the Association for Computational Linguistics

Ambiguity Reduction for Machine Translation: Human-Computer Collaboration
Marcus Sammer | Kobi Reiter | Stephen Soderland | Katrin Kirchhoff | Oren Etzioni
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

Statistical Machine Translation (SMT) accuracy degrades when there is only a limited amount of training data, or when the training data is not from the same domain or genre of text as the target application. However, cross-domain applications are typical of many real-world tasks. We demonstrate that SMT accuracy can be improved in a cross-domain application by using a controlled language (CL) interface to help reduce lexical ambiguity in the input text. Our system, CL-MT, presents a monolingual user with a choice of word senses for each content word in the input text. CL-MT temporarily adjusts the underlying SMT system's phrase table, boosting the scores of translations that include the word senses preferred by the user and lowering scores for disfavored translations. We demonstrate that this improves translation adequacy in 33.8% of the sentences in Spanish-to-English translation of news stories, where the SMT system was trained on proceedings of the European Parliament.
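
The phrase-table adjustment can be sketched as follows: entries whose target side matches terms associated with the user-preferred sense are boosted and the rest are down-weighted. The data structures and scaling factors are illustrative assumptions about how such a temporary adjustment could work, not CL-MT's actual implementation.

```python
def adjust_phrase_table(phrase_table, source_word, preferred_terms,
                        boost=2.0, penalty=0.5):
    """Temporarily re-weight phrase-table scores toward a chosen word sense.

    phrase_table:    {(source_phrase, target_phrase): [scores]}
    preferred_terms: target-language words associated with the chosen sense.
    (Illustrative sketch; structures and factors are assumptions.)
    """
    for (src, tgt), scores in phrase_table.items():
        if source_word not in src.split():
            continue                                   # unrelated entry, leave as-is
        if any(term in tgt.split() for term in preferred_terms):
            phrase_table[(src, tgt)] = [s * boost for s in scores]
        else:
            phrase_table[(src, tgt)] = [s * penalty for s in scores]
    return phrase_table
```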

2005

POS Tagging of Dialectal Arabic: A Minimally Supervised Approach
Kevin Duh | Katrin Kirchhoff
Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages

Improved Language Modeling for Statistical Machine Translation
Katrin Kirchhoff | Mei Yang
Proceedings of the ACL Workshop on Building and Using Parallel Texts

Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing
Raymond Mooney | Chris Brew | Lee-Feng Chien | Katrin Kirchhoff
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

The Vocal Joystick: A Voice-Based Human-Computer Interface for Individuals with Motor Impairments
Jeff A. Bilmes | Xiao Li | Jonathan Malkin | Kelley Kilanski | Richard Wright | Katrin Kirchhoff | Amar Subramanya | Susumu Harada | James Landay | Patricia Dowden | Howard Chizeck
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

Automatic Diacritization of Arabic for Acoustic Modeling in Speech Recognition
Dimitra Vergyri | Katrin Kirchhoff
Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages

Automatic Learning of Language Model Structure
Kevin Duh | Katrin Kirchhoff
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

Directions For Multi-Party Human-Computer Interaction Research
Katrin Kirchhoff | Mari Ostendorf
Proceedings of the HLT-NAACL 2003 Workshop on Research Directions in Dialogue Processing

Factored Language Models and Generalized Parallel Backoff
Jeff A. Bilmes | Katrin Kirchhoff
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers