Jeff Bilmes

Also published as: Jeff A. Bilmes


2024

An End-to-End Submodular Framework for Data-Efficient In-Context Learning
Lilly Kumari | Shengjie Wang | Arnav Das | Tianyi Zhou | Jeff Bilmes
Findings of the Association for Computational Linguistics: NAACL 2024

Recent advancements in natural language tasks leverage the emergent In-Context Learning (ICL) ability of pretrained Large Language Models (LLMs). ICL enables LLMs to perform new tasks by utilizing a limited number of input-output examples as prompts. While ICL circumvents the costly step of finetuning LLMs, its effectiveness is heavily dependent on the quality and ordering of the provided examples (called exemplars). In this work, we propose a two-stage data-efficient framework, Div-S3, for exemplar selection for ICL. The first stage focuses on data annotation and employs a pool-based active learning approach to select a set of Diverse and informative exemplars from the target tasks’ unlabeled pool. Given a test input/query, the second stage uses Submodular Span Summarization (S3) to select the most relevant and non-redundant exemplars from the annotated pool under a limited budget. On 7 different NLP datasets and 5 LLMs of varying complexities, we show Div-S3 outperforms (1) existing active learning-based methods for data annotation for ICL and (2) similarity-based methods for test query-specific exemplar retrieval.
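The abstract's second stage rests on submodular maximization for picking relevant, non-redundant exemplars. The paper's actual S3 objective is not reproduced here, but the standard greedy algorithm on a facility-location function conveys the core mechanism. A minimal sketch, assuming a precomputed similarity matrix; the function name and interface are illustrative, not the paper's code:

```python
import numpy as np

def greedy_facility_location(similarity, budget):
    """Greedily maximize the facility-location function
    f(S) = sum_i max_{j in S} sim(i, j), a monotone submodular
    objective commonly used to pick a representative, non-redundant
    subset. `similarity` is an (n, n) array; returns chosen indices."""
    n = similarity.shape[0]
    selected = []
    # coverage[i] = how well item i is represented by the current selection
    coverage = np.zeros(n)
    for _ in range(budget):
        # Marginal gain of adding each candidate column j
        gains = np.maximum(similarity, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf  # never re-select an item
        j = int(np.argmax(gains))
        selected.append(j)
        coverage = np.maximum(coverage, similarity[:, j])
    return selected
```

The classic result for this greedy scheme is a (1 − 1/e) approximation guarantee for monotone submodular objectives, which is what makes such selection computationally practical at pool scale.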

An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
Gantavya Bhatt | Yifang Chen | Arnav Das | Jifan Zhang | Sang Truong | Stephen Mussmann | Yinglun Zhu | Jeff Bilmes | Simon Du | Kevin Jamieson | Jordan Ash | Robert Nowak
Findings of the Association for Computational Linguistics: ACL 2024

Supervised finetuning (SFT) on instruction datasets has played a crucial role in achieving the remarkable zero-shot generalization capabilities observed in modern large language models (LLMs). However, the annotation efforts required to produce high-quality responses for instructions are becoming prohibitively expensive, especially as the number of tasks spanned by instruction datasets continues to increase. Active learning is effective in identifying useful subsets of samples to annotate from an unlabeled pool, but its high computational cost remains a barrier to its widespread applicability in the context of LLMs. To mitigate the annotation cost of SFT and circumvent the computational bottlenecks of active learning, we propose using experimental design. Experimental design techniques select the most informative samples to label, and typically maximize some notion of uncertainty and/or diversity. In our work, we implement a framework that evaluates several existing and novel experimental design techniques and find that these methods consistently yield significant gains in label efficiency with little computational overhead. On generative tasks, to reach the same generalization performance, our methods save 50% of the annotation cost compared to random sampling.
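The abstract notes that experimental-design techniques "typically maximize some notion of uncertainty and/or diversity." As an illustration only (not the paper's framework), a farthest-first (k-center) traversal over sample embeddings is one classic diversity-maximizing rule for deciding which unlabeled samples to annotate:

```python
import numpy as np

def k_center_selection(embeddings, budget, seed_index=0):
    """Farthest-first (k-center) traversal: starting from `seed_index`,
    repeatedly add the point farthest from the current selection, so the
    labeled subset spreads out to cover the embedding space."""
    selected = [seed_index]
    # Distance from every point to its nearest selected point
    dists = np.linalg.norm(embeddings - embeddings[seed_index], axis=1)
    while len(selected) < budget:
        j = int(np.argmax(dists))  # most poorly covered point
        selected.append(j)
        dists = np.minimum(dists, np.linalg.norm(embeddings - embeddings[j], axis=1))
    return selected
```

Unlike pool-based active learning, this kind of one-shot selection needs no repeated model retraining, which is the computational saving the abstract highlights.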

2015

Summarization of Multi-Document Topic Hierarchies using Submodular Mixtures
Ramakrishna Bairi | Rishabh Iyer | Ganesh Ramakrishnan | Jeff Bilmes
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Submodularity for Data Selection in Machine Translation
Katrin Kirchhoff | Jeff Bilmes
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Using Document Summarization Techniques for Speech Data Subset Selection
Kai Wei | Yuzong Liu | Katrin Kirchhoff | Jeff Bilmes
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2011

A Class of Submodular Functions for Document Summarization
Hui Lin | Jeff Bilmes
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Word Alignment via Submodular Maximization over Matroids
Hui Lin | Jeff Bilmes
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Multi-document Summarization via Budgeted Maximization of Submodular Functions
Hui Lin | Jeff Bilmes
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2009

Compiling a Massive, Multilingual Dictionary via Probabilistic Inference
Mausam | Stephen Soderland | Oren Etzioni | Daniel Weld | Michael Skinner | Jeff Bilmes
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2008

Soft-Supervised Learning for Text Classification
Amarnag Subramanya | Jeff Bilmes
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

Generalized Graphical Abstractions for Statistical Machine Translation
Karim Filali | Jeff Bilmes
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Virtual Evidence for Training Speech Recognizers Using Partially Labeled Data
Amarnag Subramanya | Jeff Bilmes
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2006

Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing
Ryan McDonald | Charles Sutton | Hal Daumé III | Andrew McCallum | Fernando Pereira | Jeff Bilmes
Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing

Proceedings of the Human Language Technology Conference of the NAACL, Main Conference
Robert C. Moore | Jeff Bilmes | Jennifer Chu-Carroll | Mark Sanderson
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Backoff Model Training using Partially Observed Data: Application to Dialog Act Tagging
Gang Ji | Jeff Bilmes
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers
Robert C. Moore | Jeff Bilmes | Jennifer Chu-Carroll | Mark Sanderson
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

2005

A Dynamic Bayesian Framework to Model Context and Memory in Edit Distance Learning: An Application to Pronunciation Classification
Karim Filali | Jeff Bilmes
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Part-of-Speech Tagging using Virtual Evidence and Negative Training
Sheila M. Reynolds | Jeff A. Bilmes
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

The Vocal Joystick: A Voice-Based Human-Computer Interface for Individuals with Motor Impairments
Jeff A. Bilmes | Xiao Li | Jonathan Malkin | Kelley Kilanski | Richard Wright | Katrin Kirchhoff | Amar Subramanya | Susumu Harada | James Landay | Patricia Dowden | Howard Chizeck
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2003

Factored Language Models and Generalized Parallel Backoff
Jeff A. Bilmes | Katrin Kirchhoff
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers