Joyce Chai

Also published as: Joyce Y. Chai, Joyce Yue Chai


2023

pdf bib
World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models
Ziqiao Ma | Jiayi Pan | Joyce Chai
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The ability to connect language units to their referents in the physical world, referred to as grounding, is crucial to learning and understanding grounded meanings of words. While humans demonstrate fast mapping in new word learning, it remains unclear whether modern vision-language models can truly represent language with their grounded meanings, and how grounding may further bootstrap new word learning. To this end, we introduce Grounded Open Vocabulary Acquisition (GOVA) to examine grounding and bootstrapping in open-world language learning. As an initial attempt, we propose World-to-Words (W2W), a novel visually grounded language model pre-trained on image-text pairs with grounding as an explicit objective. Through extensive experiments and analysis, we demonstrate that W2W is a more coherent and faster grounded word learner, and that the grounding ability acquired during pre-training helps the model learn unseen words more rapidly and robustly.

pdf bib
In-Context Analogical Reasoning with Pre-Trained Language Models
Xiaoyang Hu | Shane Storks | Richard Lewis | Joyce Chai
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven’s Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs’ analogical reasoning. Our detailed analysis reveals insights on the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.

pdf bib
NLP Reproducibility For All: Understanding Experiences of Beginners
Shane Storks | Keunwoo Yu | Ziqiao Ma | Joyce Chai
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

As natural language processing (NLP) has recently seen an unprecedented level of excitement, and more people are eager to enter the field, it is unclear whether current research reproducibility efforts are sufficient for this group of beginners to apply the latest developments. To understand their needs, we conducted a study with 93 students in an introductory NLP course, where students reproduced the results of recent NLP papers. Surprisingly, we find that their programming skill and comprehension of research papers have a limited impact on their effort spent completing the exercise. Instead, we find accessibility efforts by research authors to be the key to success, including complete documentation, better coding practice, and easier access to data files. Going forward, we recommend that NLP researchers pay close attention to these simple aspects of open-sourcing their work, and use insights from beginners’ feedback to provide actionable ideas on how to better support them.

pdf bib
Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition
Yuwei Bao | Barrett Lattimer | Joyce Chai
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables computational models to compare the similarities and differences of various attributes, and to learn to filter out and extract the common information for each shared linguistic label. We frame word acquisition not only as an information-filtration process but also as representation-symbol mapping. This procedure involves neither a fixed vocabulary size nor a discriminative objective, and allows the models to continually and efficiently learn more concepts. Our results in controlled experiments show the potential of this approach for efficient continual learning of grounded words.

pdf bib
Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models
Ziqiao Ma | Jacob Sansom | Run Peng | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2023

Large Language Models (LLMs) have generated considerable interest and debate regarding their potential emergence of Theory of Mind (ToM). Several recent inquiries reveal a lack of robust ToM in these models and pose a pressing demand for new benchmarks, as current ones primarily focus on different aspects of ToM and are prone to shortcuts and data leakage. In this position paper, we seek to answer two road-blocking questions: (1) How can we taxonomize a holistic landscape of machine ToM? (2) What is a more effective evaluation protocol for machine ToM? Following psychological studies, we taxonomize machine ToM into 7 mental state categories and delineate existing benchmarks to identify under-explored aspects of ToM. We argue for a holistic and situated evaluation of ToM that breaks ToM into individual components and treats LLMs as agents that are physically situated in environments and socially situated in interactions with humans. Such situated evaluation provides a more comprehensive assessment of mental states and potentially mitigates the risk of shortcuts and data leakage. We further present a pilot study in a grid-world setup as a proof of concept. We hope this position paper can facilitate future research to integrate ToM with LLMs and offer an intuitive means for researchers to better position their work in the landscape of ToM.

pdf bib
MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition
Guangyue Xu | Parisa Kordjamshidi | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2023

Humans have the ability to learn novel compositional concepts by recalling primitive concepts acquired from past experience and generalizing these primitive concepts to novel compositions. Inspired by this human compositional learning procedure, we propose MetaReVision, a retrieval-enhanced meta-learning model to solve the visually grounded compositional concept learning problem. MetaReVision consists of a retrieval module and a meta-learning module designed to incorporate retrieved primitive concepts as a support set to meta-train vision-language models for grounded compositional concept recognition. Through meta-learning from episodes constructed by the retriever, MetaReVision learns a generic compositional representation that can be quickly updated to recognize novel compositional concepts. We create CompCOCO and CompFlickr to benchmark grounded compositional concept learning. Our experimental results show that MetaReVision outperforms other competitive baselines and that the retrieval module plays an important role in this compositional learning process.

pdf bib
Can Foundation Models Watch, Talk and Guide You Step by Step to Make a Cake?
Yuwei Bao | Keunwoo Yu | Yichi Zhang | Shane Storks | Itamar Bar-Yossef | Alex de la Iglesia | Megan Su | Xiao Zheng | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2023

Despite tremendous advances in AI, it remains a significant challenge to develop interactive task guidance systems that can offer situated, personalized guidance and assist humans in various tasks. These systems need a sophisticated understanding of the user as well as the environment, and must make timely, accurate decisions on when and what to say. To address this issue, we created a new multimodal benchmark dataset, Watch, Talk and Guide (WTaG), based on natural interaction between a human user and a human instructor. We further proposed two tasks: User and Environment Understanding, and Instructor Decision Making. We leveraged several foundation models to study to what extent these models can be quickly adapted to perceptually enabled task guidance. Our quantitative, qualitative, and human evaluation results show that these models can achieve fair performance in some cases with no task-specific training, but fast and reliable adaptation remains a significant challenge. Our benchmark and baselines will provide a stepping stone for future work on situated task guidance.

pdf bib
Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?
Yichi Zhang | Jiayi Pan | Yuchen Zhou | Rui Pan | Joyce Chai
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Vision-Language Models (VLMs) are trained on vast amounts of data captured by humans emulating our understanding of the world. However, human perception of reality is not always faithful to the physical world, a phenomenon known as visual illusions. This raises a key question: do VLMs have similar kinds of illusions as humans do, or do they faithfully learn to represent reality? To investigate this question, we build a dataset containing five types of visual illusions and formulate four tasks to examine visual illusions in state-of-the-art VLMs. Our findings show that although the overall alignment is low, larger models are closer to human perception and more susceptible to visual illusions. Our dataset and initial findings will promote a better understanding of visual illusions in humans and machines and provide a stepping stone for future computational models that can better align humans and machines in perceiving and communicating about the shared visual world. The code and data are available at [github.com/vl-illusion/dataset](https://github.com/vl-illusion/dataset).

pdf bib
From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning
Zheyuan Zhang | Shane Storks | Fengyuan Hu | Sungryull Sohn | Moontae Lee | Honglak Lee | Joyce Chai
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Pre-trained language models (PLMs) have shown impressive performance in various language tasks. However, they are prone to spurious correlations, and often generate illusory information. In real-world applications, PLMs should justify decisions with formalized, coherent reasoning chains, but this challenge remains under-explored. Cognitive psychology theorizes that humans are capable of utilizing fast and intuitive *heuristic* thinking to make decisions based on past experience, then rationalizing the decisions through slower and deliberative *analytic* reasoning. We incorporate these interlinked dual processes in fine-tuning and in-context learning with PLMs, applying them to two language understanding tasks that require coherent physical commonsense reasoning. We show that our proposed Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions, yielding state-of-the-art results on Tiered Reasoning for Intuitive Physics (TRIP). We also find that this improved coherence is a direct result of more faithful attention to relevant language context in each step of reasoning. Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.

2022

pdf bib
Learning to Mediate Disparities Towards Pragmatic Communication
Yuwei Bao | Sayan Ghosh | Joyce Chai
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Human communication is a collaborative process. Speakers, on top of conveying their own intent, adjust the content and language expressions by taking the listeners into account, including their knowledge background, personalities, and physical capabilities. Towards building AI agents with similar abilities in language communication, we propose a novel rational reasoning framework, Pragmatic Rational Speaker (PRS), in which the speaker attempts to learn the speaker-listener disparity and adjust its speech accordingly, by adding a lightweight disparity adjustment layer into working memory on top of the speaker's long-term memory system. By fixing the long-term memory, the PRS only needs to update its working memory to learn and adapt to different types of listeners. To validate our framework, we create a dataset that simulates different types of speaker-listener disparities in the context of referential games. Our empirical results demonstrate that the PRS is able to shift its output towards language that listeners are able to understand, significantly improve the collaborative task outcome, and learn the disparity more efficiently than joint training.

pdf bib
DANLI: Deliberative Agent for Following Natural Language Instructions
Yichi Zhang | Jianing Yang | Jiayi Pan | Shane Storks | Nikhil Devraj | Ziqiao Ma | Keunwoo Yu | Yuwei Bao | Joyce Chai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Recent years have seen an increasing amount of work on embodied AI agents that can perform tasks by following human language instructions. However, most of these agents are reactive, meaning that they simply learn and imitate behaviors encountered in the training data. These reactive agents are insufficient for long-horizon complex tasks. To address this limitation, we propose a neuro-symbolic deliberative agent that, while following language instructions, proactively applies reasoning and planning based on its neural and symbolic representations acquired from past experience (e.g., natural language and egocentric vision). We show that our deliberative agent achieves greater than 70% improvement over reactive baselines on the challenging TEACh benchmark. Moreover, the underlying reasoning and planning processes, together with our modular framework, offer impressive transparency and explainability into the behaviors of the agent. This enables an in-depth understanding of the agent's capabilities, shedding light on challenges and opportunities for future embodied agents for instruction following. The code is available at https://github.com/sled-group/DANLI.

pdf bib
DOROTHIE: Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
Ziqiao Ma | Benjamin VanDerPloeg | Cristian-Paul Bara | Yidong Huang | Eui-In Kim | Felix Gervits | Matthew Marge | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2022

In the real world, autonomous driving agents navigate in highly dynamic environments full of unexpected situations where pre-trained models are unreliable. In these situations, what is immediately available to vehicles is often only human operators. Empowering autonomous driving agents with the ability to navigate in a continuous and dynamic environment and to communicate with humans through sensorimotor-grounded dialogue becomes critical. To this end, we introduce Dialogue On the ROad To Handle Irregular Events (DOROTHIE), a novel interactive simulation platform that enables the creation of unexpected situations on the fly to support empirical studies on situated communication with autonomous driving agents. Based on this platform, we created Situated Dialogue Navigation (SDN), a navigation benchmark of 183 trials with a total of 8415 utterances, around 18.7 hours of control streams, and 2.9 hours of trimmed audio. SDN is developed to evaluate the agent's ability to predict dialogue moves from humans as well as to generate its own dialogue moves and physical navigation actions. We further developed a transformer-based baseline model for these SDN tasks. Our empirical results indicate that language-guided navigation in a highly dynamic environment is an extremely difficult task for end-to-end models. These results will provide insight towards future work on robust autonomous driving agents.

2021

pdf bib
Zero-Shot Compositional Concept Learning
Guangyue Xu | Parisa Kordjamshidi | Joyce Chai
Proceedings of the 1st Workshop on Meta Learning and Its Applications to Natural Language Processing

In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework. We propose an episode-based cross-attention (EpiCA) network that combines the merits of the cross-attention mechanism and an episode-based training strategy to recognize novel compositional concepts. First, EpiCA builds on cross-attention to correlate concept and visual information and utilizes a gated pooling layer to build contextualized representations for both images and concepts. The updated representations are used for a more in-depth multi-modal relevance calculation for concept recognition. Second, a two-phase episode training strategy, especially the transductive phase, is adopted to exploit unlabeled test examples and alleviate the low-resource learning problem. Experiments on two widely used zero-shot compositional learning (ZSCL) benchmarks demonstrate the effectiveness of the model compared with recent approaches in both conventional and generalized ZSCL settings.

pdf bib
Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring
Yichi Zhang | Joyce Chai
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

pdf bib
Beyond the Tip of the Iceberg: Assessing Coherence of Text Classifiers
Shane Storks | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2021

As large-scale, pre-trained language models achieve human-level and superhuman accuracy on existing language understanding tasks, statistical biases in benchmark data and probing studies have recently called their true capabilities into question. For a more informative evaluation than accuracy on text classification tasks can offer, we propose evaluating systems through a novel measure of prediction coherence. We apply our framework to two existing language understanding benchmarks with different properties to demonstrate its versatility. Our experimental results show that this evaluation framework, although simple in idea and implementation, is a quick, effective, and versatile measure that provides insight into the coherence of machines' predictions.

pdf bib
Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding
Shane Storks | Qiaozi Gao | Yichi Zhang | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2021

Large-scale, pre-trained language models (LMs) have achieved human-level performance on a breadth of language understanding tasks. However, evaluations only based on end task performance shed little light on machines’ true ability in language understanding and reasoning. In this paper, we highlight the importance of evaluating the underlying reasoning process in addition to end performance. Toward this goal, we introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines’ reasoning process. Our empirical results show that while large LMs can achieve high end performance, they struggle to support their predictions with valid supporting evidence. The TRIP dataset and our baseline results will motivate verifiable evaluation of commonsense reasoning and facilitate future research toward developing better language understanding and reasoning models.

pdf bib
MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks
Cristian-Paul Bara | Sky CH-Wang | Joyce Chai
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

An ideal integration of autonomous agents in a human world implies that they are able to collaborate on human terms. In particular, theory of mind plays an important role in maintaining common ground during human collaboration and communication. To enable theory of mind modeling in situated interactions, we introduce a fine-grained dataset of collaborative tasks performed by pairs of human subjects in the 3D virtual blocks world of Minecraft. It provides information that captures partners’ beliefs of the world and of each other as an interaction unfolds, bringing abundant opportunities to study human collaborative behaviors in situated language communication. As a first step towards our goal of developing embodied AI agents able to infer belief states of collaborative partners in situ, we build and present results on computational models for several theory of mind tasks.

2020

pdf bib
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Dan Jurafsky | Joyce Chai | Natalie Schluter | Joel Tetreault
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

pdf bib
Experience Grounds Language
Yonatan Bisk | Ari Holtzman | Jesse Thomason | Jacob Andreas | Yoshua Bengio | Joyce Chai | Mirella Lapata | Angeliki Lazaridou | Jonathan May | Aleksandr Nisnevich | Nicolas Pinto | Joseph Turian
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates. Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world. It is this shared experience that makes utterances meaningful. Natural language processing is a diverse field, and progress throughout its development has come from new representational theories, modeling techniques, data collection paradigms, and tasks. We posit that the present success of representation learning approaches trained on large, text-only corpora requires the parallel tradition of research on the broader physical and social context of language to address the deeper questions of communication.

2018

pdf bib
What Action Causes This? Towards Naive Physical Action-Effect Prediction
Qiaozi Gao | Shaohua Yang | Joyce Chai | Lucy Vanderwende
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Despite recent advances in knowledge representation, automated reasoning, and machine learning, artificial agents still lack the ability to understand basic action-effect relations regarding the physical world; for example, the action of cutting a cucumber most likely leads to a state where the cucumber is broken apart into smaller pieces. If artificial agents (e.g., robots) ever become our partners in joint tasks, it is critical to empower them with such action-effect understanding so that they can reason about the state of the world and plan for actions. Towards this goal, this paper introduces a new task on naive physical action-effect prediction, which addresses the relations between concrete actions (expressed in the form of verb-noun pairs) and their effects on the state of the physical world as depicted by images. We collected a dataset for this task and developed an approach that harnesses web image data through distant supervision to facilitate learning for action-effect prediction. Our empirical results show that web data can be used to complement a small number of seed examples (e.g., three examples for each action) for model learning. This opens up possibilities for agents to learn physical action-effect relations for tasks at hand through communication with humans with a few examples.

pdf bib
Commonsense Justification for Action Explanation
Shaohua Yang | Qiaozi Gao | Sari Sadiya | Joyce Chai
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

To enable collaboration and communication between humans and agents, this paper investigates learning to acquire commonsense evidence for action justification. In particular, we have developed an approach based on the generative Conditional Variational Autoencoder (CVAE) that models object relations/attributes of the world as latent variables and jointly learns a performer that predicts actions and an explainer that gathers commonsense evidence to justify the action. Our empirical results have shown that, compared to a typical attention-based model, CVAE achieves significantly higher performance in both action prediction and justification. A human subject study further shows that the commonsense evidence gathered by CVAE can be communicated to humans to achieve a significantly higher common ground between humans and agents.

2017

pdf bib
Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication
Lanbo She | Joyce Chai
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

To enable human-robot communication and collaboration, previous work represents grounded verb semantics as the potential change of state to the physical world caused by these verbs. Grounded verb semantics are acquired mainly based on parallel data of the use of a verb phrase and its corresponding sequences of primitive actions demonstrated by humans. The rich interaction between teachers and students that is considered important in learning new skills has not yet been explored. To address this limitation, this paper presents a new interactive learning approach that allows robots to proactively engage in interaction with human partners by asking good questions to learn models for grounded verb semantics. The proposed approach uses reinforcement learning to allow the robot to acquire an optimal policy for its question-asking behaviors by maximizing the long-term reward. Our empirical results show that the interactive learning approach leads to more reliable models for grounded verb semantics, especially in noisy environments full of uncertainties. Compared to previous work, the models acquired from interactive learning result in a 48% to 145% performance gain when applied in new situations.

2016

pdf bib
Jointly Learning Grounded Task Structures from Language Instruction and Visual Demonstration
Changsong Liu | Shaohua Yang | Sari Saba-Sadiya | Nishant Shukla | Yunzhong He | Song-Chun Zhu | Joyce Chai
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Grounded Semantic Role Labeling
Shaohua Yang | Qiaozi Gao | Changsong Liu | Caiming Xiong | Song-Chun Zhu | Joyce Y. Chai
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Incremental Acquisition of Verb Hypothesis Space towards Physical World Interaction
Lanbo She | Joyce Chai
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Physical Causality of Action Verbs in Grounded Language Understanding
Qiaozi Gao | Malcolm Doering | Shaohua Yang | Joyce Chai
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf bib
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Rada Mihalcea | Joyce Chai | Anoop Sarkar
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

pdf bib
Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue
Lanbo She | Shaohua Yang | Yu Cheng | Yunyi Jia | Joyce Chai | Ning Xi
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

pdf bib
Probabilistic Labeling for Efficient Referential Grounding based on Collaborative Discourse
Changsong Liu | Lanbo She | Rui Fang | Joyce Y. Chai
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

pdf bib
Towards Situated Dialogue: Revisiting Referring Expression Generation
Rui Fang | Changsong Liu | Lanbo She | Joyce Y. Chai
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf bib
Modeling Collaborative Referring for Situated Referential Grounding
Changsong Liu | Rui Fang | Lanbo She | Joyce Chai
Proceedings of the SIGDIAL 2013 Conference

2012

pdf bib
Autonomous Self-Assessment of Autocorrections: Exploring Text Message Dialogues
Tyler Baldwin | Joyce Chai
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Semantic Role Labeling of Implicit Arguments for Nominal Predicates
Matthew Gerber | Joyce Y. Chai
Computational Linguistics, Volume 38, Issue 4 - December 2012

pdf bib
Towards Mediating Shared Perceptual Basis in Situated Dialogue
Changsong Liu | Rui Fang | Joyce Chai
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2011

pdf bib
A Joint Model of Implicit Arguments for Nominal Predicates
Matthew Gerber | Joyce Chai | Robert Bart
Proceedings of the ACL 2011 Workshop on Relational Models of Semantics

pdf bib
Proceedings of the SIGDIAL 2011 Conference
Joyce Y. Chai | Johanna D. Moore | Rebecca J. Passonneau | David R. Traum
Proceedings of the SIGDIAL 2011 Conference

pdf bib
Beyond Normalization: Pragmatics of Word Form in Text Messages
Tyler Baldwin | Joyce Chai
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

pdf bib
Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
Matthew Gerber | Joyce Chai
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Fusing Eye Gaze with Speech Recognition Hypotheses to Resolve Exophoric References in Situated Dialogue
Zahar Prasov | Joyce Y. Chai
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
Towards Conversation Entailment: An Empirical Investigation
Chen Zhang | Joyce Chai
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
Hand Gestures in Disambiguating Types of You Expressions in Multiparty Meetings
Tyler Baldwin | Joyce Chai | Katrin Kirchhoff
Proceedings of the SIGDIAL 2010 Conference

2009

pdf bib
The Role of Implicit Argumentation in Nominal SRL
Matthew Gerber | Joyce Chai | Adam Meyers
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
The Role of Interactivity in Human-Machine Conversation for Automatic Word Acquisition
Shaolin Qu | Joyce Chai
Proceedings of the SIGDIAL 2009 Conference

pdf bib
What do We Know about Conversation Participants: Experiments on Conversation Entailment
Chen Zhang | Joyce Chai
Proceedings of the SIGDIAL 2009 Conference

2008

pdf bib
Incorporating Temporal and Semantic Information with Eye Gaze for Automatic Word Acquisition in Multimodal Conversational Systems
Shaolin Qu | Joyce Chai
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

pdf bib
An Exploration of Eye Gaze in Spoken Language Processing for Multimodal Conversational Interfaces
Shaolin Qu | Joyce Chai
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

pdf bib
Automated Vocabulary Acquisition and Interpretation in Multimodal Conversational Systems
Yi Liu | Joyce Chai | Rong Jin
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

pdf bib
Towards Conversational QA: Automatic Identification of Problematic Situations and User Intent
Joyce Y. Chai | Chen Zhang | Tyler Baldwin
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2005

pdf bib
A Salience Driven Approach to Robust Input Interpretation in Multimodal Conversational Systems
Joyce Y. Chai | Shaolin Qu
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

pdf bib
Optimization in Multimodal Interpretation
Joyce Y. Chai | Pengyu Hong | Michelle X. Zhou | Zahar Prasov
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

pdf bib
Discourse Structure for Context Question Answering
Joyce Y. Chai | Rong Jin
Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004

pdf bib
Performance Evaluation and Error Analysis for Multimodal Reference Resolution in a Conversation System
Joyce Y. Chai | Zahar Prasov | Pengyu Hong
Proceedings of HLT-NAACL 2004: Short Papers

2003

pdf bib
Combining Semantic and Temporal Constraints for Multimodal Integration in Conversation Systems
Joyce Y. Chai | Pengyu Hong | Michelle X. Zhou
Proceedings of the HLT-NAACL 2003 Workshop on Research Directions in Dialogue Processing

2002

pdf bib
Semantics-based Representation for Multimodal Interpretation in Conversational Systems
Joyce Chai
COLING 2002: The 19th International Conference on Computational Linguistics

2001

pdf bib
A Conversational Interface for Online Shopping
Joyce Chai | Veronika Horvath | Nanda Kambhatla | Nicolas Nicolov | Margo Stys-Budzikowska
Proceedings of the First International Conference on Human Language Technology Research

pdf bib
Conversational Sales Assistant for Online Shopping
Margo Budzikowska | Joyce Chai | Sunil Govindappa | Veronika Horvath | Nanda Kambhatla | Nicolas Nicolov | Wlodek Zadrozny
Proceedings of the First International Conference on Human Language Technology Research

2000

pdf bib
Dynamic User Level and Utility Measurement for Adaptive Dialog in a Help-Desk System
Preetam Maloor | Joyce Chai
1st SIGdial Workshop on Discourse and Dialogue

pdf bib
Evaluation of a Generic Lexical Semantic Resource in Information Extraction
Joyce Yue Chai
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1997

pdf bib
Duke’s Trainable Information and Meaning Extraction System (Duke TIMES)
Amit Bagga | Joyce Yue Chai
Fifth Conference on Applied Natural Language Processing: Descriptions of System Demonstrations and Videos

pdf bib
Corpus Based Statistical Generalization Tree in Rule Optimization
Joyce Yue Chai | Alan W. Biermann
Fifth Workshop on Very Large Corpora

pdf bib
The Use of Lexical Semantics in Information Extraction
Joyce Yue Chai | Alan W. Biermann
Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications

pdf bib
A Trainable Message Understanding System
Amit Bagga | Joyce Yue Chai
CoNLL97: Computational Natural Language Learning