2024
Where Do We Go From Here? Multi-scale Allocentric Relational Inference from Natural Spatial Descriptions
Tzuf Paz-Argaman | John Palowitch | Sayali Kulkarni | Jason Baldridge | Reut Tsarfaty
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
The concept of acquired spatial knowledge is crucial in spatial cognition research, particularly for communicating routes. However, NLP navigation studies often overlook the impact of acquired knowledge on textual descriptions. Current navigation studies concentrate on egocentric, local descriptions (e.g., ‘it will be on your right’) that require reasoning over the agent’s local perception. These instructions are typically given as a sequence of steps, with each action step explicitly mentioned and followed by a landmark that the agent can use to verify they are on the correct path (e.g., ‘turn right and then you will see...’). In contrast, descriptions based on knowledge acquired through a map provide a complete view of the environment and capture its compositionality. Such instructions typically contain allocentric relations, are non-sequential, and involve implicit actions and multiple spatial relations without any verification (e.g., ‘south of Central Park and a block north of a police station’). This paper introduces the Rendezvous (RVS) task and dataset, which includes 10,404 examples of English geospatial instructions for reaching a target location using map knowledge. Our analysis reveals that RVS exhibits a richer use of allocentric spatial relations and requires resolving more spatial relations simultaneously than previous text-based navigation benchmarks.
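To illustrate what resolving several allocentric relations simultaneously involves, here is a minimal, hypothetical sketch (the landmarks, coordinates, and candidate set are made up and are not from the RVS dataset) that treats each relation as a geometric constraint and intersects the constraints over candidate locations:

```python
# Toy illustration (not the RVS model): resolve a multi-relation
# allocentric description by intersecting geometric constraints
# over a small set of candidate map locations. All coordinates
# below are invented for the example.

CANDIDATES = {
    "corner_cafe": (40.770, -73.975),
    "hotel": (40.760, -73.978),
    "bookstore": (40.755, -73.982),
}
LANDMARKS = {
    "central_park": (40.768, -73.975),
    "police_station": (40.757, -73.978),
}

def south_of(point, landmark):
    return point[0] < landmark[0]

def north_of(point, landmark):
    return point[0] > landmark[0]

# "south of Central Park and a block north of a police station":
# every relation must hold at once, so candidates are filtered by
# the intersection of constraints rather than step-by-step actions.
constraints = [
    lambda p: south_of(p, LANDMARKS["central_park"]),
    lambda p: north_of(p, LANDMARKS["police_station"]),
]

matches = [name for name, pt in CANDIDATES.items()
           if all(c(pt) for c in constraints)]
print(matches)  # -> ['hotel'] under these made-up coordinates
```

Unlike sequential egocentric instructions, no single relation here can be verified as a “step” along a path; the target is whatever location satisfies all relations jointly.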
Into the Unknown: Generating Geospatial Descriptions for New Environments
Tzuf Paz-Argaman | John Palowitch | Sayali Kulkarni | Reut Tsarfaty | Jason Baldridge
Findings of the Association for Computational Linguistics: ACL 2024
Similar to vision-and-language navigation (VLN) tasks that focus on bridging the gap between vision and language for embodied navigation, the new Rendezvous (RVS) task requires reasoning over allocentric spatial relationships using non-sequential navigation instructions and maps. However, performance drops substantially in new environments with no training data. Using open-source descriptions paired with coordinates (e.g., Wikipedia) provides training data, but such text is rarely spatially oriented, resulting in low geolocation resolution. We propose a large-scale augmentation method for generating high-quality synthetic data for new environments using readily available geospatial data. Our method constructs a grounded knowledge graph that captures entity relationships. Sampled entities and relations (“shop north of school”) are turned into navigation instructions via (i) generating numerous templates with a context-free grammar (CFG) that embed specific entities and relations, and (ii) feeding the entities and relations into a large language model (LLM) for instruction generation. A comprehensive evaluation on RVS shows that our approach improves 100-meter accuracy by 45.83% on unseen environments. Furthermore, models trained with CFG-based augmentation outperform those trained with LLM-based augmentation in both unseen and seen environments. These findings suggest that explicitly structuring spatial information for text-based geospatial reasoning in previously unknown environments can unlock data-scarce scenarios.
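To make the CFG-based augmentation step concrete, here is a minimal, hypothetical sketch (the triples, relation names, and templates are illustrative, and a flat template table stands in for the full context-free grammar): sample an (entity, relation, entity) triple from the grounded knowledge graph and expand a template into an instruction.

```python
import random

# Toy grounded knowledge graph: (entity, relation, entity) triples
# as might be sampled from geospatial data. All names are made up.
TRIPLES = [
    ("shop", "north of", "school"),
    ("cafe", "east of", "park"),
    ("pharmacy", "south of", "bank"),
]

# A tiny set of instruction templates; <E1>, <REL>, <E2> are the
# slots filled from the sampled triple. The real method derives
# many such templates from a context-free grammar.
TEMPLATES = [
    "Meet me at the <E1> <REL> the <E2>.",
    "The goal is the <E1>, <REL> the <E2>.",
    "Head to the <E1>; it is <REL> the <E2>.",
]

def generate_instruction(rng=random):
    e1, rel, e2 = rng.choice(TRIPLES)
    template = rng.choice(TEMPLATES)
    return (template.replace("<E1>", e1)
                    .replace("<REL>", rel)
                    .replace("<E2>", e2))

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(generate_instruction())
```

Because the entities and relations come from a grounded graph, every generated instruction is paired with a known target coordinate, which is what makes the synthetic data usable for training in environments with no human-written descriptions.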
2021
Multi-Level Gazetteer-Free Geocoding
Sayali Kulkarni | Shailee Jain | Mohammad Javad Hosseini | Jason Baldridge | Eugene Ie | Li Zhang
Proceedings of the Second International Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics
We present a multi-level geocoding model (MLG) that learns to associate texts with geographic coordinates. The Earth’s surface is represented using space-filling curves that decompose the sphere into a hierarchical grid. MLG balances classification granularity and accuracy by combining losses across multiple levels and jointly predicting cells at each level. It obtains large gains without any gazetteer metadata, demonstrating that it can effectively learn the connection between text spans and coordinates, making it a gazetteer-free geocoder. Furthermore, MLG obtains state-of-the-art results for toponym resolution on three English datasets without any dataset-specific tuning.
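A minimal sketch of the multi-level idea, assuming S2 cells as the space-filling-curve hierarchy (the open-source s2sphere package is used for illustration; the levels, weights, and loss form below are assumptions, not the paper's exact configuration):

```python
import numpy as np
import s2sphere  # open-source S2 geometry: a hierarchical grid over the sphere

LEVELS = (4, 6, 8)  # illustrative coarse-to-fine levels, not the paper's choice

def multilevel_cells(lat: float, lng: float) -> dict:
    """Map one coordinate to an S2 cell token at every hierarchy level."""
    leaf = s2sphere.CellId.from_lat_lng(
        s2sphere.LatLng.from_degrees(lat, lng))
    return {level: leaf.parent(level).to_token() for level in LEVELS}

def log_softmax(logits: np.ndarray) -> np.ndarray:
    m = logits.max()
    return logits - m - np.log(np.exp(logits - m).sum())

def multilevel_loss(logits_per_level, targets, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of per-level cross-entropy losses: one softmax over
    the cells of each level, combined into a single training objective."""
    return sum(
        -w * log_softmax(np.asarray(logits, dtype=float))[t]
        for w, logits, t in zip(weights, logits_per_level, targets))

print(multilevel_cells(40.7580, -73.9855))  # one cell token per level
```

Predicting at several levels jointly lets coarse levels supply a reliable signal when the fine-grained cell is hard to pin down, which is the granularity/accuracy trade-off the combined loss balances.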
2019
Learning Dense Representations for Entity Retrieval
Daniel Gillick | Sayali Kulkarni | Larry Lansing | Alessandro Presta | Jason Baldridge | Eugene Ie | Diego Garcia-Olano
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
We show that it is feasible to perform entity linking by training a dual encoder (two-tower) model that encodes mentions and entities in the same dense vector space, where candidate entities are retrieved by approximate nearest neighbor search. Unlike prior work, this setup does not rely on an alias table followed by a re-ranker, and is thus the first fully learned entity retrieval model. We show that our dual encoder, trained using only anchor-text links in Wikipedia, outperforms discrete alias table and BM25 baselines, and is competitive with the best comparable results on the standard TACKBP-2010 dataset. In addition, it can retrieve candidates extremely fast, and generalizes well to a new dataset derived from Wikinews. On the modeling side, we demonstrate the dramatic value of an unsupervised negative mining algorithm for this task.
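A minimal sketch of the two-tower retrieval pattern, with stand-in hash-based “encoders” and brute-force dot-product search in place of the paper's trained encoders and approximate nearest neighbor search:

```python
import numpy as np

RNG = np.random.default_rng(0)
DIM = 64

# Stand-in encoder: in the real model, learned networks map mention
# contexts and entity descriptions into the same dense vector space.
# Here, token hashing plus fixed random projections merely fakes that
# to show the retrieval mechanics.
_PROJ = RNG.normal(size=(1 << 16, DIM))

def encode(text: str) -> np.ndarray:
    buckets = [hash(tok) & 0xFFFF for tok in text.lower().split()]
    vec = _PROJ[buckets].sum(axis=0)
    return vec / (np.linalg.norm(vec) + 1e-9)

# Index every entity once; queries never consult an alias table.
entities = ["Paris France capital", "Paris Texas city", "Paris Hilton"]
entity_matrix = np.stack([encode(e) for e in entities])

def retrieve(mention: str, k: int = 2):
    # Brute-force similarity search; at scale this is replaced by an
    # approximate nearest neighbor index over the entity vectors.
    scores = entity_matrix @ encode(mention)
    top = np.argsort(-scores)[:k]
    return [(entities[i], float(scores[i])) for i in top]

print(retrieve("the capital Paris"))
```

Because all entities are pre-encoded into one index, candidate generation reduces to a nearest-neighbor lookup, which is what makes retrieval fast and lets the same index generalize to new datasets without retraining.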