Leveraging the foundation built in the prior workshops, SpLU-RoboNLP 2019 and SpLU 2018, and focusing on the gaps identified therein, we propose the third workshop on Spatial Language Understanding. One of the essential functions of natural language is to express spatial relationships between objects. Spatial language understanding is useful in many research areas and real-world applications, including robotics, navigation, geographic information systems, traffic management, human-machine interaction, question answering and translation systems.

Compared to other semantically specialized linguistic tasks, standardizing tasks related to spatial language seems more challenging: it is harder to obtain an agreed-upon set of concepts and relationships, and a formal spatial meaning representation that is domain independent and allows both quantitative and qualitative reasoning. As a result, research on spatial language learning and reasoning has been diverse, task-specific and, to some extent, not comparable. Attempts to arrive at a common set of basic concepts and relationships, and to make existing corpora interoperable, can help avoid duplicated effort within and across fields, and instead let each field focus on further developments in automatic learning and reasoning. Existing qualitative and quantitative representation and reasoning models can be used to investigate the interoperability of machine learning and reasoning over spatial semantics. Research in this area could provide insights into many challenges of language understanding in general.

Spatial semantics is also closely connected to the visualization of natural language and to grounding language in perception; it is central to dealing with configurations in the physical world and motivates combining vision and language for richer spatial understanding. In the third round of the SpLU workshop, we will focus on the following major topics:

  1. Spatial language meaning representation (continuous, symbolic)
  2. Spatial language learning
  3. Spatial language reasoning
  4. Spatial language grounding and combining vision and language
  5. Applications of spatial language understanding: QA, dialogue systems, navigation, etc.

Spatial language meaning representation covers research on cognitively and linguistically motivated spatial semantic representations, spatial knowledge representation and spatial ontologies, qualitative and quantitative models used for formal meaning representation, and spatial annotation schemes and efforts to create specialized corpora. Spatial language learning considers both symbolic and sub-symbolic (continuous-representation) techniques and computational models for spatial information extraction, semantic parsing, and spatial co-reference, within a global context that includes discourse and pragmatics, learned from data or formal models. For the reasoning aspect, the workshop emphasizes the role of qualitative and quantitative formal representations in supporting spatial reasoning over natural language, the possibility of learning such representations from data, and the question of whether formal representations are needed to support reasoning at all or whether there are viable alternatives. For the multi-modality aspect, we will discuss questions such as: (1) Which representations are appropriate for different modalities, and which are modality independent? (2) How can we exploit visual information for spatial language learning and reasoning? All related applications are welcome, including text-to-scene conversion, spatial and visual question answering, spatial understanding in multi-modal settings for robotics and navigation tasks, and language grounding. The workshop aims to encourage discussion across fields dealing with spatial language along with other modalities. The desired outcome is the identification of shared as well as unique challenges, problems, and future directions across the fields and application domains related to spatial language understanding.

The specific topics include but are not limited to:

Invited Speakers

Schedule (EST)

Please note that all talks will be streamed on Zoom [Here].

8:00-9:00 AM QA/Poster Workshop Organizers
9:00-9:10 AM Opening Talk Parisa Kordjamshidi
9:10-10:00 AM Invited Talk James Pustejovsky
10:00-10:56 AM Paper Presentations (1,2,3,11)
10:56-11:05 AM Break
11:05-11:55 AM Invited Talk Julia Hockenmaier
11:55-12:51 PM Paper Presentations (4,5,12,13)
12:51-1:00 PM Break
1:00-1:50 PM Invited Talk Yoav Artzi
1:50-2:46 PM Paper Presentations (6,7,8,14)
2:46-3:45 PM QA/Poster Workshop Organizers
3:45-4:35 PM Invited Talk Bonnie J. Dorr
4:35-5:31 PM Paper Presentations (9,10,15,16)
5:31-5:45 PM Break
5:45-6:35 PM Invited Talk Douwe Kiela
6:35-7:03 PM Paper Presentations (17,18)
7:03-8:00 PM Panel Discussion
8:00-9:00 PM QA/Poster Workshop Organizers

Accepted Papers (Proceedings)

  1. An Element-wise Visual-enhanced BiLSTM-CRF Model for Location Name Recognition. Paper.
    Takuya Komada and Takashi Inui
  2. BERT-based Spatial Information Extraction. Paper.
    Hyeong Jin Shin, Jeong Yeon Park, Dae Bum Yuk and Jae Sung Lee
  3. A Cognitively Motivated Approach to Spatial Information Extraction. Paper.
    Chao Xu, Emmanuelle-Anna Dietz Saldanha, Dagmar Gromann and Beihai Zhou
  4. They are not all alike: answering different spatial questions requires different grounding strategies. Paper.
    Alberto Testoni, Claudio Greco, Tobias Bianchi, Mauricio Mazuecos, Agata Marcante, Luciana Benotti and Raffaella Bernardi
  5. Categorisation, Typicality and Object-Specific Features in Spatial Referring Expressions. Paper.
    Adam Richard-Bollans, Anthony Cohn and Lucía Gómez Álvarez
  6. A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports. Paper.
    Surabhi Datta and Kirk Roberts
  7. Retouchdown: Releasing Touchdown on StreetLearn as a Public Resource for Language Grounding Tasks in Street View. Paper.
    Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie and Piotr Mirowski

Accepted Non-archival Submissions

  1. SpaRTQA: A Textual Question Answering Benchmark for Spatial Reasoning.
    Roshanak Mirzaee, Hossein Rajaby Faghihi and Parisa Kordjamshidi
  2. Geocoding with multi-level loss for spatial language representation.
    Sayali Kulkarni, Shailee Jain, Mohammad Javad Hosseini, Jason Baldridge, Eugene Ie and Li Zhang
  3. Vision-and-Language Navigation by Reasoning over Spatial Configurations.
    Yue Zhang, Quan Guo and Parisa Kordjamshidi

Accepted Findings Submissions

  1. Language-Conditioned Feature Pyramids for Visual Selection Tasks. Paper.
    Taichi Iki and Akiko Aizawa
  2. A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions. Paper.
    Takuma Udagawa, Takato Yamazaki and Akiko Aizawa
  3. Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions. Paper.
    Peter A. Jansen
  4. Decoding Language Spatial Relations to 2D Spatial Arrangements. Paper.
    Gorjan Radevski, Guillem Collell, Marie-Francine Moens and Tinne Tuytelaars
  5. LiMiT: The Literal Motion in Text Dataset. Paper.
    Irene Manotas, Ngoc Phuoc An Vo and Vadim Sheinin
  6. ARRAMON: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments. Paper.
    Hyounghun Kim, Abhay Zala, Graham Burri, Hao Tan and Mohit Bansal
  7. Robust and Interpretable Grounding of Spatial References with Relation Networks. Paper.
    Tsung-Yen Yang, Andrew S. Lan and Karthik Narasimhan
  8. RMM: A Recursive Mental Model for Dialogue Navigation. Paper.
    Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz and Jianfeng Gao

Submission Procedure

We encourage contributions of technical papers (EMNLP style, 8 pages excluding references) or shorter position and demo papers describing previously unpublished work (EMNLP style, 4 pages maximum). EMNLP style files are available [Here]. Please make submissions via Softconf [Here].

Non-archival option: EMNLP workshops are traditionally archival. To allow dual submission of work to SpLU and other conferences or journals, we also include a non-archival track. Space permitting, authors of these submissions will still present their work at the workshop, and the papers will be hosted on the workshop website, but they will not be included in the official proceedings. Please submit through Softconf and indicate that it is a cross submission in the "Submission type" field at the bottom of the submission form.

Important Dates

Organizing Committee

  • Parisa Kordjamshidi, Michigan State University (kordjams@msu.edu)
  • Archna Bhatia, Institute for Human and Machine Cognition (abhatia@ihmc.us)
  • Malihe Alikhani, University of Pittsburgh (malihe@pitt.edu)
  • Jason Baldridge, Google (jasonbaldridge@google.com)
  • Mohit Bansal, UNC Chapel Hill (mbansal@cs.unc.edu)
  • Marie-Francine Moens, KU Leuven (sien.moens@cs.kuleuven.be)
    Contact: splu-2020@googlegroups.com

Program Committee

  • Steven Bethard, The University of Arizona
  • Raffaella Bernardi, University of Trento
  • Mehul Bhatt, Örebro University - CoDesign Lab
  • Yonatan Bisk, Carnegie Mellon University
  • Johan Bos, University of Groningen
  • Asli Celikyilmaz, Microsoft Research
  • Joyce Chai, University of Michigan
  • Angel Xuan Chang, Simon Fraser University
  • Anthony Cohn, University of Leeds
  • Guillem Collell, KU Leuven
  • Simon Dobnik, University of Gothenburg
  • Bonnie J. Dorr, Institute for Human and Machine Cognition
  • Ekaterina Egorova, University of Zurich
  • Zoe Falomir, Universität Bremen
  • Francis Ferraro, University of Maryland Baltimore
  • Lucian Galescu, Institute for Human and Machine Cognition
  • Mehdi Ghanimifard, University of Gothenburg
  • Julia Hockenmaier, University of Illinois at Urbana-Champaign
  • Lei Li, ByteDance
  • Bruno Martins, University of Lisbon
  • Srini Narayanan, Google Inc.
  • Mari Broman Olsen, Lionbridge AI
  • Martijn van Otterlo, Open University (The Netherlands)
  • Ian Perera, Institute for Human and Machine Cognition
  • Kirk Roberts, UT Health
  • Manolis Savva, Stanford University
  • Kristin Stock, Massey University
  • Jesse Thomason, University of Washington
  • Clare Voss, ARL
If you are interested in joining the program committee and reviewing submissions, please email the organizers at splu-2020@googlegroups.com, mentioning your prior reviewing experience and including a link to your publication record.