One of the essential functions of natural language is to express spatial relationships between objects. Linguistic constructs can encode highly complex relational structures: objects, the spatial relations between them, and patterns of motion through space relative to a reference point. Spatial language understanding is useful in many research areas that involve human language, including robotics, navigation, geographic information systems, traffic management, natural language understanding and translation, and question answering. Compared to other semantically specialized linguistic tasks, standardizing tasks related to spatial language is more challenging because it is harder to agree on a set of concepts and relationships, and on a formal, domain-independent spatial meaning representation (in contrast, for example, to temporal relations). As a result, research on spatial language learning and reasoning has been diverse, task-specific and, to some extent, not comparable.
While formal meaning representation is a general issue for language understanding, formalizing spatial concepts and building formal reasoning models on top of them are challenging research problems in their own right, with a wealth of prior foundational work that can be exploited and linked to language understanding. Existing qualitative and quantitative representation and reasoning models can be used to investigate the interoperability of machine learning and reasoning over spatial semantics, and research in this area could provide insights into many challenges of language understanding in general. Spatial semantics is also closely connected to the visualization of natural language, is central to dealing with configurations in the physical world, and motivates combining vision and language for richer spatial understanding.
This workshop highlights some of the above aspects of computational spatial language understanding, including the following four areas:
- Spatial Language Meaning Representation (Continuous, Symbolic)
- Spatial Language Learning
- Spatial Language Reasoning
- Combining Vision and Language for Spatial Understanding
Spatial Language Meaning Representation and Ontologies includes cognitively and linguistically motivated spatial knowledge representations and ontologies, qualitative and quantitative models used for the meaning representation of language, and related annotation schemes and efforts for creating specialized corpora. Continuous meaning representations for spatial concepts are another aspect highlighted in the workshop. Spatial Language Learning considers symbolic and sub-symbolic techniques and computational models, learned from data or formal models, for spatial information extraction, semantic parsing, and spatial co-reference within a global context that includes discourse and pragmatics.
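As a toy illustration of what spatial information extraction produces, a common target structure is a (trajector, spatial indicator, landmark) triple. The sketch below uses a hand-written pattern over a hypothetical mini-lexicon of prepositions (an assumption for illustration only; real systems use trained extractors and semantic parsers rather than a pattern list):

```python
import re

# Hypothetical mini-lexicon of spatial indicators (illustrative, not
# an exhaustive inventory of English spatial prepositions).
INDICATORS = r"(on|in|under|above|behind|left of|right of)"

# Matches simple copular sentences like "the book is on the table".
PATTERN = re.compile(rf"the (\w+) is {INDICATORS} the (\w+)")

def extract_triple(sentence: str):
    """Return a (trajector, indicator, landmark) triple, or None
    if the toy pattern does not match."""
    m = PATTERN.search(sentence.lower())
    if m:
        return m.group(1), m.group(2), m.group(3)
    return None

print(extract_triple("The book is on the table."))  # ('book', 'on', 'table')
```

The triple format here mirrors the trajector/landmark vocabulary used in spatial role labeling schemes; everything else about the pattern is a simplification.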
Regarding the reasoning aspect, the workshop emphasizes the role of qualitative and quantitative formal representations in supporting spatial reasoning over natural language, the possibility of learning such representations from data, and the question of whether such formal representations are needed to support reasoning at all or whether there are viable alternatives.
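To make the qualitative side concrete, calculi such as RCC-8 support reasoning by composing relations: given how A relates to B and how B relates to C, a composition table constrains how A can relate to C. A minimal sketch, encoding only a few well-known entries of the table rather than the full calculus:

```python
# Tiny fragment of the RCC-8 composition table (illustrative subset).
# Given A r1 B and B r2 C, the set lists the possible relations A ? C.
COMPOSITION = {
    ("NTPP", "NTPP"): {"NTPP"},  # a part of a part is a part
    ("TPP",  "DC"):   {"DC"},    # inside B, and B disconnected from C
    ("NTPP", "DC"):   {"DC"},
    ("EQ",   "EC"):   {"EC"},    # equality preserves the relation
}

def compose(r1: str, r2: str) -> set:
    """Possible relations between A and C, given A r1 B and B r2 C
    (only for the entries encoded above)."""
    return COMPOSITION[(r1, r2)]

# "The cup is inside the box" (TPP) and "the box is apart from the
# table" (DC) entail that the cup is apart from the table:
print(compose("TPP", "DC"))  # {'DC'}
```

In a full reasoner the table covers all 64 relation pairs and composition is iterated to propagate constraints over a network of objects; the point here is only the shape of the inference that a formal representation makes available to language understanding.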
For the multimodality aspect, the workshop will discuss questions such as: (1) Which representations are appropriate for which modalities, and which are modality independent? (2) How can visual information be exploited for spatial language learning and reasoning? All related applications are welcome, including text-to-scene conversion, spatial and visual question answering, and spatial understanding in multimodal settings for robotics and navigation tasks.
The workshop aims to initiate discussions across fields dealing with spatial language along with other modalities. The desired outcome is the identification of shared and unique challenges, problems, and future directions across the fields and application domains related to spatial language understanding. The specific areas include but are not limited to:
- Spatial meaning representations, continuous representations, ontologies, annotation schemes, linguistic corpora
- Spatial information extraction from natural language
- Spatial information extraction in robotics, multimodal environments, navigational instructions
- Text mining for spatial information in geographic information systems (GIS)
- Spatial information in query answering systems, answering locative (where-) questions
- Spatial information for visual question answering
- Quantitative and qualitative reasoning with spatial information
- Spatial reasoning based on natural language
- Spatial reasoning based on multimodal information (vision and language)
- Extraction of spatial common-sense knowledge
- Visualization of spatial language in 2D and 3D
- Spatial natural language generation
- Spatial language grounding
Anthony G. Cohn, University of Leeds
Title: Natural Language Acquisition and Grounding for Embodied Robotic Systems
Abstract: We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.
James F. Allen, IHMC, University of Rochester
Title: Understanding Spatial Expressions
We encourage contributions of either a technical paper (NAACL style, 8 pages excluding references), a position statement (NAACL style, 4 pages maximum), or an abstract of published work. NAACL style files are available here. Please submit via Softconf here.
- Submission Deadline: 15 March 2018 (extended from 2 March 2018)
- Notification: 4 April 2018 (extended from 2 April 2018)
- Camera Ready deadline: 16 April 2018
- Workshop Day: 6 June, 2018
| Program | Presenter(s) |
|---|---|
| Keynote Talk: Natural Language Acquisition and Grounding for Embodied Robotic Systems | Anthony G. Cohn |
| Accepted Paper: Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models | Simon Dobnik, Mehdi Ghanimifard and John Kelleher |
| Accepted Paper: Building and Learning Structures in a Situated Blocks World Through Deep Language Understanding | Ian Perera, James Allen, Choh Man Teng and Lucian Galescu |
| Accepted Paper: Computational Models for Spatial Prepositions | Georgiy Platonov and Lenhart Schubert |
| Accepted Paper: Lexical Conceptual Structure of Literal and Metaphorical Spatial Language: A Case Study of "Push" | Bonnie Dorr and Mari Olsen |
| Accepted Paper: Representing Spatial Relations in FrameNet | Miriam R. L. Petruck and Michael J. Ellsworth |
| Keynote Talk: Understanding Spatial Expressions | James F. Allen |
| Accepted Paper: Points, Paths, and Playscapes: Large-scale Spatial Language Understanding Tasks Set in the Real World | Jason Baldridge, Tania Bedrax-Weiss, Daphne Luong, Srini Narayanan, Bo Pang, Fernando Pereira, Radu Soricut, Michael Tseng and Yuan Zhang |
| Accepted Paper: Anaphora Resolution for Improving Spatial Relation Extraction from Text | Umar Manzoor and Parisa Kordjamshidi |
| Accepted Paper: The Case for Systematically Derived Spatial Language Usage | Bonnie Dorr and Clare Voss |
| Panel | James Pustejovsky, Marie-Francine Moens, James F. Allen, Bonnie Dorr, Anthony G. Cohn |
- Lexical Conceptual Structure of Literal and Metaphorical Spatial Language: A Case Study of "Push", Bonnie Dorr and Mari Olsen.
- A Semantic Analysis of Reference to Spatial Properties, Norman Sondheimer. (published work)
- The Case for Systematically Derived Spatial Language Usage, Bonnie Dorr and Clare Voss.
- Building and Learning Structures in a Situated Blocks World Through Deep Language Understanding, Ian Perera, James Allen, Choh Man Teng and Lucian Galescu.
- Computational Models for Spatial Prepositions, Georgiy Platonov and Lenhart Schubert.
- Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models, Simon Dobnik, Mehdi Ghanimifard and John Kelleher.
- Points, Paths, and Playscapes: Large-scale Spatial Language Understanding Tasks Set in the Real World, Jason Baldridge, Tania Bedrax-Weiss, Daphne Luong, Srini Narayanan, Bo Pang, Fernando Pereira, Radu Soricut, Michael Tseng and Yuan Zhang.
- Representing Spatial Relations in FrameNet, Miriam R L Petruck and Michael J Ellsworth.
- Anaphora Resolution for Improving Spatial Relation Extraction from Text, Umar Manzoor and Parisa Kordjamshidi.
| Name | Affiliation |
|---|---|
| John A. Bateman | Universität Bremen |
| Anthony Cohn | University of Leeds |
| Steven Bethard | The University of Arizona |
| Raffaella Bernardi | University of Trento |
| Mehul Bhatt | Örebro University, University of Bremen |
| Yonatan Bisk | University of Washington |
| Johan Bos | University of Groningen |
| Joyce Chai | Michigan State University |
| Angel Xuan Chang | Stanford University |
| Guillem Collell | KU Leuven |
| Zoe Falomir | Universität Bremen |
| Julia Hockenmaier | University of Illinois at Urbana-Champaign |
| Kirk Roberts | The University of Texas |
| Manolis Savva | Princeton University |
| Martijn van Otterlo | Vrije Universiteit Amsterdam |
| Bruno Martins | University of Lisbon |
| Mari Broman Olsen | Microsoft |