Establishing Common Grounds between Humans and Agents

For humans and computers to communicate via natural language text, they must share an understanding (interpretation) of the texts exchanged. This study addresses issues that arise in establishing such common ground for natural language understanding.


A Natural Language Corpus of Common Grounding under Continuous and Partially-Observable Context

[Figure: Example dialogue from our task]

Common grounding is the process of creating, repairing, and updating mutual understandings, and it is a critical aspect of sophisticated human communication. However, traditional dialogue systems have limited ability to establish common ground, and we also lack task formulations that introduce natural difficulty in terms of common grounding while enabling easy evaluation and analysis of complex models. In this work, we propose a minimal dialogue task that requires advanced skills of common grounding under continuous and partially-observable context. Based on this task formulation, we collected a large-scale dataset of 6,760 dialogues that fulfills essential requirements of natural language corpora. Our analysis of the dataset revealed important phenomena related to common grounding that need to be considered. Finally, we evaluate and analyze baseline neural models on a simple subtask that requires recognition of the created common ground. We show that simple baseline models perform decently but leave room for further improvement. Overall, our proposed task will be a fundamental testbed where we can train, evaluate, and analyze a dialogue system's ability for sophisticated common grounding. (Udagawa et al., accepted at AAAI-2019 and AAAI-2020)