School of Electronic Engineering and Computer Science

BABBLE: Automatically inducing incremental dialogue systems from minimal data

14 November 2016

Time: 1:00 - 2:00pm
Venue: ITL Top Floor Meeting Room

Speaker: Dr Arash Eshghi, Heriot-Watt University

We present a method for inducing incremental dialogue systems from very small amounts of dialogue data, avoiding the use of dialogue acts. This is achieved by combining an incremental, semantic grammar formalism - Dynamic Syntax and Type Theory with Records (DS-TTR) - with Reinforcement Learning for word (action) selection, where language generation and dialogue management are treated as a joint decision/optimisation problem, and where the Markov Decision Process (MDP) model is constructed automatically.
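
To make the word-level MDP idea concrete, here is a minimal, self-contained sketch in Python. It is not the BABBLE implementation: a tiny hand-written grammar stands in for DS-TTR, and tabular Q-learning stands in for whatever RL algorithm the system actually uses. It illustrates only the point made above: the grammar restricts the action set at each state to the words it licenses, so generation and dialogue management collapse into a single word-selection policy over an automatically derived MDP. All identifiers (GRAMMAR, licensed, GOAL, etc.) are invented for this sketch.

import random
from collections import defaultdict

# Toy stand-in for the grammar: maps a partial utterance (a tuple of
# words) to the words it licenses as continuations. In BABBLE this role
# is played by DS-TTR parse states, not a lookup table.
GRAMMAR = {
    (): ["hello", "what"],
    ("hello",): ["what"],
    ("what",): ["toppings"],
    ("hello", "what"): ["toppings"],
    ("what", "toppings"): ["?"],
    ("hello", "what", "toppings"): ["?"],
}

GOAL = ("what", "toppings", "?")  # reaching this suffix yields reward

def licensed(state):
    """Words the grammar allows next: the agent's action set here."""
    return GRAMMAR.get(state, [])

Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def step(state, word):
    new_state = state + (word,)
    done = new_state[-3:] == GOAL
    reward = 1.0 if done else -0.05  # small cost per word, reward at goal
    return new_state, reward, done

random.seed(0)
for _ in range(500):  # Q-learning over grammar-licensed words only
    state = ()
    for _ in range(10):  # cap episode length
        actions = licensed(state)
        if not actions:
            break
        if random.random() < EPSILON:
            word = random.choice(actions)
        else:
            word = max(actions, key=lambda w: Q[(state, w)])
        new_state, reward, done = step(state, word)
        future = 0.0 if done else max(
            (Q[(new_state, w)] for w in licensed(new_state)), default=0.0)
        Q[(state, word)] += ALPHA * (reward + GAMMA * future - Q[(state, word)])
        state = new_state
        if done:
            break

# Greedy rollout with the learned policy: typically "what toppings ?",
# the shortest grammar-licensed path to the goal.
state = ()
while licensed(state):
    state += (max(licensed(state), key=lambda w: Q[(state, w)]),)
print(" ".join(state))

Because every action the learner can take is licensed by the grammar, even a policy trained on a single dialogue only ever produces well-formed continuations, which is the source of the generalisation discussed below.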

We show, using an implemented system, that this method enables a wide range of dialogue variations to be automatically captured, even when the system is trained from only a single dialogue. The variants include question-answer pairs, over- and under-answering, self- and other-corrections, clarification interactions, split utterances, and ellipsis.

For example, we show that a single training dialogue supports over 8000 new dialogues in the same domain. This generalisation property results from the structural knowledge and constraints present within the grammar, and highlights in-principle limitations of recent state-of-the-art systems that are built using machine learning techniques only.

About the Speaker

Dr Arash Eshghi is currently a member of the Interaction Lab in the School of Mathematical and Computer Sciences at Heriot-Watt University, where he works as a researcher on the BABBLE project. Previously, he was a member of the Cognitive Science Research Group at Queen Mary University of London, where he completed his PhD.

The focus of his work is dialogue modelling: understanding people’s everyday use of language in interaction with others, and building better computational models of it. His PhD examined the processes by which people reach mutual understanding in everyday conversation.

Later he began working on Dynamic Syntax - a formal/computational model of how people produce and understand language incrementally, word by word. He’s now applying this model, in combination with machine learning techniques, to build better, more natural conversational systems, and systems that can learn language from interaction with a human partner.
