For lists of possible undergraduate, postgraduate and PhD research topics, see my teaching page.

Grants / Funding

Research Students & Post-docs

Research Topics

Incrementality in Dialogue

Dialogue is incremental -- people don't speak (or listen) in complete, stand-alone sentences, but build up meaning bit-by-bit in an interactive process. We can interrupt each other, continue each other's utterances, and engage in an incremental process of feedback and repair. I'm involved in the DynDial project, investigating how this happens (through corpus and experimental work) and how we can model it (in grammatical frameworks and dialogue systems). As part of this we've built various parsers and dialogue systems using Dynamic Syntax; see here.
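To give a flavour of what "incremental" means here, the toy sketch below builds a partial representation word by word, with a crude self-repair rule that retracts the most recent word when a repair marker occurs. This is purely illustrative -- it is not the Dynamic Syntax machinery used in the DynDial parsers, and the repair markers are made up for the example.

```python
# Toy sketch of incremental interpretation with simple self-repair:
# a partial representation exists after EVERY word (not just at the
# end of the sentence), and a repair marker retracts the last word.
# Illustrative only -- not Dynamic Syntax.

REPAIR_MARKERS = {"uh", "no,"}

def interpret_incrementally(words):
    """Yield the partial representation after each incoming word."""
    state = []
    for w in words:
        if w in REPAIR_MARKERS and state:
            state.pop()            # self-repair: retract last word
        else:
            state.append(w)
        yield list(state)

steps = list(interpret_incrementally("the red uh blue ball".split()))
# steps[-1] == ["the", "blue", "ball"]
```

The point of the generator is that an interpretation is available at every prefix of the utterance, which is what lets a hearer interrupt or continue mid-sentence.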

Open-Domain Dialogue Understanding

Understanding dialogue is a genuinely hard problem; we humans are good at it, partly because we have a pretty good idea of what might make sense in a given context. Most computer dialogue systems exploit the same insight: because they work in a restricted domain, they can map from words to meaning representations based on what's sensible or possible in that domain; and because they're participants in the dialogue themselves, they can always ask for clarification if they need it. Understanding what people are saying as an overhearer, without knowing much about the domain, is harder -- but this is exactly the problem we face if we want to build something like an automatic meeting assistant (like the system we developed at Stanford on the CALO project for detecting decisions and action items -- see here). I'm interested in developing robust techniques for detecting high-level topic structure, low-level aspects like addressing, and important conversational structures like decision-making and action item assignment.
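To show the shape of the detection task (and nothing more), here is a deliberately naive cue-based tagger for decision and action-item utterances. The cue lists are invented for the example; real detectors of this kind are trained classifiers over much richer features, not hand-written keyword lists.

```python
# Minimal sketch of tagging utterances in an overheard meeting as
# decisions, action items, or neither, using surface cues alone.
# The cue lists are illustrative, not from any real system.

DECISION_CUES = ("we'll", "let's", "agreed", "we decided")
ACTION_CUES = ("by friday", "can you", "you should", "i'll")

def tag_utterance(utterance):
    u = utterance.lower()
    if any(cue in u for cue in DECISION_CUES):
        return "decision"
    if any(cue in u for cue in ACTION_CUES):
        return "action-item"
    return "other"

tags = [tag_utterance(u) for u in [
    "So we decided to go with option B.",
    "Can you send the report by Friday?",
    "It was raining all week.",
]]
# tags == ["decision", "action-item", "other"]
```

The interesting part is everything this sketch leaves out: an overhearer can't rely on domain knowledge to rule interpretations in or out, and can't ask for clarification, which is why robust statistical methods are needed.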

Multi-Device Dialogue Systems

I am also working on an in-car spoken dialogue system project, to help people interact with the increasingly complex multiple devices in their car (stereo, phone, navigation & information systems) without having to divert their eyes or hands from the more critical job of driving. This brings a couple of interesting questions into the equation: firstly, how do we know which device is being addressed at any time (especially given the perennial problem of noisy speech recognition); and secondly, how do we even know if the system is being addressed at all, rather than a passenger? Amongst other things, we're approaching this by combining deep & shallow information (e.g. parse structures with topic classifiers) for increased robustness, while working on intelligent clarification and confirmation strategies.
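The combination of deep and shallow evidence can be pictured as a simple score interpolation, sketched below. The scores, weights, and threshold here are made-up placeholders: imagine a "deep" score from whether the utterance parses as a command to a given device, interpolated with a "shallow" topic classifier's probability for that device.

```python
# Sketch of combining deep (parse-based) and shallow (topic-classifier)
# evidence for device / addressee detection. All numbers are
# illustrative placeholders, not from a real system.

def combined_score(parse_score, topic_score, weight=0.5):
    """Linear interpolation of two independent evidence sources."""
    return weight * parse_score + (1 - weight) * topic_score

def addressed_device(scores_by_device, threshold=0.5):
    """Pick the device with the highest combined score; if nothing
    clears the threshold, assume the system isn't being addressed
    at all (e.g. the driver is talking to a passenger)."""
    device, score = max(scores_by_device.items(), key=lambda kv: kv[1])
    return device if score >= threshold else None

scores = {
    "stereo": combined_score(0.9, 0.7),      # 0.8
    "navigation": combined_score(0.2, 0.4),  # 0.3
}
# addressed_device(scores) == "stereo"
```

The threshold fallback is what handles the second question in the paragraph above: "not addressed at all" is just the case where no device hypothesis is confident enough.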

Clarification Requests

While at King's College London I worked on the ROSSINI project, and my PhD thesis investigated clarification questions: what types people use when, how they should be interpreted, how they can be treated or used by a dialogue system, and what they tell us about semantics in general. I'm still working in this area, particularly on suitable semantic representations, in collaboration with Jonathan Ginzburg. As part of my thesis I built a prototype dialogue system, CLARIE, designed to be able to (a) interpret users' clarification questions and respond suitably, and (b) ask clarification questions in order to learn new words and phrases. One of the things I'm currently working on (with Raquel Fernández) is extending it to incorporate an element of machine learning: using classifiers to determine optimal methods of fragment resolution. In the meantime, you can try the basic (rule-based) thesis version here, but be warned that the grammar is very limited -- it might be worth getting in touch with me first.
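As a rough illustration of what fragment resolution involves, the sketch below chooses a resolution method for a bare clarification fragment like "Paris?" against the prior utterance. A learned classifier would use many features to make this choice; this hand-written stand-in (with invented category labels) just shows the decision being made.

```python
# Hypothetical sketch of choosing a resolution method for a
# clarification fragment. A real classifier-based approach would be
# trained on many features; this rule-based stand-in is illustrative.

def resolve_fragment(fragment, prior_words):
    frag = fragment.rstrip("?").lower()
    if frag in (w.lower() for w in prior_words):
        return "reprise"      # echoes a word from the prior turn
    if fragment.endswith("?"):
        return "clausal"      # queries the content of the prior turn
    return "other"

prior = "I met Sandy in Paris".split()
# resolve_fragment("Paris?", prior) == "reprise"
# resolve_fragment("Why?", prior) == "clausal"
```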