My main research interest is in multimodal interaction and its application to language learning. Multimodality is concerned with the integration of several modalities of communication, such as speech, handwriting, drawing, gaze and 3D hand gestures, in the user interface. A good combination of these modalities has the potential to greatly improve the flexibility, robustness, efficiency and naturalness of human-machine interaction. A multimodal system can also exploit the properties of each individual modality, and of their possible combinations, to reduce recognition error rates (e.g. for speech, handwriting and hand gesture recognition) and to prevent the false interpretation of messages (modality co-operation).
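To give a concrete flavour of modality co-operation, the sketch below shows a simple late-fusion scheme in which confidence scores from two recognisers are combined so that one modality can correct another's likely misrecognition. The command names, scores and weights are purely illustrative and are not taken from any particular system of mine.

```python
# Minimal illustration of late fusion in a multimodal interface:
# each recogniser returns ranked hypotheses with confidence scores,
# and combining them can resolve ambiguities that a single modality
# would misrecognise. All names, scores and weights are hypothetical.

def fuse(speech_hyps, gesture_hyps, w_speech=0.6, w_gesture=0.4):
    """Combine two recognisers' hypotheses by weighted confidence."""
    combined = {}
    for command, score in speech_hyps.items():
        combined[command] = combined.get(command, 0.0) + w_speech * score
    for command, score in gesture_hyps.items():
        combined[command] = combined.get(command, 0.0) + w_gesture * score
    # The command with the highest combined score is selected.
    return max(combined, key=combined.get)

# Example: speech alone would pick "delete", but the accompanying
# pointing gesture tips the decision towards the intended "select".
speech = {"delete": 0.55, "select": 0.45}
gesture = {"select": 0.80, "delete": 0.20}
print(fuse(speech, gesture))  # -> "select"
```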
Currently, I am working on:
(1) Developing tools for the design and implementation of multimodal applications
e.g. Marie-Luce Bourguet, 'Towards a Taxonomy of Error Handling Strategies in Recognition-Based Multimodal Human-Computer Interfaces', Signal Processing, vol. 86, no. 12, December 2006.
(2) Computer-assisted language learning for young bilinguals
e.g. Marie-Luce Bourguet, Manjit Plaha & Nick Bryan-Kinns, 'Computer Assisted Learning for Young Bilinguals', Academic Exchange Quarterly, vol. 9, no. 3, September 2005.