I am currently interested in the application of social network analysis (SNA) to a whole host of interesting problems. In the TRIDEC project, we are looking at the application of SNA to disaster data. One of my students, Maryam Fatemi, is investigating the use of community detection algorithms and recommendation systems.

The image to the left depicts the co-present communities of the Reality Mining dataset, as detected via Bluetooth.

Social Network Analysis
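The community-detection step described above can be sketched with off-the-shelf tools. The graph below is invented for illustration (the participant names, edge weights and structure are not from the Reality Mining data): nodes are participants, and a weighted edge counts how often two phones were co-present over Bluetooth. This is not the project's code, just a minimal example using networkx's greedy modularity algorithm:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-presence graph: an edge means two phones were seen
# together over Bluetooth; the weight counts the number of sightings.
G = nx.Graph()
G.add_weighted_edges_from([
    ("ann", "bob", 12), ("bob", "cat", 9), ("ann", "cat", 7),   # one group
    ("dan", "eve", 11), ("eve", "fay", 8), ("dan", "fay", 10),  # another group
    ("cat", "dan", 1),  # a single chance encounter bridging the two
])

# Greedy modularity maximisation recovers the two densely co-present groups.
communities = greedy_modularity_communities(G, weight="weight")
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```

The same idea scales up: on real co-presence data the detected communities can then feed a recommendation system, as in the student project described above.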

Event detection, adaptive crawling and sense-making in social media are fascinating areas of research. Related to the TRIDEC project, we are currently analysing Twitter datasets (over 60 million tweets) collected during the London 2012 Olympics. This research covers a host of topics, including event detection, social event characterisation and modelling, and adaptive data collection.

Social Media
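As a sketch of one common event-detection technique (a simple volume-burst detector; the counts, window size and threshold below are invented for illustration, and this is not necessarily the method applied to the Olympics data): flag time steps where a keyword's tweet volume spikes far above its recent baseline.

```python
import numpy as np

# Hypothetical hourly tweet counts for one keyword; the spike at hours 6-7
# stands in for an event (e.g. a medal win) lighting up the stream.
counts = np.array([12, 15, 11, 14, 13, 12, 95, 88, 14, 13], dtype=float)

WINDOW = 5       # hours of history used as the moving baseline (assumed)
Z_THRESHOLD = 3  # std-devs above baseline that counts as a burst (assumed)

bursts = []
for t in range(WINDOW, len(counts)):
    history = counts[t - WINDOW:t]
    mu, sigma = history.mean(), history.std()
    if sigma > 0 and (counts[t] - mu) / sigma > Z_THRESHOLD:
        bursts.append(t)

print(bursts)  # indices of hours flagged as bursting
```

Note that once an extreme value enters the baseline window it inflates sigma, so only the onset of a burst is flagged; a production detector would handle that (and trend/seasonality) more carefully.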

We are developing an interactive app for audiences of large events such as festivals, incorporating location-based information, social media and movement information. Apart from offering our users a good time, we are also looking at characterising participants by their movements, co-presence, etc.

Getting jiggy: Crowds, the audience and what makes them tick.

Can we generalise these behaviours to "event types"?

Can we correlate social behaviours to the clique crowd models?

Can we influence behaviours, causing (a) a change of membership, (b) a change of behaviour, or (c) a change in the utilisation of space?

This page contains a small selection of the things I have been working on lately with my students, project RAs and colleagues (listed in reverse chronological order, most recent first):

Developed by Xinyue Wang, the adaptive crawler automatically updates its search terms, generating a more comprehensive set by correlating the traffic patterns of newly emerging keywords against the original predefined ones. We validated the Twitter crawler on the London 2012 Olympics and the 2013 Glastonbury (UK) music festival.


Capturing emergent information: getting the most of your Twitter Crawls.

An efficient, open-source, client-server system that supports both iOS and Android mobile devices. Developed by Kleomenis Katevas, SensingKit is capable of continuously sensing the device's motion (Accelerometer, Gyroscope, Magnetometer), location (GPS) and proximity to other smartphones (Bluetooth Smart / iBeacon). We believe that this platform will be beneficial to all researchers and developers who need to perform mobile sensing in their applications and experiments.

SensingKit: A Multi-Platform Mobile Sensing Framework for Large-Scale Experiments.
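To give a feel for the proximity data such a framework yields: a Bluetooth/iBeacon receiver reports signal strength (RSSI), from which an approximate distance can be estimated. The sketch below is not SensingKit's API; it is a generic log-distance path-loss model, and the calibration values (`tx_power_dbm`, `path_loss_n`) are assumptions.

```python
def estimated_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                         path_loss_n: float = 2.0) -> float:
    """Rough distance in metres from a received signal strength reading.

    tx_power_dbm is the calibrated RSSI at 1 m; path_loss_n models the
    environment (2.0 is free space, higher for cluttered indoor spaces).
    Both are assumed values for this illustration.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

print(round(estimated_distance_m(-59.0), 2))  # at the 1 m calibration -> 1.0
print(round(estimated_distance_m(-79.0), 2))  # 20 dB weaker -> ~10 m
```

RSSI-derived distances are noisy in practice, which is why co-presence studies typically threshold them into coarse bands (immediate / near / far) rather than trusting exact metres.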


Image by MKHMarketing

In this project we explore techniques for building computational models that predict player experience in Augmented Reality mobile games, with a view to content creation. We use player behaviour in game levels to model player experience using supervised learning, and explore to what extent these player models can be used to create content or automatically tune game parameters, personalising the game for an enjoyable experience.

Player experience in Augmented Reality

The image shows the gameplay of an Augmented Reality Treasure Hunt Game developed as part of Vivek Warriar's PhD work within this study.
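The supervised-learning step above can be sketched as follows. The features, labels and model choice here are invented for illustration (a decision tree on synthetic per-level behaviour data, not the project's actual dataset or model): behaviour observed in a level is used to predict whether the player enjoyed it.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic per-level behaviour features (assumed for this sketch):
# columns are [completion_time_s, deaths, items_collected].
X = np.array([
    [ 45, 0, 9], [ 50, 1, 8], [ 60, 1, 7],   # smooth runs
    [180, 7, 2], [200, 9, 1], [170, 6, 3],   # frustrating runs
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = reported enjoyment, 0 = not

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A fitted player model can then steer content generation or parameter
# tuning, e.g. easing a level a player is predicted not to enjoy.
print(model.predict([[55, 1, 8], [190, 8, 2]]))  # -> [1 0]
```

In the real setting the features would come from logged gameplay and the labels from player-reported experience, with proper train/test separation rather than this toy fit.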

'Memory Break' is a Virtual Reality game that uses affective and performance metrics to adapt the game's difficulty level in real time, keeping players in an optimal affective state, or flow state. The game employs two machine learning algorithms to detect the player's arousal and valence levels, extracting features from different physiological and motion sensors. Developed by Daniel Gabana during his PhD research, 'Memory Break' aims to train participants' working memory (WM) while challenging them, adjusting the difficulty level to keep them engaged and immersed.

Affective Gaming for Working Memory Training in Virtual Reality
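The decision step of such an adaptation loop can be sketched in a few lines. This is a hypothetical rule-based illustration, not the game's actual logic (the thresholds and the function are invented): predicted arousal and valence, plus performance, are mapped to a difficulty adjustment that nudges the player back toward the flow channel.

```python
def adjust_difficulty(arousal: float, valence: float,
                      success_rate: float) -> int:
    """Return -1 (ease off), 0 (hold), or +1 (raise difficulty).

    Inputs are assumed to be in [0, 1]; the cut-offs below are
    illustrative, not calibrated values from the study.
    """
    if arousal > 0.7 and valence < 0.4:       # anxious/frustrated: too hard
        return -1
    if arousal < 0.3 and success_rate > 0.9:  # bored and cruising: too easy
        return +1
    return 0                                   # plausibly in the flow channel

print(adjust_difficulty(0.8, 0.2, 0.5))   # -> -1
print(adjust_difficulty(0.1, 0.6, 0.95))  # -> 1
print(adjust_difficulty(0.5, 0.6, 0.7))   # -> 0
```

In 'Memory Break' the arousal and valence inputs come from machine-learning classifiers over physiological and motion-sensor features, rather than being supplied directly as they are here.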

VR Immersion, Cognition and Serious Games.

Furthering our work in VR, Jack Ratcliffe is exploring the impact of different immersion technologies on the efficacy of dialogue-based, automatic-speech-recognition language learning environments. He is examining whether the degree of immersion of the technology (2D, 3D, head-mounted VR) used within an automated conversational Japanese learning environment affects learning efficacy.

Image to come.