Dr. Ryan Layne

I am a postdoctoral researcher working within the Risk Information Management Group with Dr. Timothy Hospedales.

My research and publications focus broadly on human re-identification from surveillance video data, mainly using machine learning techniques. Beyond that, I'm interested in defence research using robotics, artificial intelligence and machine learning. I'm also fond of the philosophy of artificial intelligence and of cellular automata.

My PhD was with the Computer Vision Group at Queen Mary University of London, supervised by Prof. Shaogang Gong and Dr. Tao Xiang, and co-supervised by Dr. Timothy Hospedales.

Questions or comments on our research?

Publications

"Investigating Open-World Person Re-identification Using a Drone"

R. Layne, T.M. Hospedales and S. Gong. Workshop on Visual Surveillance and Re-identification, ECCV, Switzerland, 2014.
Download Paper as PDF | Download MRP dataset
Person re-identification is now one of the most topical and intensively studied problems in computer vision due to its challenging nature and critical role, underpinning many multi-camera surveillance tasks. A fundamental assumption in almost all existing re-identification research is that cameras are in fixed emplacements, allowing the modelling of camera and inter-camera properties to improve re-identification. In this paper, we present an introductory study pushing re-identification in a different direction: re-identification on a mobile platform such as a drone. We formalise some variants of the standard formulation for re-identification that are more relevant for mobile re-identification.

We introduce the first dataset for mobile re-identification, and we use it to elucidate the unique challenges of mobile re-identification. Finally, we re-evaluate some conventional wisdom about re-id models in the light of these challenges and suggest future avenues for research in this area.

"Re-identification: Hunting Attributes in the Wild"

R. Layne, T.M. Hospedales and S. Gong. British Machine Vision Conference, Nottingham, England, 2014. (Oral presentation)
Download Paper as PDF | View Slides | Download dataset (soon)
Person re-identification is a crucial capability underpinning many applications of public-space video surveillance. Recent studies have shown the value of learning semantic attributes as a discriminative representation for re-identification. However, existing attribute representations do not generalise across camera deployments. Thus, this strategy currently requires the prohibitive effort of annotating a vector of person attributes for each individual in a large training set for each given deployment/dataset. In this paper we take a different approach and automatically discover a semantic attribute ontology, and learn an effective associated representation by crawling large volumes of internet data. In addition to eliminating the necessity for per-dataset annotation, by training on a much larger and more diverse array of examples this representation is more view-invariant and generalisable than attributes trained at conventional small scales.

We show that these automatically discovered attributes provide a valuable representation that significantly improves re-identification performance on a variety of challenging datasets.
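
To make the pipeline concrete, here is a minimal sketch, assuming placeholder attribute names, a generic extract_features() helper and weakly tagged crawled images (none of which are the paper's actual ontology or code), of training one classifier per discovered attribute and stacking the scores into a descriptor:

    # Illustrative sketch only: attribute names, extract_features() and the
    # crawled training data are hypothetical stand-ins, not the paper's.
    import numpy as np
    from sklearn.svm import LinearSVC

    ATTRIBUTES = ["wears-backpack", "long-hair", "red-top"]  # hypothetical ontology

    def train_attribute_bank(images, tags, extract_features):
        """Train one binary classifier per attribute from weakly tagged web images."""
        X = np.vstack([extract_features(im) for im in images])
        bank = {}
        for attr in ATTRIBUTES:
            y = np.array([attr in t for t in tags], dtype=int)  # weak labels from image tags
            bank[attr] = LinearSVC(C=1.0).fit(X, y)
        return bank

    def attribute_descriptor(image, bank, extract_features):
        """Stack per-attribute confidence scores into a semantic descriptor."""
        x = extract_features(image).reshape(1, -1)
        return np.array([bank[a].decision_function(x)[0] for a in ATTRIBUTES])

At matching time, gallery images can then be ranked by their distance to the probe in this attribute space, alongside or instead of conventional low-level features.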

"Attributes-based Re-identification"

R. Layne, T.M. Hospedales and S. Gong. In Gong, Cristani, Yan and Loy (Eds.), Person Re-Identification, Springer, December 2013.
Book chapter as PDF | Download annotations, data, and example script | Full text at Springer.com
Automated person re-identification using only visual information from public-space CCTV video is challenging for many reasons, such as poor image resolution and the difficulty of camera calibration. More critical still, the majority of clothing worn in public spaces tends to be non-discriminative and therefore of limited disambiguation value. Most re-identification techniques developed so far have relied on low-level visual-feature matching approaches that aim to return matching gallery detections earlier in the ranked list of results. However, for many applications an initial probe image may not be available, and a low-level feature representation may not be sufficiently invariant to changes in viewing conditions while remaining discriminative for re-identification.

In this chapter, we show how mid-level "semantic attributes" can be computed for person description. We further show how this attribute-based description can be used in synergy with low-level feature descriptions to improve re-identification accuracy when an attribute-centric distance measure is employed. Moreover, we discuss a "zero-shot" scenario in which a visual probe is unavailable but re-identification can still be performed with a user-provided semantic attribute description.
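
As a rough illustration of these two uses, the sketch below blends low-level and attribute distances and handles the zero-shot case; the fusion weight and plain Euclidean distances are illustrative assumptions, not the learned attribute-centric measure described in the chapter:

    # Sketch under stated assumptions: alpha and the Euclidean distances are
    # illustrative, not the chapter's learned attribute-centric measure.
    import numpy as np

    def fused_distance(probe_feat, gallery_feats, probe_attr, gallery_attrs, alpha=0.5):
        """Blend low-level feature distance with attribute-profile distance."""
        d_feat = np.linalg.norm(gallery_feats - probe_feat, axis=1)
        d_attr = np.linalg.norm(gallery_attrs - probe_attr, axis=1)
        return (1 - alpha) * d_feat + alpha * d_attr

    def zero_shot_rank(query_attr, gallery_attrs):
        """No probe image: rank gallery people against a user-provided attribute vector."""
        d_attr = np.linalg.norm(gallery_attrs - query_attr, axis=1)
        return np.argsort(d_attr)

    # e.g. query_attr = np.array([1, 0, 1])  # "backpack, short hair, red top"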

"Domain Transfer for Person Re-identification"

R. Layne, T.M. Hospedales and S. Gong. In Proc. ACM International Conference on Multimedia, Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams (ARTEMIS 2013).
Download Paper as PDF
Automatic person re-identification is a crucial capability underpinning many applications in public-space video surveillance. It is challenging due to intra-class variation in person appearance when observed in different views, together with limited inter-class variability. Various recent approaches have made great progress in re-identification performance using discriminative learning techniques. However, these approaches are fundamentally limited by the requirement of extensive annotated training data for every pair of views. For practical re-identification, this is an unreasonable assumption, as annotating extensive volumes of data for every pair of cameras to be re-identified may be impossible or prohibitively expensive.

In this paper we move toward relaxing this strong assumption by investigating flexible multi-source transfer of re-identification models across camera pairs. Specifically, we show how to leverage prior re-identification models learned for a set of source view pairs (domains), and flexibly combine these to obtain good re-identification performance in a target view pair (domain) with greatly reduced training data requirements in the target domain.
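
A minimal sketch of that idea, assuming each source model exposes a match score for a probe-gallery feature pair (the interface and the least-squares weighting are illustrative assumptions, not the paper's exact formulation):

    # Sketch: combine re-id models trained on source camera pairs, with weights
    # fit on the small amount of labelled target-domain data.
    import numpy as np

    def combined_score(pair_feat, source_models, weights):
        """Weighted sum of match scores from models learned on source view pairs."""
        scores = np.array([m.score(pair_feat) for m in source_models])  # assumed interface
        return float(np.dot(weights, scores))

    def fit_weights(target_pairs, target_labels, source_models):
        """Least-squares weights so the combination agrees with the few target labels."""
        S = np.array([[m.score(p) for m in source_models] for p in target_pairs])
        w, *_ = np.linalg.lstsq(S, np.asarray(target_labels, dtype=float), rcond=None)
        return w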

"Towards Person Identification and Re-Identification With Attributes"

R. Layne, T.M. Hospedales and S. Gong. ECCV Workshop on Re-identification (Re-Id), Florence, Italy, 2012.
Download Paper as PDF
Visual identification of an individual in a crowded environment observed by a distributed camera network is critical to a variety of tasks including commercial space management, border control, and crime prevention. Automatic re-identification of a human from public-space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity in people's appearance, compounded by low-resolution and poor-quality video data. Relying on a probe image for re-identification is limiting, as a linguistic description of an individual's profile may often be the only available cue.

In this work, we show how mid-level semantic attributes can be used synergistically with low-level features for both identification and re-identification. Specifically, we learn an attribute-centric representation to describe people, and a metric for comparing attribute profiles to disambiguate individuals. This differs from existing approaches to re-identification, which rely purely on bottom-up statistics of low-level features: it offers improved robustness to view and lighting changes, and it can be used for identification as well as re-identification. Experiments demonstrate the flexibility and effectiveness of our approach compared to existing feature representations when applied to benchmark datasets.

"Person Re-Identification by Attributes"

R. Layne, T.M. Hospedales and S. Gong. British Machine Vision Conference, Surrey, England, 2012.
Download Paper as PDF | Download Annotations
Visually identifying a target individual reliably in a crowded environment observed by a distributed camera network is critical to a variety of tasks in managing business information, border control, and crime prevention. Automatic re-identification of a human candidate from public space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity between different people, compounded by low-resolution and poor quality video data.

In this work, we propose a novel method for re-identification that learns a selection and weighting of mid-level semantic attributes to describe people. Specifically, the model learns an attribute-centric, parts-based feature representation. This differs from and complements existing low-level features for re-identification, which rely purely on bottom-up statistics for feature selection and are limited in their ability to reliably discriminate and identify target people across different camera views under the partial occlusion caused by crowding. Our experiments demonstrate the effectiveness of our approach compared to existing feature representations when applied to benchmark datasets.
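
As a loose sketch of the weighting idea (the per-attribute weights here are simply assumed given, for example from a reliability estimate, standing in for the selection and weighting actually learned in the paper):

    # Sketch only: weighted Euclidean distance between attribute profiles;
    # the weights stand in for the learned selection/weighting.
    import numpy as np

    def weighted_attribute_distance(a, b, weights):
        """Weighted Euclidean distance between two attribute profiles."""
        return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

    def rank_gallery(probe_attrs, gallery_attrs, weights):
        """Rank gallery entries by weighted attribute distance to the probe."""
        dists = [weighted_attribute_distance(probe_attrs, g, weights) for g in gallery_attrs]
        return np.argsort(dists)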

About

Education

      PhD, Computer Vision, Queen Mary University of London
      Thesis: Real-world Human Re-identification: Attributes, and Beyond  Download Thesis
      Supervisors: Prof. Shaogang Gong, Dr. Tao Xiang, and Dr. Timothy Hospedales.
      Thesis Defence: Prof. Paolo Remagnino (Kingston), Dr. Anil Bharath (Imperial)

      MSc, Cognitive Computing, Goldsmiths University of London
      Thesis: A Dynamic Connectionist Planning System Without Symbols
      Supervisor: Prof. Mark Bishop
      Thesis Defence: Prof. Kevin Warwick

      BSc, Psychology, Roehampton University of London
      Dissertation: The Regret Effect: Avoidance of Next-best Choices
      Supervisor: Dr. Amanda Holmes

Contacting me

r.d.Click here if you're human@qmul.ac.uk