QMUL Multiview Face Dataset


The Queen Mary University of London (QMUL) Multiview Face Dataset consists of automatically aligned, cropped and normalised face images of 48 people: 37 in greyscale at 100x100 pixels and 11 in colour at 56x56 pixels. Each person has 133 facial images covering a viewsphere of +/-90 degrees in yaw and +/-30 degrees in tilt at 10-degree increments. An example is shown below.
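The pose grid implied by these numbers can be enumerated directly: 19 yaw angles times 7 tilt angles gives the 133 images per person. Below is a minimal Python sketch of that grid; the angle ranges and step size come from the description above, and no particular file naming scheme is assumed.

```python
import itertools

# Pose grid covered by the dataset: yaw in [-90, +90] degrees and
# tilt in [-30, +30] degrees, both sampled at 10-degree increments.
YAW_ANGLES = range(-90, 91, 10)    # 19 yaw angles
TILT_ANGLES = range(-30, 31, 10)   # 7 tilt angles

# 19 x 7 = 133 (yaw, tilt) combinations per person, matching the
# 133 facial images described above.
pose_grid = list(itertools.product(YAW_ANGLES, TILT_ANGLES))
assert len(pose_grid) == 133

# Print the first few poses as a sanity check.
for yaw, tilt in pose_grid[:5]:
    print(f"yaw={yaw:+d} deg, tilt={tilt:+d} deg")
```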


The dataset can be downloaded from here (40MB).

An example of tracking a face continuously using a model built from this dataset is shown in this video (2.3MB). More details are given in J. Sherrah and S. Gong. "Fusion of Perceptual Cues for Robust Tracking of Head Pose and Position". Pattern Recognition, Vol. 34, No. 8, pp. 1565-1572, 2001.

References:

1. S. Gong, S. McKenna and A. Psarrou. Dynamic Vision: From Images to Face Recognition, 364 pages, Imperial College Press, May 2000.

2. S. Gong, S. McKenna and J.J. Collins. "An Investigation into Face Pose Distributions". In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pp. 265-270, Killington, Vermont, USA, October 1996.

3. S. Gong, Eng-Jon Ong and S. McKenna. "Learning to Associate Faces across Views in Vector Space of Similarities to Prototypes". In Proc. British Machine Vision Conference, pp. 54-63, Southampton, UK, September 1998.