Modelling Faces Dynamically in a Spatio-Temporal Context

Yongmin Li, Shaogang Gong and Heather Liddell
  1. Multi-View Dynamic Face Models

  Modelling faces under large pose variation and modelling faces dynamically over time in video sequences are two challenging problems in face recognition and facial analysis. To address them, this work presents a novel, comprehensive multi-view dynamic face model consisting of a 3D shape model, a shape-and-pose-free texture model, and an affine geometrical model. By fitting the model to a face image or a video sequence, the identity information and the geometrical information of a face are extracted separately: the former is crucial for face recognition and facial analysis, while the latter can be used for face tracking and alignment.
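  The separation of geometry from identity can be illustrated with a minimal 2D sketch: an affine transform (the geometrical part) is estimated by least squares from model landmarks to observed landmarks, and the residual, alignment-free shape is expressed in a PCA shape basis (the identity part). All names, dimensions and data below are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_face_model(points, mean_shape, shape_basis):
    """Separate geometry from identity for a 2D landmark shape.

    points:      observed landmarks, shape (n, 2)
    mean_shape:  model mean landmarks, shape (n, 2)
    shape_basis: PCA shape modes, shape (k, 2n), rows are flattened modes
    Returns the affine geometry (A, t) and identity parameters b.
    """
    n = len(points)
    # Least-squares affine transform mapping the mean shape onto the image
    X = np.hstack([mean_shape, np.ones((n, 1))])       # (n, 3)
    P = np.linalg.lstsq(X, points, rcond=None)[0]      # (3, 2)
    A, t = P[:2].T, P[2]
    # Undo the geometry, then project the residual onto the shape basis
    aligned = (points - t) @ np.linalg.inv(A).T
    b = shape_basis @ (aligned - mean_shape).ravel()
    return (A, t), b

# Illustrative data: a 5-point mean shape viewed under a known affine map
mean_shape = np.array([[0., 0.], [1., 0.], [2., 0.], [0.5, 1.], [1.5, 1.]])
basis = np.eye(1, 10)                                  # one toy shape mode
A_true = np.array([[1.2, 0.1], [-0.05, 0.9]])
t_true = np.array([3.0, 4.0])
observed = mean_shape @ A_true.T + t_true
(A, t), b = fit_face_model(observed, mean_shape, basis)
```

  Because the observed shape here is an exact affine image of the mean, the recovered transform matches the true one and the identity parameters come out as zero, showing that pose change alone does not alter the identity description.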

  2. Extracting Discriminant Features of Faces Using Kernel Discriminant Analysis

  PCA, LDA and KPCA have been widely used in pattern recognition, but PCA and LDA are limited to linear problems, while KPCA, although non-linear, captures the overall variance of all patterns rather than the discriminant variance. To efficiently extract the non-linear discriminant features of multi-class patterns with severe non-linearity, Kernel Discriminant Analysis (KDA) is developed in this work. Applying this method to multi-view face recognition yields significant improvements in both robustness and accuracy.
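  A minimal sketch of the idea behind KDA, following the standard kernel Fisher discriminant formulation: between-class and within-class scatter are formed in kernel space and the discriminant directions are the leading eigenvectors of the resulting generalised eigenproblem. The RBF kernel, regularisation constant and toy dataset are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kda_fit(X, y, gamma=1.0, reg=1e-3):
    """Find discriminant directions (as kernel expansion coefficients)
    maximising between-class vs within-class scatter in feature space."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    classes = np.unique(y)
    m_star = K.mean(axis=1, keepdims=True)          # overall kernel mean
    M = np.zeros((n, n))                            # between-class scatter
    N = np.zeros((n, n))                            # within-class scatter
    for c in classes:
        idx = np.where(y == c)[0]
        Kc = K[:, idx]
        m_c = Kc.mean(axis=1, keepdims=True)
        M += len(idx) * (m_c - m_star) @ (m_c - m_star).T
        J = np.eye(len(idx)) - np.ones((len(idx), len(idx))) / len(idx)
        N += Kc @ J @ Kc.T
    N += reg * np.eye(n)                            # regularise for stability
    # Leading eigenvectors of N^{-1} M give the discriminant directions
    vals, vecs = np.linalg.eig(np.linalg.solve(N, M))
    order = np.argsort(-vals.real)
    A = vecs[:, order[:len(classes) - 1]].real
    return A, X, gamma

def kda_project(model, Xnew):
    A, Xtrain, gamma = model
    return rbf_kernel(Xnew, Xtrain, gamma) @ A

# Toy non-linear two-class problem: a cluster inside a noisy ring
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.1, (20, 2))
ang = np.linspace(0, 2 * np.pi, 20, endpoint=False)
X1 = np.c_[2 * np.cos(ang), 2 * np.sin(ang)] + rng.normal(0, 0.1, (20, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
Z = kda_project(kda_fit(X, y, gamma=0.5), X)
```

  No linear projection can separate these two classes, but in the kernel-induced feature space a single discriminant direction pulls them apart, which is exactly the property exploited for multi-view face patterns.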

  3. Video-Based Face Recognition Using Identity Surfaces

  Recognising faces across views is more challenging than recognition from a fixed view, e.g. the frontal view, because of the severe non-linearity caused by rotation in depth, self-occlusion and self-shading. To model the variation caused by rotation in depth, we construct identity surfaces of faces in a discriminant feature space from a sparse sample of multi-view face images. Face recognition is then performed by computing either the pattern distances to the identity surfaces or the trajectory distances between the object and model trajectories tracked from a video sequence. Experimental results show that this approach achieves an accurate recognition rate, and that using trajectory distances is more robust, since the trajectories encode spatio-temporal information and accumulate evidence about the moving face across the video input.
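  The two distance measures can be sketched as follows, with an identity surface approximated by its sparse sample points (nearest-sample distance standing in for the distance to the interpolated surface used in the original work). The identities, surface samples and trajectory below are invented for illustration.

```python
import numpy as np

def pattern_distance(x, surface_samples):
    # Distance from one feature vector to a sparsely sampled identity
    # surface, approximated by the nearest sampled point
    return np.min(np.linalg.norm(surface_samples - x, axis=1))

def trajectory_distance(obj_traj, surface_samples, weights=None):
    # Accumulate pattern distances along the tracked object trajectory;
    # optional weights let more confident frames count for more
    d = np.array([pattern_distance(x, surface_samples) for x in obj_traj])
    w = np.ones(len(d)) / len(d) if weights is None else np.asarray(weights)
    return float(w @ d)

def recognise(obj_traj, surfaces):
    # surfaces: dict identity -> sampled surface points; choose the
    # identity whose surface the whole trajectory stays closest to
    return min(surfaces, key=lambda k: trajectory_distance(obj_traj, surfaces[k]))

# Illustrative 2D "feature space" with two identity surfaces
surfaces = {
    "A": np.array([[0., 0.], [1., 0.], [0., 1.]]),
    "B": np.array([[5., 5.], [6., 5.], [5., 6.]]),
}
traj = np.array([[0.1, 0.1], [0.9, 0.1], [0.2, 0.8]])  # stays near A
```

  Averaging over the whole trajectory is what makes the decision robust: a single noisy frame that drifts towards the wrong surface is outvoted by the accumulated evidence from the rest of the sequence.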

  4. Support Vector Machine Based Multi-View Face Detection and Pose Estimation

  A view-based Support Vector Machine (SVM) face model is presented in this work. Face detection is performed by exhaustively scanning the image. First, the pose of each candidate pattern is estimated using SVM regression. Then a piece-wise multi-view face model, comprising a set of SVM classifiers trained on different views, is employed for detection. Because the pose information is used explicitly to select the appropriate classifier, the method achieves a significant improvement in accuracy and a reduction in computation.
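  The two-stage dispatch logic might look like the sketch below, with stub functions standing in for the trained SVM pose regressor and the per-view SVM classifiers; the yaw ranges and the stubs are assumptions made for illustration.

```python
import numpy as np

# Hypothetical yaw ranges (degrees) covered by each view-specific classifier
VIEW_RANGES = [(-90, -30), (-30, 30), (30, 90)]

def detect(patch, pose_regressor, view_classifiers):
    """Estimate pose first, then run only the matching view's classifier."""
    yaw = pose_regressor(patch)
    for (lo, hi), clf in zip(VIEW_RANGES, view_classifiers):
        if lo <= yaw < hi:
            return clf(patch), yaw
    return False, yaw            # pose outside the modelled range: reject

# Stub models standing in for trained SVMs
pose_regressor = lambda patch: float(patch[0])   # "yaw" read off feature 0
frontal_clf = lambda patch: patch[1] > 0         # accepts "face-like" patches
profile_clf = lambda patch: False                # rejects everything here
classifiers = [profile_clf, frontal_clf, profile_clf]

found, yaw = detect(np.array([10.0, 1.0]), pose_regressor, classifiers)
found2, _ = detect(np.array([50.0, 1.0]), pose_regressor, classifiers)
```

  The computational saving comes from the dispatch itself: each scanned patch is evaluated by exactly one view-specific classifier rather than by every classifier in the set.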

  5. Combining Support Vector Machine and Eigenspace Modelling for Multi-View Face Detection

  The eigenface method and the Support Vector Machine (SVM) method are two widely used techniques in face detection. The former estimates the probability distribution of face patterns as a unimodal Gaussian; it is fast but less accurate, since such a model may be too simplistic, especially for multi-view face patterns whose distribution is usually irregular. The SVM method, which models the boundary between face and non-face patterns, is more accurate but slower. This work combines the two methods to achieve an improved overall performance, speeding up the computation while maintaining accuracy at an acceptable level.
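  A coarse-to-fine combination of this kind can be sketched as follows: a cheap eigenspace stage (PCA reconstruction error, the "distance from face space") filters out most candidates, and only survivors reach the slower SVM. The data, dimensions, threshold and the SVM stub are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fit_eigenspace(faces, k=5):
    # Learn the mean and the top-k eigenfaces from training face vectors (rows)
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def residual(x, mean, basis):
    # "Distance from face space": reconstruction error after projecting
    # the pattern onto the eigenface subspace
    c = basis @ (x - mean)
    return np.linalg.norm((x - mean) - basis.T @ c)

def cascade_detect(x, mean, basis, svm_decision, tau):
    # Stage 1: cheap eigenspace test rejects most non-face patterns early
    if residual(x, mean, basis) > tau:
        return False
    # Stage 2: the slower but more accurate SVM makes the final decision
    return svm_decision(x) > 0

# Illustrative data: "faces" lie in a 3-dimensional subspace of R^20
rng = np.random.default_rng(0)
modes = rng.normal(size=(3, 20))
faces = rng.normal(size=(30, 3)) @ modes + 10.0
mean, basis = fit_eigenspace(faces, k=5)
svm_stub = lambda x: 1.0                       # stand-in for the trained SVM
face_like = rng.normal(size=3) @ modes + 10.0  # lies in the face subspace
noise = rng.normal(size=20) + 10.0             # generic non-face pattern
```

  The speed-up comes from ordering the stages: the eigenspace residual is a few matrix-vector products, so in an exhaustive scan the vast majority of windows never pay the cost of the SVM evaluation.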

Maintained by Yongmin Li   28/02/2001