Real-Time View Alignment for Appearance-based Recognition

Object recognition in dynamic scenes using a view-based representation requires establishing image correspondences across successive frames of a moving object, which may undergo both affine and viewpoint transformations. Obtaining consistent dense image correspondence, however, is both problematic and expensive: changes in viewpoint cause self-occlusions that prevent complete sets of correspondences from being established. In practice, only sparse correspondence can be established quickly, and only for a carefully chosen set of feature points. To achieve near real-time performance, an entirely different approach is preferable, one that does not depend on reliable feature detection and tracking. Holistic, texture-only templates can be used instead. This assumes that the object of interest is approximately rigid (non-rigid facial motion carries little identity information) and therefore permits a relatively simple parametric model. Furthermore, if the model is built from data covering a large set of viewpoints, it can in principle recover pose change as well. Likewise, if it is trained under different illuminations, it can operate under changing lighting conditions. We developed a novel real-time approach to view alignment for appearance-based recognition [1, 2]. Our approach uses an integrated scheme for view alignment (illustrative sketches follow the list below) which takes the following considerations into account:

  1. the use of both shape and texture in eigenspace, combined in a simple manner, relaxes the rigidity assumption without adding significant computational cost;
  2. a process for effective bootstrapping;
  3. parameter recovery with selective attention;
  4. affine parameter estimation using a dynamically updated, viewpoint-centred eigenspace;
  5. parameter prediction.
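
To make the holistic, texture-only template idea concrete, the following is a minimal sketch, not the system of [1, 2]: it builds a plain PCA eigenspace over vectorised training views and scores a candidate patch by its distance from that eigenspace. The names (build_eigenspace, distance_from_eigenspace) and the use of NumPy are illustrative assumptions.

import numpy as np

def build_eigenspace(views, k):
    # views: (n_views, h*w) array of flattened, intensity-normalised
    # training templates sampled over viewpoint (and illumination).
    mean = views.mean(axis=0)
    centred = views - mean
    # Thin SVD of the centred data; rows of vt are the eigen-templates.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]               # mean: (h*w,), basis: (k, h*w)

def distance_from_eigenspace(patch, mean, basis):
    # Project a candidate patch into the eigenspace and measure how much
    # of it the k-dimensional appearance model fails to explain.
    c = patch - mean
    coeffs = basis @ c                # eigenspace coordinates
    residual = c - basis.T @ coeffs   # component outside the model
    return np.linalg.norm(residual)

Because the training set spans many viewpoints and illuminations, a low reconstruction error indicates that the candidate patch is a well-aligned view of the modelled object.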
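
A second hedged sketch illustrates how parameter prediction and selective search (items 3 and 5) can drive alignment, reusing distance_from_eigenspace from the sketch above. For brevity it restricts the affine search to integer translation around a predicted position; the actual scheme in [1, 2] estimates full affine parameters against a dynamically updated, viewpoint-centred eigenspace. The helpers align and predict are hypothetical names.

import numpy as np

def align(frame, predicted, mean, basis, patch_shape, radius=4):
    # Evaluate integer translations around the predicted location and keep
    # the candidate patch with the smallest eigenspace reconstruction error.
    # Assumes the whole search window lies inside the frame.
    h, w = patch_shape
    y0, x0 = predicted
    best_err, best_pos = np.inf, predicted
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            patch = frame[y:y + h, x:x + w].astype(float).ravel()
            err = distance_from_eigenspace(patch, mean, basis)
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos, best_err

def predict(prev_pos, velocity):
    # Constant-velocity prediction: seed the next frame's search with the
    # position extrapolated from recent motion, keeping the search window
    # (and hence per-frame cost) small.
    return (prev_pos[0] + velocity[0], prev_pos[1] + velocity[1])

Restricting the search to a small window around the prediction is what keeps the per-frame cost compatible with real-time operation.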