Contextual weighting of patches for local matching in still-to-video face recognition

Still-to-video face recognition (FR) systems for watchlist screening seek to recognize individuals of interest from faces captured over a network of video surveillance cameras. Screening faces against a watchlist is a challenging application because only a limited number of reference stills are available per individual during enrollment, and the appearance of faces captured in videos changes from one camera to another due to variations in illumination, pose, blur, scale, expression, and occlusion. To improve the robustness of FR systems, several local matching techniques have been proposed that rely on static or dynamic weighting of facial patches. However, these approaches are not well suited to watchlist screening, where capture conditions vary significantly across camera fields of view (FoV). In this paper, a new technique is proposed that dynamically weights facial patches based on video data collected a priori from the specific operational domain (camera FoV) and on image quality assessment. Results obtained on videos from the Chokepoint dataset indicate that the proposed approach can significantly outperform reference local matching methods because patch weights tend to increase for discriminant facial regions.
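To make the general idea concrete, the following is a minimal sketch of quality-driven patch weighting in local matching. It is not the paper's method: the function names, the per-patch quality scores, and the descriptor representation are all illustrative assumptions.

```python
import numpy as np

def contextual_patch_weights(quality_scores):
    """Turn per-patch quality scores (e.g., estimated a priori from videos
    of the target camera FoV) into normalized patch weights.
    Illustrative only; the paper's actual weighting scheme may differ."""
    q = np.asarray(quality_scores, dtype=float)
    q = np.clip(q, 0.0, None)  # discard negative scores
    if q.sum() == 0:
        return np.full(q.shape, 1.0 / q.size)  # fall back to uniform weights
    return q / q.sum()

def weighted_local_match(ref_patches, probe_patches, weights):
    """Score a probe face against a reference still by combining per-patch
    distances with contextual weights (lower score = better match).
    ref_patches, probe_patches: (n_patches, feat_dim) descriptor arrays.
    weights: (n_patches,) weights summing to 1."""
    per_patch_dist = np.linalg.norm(ref_patches - probe_patches, axis=1)
    return float(np.dot(weights, per_patch_dist))

# Hypothetical usage: 9 facial patches with 64-dim descriptors, random data.
rng = np.random.default_rng(0)
ref = rng.standard_normal((9, 64))
probe = ref + 0.1 * rng.standard_normal((9, 64))  # noisy probe capture
w = contextual_patch_weights(rng.uniform(0.2, 1.0, size=9))
print(weighted_local_match(ref, probe, w))
```

In this sketch, patches whose quality scores are higher in the operational domain contribute more to the match score, which mirrors the stated intuition that weights should grow for discriminant facial regions.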