Replayed Video Attack Detection Based on Motion Blur Analysis
Face presentation attacks are among the main threats to face recognition systems, and many presentation attack detection (PAD) methods have been proposed in recent years. Although these methods achieve strong performance against specific intrusion modes, replayed video attacks remain difficult to address, because replayed fake faces contain a variety of liveness signals, such as eye blinking and facial expression changes. Replayed video attacks occur when attackers attempt to invade biometric systems by presenting face videos in front of the cameras, and these videos are usually displayed on a liquid-crystal display (LCD) screen. Owing to the smearing effects and movements of the LCD, videos captured from real faces and from replayed fake faces exhibit different motion blurs, reflected mainly in blur intensity variation and blur width. Based on these observations, a motion blur analysis-based method is proposed to address the replayed video attack problem. We first present a 1D convolutional neural network (CNN), consisting of a series of 1D convolutional and pooling filters, to describe motion blur intensity variation in the time domain. Then, a local similar pattern (LSP) feature is introduced to extract blur width. Finally, the features extracted by the 1D CNN and the LSP descriptor are fused to detect replayed video attacks. Extensive experiments on two standard face PAD databases, i.e., Replay-Attack and OULU-NPU, show that the proposed motion blur analysis-based method significantly outperforms state-of-the-art methods and exhibits excellent generalization capability.
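The temporal branch described above can be illustrated with a minimal sketch: one 1D convolution and one max-pooling stage applied to a per-frame blur-intensity sequence. The kernel weights, layer sizes, and the synthetic input signal below are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as used in CNNs)."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

def max_pool1d(signal, size):
    """Non-overlapping 1D max pooling."""
    n = len(signal) // size
    return signal[:n * size].reshape(n, size).max(axis=1)

# Hypothetical stand-in for the per-frame motion blur intensity of a
# captured face video (64 frames of synthetic data).
rng = np.random.default_rng(0)
intensity = rng.random(64)

# One conv + ReLU + pool stage; a full model would stack several such
# stages and feed the result to a classifier, fused with the LSP feature.
kernel = np.array([0.25, 0.5, 0.25])   # illustrative smoothing kernel
feat = max_pool1d(np.maximum(conv1d(intensity, kernel), 0.0), 2)
print(feat.shape)  # (31,)
```

A real implementation would learn the kernels from data (e.g., with a deep-learning framework) rather than fix them by hand; the sketch only shows how convolution and pooling reduce a frame-level intensity sequence to a compact temporal feature.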