Learning to detect genuine versus posed pain from facial expressions using residual generative adversarial networks

We present a novel approach based on a Residual Generative Adversarial Network (R-GAN) to discriminate genuine pain expressions from posed ones by magnifying subtle changes in the face. In addition to the adversarial task, the discriminator network in R-GAN estimates the intensity level of the pain. Moreover, we propose a novel Weighted Spatiotemporal Pooling (WSP) scheme to capture and encode the appearance and dynamics of a given video sequence into an image map. In this way, we are able to transform any video into an image map that embeds subtle variations in facial appearance and dynamics, which allows any model pre-trained on still images to be used for video analysis. Extensive experiments show that our proposed framework achieves promising results compared to state-of-the-art approaches on three benchmark databases, i.e., UNBC-McMaster Shoulder Pain, BioVid Heat Pain, and STOIC.
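
The abstract does not specify how WSP is computed; as a rough, hypothetical illustration of the general idea (collapsing a clip into a single weighted image map, with higher weight on frames that deviate from the temporal mean so subtle changes are emphasized), one might sketch:

```python
import numpy as np

def weighted_spatiotemporal_pooling(video, weights=None):
    """Collapse a video of shape (T, H, W, C) into one image map (H, W, C)
    via a weighted average over time.

    The weighting scheme below is an illustrative stand-in, NOT the paper's
    WSP: frames that differ most from the temporal mean receive higher
    weight, so subtle facial variations contribute more to the map."""
    video = np.asarray(video, dtype=np.float64)
    if weights is None:
        mean_frame = video.mean(axis=0)                        # (H, W, C)
        # per-frame deviation from the temporal mean
        dev = np.abs(video - mean_frame).mean(axis=(1, 2, 3))  # (T,)
        weights = np.exp(dev) / np.exp(dev).sum()              # softmax over frames
    # weighted sum over the time axis -> single image map
    return np.tensordot(weights, video, axes=1)                # (H, W, C)

# toy clip: 8 frames of 4x4 RGB noise
rng = np.random.default_rng(0)
clip = rng.random((8, 4, 4, 3))
image_map = weighted_spatiotemporal_pooling(clip)
print(image_map.shape)  # (4, 4, 3)
```

The resulting map has the same spatial shape as one frame, so it can be fed directly to any image classifier pre-trained on still images, which is the property the abstract relies on.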