Face Video Generation from a Single Image and Landmarks

In this paper, we are concerned with the challenging problem of producing a full image sequence of a deformable face given only an image and generic facial motions encoded by a set of sparse landmarks. To this end, we build upon recent breakthroughs in image-to-image translation such as pix2pix, CycleGAN, and StarGAN, which train Deep Convolutional Neural Networks (DCNNs) to map images between different domains (i.e., images having different labels), and we propose a new architecture that is driven not by labels but by spatial maps, namely facial landmarks. In particular, we propose MotionGAN, which transforms an input face image into a new one according to a heatmap of target landmarks. We show that it is possible to create very realistic face videos using a single image and a set of target landmarks. Furthermore, our method can edit a facial image according to arbitrary motions specified by landmarks (e.g., expression, speech, etc.). This provides much more flexibility for face editing, expression transfer, and facial video creation than models based on discrete expressions, audio, or action units.
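The abstract describes MotionGAN as a translation network conditioned on a heatmap of target landmarks. The sketch below illustrates only that conditioning pattern: rendering sparse landmarks as a Gaussian heatmap and concatenating it channel-wise with the source image before feeding a generator. The layer layout, the names landmarks_to_heatmap and TinyGenerator, and all hyperparameters are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of landmark-conditioned image translation in the spirit of
# MotionGAN (illustrative only; layer sizes and names are assumptions).
import torch
import torch.nn as nn

def landmarks_to_heatmap(landmarks, size, sigma=2.0):
    """Render sparse (x, y) landmarks as one Gaussian heatmap channel."""
    ys = torch.arange(size, dtype=torch.float32).view(size, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, size)
    heatmap = torch.zeros(1, size, size)
    for x, y in landmarks:
        g = torch.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        heatmap[0] = torch.maximum(heatmap[0], g)
    return heatmap

class TinyGenerator(nn.Module):
    """Toy encoder-decoder: the source image is concatenated channel-wise
    with the target-landmark heatmap (3 + 1 input channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, heatmap):
        return self.net(torch.cat([image, heatmap], dim=1))

if __name__ == "__main__":
    img = torch.rand(1, 3, 128, 128) * 2 - 1          # source face in [-1, 1]
    pts = [(40.0, 60.0), (88.0, 60.0), (64.0, 96.0)]  # toy target landmarks
    hm = landmarks_to_heatmap(pts, 128).unsqueeze(0)  # (1, 1, 128, 128)
    out = TinyGenerator()(img, hm)                    # (1, 3, 128, 128)
    print(out.shape)

In a full GAN setup, a discriminator and adversarial plus reconstruction losses would be trained on top of such a generator; the point here is only how a target-landmark heatmap can condition the translation.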

Songsri-in, Kritaphat; Zafeiriou, Stefanos

A4 Article in conference proceedings

2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)

K. Songsri-in and S. Zafeiriou, "Face Video Generation from a Single Image and Landmarks," 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), 2020, pp. 69-76, doi: 10.1109/FG47880.2020.00104

https://doi.org/10.1109/FG47880.2020.00104
http://urn.fi/urn:nbn:fi-fe2022032124239