SwapGAN: A Multistage Generative Approach for Person-to-Person Fashion Style Transfer

Fashion style transfer has attracted significant attention because it poses interesting scientific challenges and is important to the fashion industry. This paper addresses a practical problem in fashion style transfer, person-to-person clothing swapping, which aims to visualize what a reference person would look like wearing the clothes currently worn by another person, without dressing them physically. The problem remains challenging due to varying pose deformations between different person images. In contrast to traditional nonparametric methods that blend or warp the target clothes onto the reference person, we propose a multistage deep generative approach, named SwapGAN, that combines three generators and one discriminator in a unified framework and fulfills the task end-to-end. The first and second generators are conditioned on a human pose map and a segmentation map, respectively, so that the pose style and the clothes style can be transferred simultaneously. The third generator preserves the human body shape during image synthesis. The discriminator must distinguish two fake image pairs from the real image pair. The entire SwapGAN is trained by integrating an adversarial loss and a mask-consistency loss. Experimental results on the DeepFashion dataset demonstrate the improvements of SwapGAN over existing approaches in both quantitative and qualitative evaluations. Moreover, we conduct ablation studies on SwapGAN and provide a detailed analysis of its effectiveness.
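
The abstract describes a three-generator, one-discriminator design trained with an adversarial loss plus a mask-consistency loss. Below is a minimal PyTorch-style sketch of how such an objective could be wired together; it is not the authors' implementation, and the module names (TinyGenerator, TinyDiscriminator, generator_step), the channel counts, and the lambda_mask weight are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGenerator(nn.Module):
    """Placeholder conditional generator: (image, condition map) -> image."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, condition):
        return self.net(torch.cat([image, condition], dim=1))


class TinyDiscriminator(nn.Module):
    """Placeholder discriminator scoring an image pair (real vs. fake)."""
    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, img_a, img_b):
        return self.net(torch.cat([img_a, img_b], dim=1))


# Assumed (hypothetical) channel layouts: RGB images (3 channels), a pose map
# of 18 keypoint heatmaps, and 1-channel segmentation / body masks.
G_pose = TinyGenerator(in_ch=3 + 18)  # stage 1: transfer the pose style
G_seg = TinyGenerator(in_ch=3 + 1)    # stage 2: transfer the clothes style
G_body = TinyGenerator(in_ch=3 + 1)   # stage 3: preserve the body shape
D = TinyDiscriminator()


def generator_step(person, target, pose_map, seg_map, body_mask, lambda_mask=10.0):
    """One hypothetical generator update combining the two loss terms."""
    posed = G_pose(target, pose_map)    # repose the target person's clothes
    swapped = G_seg(posed, seg_map)     # dress the reference person
    body = G_body(swapped, body_mask)   # regenerate the body region
    # Adversarial term: the discriminator should judge the fake pair as real.
    logits = D(person, swapped)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Mask-consistency term: the body region should stay consistent with the input.
    mask_loss = F.l1_loss(body * body_mask, person * body_mask)
    return adv_loss + lambda_mask * mask_loss
```

In this sketch the discriminator receives image pairs (the reference person together with a real or synthesized image), loosely mirroring the abstract's description of distinguishing fake pairs from the real pair; the actual pairing and loss weighting used in SwapGAN may differ.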

Liu Yu, Chen Wei, Liu Li, Lew Michael S.

A1 Journal article – refereed

Y. Liu, W. Chen, L. Liu and M. S. Lew, "SwapGAN: A Multistage Generative Approach for Person-to-Person Fashion Style Transfer," in IEEE Transactions on Multimedia, vol. 21, no. 9, pp. 2209-2222, Sept. 2019. doi: 10.1109/TMM.2019.2897897

https://doi.org/10.1109/TMM.2019.2897897
http://urn.fi/urn:nbn:fi-fe201902256190