LatentMag: Self-Supervised 3D Magnification for Micro Expressions via Latent Extrapolation

Micro-expressions (MEs) are subtle and brief facial movements that reveal genuine emotional states but are often imperceptible due to their low intensity. While motion magnification has proven effective for enhancing ME visibility in 2D settings, its extension to 3D remains largely unexplored. In this work, we present LatentMag, the first controllable 3D micro-expression magnification framework. Unlike traditional editing methods that rely on fixed labels or expression targets, our approach models expression intensity as a relative, input-dependent signal. We adopt registered 3D meshes as our representation, enabling vertex-level correspondence and interpretable displacement analysis. To guide magnification, we introduce a geometric prior that models amplification as a spatially adaptive transformation, where the change in pairwise distance between points on the output mesh scales with that observed between the input shapes, ensuring natural, localized deformation. We operationalize this prior in a generative framework by disentangling a latent intensity code, whose extrapolation drives controllable shape amplification. Trained in a self-supervised manner using unlabeled mesh sequences, LatentMag generalizes well to unseen identities and expressions, offering a novel solution that bridges geometric interpretability with realistic 3D expression modeling.
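The geometric prior described above — that the change in pairwise vertex distance on the magnified mesh should scale with the change observed between the input shapes — can be sketched as a residual one might penalize during training. This is a hypothetical illustration, not the paper's implementation; the function names, the scalar magnification factor `alpha`, and the use of a neutral reference mesh are all assumptions.

```python
import numpy as np

def pairwise_dists(verts):
    # verts: (N, 3) array of registered mesh vertices.
    # Returns the (N, N) matrix of Euclidean distances between all vertex pairs.
    diff = verts[:, None, :] - verts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def magnification_prior_residual(neutral, expressive, magnified, alpha):
    # Hypothetical sketch of the prior: the change in pairwise distance
    # on the magnified output should equal alpha times the change observed
    # between the input pair (neutral, expressive). A training loss could
    # penalize the norm of this residual.
    d_in = pairwise_dists(expressive) - pairwise_dists(neutral)
    d_out = pairwise_dists(magnified) - pairwise_dists(neutral)
    return d_out - alpha * d_in
```

Because the constraint is expressed on pairwise distances rather than absolute vertex positions, it is invariant to rigid motion of the head, which is one plausible reason to state the prior this way.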

Mengting Wei, Xingxun Jiang, Haoyu Chen, Yante Li, Guoying Zhao

A1 Journal article (refereed), original research


M. Wei, X. Jiang, H. Chen, Y. Li and G. Zhao, "LatentMag: Self-Supervised 3D Magnification for Micro Expressions via Latent Extrapolation," in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2025.3640826.

https://doi.org/10.1109/TAFFC.2025.3640826