Project

ACMod – Affective Computing Models: from Facial Expression to Mind-Reading

Across cultures, understanding facial expressions is crucial for effective communication, yet consistent cross-cultural disagreements challenge the universality hypothesis of facial expressions. While automatic facial expression analysis has made strides, understanding emotions in diverse, realistic environments remains limited. In this context, the MSCA-funded ACMod project will advance facial behaviour modelling across cultures, addressing gaps in human-computer interaction, robotics and virtual reality. By combining psychology and computer science, ACMod aims to digitise emotions and develop 4D facial models. The project fosters collaboration between EU and East Asian partners, enhancing research and knowledge exchange. ACMod seeks to revolutionise communication with intelligent agents in both real and virtual settings.

  • CORDIS: cordis.europa.eu/project/id/101130271
  • ID: 101130271
  • Project type: TMA-MSCA-SE
  • Duration: 1 March 2024 – 29 February 2028
  • Our expertise: Emotional facial expressions, social interaction, digital avatars, human-computer interaction, 3D face modelling
  • Our role: Coordinator
  • Contact person: Guoying Zhao