We propose a deep-neural-network-based image-to-image translation method for domain adaptation, which aims at finding translations between image domains. Although recent GAN-based methods show promising results in image-to-image translation, they tend to fail at preserving semantic information and maintaining image details during translation, which limits their practicality for tasks such as facial expression synthesis. In this paper, we present a framework, ExprADA, trained with two objectives: first, we propose a multi-domain image synthesis model that yields better recognition performance than other GAN-based methods, with a focus on the data augmentation process; second, we explore the use of domain adaptation to transform the visual appearance of images from different domains while preserving facial characteristics (e.g., identity). As a result, the expression recognition model learned on the source domain generalizes to the translated images from the target domain, without the need to re-train a model for each new target domain. Extensive experiments demonstrate that ExprADA achieves significant improvements in facial expression recognition accuracy over state-of-the-art domain adaptation methods.
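To make the translate-then-recognize idea above concrete, here is a minimal PyTorch sketch of the inference pipeline: a generator maps target-domain face images into the source domain's appearance, and the expression classifier trained on the source domain is then reused unchanged. This is not the paper's implementation; the network architectures, image sizes, and number of expression classes are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of translating target-domain
# face images into the source domain's visual style, then reusing the
# source-trained expression recognizer without re-training it.
# All module definitions and shapes are illustrative assumptions.

import torch
import torch.nn as nn


class Translator(nn.Module):
    """Toy image-to-image generator standing in for the GAN-trained translator."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching the normalized inputs
        )

    def forward(self, x):
        return self.net(x)


class ExpressionClassifier(nn.Module):
    """Toy classifier standing in for the recognizer trained on the source domain."""

    def __init__(self, num_expressions=7):  # 7 basic expressions, assumed here
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_expressions)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


@torch.no_grad()
def predict_expressions(target_images, translator, classifier):
    """Translate target-domain images to the source appearance, then classify them."""
    translator.eval()
    classifier.eval()
    translated = translator(target_images)  # target -> source-style images
    logits = classifier(translated)         # source-trained recognizer, reused as-is
    return logits.argmax(dim=1)


if __name__ == "__main__":
    batch = torch.rand(4, 3, 128, 128) * 2 - 1  # fake target-domain images in [-1, 1]
    preds = predict_expressions(batch, Translator(), ExpressionClassifier())
    print(preds)
```

In this reading of the abstract, only the translator depends on the target domain; the recognition model itself stays fixed, which is what removes the need to re-train a classifier for each new target domain.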