Domain generalization (DG) tackles the problem of learning a model that generalizes to data drawn from a target domain unseen during training. A major trend in this area consists of learning a domain-invariant representation by minimizing the discrepancy across multiple source domains. This strategy, however, does not apply to the challenging yet realistic single-source scenario. In this paper, in contrast to existing methods that focus on domain discrepancy, we exploit the fact that discrepancies also arise across samples from the same class. We therefore develop a unified framework for both multi-source and single-source DG that exploits contrastive learning to minimize the gap between samples from the same class, whether they come from different domains or from the same one, while separating samples from different classes. Our results on standard multi-source and single-source DG benchmark datasets demonstrate the benefits of our method over state-of-the-art methods in both settings.
Olga Fink, Ismail Nejjar, Mengjie Zhao
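For illustration, the sketch below shows a supervised contrastive objective of the general kind the abstract describes: samples sharing a class label, regardless of which source domain they come from, are treated as positives and pulled together, while samples from different classes are pushed apart. This is a minimal sketch under assumptions, not the paper's exact loss; the function name, the temperature value, and the batch construction are all illustrative.

    # Minimal supervised contrastive loss sketch (PyTorch).
    # Assumption: positives are defined purely by class label, so a batch may
    # mix samples from one or several source domains without changing the code.
    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features: torch.Tensor,
                                    labels: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
        """features: (N, D) embeddings; labels: (N,) integer class ids."""
        z = F.normalize(features, dim=1)               # unit-norm embeddings
        sim = z @ z.t() / temperature                  # pairwise similarities
        n = z.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-pairs
        # log-softmax over each anchor's similarities to all other samples
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # positives: same class label, excluding the anchor itself
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        pos_counts = pos_mask.sum(dim=1).clamp(min=1)  # avoid division by zero
        # negate the mean log-likelihood of positives for each anchor
        loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
        return loss.mean()

    # Example: an 8-sample batch that could mix two source domains; only the
    # class labels define positives, so the same call also covers the
    # single-source setting.
    feats = torch.randn(8, 128)
    labels = torch.tensor([0, 0, 1, 1, 0, 1, 2, 2])
    print(supervised_contrastive_loss(feats, labels))

Because positives are defined by class membership rather than by domain identity, the same objective applies unchanged whether one or several source domains are available, which mirrors the unified multi-source and single-source framing in the abstract.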