Loop closure detection helps simultaneous localization and mapping (SLAM) systems reduce map and state uncertainty by recognizing previously visited places along the path of a mobile robot. However, visual loop closure detection is susceptible to scenes with dynamic objects and to changes in illumination, background, and weather conditions. This paper introduces PlaceNet, a novel plug-and-play model for visual loop closure detection. PlaceNet is a multi-scale deep autoencoder network augmented with a semantic fusion layer for scene understanding. The main idea of PlaceNet is to learn where not to look in a scene full of moving objects, i.e., to avoid being distracted by dynamic objects and to focus on the scene landmarks instead. We train PlaceNet to identify dynamic objects by learning a grayscale semantic map that indicates the position of static and moving objects in the image. PlaceNet generates semantic-aware deep features that are scale invariant and robust to dynamic environments. We evaluated our method on challenging indoor and outdoor benchmarks, where PlaceNet demonstrated competitive results compared to state-of-the-art methods across the datasets used in our experiments.
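The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of that kind of design, not the authors' implementation: a multi-scale convolutional autoencoder whose latent features also feed a head that predicts a single-channel (grayscale) map of dynamic regions, which is then used to down-weight dynamic content when pooling a place descriptor. All layer names, channel sizes, and the fusion strategy are illustrative assumptions.

```python
# Hypothetical sketch of a multi-scale autoencoder with a semantic (dynamic-object)
# head, in the spirit of the architecture described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, concatenated and fused."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))


class PlaceNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: multi-scale blocks with progressive downsampling.
        self.encoder = nn.Sequential(
            MultiScaleBlock(3, 48), nn.MaxPool2d(2),
            MultiScaleBlock(48, 96), nn.MaxPool2d(2),
            MultiScaleBlock(96, 192), nn.MaxPool2d(2),
        )
        # Decoder: reconstructs the input image (autoencoder objective).
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(192, 96, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(96, 48, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(48, 3, 3, padding=1), nn.Sigmoid(),
        )
        # Semantic head: a grayscale map where high values mark dynamic regions.
        self.semantic_head = nn.Sequential(
            nn.Upsample(scale_factor=8), nn.Conv2d(192, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)              # latent multi-scale features
        recon = self.decoder(z)          # image reconstruction
        dyn_map = self.semantic_head(z)  # grayscale dynamic-object map
        # Place descriptor: pool latent features, down-weighted where the
        # semantic map flags dynamic content ("learn where not to look").
        weight = 1.0 - F.adaptive_avg_pool2d(dyn_map, z.shape[-2:])
        descriptor = (z * weight).mean(dim=(2, 3))
        return recon, dyn_map, descriptor


if __name__ == "__main__":
    model = PlaceNetSketch()
    img = torch.rand(1, 3, 256, 256)
    recon, dyn_map, desc = model(img)
    print(recon.shape, dyn_map.shape, desc.shape)  # sanity-check output sizes
```

In this sketch the reconstruction and semantic-map losses would be trained jointly, and only the pooled descriptor would be used at query time to match previously visited places; how the real PlaceNet fuses the semantic signal with the features is an assumption here.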