Publication

PlaceNet: A multi-scale semantic-aware model for visual loop closure detection

Abstract

Loop closure detection helps simultaneous localization and mapping systems reduce map and state uncertainty by recognizing previously visited places along the path of a mobile robot. However, visual loop closure detection is sensitive to scenes with dynamic objects and to changes in illumination, background, and weather conditions. This paper introduces PlaceNet, a novel plug-and-play model for visual loop closure detection. PlaceNet is a multi-scale deep autoencoder network augmented with a semantic fusion layer for scene understanding. The main idea of PlaceNet is to learn where not to look in a dynamic scene full of moving objects, i.e., to avoid being distracted by dynamic objects and to focus on stable scene landmarks instead. We train PlaceNet to identify dynamic objects by learning a grayscale semantic map that indicates the positions of static and moving objects in the image. PlaceNet generates semantic-aware deep features that are robust to dynamic environments and invariant to scale. We evaluated our method on several challenging indoor and outdoor benchmarks, where PlaceNet achieved competitive results compared to state-of-the-art methods across the datasets used in our experiments.
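The core idea described above, i.e. down-weighting image regions that a learned semantic map flags as dynamic before pooling deep features into a place descriptor, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature map and the grayscale dynamic-object map are stand-in inputs, and the weighted pooling shown here is only one plausible way to realize "semantic-aware" features.

```python
import numpy as np

def semantic_weighted_descriptor(features, dynamic_map):
    """Pool a (H, W, C) feature map into a global place descriptor,
    suppressing locations the semantic map marks as dynamic.

    features:    (H, W, C) deep feature map (stand-in for PlaceNet features)
    dynamic_map: (H, W) grayscale map in [0, 1], 1.0 = moving object
    """
    static_weight = 1.0 - dynamic_map               # focus on landmarks
    weighted = features * static_weight[..., None]  # mask out movers
    desc = weighted.sum(axis=(0, 1)) / (static_weight.sum() + 1e-8)
    return desc / (np.linalg.norm(desc) + 1e-8)     # L2-normalize

# Toy example: a 4x4x8 feature map whose bottom half is flagged dynamic,
# so only the top two rows contribute to the descriptor.
rng = np.random.default_rng(0)
feats = rng.random((4, 4, 8))
dyn = np.zeros((4, 4))
dyn[2:, :] = 1.0
desc = semantic_weighted_descriptor(feats, dyn)
print(desc.shape)  # (8,)
```

Loop closure candidates would then be retrieved by comparing such descriptors (e.g., by cosine similarity) against those of previously visited places.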
