Location is a piece of information that empowers almost any type of application. In contrast to the outdoors, where global navigation satellite systems provide geo-spatial positioning, there are still millions of square meters of indoor space that are unaccounted for by location sensing technology. Moreover, predictions show that people's activities are likely to shift increasingly towards urban and indoor environments: the United Nations predicts that by 2020, over 80% of the world's population will live in cities. Meanwhile, indoor localization is far from a solved problem: people, indoor furnishings, walls, and building structures are, in the eyes of a positioning sensor, all obstacles that create a very challenging environment. Many sensory modalities struggle to overcome such harsh conditions when used alone.

For this reason, and because we aim for a portable, miniaturizable, cost-effective solution with centimeter-level accuracy, we address the indoor localization problem with a hybrid approach that consists of two complementary components: ultra-wideband localization and collaborative localization. In pursuit of the final, hybrid product, our research asks what benefits collaborative localization can provide to ultra-wideband localization, and vice versa. This path leads us to dive into these orthogonal sub-domains of indoor localization and produce two independent localization solutions, before finally combining them to conclude our work. As with all systems that can be quantitatively examined, we recognize that the quality of our final product is defined by the rigor of our evaluation process. Thus, a core element of our work is the experimental setup, which we design in a modular fashion and whose complexity we increase incrementally throughout the various stages of our studies.
With the goal of implementing an evaluation system that is systematic, repeatable, and controllable, our approach is centered around the mobile robot. We harness this platform to emulate mobile targets, and track it in real time with a highly reliable ground-truth positioning system. Furthermore, we take advantage of the miniature size of our mobile platform and include multiple entities to form a multi-robot system. This augmented setup allows us to apply the same experimental rigor to the evaluation of our collaborative localization strategies. Finally, we exploit the consistency of our experiments to cross-compare the various results throughout the presented work.

Ultra-wideband counts among the most promising technologies for absolute indoor localization known to date. Owing to its fine delay resolution and its ability to penetrate various materials, ultra-wideband offers potentially high ranging accuracy, even in cluttered, non-line-of-sight environments. Despite these desirable traits, however, the resolution of non-line-of-sight signals remains a hard problem: if a non-line-of-sight signal is not recognized as such, it leads to significant errors in the position estimate. Our work improves upon the state of the art by addressing the peculiarities of ultra-wideband signal propagation with models that capture the spatiality as well as the multimodal nature of the error statistics. At the same time, we take care to develop an underlying error model that is compact and that can be calibrated by means of efficient algorithms. To facilitate the use of our multimodal error model, we employ a localization algorithm based on particle filters.

Our collaborative localization strategy distinguishes itself from prior work by emphasizing cost-efficiency, full decentralization, and scalability. The localization method is based on relative positioning and uses two quantities: relative range and relative bearing.
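To make the idea of a multimodal ranging-error model concrete, the measurement update of a particle filter with a two-component (line-of-sight / non-line-of-sight) Gaussian-mixture likelihood might look as follows. This is a minimal sketch under assumed values: the mixture parameters, anchor position, and particle counts are illustrative, not the calibrated models of this work.

```python
import numpy as np

# Illustrative two-mode UWB ranging-error model: a line-of-sight (LOS)
# component centred on zero and a positively biased non-line-of-sight (NLOS)
# component. All parameters are assumed values, not calibrated ones.
LOS_W, LOS_MU, LOS_SIGMA = 0.8, 0.00, 0.05    # metres
NLOS_W, NLOS_MU, NLOS_SIGMA = 0.2, 0.40, 0.25

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def range_likelihood(measured_range, particles, anchor):
    """Weight 2-D position particles by the bimodal ranging-error likelihood."""
    predicted = np.linalg.norm(particles - anchor, axis=1)
    error = measured_range - predicted
    return (LOS_W * gaussian_pdf(error, LOS_MU, LOS_SIGMA)
            + NLOS_W * gaussian_pdf(error, NLOS_MU, NLOS_SIGMA))

# One measurement update of a plain bootstrap particle filter.
rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 5.0, size=(1000, 2))    # candidate positions (m)
anchor = np.array([0.0, 0.0])                        # known UWB anchor
weights = range_likelihood(2.0, particles, anchor)   # one 2 m range reading
weights /= weights.sum()
# Resample so surviving particles concentrate where the likelihood is high.
particles = particles[rng.choice(len(particles), size=len(particles), p=weights)]
```

Because the NLOS component carries a positive bias, particles slightly closer to the anchor than the raw 2 m reading also retain weight, which is precisely what a unimodal Gaussian error model would miss.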
We develop a relative robot detection model that integrates these measurements and is embedded in our particle-filter-based localization framework. In addition to the robot detection model, we consider an algorithmic component, namely a reciprocal particle sampling routine, which is designed to facilitate the convergence of a robot's position estimate. Finally, in order to reduce the complexity of our collaborative localization algorithm as well as the amount of positioning data to be communicated between robots, we develop a particle clustering method, which is used in conjunction with our robot detection model.

The final stage of our research investigates the combined roles of collaborative localization and ultra-wideband localization. Numerous experiments validate our overall localization strategy and show that performance can be significantly improved when two complementary sensory modalities are used. Since the fusion of ultra-wideband positioning sensors with exteroceptive sensors has hardly been considered so far, our studies present pioneering work in this domain. Several insights indicate that collaboration, even through noisy sensors, is a useful tool for reducing localization errors. In particular, we show that our collaboration strategy can minimize the localization error, provided that the collaborative design parameters are optimally tuned. Our final results show median localization errors below 10 cm in cluttered environments.
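One way to picture particle clustering as a compression step is to summarise the particle cloud as a few weighted Gaussian components before broadcasting it to other robots. The sketch below does this with plain Lloyd (k-means) iterations; the choice of k, the iteration count, and the clustering criterion are illustrative assumptions, not the specific clustering method developed in this work.

```python
import numpy as np

def cluster_particles(particles, k=3, iters=20, seed=0):
    """Compress a 2-D particle cloud into at most k (weight, mean, covariance)
    components via Lloyd iterations, so that only k small tuples, rather than
    the full particle set, need to be communicated between robots."""
    rng = np.random.default_rng(seed)
    centers = particles[rng.choice(len(particles), size=k, replace=False)]
    for _ in range(iters):
        # Assign each particle to its nearest cluster center.
        dists = np.linalg.norm(particles[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned particles.
        for j in range(k):
            members = particles[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    components = []
    for j in range(k):
        members = particles[labels == j]
        if len(members) < 2:
            continue  # drop degenerate clusters
        components.append((len(members) / len(particles),  # component weight
                           members.mean(axis=0),           # component mean
                           np.cov(members.T)))             # component covariance
    return components

# Example: a bimodal cloud compresses to a handful of compact components.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(500, 2)),
                   rng.normal([3.0, 1.0], 0.1, size=(500, 2))])
summary = cluster_particles(cloud)
```

A Gaussian-mixture summary of this kind can then serve as a compact input to a robot detection model on the receiving side, instead of transmitting every particle.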