Vision-based, inertial-aided navigation is gaining ground due to its many potential applications. In previous decades, the integration of vision and inertial sensors was monopolised by the defence industry because of its complexity and prohibitive cost. As the technology advanced, high-quality hardware and computing power became accessible for the investigation and realisation of various applications. In this thesis, a mapping system based on vision-aided inertial navigation was developed for areas where GNSS signals are unavailable, for example indoors, in tunnels, urban canyons and forests. Within this framework, a methodology for the integration of vision and inertial sensors was presented, analysed and tested for the case where the only information available at the start is a set of features with known coordinates (and no access to GNSS signals), thus employing the method of Simultaneous Localisation And Mapping (SLAM). SLAM is a term used in the robotics community to describe the problem of mapping the environment while simultaneously using this map to determine (or to help determine) the location of the mapping device. In addition, a link between the robotics and geomatics communities was established, briefly outlining their similarities and differences in handling the navigation and mapping problem. Despite the many differences, the goal is common: to develop a navigation and mapping system that is not bounded by the limits imposed by the sensors used. Classically, terrestrial robotic SLAM is approached with laser scanners, which locate the robot relative to a structured environment while mapping that environment at the same time. Outdoors, however, robotic SLAM is not feasible with laser scanners alone because of the roughness of the environment and the absence of simple geometric features. Recently, the use of visual methods integrated with inertial sensors has gained interest in the robotics community.
These visual methods rely on one or more cameras (or video) and make use of a single Kalman filter whose state vector contains both the map and the robot coordinates. This concept introduces high non-linearity and complications into the filter, which then needs to run at high rates (more than 20 Hz) with simplified navigation and mapping models. In this study, SLAM is developed using the geomatics engineering approach. Two filters are used in parallel: a least-squares adjustment (LSA) for determining feature coordinates and a Kalman filter (KF) for navigation correction. To this end, a mobile mapping system (independent of GPS) is introduced, employing two CCD cameras (one metre apart) and one IMU. Conceptually, the outputs of the LSA photogrammetric resection (position and orientation) are used as the external measurements for the inertial KF. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which in turn serve as control points for the resection.
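The parallel-filter idea above can be illustrated with a minimal sketch: a Kalman filter propagates an inertial (dead-reckoning) estimate, and an external resection-like position fix corrects it at each epoch. This is not the thesis implementation; the state model, the function names (`kf_predict`, `kf_update`) and all noise values are illustrative assumptions, and the resection fix is simulated rather than computed from imagery.

```python
# Sketch (assumed, simplified): inertial prediction corrected by an external
# position fix, standing in for the LSA resection output described in the text.
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate state and covariance with the inertial (dead-reckoning) model."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the inertial estimate with an external position fix."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity state [position, velocity], dt = 1 s (illustrative)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([0.01, 0.01])        # inertial process noise (assumed value)
H = np.array([[1.0, 0.0]])       # the "resection" fix observes position only
R = np.array([[0.05]])           # fix measurement noise (assumed value)

x = np.array([0.0, 1.0])         # initial state: at origin, moving at 1 m/s
P = np.eye(2)
for k in range(1, 6):
    x, P = kf_predict(x, P, F, Q)
    z = np.array([float(k)])     # simulated resection fix at the true position
    x, P = kf_update(x, P, z, H, R)

print(round(x[0], 2))            # → 5.0 (filtered position tracks the fixes)
```

In the actual system described above, `z` would come from the LSA photogrammetric resection (position and orientation), and the corrected trajectory would then feed the photogrammetric intersection that maps new features.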