SLAMM: Visual monocular SLAM with continuous mapping using multiple maps


Bibliographic Details
Main Authors: Daoud, Hayyan Afeef, Sabri, Aznul Qalid Md, Loo, Chu Kiong, Mansoor, Ali Mohammed
Format: Article
Published: Public Library of Science 2018
Subjects:
Online Access:http://eprints.um.edu.my/22170/
https://doi.org/10.1371/journal.pone.0195878
Description
Summary: This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite tracking failures caused by corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map when tracking fails, and later merges maps upon loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. Initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking-loss and loop-closure events. For the benefit of the community, the source code, along with a framework to run it with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM.
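The summary's core idea — open a new map when tracking fails, then merge maps on loop closure — can be sketched as simple multi-map bookkeeping. This is a minimal illustration only; the class and method names (`Map`, `MultiMapper`, `process_frame`, `on_loop_closure`) are assumptions, not the authors' actual API, and a real system would also estimate the Sim(3) transform aligning the merged maps.

```python
class Map:
    """One map: holds the keyframes tracked while this map was active."""
    def __init__(self, map_id):
        self.map_id = map_id
        self.keyframes = []  # a real SLAM map stores poses and landmarks

    def add_keyframe(self, kf):
        self.keyframes.append(kf)


class MultiMapper:
    """Hypothetical sketch of SLAMM-style continuous mapping."""
    def __init__(self):
        self.maps = [Map(0)]
        self.active = self.maps[0]

    def process_frame(self, frame, tracking_ok):
        if not tracking_ok:
            # Tracking lost: preserve the old map and start a fresh one,
            # so no previously mapped information is discarded.
            self.active = Map(len(self.maps))
            self.maps.append(self.active)
            return
        self.active.add_keyframe(frame)

    def on_loop_closure(self, other):
        # Loop closure detected between the active map and `other`:
        # fold the other map's keyframes into the active map.
        if other is self.active:
            return
        self.active.keyframes.extend(other.keyframes)
        self.maps.remove(other)


m = MultiMapper()
m.process_frame("kf0", True)
m.process_frame("kf1", False)   # tracking failure -> new map is opened
m.process_frame("kf2", True)
m.on_loop_closure(m.maps[0])    # loop closure -> the two maps are merged
print(len(m.maps), len(m.active.keyframes))  # 1 2
```

The key design point mirrored here is that a tracking failure never erases data: the stale map stays in the map set and is only merged back (or left standalone) once a loop closure relates it to the active map.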