RGB-D Visual SLAM Algorithm for Mobile Robots
Abstract:

To address the low accuracy and poor real-time performance of existing visual simultaneous localization and mapping (SLAM) approaches, an RGB-D visual SLAM algorithm for indoor mobile robots was proposed. First, feature points were extracted from the RGB image using the oriented FAST and rotated BRIEF (ORB) algorithm, and a set of matched point pairs was obtained with a bidirectional K-nearest-neighbor (KNN) matching method based on the fast library for approximate nearest neighbors (FLANN). An improved random sample consensus algorithm (RE-RANSAC) was then used to reject false matches and to estimate the 6-DOF motion transformation between two adjacent frames, which served as the initial transformation for the generalized iterative closest point (GICP) algorithm. GICP refined this motion transformation, from which the pose graph was built. To improve localization accuracy, a random loop-closure detection step was introduced to reduce the accumulated error of the robot localization process, and the pose graph was optimized with the general graph optimization (g2o) framework to obtain the globally optimal pose graph and camera trajectory; finally, a global dense color point cloud map was generated. On the tested FR1 datasets, the algorithm achieved a minimum localization error of 0.011 m, an average localization error of 0.0245 m, and an average processing time of 0.032 s per frame, which meets the requirements for fast localization and mapping of mobile robots.
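The bidirectional KNN matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses exact brute-force nearest-neighbor search in place of FLANN's approximate search, and the function names, the descriptor dimensions, and the ratio threshold of 0.7 are assumptions for the sake of the example.

```python
import numpy as np

def knn_indices(query, train, k=2):
    # Brute-force pairwise squared L2 distances; the paper uses FLANN
    # for approximate search, exact search is used here for clarity.
    d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)[:, :k]
    return order, np.take_along_axis(d2, order, axis=1)

def bidirectional_knn_match(desc_a, desc_b, ratio=0.7):
    """Keep matches that pass Lowe's ratio test in the A->B direction
    and whose nearest neighbor in the B->A direction points back."""
    idx_ab, d_ab = knn_indices(desc_a, desc_b)
    idx_ba, _ = knn_indices(desc_b, desc_a)
    matches = []
    for i in range(desc_a.shape[0]):
        j = idx_ab[i, 0]
        # Ratio test on squared distances: best match must be clearly
        # better than the second best.
        if d_ab[i, 0] >= (ratio ** 2) * d_ab[i, 1]:
            continue
        # Bidirectional check: j's best match in A must be i.
        if idx_ba[j, 0] == i:
            matches.append((i, j))
    return matches

# Usage sketch with synthetic descriptors: B is a noisy permutation of A,
# so the mutual check should recover the planted correspondences.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 32))
perm = rng.permutation(20)
desc_b = desc_a[perm] + 0.01 * rng.normal(size=(20, 32))
matches = bidirectional_knn_match(desc_a, desc_b)
```

In a real pipeline the descriptors would come from ORB (binary descriptors matched with Hamming distance), and the surviving pairs would be passed on to the RANSAC stage for outlier rejection and motion estimation.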

History
  • Received: May 05, 2018
  • Online: October 10, 2018