Abstract: The front end of a traditional robot V-SLAM positioning algorithm relies on hand-crafted feature point extraction and local descriptor matching. Because the features are manually designed, extraction is poorly robust and adapts weakly to complex scenes (brightness changes, noise, motion blur), and local descriptor matching has low accuracy. To address these problems, a front-end positioning algorithm, SBK-VL (SuperPoint-BRIEF and K-means Visual Location), was proposed. First, the algorithm extracted feature points with an improved p-probability SuperPoint deep learning framework, alleviating the low robustness of feature points and the weak adaptability to complex scenes. Second, a descriptor combining global information (feature point clustering) with local information (the BRIEF descriptor) was proposed, which reduces the mismatches of traditional descriptors and improves matching accuracy; experiments showed an average matching accuracy of 92.71%. Finally, SBK-VL replaced the front end of ORB-SLAM2, the pose was estimated with RANSAC random sampling, and the absolute trajectory error, relative trajectory error, and average tracking time were compared against the ORB-SLAM2 and GCNv2-SLAM algorithms. The experimental results showed that the proposed algorithm achieves a better balance: on the one hand, it improves the adaptability to complex scenes and the estimation accuracy of the classic V-SLAM algorithm; on the other hand, it offers better real-time performance and lower computational cost than traditional deep-learning SLAM algorithms.
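To make the matching scheme concrete, below is a minimal sketch (not the paper's code) of the core SBK-VL idea: cluster keypoint positions with K-means as the global cue, then restrict binary-descriptor (Hamming) matching to keypoints within corresponding clusters as the local cue. The detector here is OpenCV's ORB, used only as a stand-in for the paper's p-probability SuperPoint network, and the cluster count and assignment strategy are illustrative assumptions.

```python
# Sketch of cluster-constrained descriptor matching in the spirit of SBK-VL.
# Assumptions: ORB stands in for SuperPoint+BRIEF; image-2 keypoints are
# assigned to cluster centres learned on image 1; n_clusters is arbitrary.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def cluster_constrained_match(img1, img2, n_clusters=4):
    orb = cv2.ORB_create(nfeatures=500)          # stand-in detector/descriptor
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return kp1, kp2, []

    # Global cue: K-means over keypoint image coordinates.
    pts1 = np.float32([kp.pt for kp in kp1])
    pts2 = np.float32([kp.pt for kp in kp2])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    lab1 = km.fit_predict(pts1)
    lab2 = km.predict(pts2)                      # simplifying assumption

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = []
    for c in range(n_clusters):
        i1 = np.where(lab1 == c)[0]
        i2 = np.where(lab2 == c)[0]
        if len(i1) == 0 or len(i2) == 0:
            continue
        # Local cue: Hamming matching restricted to this cluster,
        # then remapped back to global keypoint indices.
        for m in matcher.match(des1[i1], des2[i2]):
            matches.append(cv2.DMatch(int(i1[m.queryIdx]),
                                      int(i2[m.trainIdx]),
                                      m.distance))
    return kp1, kp2, matches
```

The resulting matches could then be fed to a RANSAC-based pose estimator (e.g. cv2.findEssentialMat with method=cv2.RANSAC), mirroring the pose-verification step described in the abstract; the spatial-cluster constraint is what prunes the cross-region mismatches that a purely local descriptor would admit.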