Inter-rows Navigation Method of Greenhouse Robot Based on Fusion of Camera and LiDAR
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund projects:

National Key Research and Development Program of China (2020AAA0108103), Independent Project of the Chinese Academy of Sciences Innovation Academy for Robotics and Intelligent Manufacturing (C2021002), and President's Fund of Hefei Institutes of Physical Science, Chinese Academy of Sciences (YZJJZX202013)




    Abstract:

To address the complex greenhouse environment, in which the ground is bumpy and branches and leaves block the road, an inter-row navigation method for greenhouse robots based on the fusion of camera and LiDAR data was developed. First, an improved U-Net model was used to segment the road region of the image accurately and quickly. Second, the ground point cloud was pre-segmented by fusing the image segmentation result, reducing the point-cloud tilt caused by ground bumpiness. Then, an improved KMeans algorithm was used to cluster the crop-row point cloud rapidly, and the cluster centers were taken as trunk-region points of the crop rows, reducing the influence of occluding branches and leaves on crop-row centerline extraction. Finally, the RANSAC algorithm was used to fit the line equations of the crop rows on both sides and to compute the navigation line. The accuracy of the navigation line was evaluated experimentally; validation was conducted in two greenhouse scenarios at three typical greenhouse-robot operating speeds. The experimental results showed that the segmentation quality and run time of the image model met the requirements of the subsequent point-cloud pre-segmentation; on point-cloud frames collected on bumpy ground, the method effectively corrected the ground point cloud; compared with grid height-difference segmentation of the ground point cloud, it segmented better while adding very little per-frame processing time; and on the test set, more than 94% of the data frames yielded an accurately extracted navigation line, with an average angle error no higher than 1.45°. The results meet the requirements for autonomous navigation of greenhouse robots along crop rows.
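The clustering step above can be sketched with a plain KMeans on a synthetic ground-plane point cloud. This is a hypothetical illustration only: the paper uses an *improved* KMeans whose modifications the abstract does not describe, and all data, cluster counts, and variable names here are invented. It shows the idea that cluster centers track the row trunk even when protruding-leaf points are mixed in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D ground-plane projection of one crop row at x ~ -0.5 m,
# plus a few protruding-leaf points closer to the aisle (assumed geometry).
y = np.linspace(0.0, 5.0, 60)
row = np.column_stack([-0.5 + rng.normal(0.0, 0.03, 60), y])
leaves = np.column_stack([-0.30 + rng.normal(0.0, 0.02, 6),
                          rng.uniform(0.0, 5.0, 6)])
pts = np.vstack([row, leaves])

def kmeans(points, k, iters=50, seed=1):
    """Minimal Lloyd-style KMeans; returns the k cluster centers."""
    r = np.random.default_rng(seed)
    centers = points[r.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

centers = kmeans(pts, k=6)
# The median center x stays near the trunk line at -0.5 m, damping the
# influence of the leaf outliers on later centerline extraction.
median_x = float(np.median(centers[:, 0]))
```

Using the centers rather than the raw points is what suppresses the occluding-leaf influence: a handful of leaf points can only shift a cluster mean slightly, not redefine the row.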
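The final step can likewise be sketched: RANSAC-fit a line x = a·y + b to the points of each crop row and take the midline between the two fits as the navigation line, reading the heading error off its slope. This is a minimal sketch under assumed row geometry; the thresholds, iteration counts, and variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed aisle: left row at x ~ -0.5 m, right row at x ~ +0.5 m,
# with a few occluding-leaf outliers on the left row.
y = np.linspace(0.0, 5.0, 40)
left = np.column_stack([-0.5 + rng.normal(0.0, 0.03, 40), y])
right = np.column_stack([0.5 + rng.normal(0.0, 0.03, 40), y])
left = np.vstack([left, [[-0.2, 1.0], [-0.15, 2.5], [-0.25, 4.0]]])

def ransac_line(pts, iters=300, thresh=0.06, seed=3):
    """Fit x = a*y + b by 2-point RANSAC; keep the model with most inliers."""
    r = np.random.default_rng(seed)
    best, best_n = (0.0, float(pts[:, 0].mean())), -1
    for _ in range(iters):
        i, j = r.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(y2 - y1) < 1e-6:
            continue  # degenerate sample, cannot define the line
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        n = int((np.abs(pts[:, 0] - (a * pts[:, 1] + b)) < thresh).sum())
        if n > best_n:
            best_n, best = n, (a, b)
    return best

aL, bL = ransac_line(left)
aR, bR = ransac_line(right)
# Navigation line = midline between the two row fits; its angle to the
# aisle axis is the heading (angle) error the paper evaluates.
a_mid, b_mid = (aL + aR) / 2.0, (bL + bR) / 2.0
angle_deg = float(np.degrees(np.arctan(a_mid)))
```

Because RANSAC scores models by inlier count, the three leaf outliers are simply voted down, which is why a row fit stays stable under partial occlusion.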

Cite this article

王杰,陈正伟,徐照胜,黄滋栋,经俊森,牛润新.基于相机与激光雷达融合的温室机器人行间导航方法[J].农业机械学报,2023,54(3):32-40. WANG Jie, CHEN Zhengwei, XU Zhaosheng, HUANG Zidong, JING Junsen, NIU Runxin. Inter-rows Navigation Method of Greenhouse Robot Based on Fusion of Camera and LiDAR[J]. Transactions of the Chinese Society for Agricultural Machinery,2023,54(3):32-40.

History
  • Received: 2022-04-20
  • Published online: 2023-03-10