Navigation Path Recognition between Dragon Orchard Using Improved DeepLabv3+ Network

Authors: ZHOU Xuecheng, XIAO Mingwei, LIANG Yingkai, SHANG Fengnan, CHEN Qiao, LUO Chendi

Fund Project: National Key Research and Development Program of China (2017YFD0700602)
Abstract:

Visual navigation has the advantages of low cost, wide applicability, and a high degree of intelligence, so it is widely used in orchard navigation tasks, and recognizing the navigation path quickly and accurately is the key step in achieving it. Aiming at the numerous interference factors, complex image backgrounds, and difficulty of deploying complex models that visual navigation systems face in dragon fruit orchard environments, a visual navigation path recognition method for dragon fruit orchards based on an improved DeepLabv3+ network was proposed. First, the Xception backbone feature extraction network of the traditional DeepLabv3+ was replaced with MobileNetV2, and the atrous convolutions in the atrous spatial pyramid pooling (ASPP) module were replaced with depthwise separable convolutions (DSC), raising the detection rate while greatly reducing the number of parameters and the memory footprint. Second, coordinate attention (CA) was introduced at the feature extraction module, which helped the model locate and identify road areas. Experiments were then conducted on a self-built dragon fruit orchard road dataset covering three different road conditions. Compared with the traditional DeepLabv3+, the MIoU and MPA of the improved DeepLabv3+ increased by 0.79 and 0.41 percentage points, reaching 95.80% and 97.86%, respectively; the frame rate rose to 57.89 f/s, and the number of parameters and the memory footprint were reduced by 92.92% and 97.00%, to 3.87×10⁶ and 15.0 MB, respectively. Recognition results on the test set showed that the model had good robustness and resistance to interference. Compared with the Pspnet and U-net networks, the improved model offered clear advantages in detection rate, number of parameters, and model size, with a memory footprint 91.57% and 91.02% smaller, respectively, making it better suited for deployment on embedded devices. Based on the segmentation results, the edge information on both sides of the road was extracted, the road boundary lines were fitted by the least squares method, and the navigation path was then extracted with an angle bisector fitting algorithm. Navigation path recognition accuracy was tested in the three road environments: the average pixel error was 22 pixels and the average distance error was 7.58 cm; with a known road width of 3 m, the average distance error was only 2.53%. The method can therefore provide an effective reference for visual navigation tasks in dragon fruit orchards.
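The paper gives the architecture changes in prose only. The following is a minimal PyTorch sketch of the two building blocks the abstract names: an ASPP branch whose atrous convolution is replaced by a depthwise separable convolution, and a coordinate attention block. Module names, channel widths, and the reduction ratio are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DSConvBranch(nn.Module):
    """One ASPP branch: a 3x3 atrous depthwise conv followed by a 1x1
    pointwise conv, standing in for the standard atrous convolution."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

class CoordinateAttention(nn.Module):
    """Coordinate attention: global pooling is factorized into per-axis
    (H and W) pooling so the attention map keeps positional information."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Pool along each spatial axis separately: (n,c,h,1) and (n,c,w,1).
        x_h = x.mean(dim=3, keepdim=True)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n,c,h,1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n,c,1,w)
        return x * a_h * a_w

# Example: CA on a MobileNetV2-like 320-channel feature map,
# then one DSC branch at dilation rate 6 (rates are assumptions).
x = torch.randn(1, 320, 32, 32)
out = DSConvBranch(320, 256, dilation=6)(CoordinateAttention(320)(x))
```

In a DeepLabv3+-style head, several such branches would run in parallel at different dilation rates, with CA applied to the backbone features before fusion.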

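The reported MIoU and MPA are the standard semantic segmentation metrics. As a quick reference, here is a minimal NumPy sketch of how they are computed from a confusion matrix; the two-class setup (road and background) is an assumption for illustration.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes=2):
    """Confusion matrix from integer label maps (rows: ground truth,
    columns: prediction)."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_mpa(cm):
    """Mean intersection-over-union and mean pixel accuracy."""
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)  # per-class IoU
    pa = tp / cm.sum(axis=1)                           # per-class accuracy
    return np.nanmean(iou), np.nanmean(pa)
```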
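The path-extraction step is described only at a high level: extract the two road edges from the segmentation mask, fit each with least squares, and take the angle bisector of the two boundary lines as the navigation path. Below is a minimal NumPy sketch of that pipeline under common image conventions (row index y grows downward, x is the column); the function names and the x = a·y + b line parameterization are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_boundaries(mask):
    """Least-squares lines x = a*y + b through the left and right road
    edges of a binary road mask (H x W, road pixels nonzero)."""
    ys, left_x, right_x = [], [], []
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(mask[y])
        if cols.size:
            ys.append(y)
            left_x.append(cols[0])
            right_x.append(cols[-1])
    ys = np.asarray(ys, dtype=float)
    # Fit x as a function of y: road edges are near-vertical in the image,
    # so x(y) is well-posed where y(x) would not be.
    aL, bL = np.polyfit(ys, np.asarray(left_x, dtype=float), 1)
    aR, bR = np.polyfit(ys, np.asarray(right_x, dtype=float), 1)
    return (aL, bL), (aR, bR)

def angle_bisector(lineL, lineR):
    """Navigation line as the angle bisector of the two boundary lines,
    returned in the same x = a*y + b form."""
    (aL, bL), (aR, bR) = lineL, lineR
    if abs(aL - aR) < 1e-9:
        # Parallel edges: the bisector degenerates to the midline.
        return aL, (bL + bR) / 2.0
    # Intersection of the two edges (the road's vanishing point).
    y0 = (bR - bL) / (aL - aR)
    x0 = aL * y0 + bL
    # Unit direction vectors (dx, dy), both pointing down the image.
    dL = np.array([aL, 1.0]) / np.hypot(aL, 1.0)
    dR = np.array([aR, 1.0]) / np.hypot(aR, 1.0)
    d = dL + dR                  # bisector direction
    a = d[0] / d[1]              # slope dx/dy
    return a, x0 - a * y0        # line through the intersection point
```

Because the road edges converge toward a vanishing point under perspective, the bisector through their intersection tracks the road centre better than the simple midline of the two fits; swapping np.polyfit for a RANSAC fit would add robustness to mask noise.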

Cite this article:

ZHOU Xuecheng, XIAO Mingwei, LIANG Yingkai, SHANG Fengnan, CHEN Qiao, LUO Chendi. Navigation Path Recognition between Dragon Orchard Using Improved DeepLabv3+ Network[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(9): 35-43.

History
  • Received: 2023-02-24
  • Published online: 2023-09-10