Image Segmentation and Pose Estimation Method for Pitaya Picking Robot Based on Enhanced U-Net
Author: ZHU Lixue, LAI Yingjie, ZHANG Shiang, WU Rongda, DENG Wenqian, GUO Xiaogeng
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

Guangdong Provincial Agricultural Science and Technology Innovation "Open Competition" Program (2022SDZG03-5) and Research Project of Lingnan Modern Agriculture Laboratory (NZ2021038)



Abstract:

In order to achieve automation of pitaya harvesting, an image segmentation and pose estimation method for pitaya based on an improved U-Net was proposed. Firstly, a concurrent spatial and channel squeeze and channel excitation (SCSE) module was introduced into the skip connections of the U-Net model (the connections between encoder and decoder feature maps), and the SCSE module was also integrated into the double residual block (DRB), enhancing the network's ability to extract effective features while improving its convergence speed and yielding a pitaya image segmentation network based on an attention residual U-Net. This network was used to segment mask images of the fruit and its accompanying branch, and image processing techniques combined with the camera imaging model were used to fit the contours, fruit centroid, minimum bounding rectangle, and three-dimensional bounding box of the fruit and its accompanying branch. The three-dimensional pose of the pitaya was then estimated from the positional relationship between the fruit and its accompanying branch. A test set was collected in pitaya plantations to evaluate the performance of the algorithm, and field picking experiments were finally conducted in a natural orchard environment. The experimental results showed that the mean intersection over union (mIoU) and mean pixel accuracy (mPA) of pitaya fruit image segmentation reached 86.69% and 93.89%, respectively, the average error of three-dimensional pose estimation was 8.8°, the picking success rate of the pitaya picking robot in the orchard environment was 86.7%, and the average picking time was 22.3 s. The results indicated that the method can provide technical support for developing an intelligent pitaya picking robot to achieve automated and precise picking.
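Only the abstract is available on this page, so the exact architecture of the improved network cannot be reproduced from it. As a rough illustration of the components it names, the PyTorch sketch below implements a standard SCSE module (concurrent spatial and channel squeeze and excitation) and one plausible way of wrapping it around a double-convolution residual block; the channel sizes, layer ordering, and the internals of the DRB are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze & excitation (standard formulation)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel excitation (cSE): global context -> per-channel gating weights.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, max(channels // reduction, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(channels // reduction, 1), channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial excitation (sSE): a 1x1 conv -> per-pixel gating map.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate along channels and along space, then sum the two branches.
        return x * self.cse(x) + x * self.sse(x)


class DRBWithSCSE(nn.Module):
    """Hypothetical double residual block with SCSE recalibration (layout assumed)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = (
            nn.Identity() if in_ch == out_ch
            else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        )
        self.scse = SCSE(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual sum followed by attention-based feature recalibration.
        return self.scse(torch.relu(self.body(x) + self.shortcut(x)))
```

On the decoder side, the abstract's "SCSE in the skip connection" can be read as recalibrating the encoder feature map before concatenation, e.g. `torch.cat([upsampled, scse(skip)], dim=1)` with an SCSE module matching the skip tensor's channel count.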

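The abstract also describes recovering the contour, centroid, minimum bounding rectangle, and a three-dimensional box from the predicted masks by image processing and the camera imaging model. The authors' geometry pipeline is not given on this page; the snippet below is only a minimal sketch of that kind of step using OpenCV and a pinhole intrinsic matrix, where `fruit_geometry`, the registered depth map, and the intrinsics `K` are hypothetical names and inputs.

```python
import cv2
import numpy as np


def fruit_geometry(mask: np.ndarray, depth: np.ndarray, K: np.ndarray):
    """Hypothetical helper: largest contour, centroid, minimum-area rectangle,
    and the centroid's camera-frame 3D coordinates, from a binary fruit mask,
    a registered metric depth map, and a 3x3 pinhole intrinsic matrix K."""
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)

    # Centroid of the fruit region from image moments.
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Minimum-area (rotated) bounding rectangle: ((cx, cy), (w, h), angle).
    rect = cv2.minAreaRect(contour)

    # Back-project the centroid with the pinhole model: X = (u - cx) * Z / fx, etc.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    Z = float(depth[int(v), int(u)])
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return contour, (u, v), rect, np.array([X, Y, Z])
```

Applying the same helper to the accompanying-branch mask and taking the direction between the two 3D centroids (or box centers) is one simple reading of "pose from the positional relationship between fruit and branch"; the paper itself may use a different construction.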
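The reported segmentation scores are mIoU and mPA. These are standard metrics; the sketch below shows one conventional way to compute them from per-pixel class-index arrays, not the paper's own evaluation script.

```python
import numpy as np


def miou_mpa(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """Mean IoU and mean pixel accuracy from per-pixel class-index arrays."""
    idx = num_classes * gt.astype(np.int64).ravel() + pred.astype(np.int64).ravel()
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    cm = np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(np.float64)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp + 1e-10)  # per-class IoU
    pa = tp / (cm.sum(axis=1) + 1e-10)                         # per-class pixel accuracy
    return iou.mean(), pa.mean()
```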
Cite this article

ZHU Lixue, LAI Yingjie, ZHANG Shiang, WU Rongda, DENG Wenqian, GUO Xiaogeng. Image Segmentation and Pose Estimation Method for Pitaya Picking Robot Based on Enhanced U-Net[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(11): 180-188.

History
  • Received: 2023-08-03
  • Last revised:
  • Accepted:
  • Published online: 2023-11-10
  • Publication date: