Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m

Authors: YAN Bin, FAN Pan, WANG Meirong, SHI Shuaiqi, LEI Xiaoyan, YANG Fuzeng

Fund Project: Shaanxi Province Science and Technology Major Project (2020zdzx03-04-01)

Abstract:

In order to accurately identify different apple targets on trees, distinguish fruits occluded by branches in different ways, and thereby provide visual guidance for a picking manipulator to actively adjust its pose and avoid branch occlusion when grasping fruit, a real-time recognition method for apple picking patterns based on an improved YOLOv5m was proposed for picking robots. First, a BottleneckCSP-B feature extraction module was designed to replace the BottleneckCSP module in the backbone of the original YOLOv5m network, which enhanced the deep feature extraction capability of the original module and made the backbone more lightweight. Second, an SE (squeeze-and-excitation) module was embedded in the improved backbone to better extract the features of different apple targets. Third, the skip-connection fusion mode of the feature maps fed into the medium-size target detection layer of the original YOLOv5m architecture was modified, which improved fruit recognition accuracy. Finally, the initial anchor box sizes of the network were revised to avoid recognizing apples in farther planting rows. The experimental results showed that the proposed model could identify directly graspable, circuitously graspable (up-, down-, left- and right-graspable) and ungraspable apples, with a recall, precision, mAP and F1 score of 85.9%, 81.0%, 80.7% and 83.4%, respectively, and an average recognition time of 0.025 s per image. Compared with the original YOLOv5m, YOLOv3 and EfficientDet-D0 on the test set over the six apple picking patterns, the mAP of the proposed model was higher by 5.4, 22 and 20.6 percentage points, respectively. The size of the improved model was 89.59% of that of the original YOLOv5m. The method can provide technical support for a robotic picking end-effector to pick apples in different poses while actively avoiding branch occlusion, thereby reducing apple picking losses.
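The following is a minimal PyTorch sketch of a standard squeeze-and-excitation (SE) block of the kind the abstract describes embedding in the improved backbone; the class name, reduction ratio and the stage it is attached to are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical illustration: a standard SE block that reweights feature maps
# channel-wise, as described for the improved YOLOv5m backbone.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):  # reduction=16 is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn per-channel gating weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)           # (B, C)
        w = self.fc(w).view(b, c, 1, 1)       # per-channel weights in (0, 1)
        return x * w                          # recalibrate the feature maps


if __name__ == "__main__":
    feats = torch.randn(1, 256, 40, 40)       # e.g. output of one backbone stage
    print(SEBlock(256)(feats).shape)          # torch.Size([1, 256, 40, 40])
```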

Cite this article

YAN Bin, FAN Pan, WANG Meirong, SHI Shuaiqi, LEI Xiaoyan, YANG Fuzeng. Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(9): 28-38, 59.

History
  • Received: 2022-04-01
  • Published online: 2022-09-10