Research on Lightweight Image Segmentation Model for Grain Tank of an Unmanned Grain Cart in Rice Harvesting

Authors: ZHAO Runmao, HUANG Jiatao, MAN Zhongxian, LUO Xiwen, HU Lian, HE Jie, WANG Pei, HUANG Peikui

Fund Project: National Key Research and Development Program of China (2021YFD2000600); Open Project of the National Key Laboratory of Agricultural Equipment Technology (South China Agricultural University) (SKLAET-202404)


Abstract:

When an unmanned rice harvester unloads grain into a grain cart, the position of the unloading arm is currently controlled using the BeiDou positioning information of the harvester and the cart, and targeting accuracy is therefore difficult to guarantee. To address this problem, a visual segmentation model for grain tank images, GTSM, was proposed to provide position reference information for targeting the unloading arm. Based on the DeepLabv3+ architecture, the Xception backbone was replaced with the lightweight ShuffleNetv2, and the atrous convolutions in the ASPP module were replaced with depthwise separable convolutions that were further low-rank decomposed into micro-factorized convolutions, reducing model complexity and improving running speed. An SE channel attention mechanism was also introduced in the shallow feature branch to strengthen the model's use of low-level features such as grain tank edges and textures. Experimental results showed that GTSM achieved a mean intersection over union (mIoU) of 96.06% and a mean pixel accuracy (mPA) of 98.69%, improvements of 0.78 and 0.67 percentage points, respectively, over the baseline DeepLabv3+. Model complexity was also reduced substantially: the parameter count and memory footprint dropped to 1/9 of the original, and inference speed increased by 166%. These results demonstrate that the proposed GTSM balances segmentation accuracy and inference speed and can serve as a reference for automated grain tank segmentation for grain carts in the field.
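To make the described architecture concrete, the following is a minimal PyTorch sketch of a GTSM-style network assembled from the components named in the abstract: a ShuffleNetv2 backbone in place of Xception, an ASPP whose atrous branches use depthwise separable convolutions with a low-rank factorized pointwise step (standing in for the micro-factorized convolution), and SE channel attention on the shallow feature branch. Module names, channel widths, the rank choice, the atrous rates (6, 12, 18), and the decoder layout are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import shufflenet_v2_x1_0


class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention for the shallow (low-level) feature branch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling -> (B, C)
        return x * w[:, :, None, None]         # excite: per-channel reweighting


class FactorizedDWConv(nn.Module):
    """Depthwise separable (atrous) conv whose pointwise step is split into two
    low-rank 1x1 convs, standing in for the micro-factorized convolution."""
    def __init__(self, in_ch, out_ch, dilation, rank=None):
        super().__init__()
        rank = rank or max(out_ch // 4, 8)     # assumed low-rank width
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, rank, 1, bias=False),
            nn.Conv2d(rank, out_ch, 1, bias=False))
        self.bn_act = nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.bn_act(self.pointwise(self.depthwise(x)))


class GTSM(nn.Module):
    """DeepLabv3+-style encoder-decoder: ShuffleNetv2 backbone, lightweight ASPP,
    SE attention on the low-level branch, two-class (background / grain tank) output."""
    def __init__(self, num_classes=2, aspp_ch=128):
        super().__init__()
        net = shufflenet_v2_x1_0()                        # stage2 -> 116 ch, stage4 -> 464 ch
        self.stem = nn.Sequential(net.conv1, net.maxpool)
        self.stage2, self.stage3, self.stage4 = net.stage2, net.stage3, net.stage4
        self.aspp = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(464, aspp_ch, 1, bias=False),
                           nn.BatchNorm2d(aspp_ch), nn.ReLU(inplace=True))]
            + [FactorizedDWConv(464, aspp_ch, d) for d in (6, 12, 18)])
        self.project = nn.Sequential(nn.Conv2d(4 * aspp_ch, aspp_ch, 1, bias=False),
                                     nn.BatchNorm2d(aspp_ch), nn.ReLU(inplace=True))
        self.low_se = SEBlock(116)
        self.low_proj = nn.Conv2d(116, 48, 1, bias=False)
        self.decoder = nn.Sequential(
            FactorizedDWConv(aspp_ch + 48, aspp_ch, dilation=1),
            nn.Conv2d(aspp_ch, num_classes, 1))

    def forward(self, x):
        size = x.shape[-2:]
        low = self.stage2(self.stem(x))                   # shallow features (edges, textures)
        high = self.stage4(self.stage3(low))              # deep semantic features
        feats = self.project(torch.cat([m(high) for m in self.aspp], dim=1))
        feats = F.interpolate(feats, size=low.shape[-2:],
                              mode="bilinear", align_corners=False)
        low = self.low_proj(self.low_se(low))             # SE-weighted shallow branch
        out = self.decoder(torch.cat([feats, low], dim=1))
        return F.interpolate(out, size=size, mode="bilinear", align_corners=False)


# Example: a 512x512 RGB frame of the grain cart -> per-pixel class scores
if __name__ == "__main__":
    scores = GTSM()(torch.randn(1, 3, 512, 512))
    print(scores.shape)                                   # torch.Size([1, 2, 512, 512])
```

In practice the backbone would be initialized from ImageNet-pretrained ShuffleNetv2 weights and the whole network trained on annotated grain tank masks; those training details are outside the scope of this sketch.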
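The reported mIoU and mPA follow the standard definitions over a per-class pixel confusion matrix; the short sketch below illustrates them for a two-class (background vs. grain tank) task. It is a generic illustration with a hypothetical confusion matrix, not the authors' evaluation code.

```python
import numpy as np

def miou_mpa(conf: np.ndarray):
    """conf[i, j] = number of pixels with ground-truth class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)   # per-class intersection over union
    pa = tp / conf.sum(axis=1)                               # per-class pixel accuracy
    return iou.mean(), pa.mean()                             # mIoU, mPA

# Hypothetical 2x2 confusion matrix (rows: ground truth, columns: prediction)
conf = np.array([[96000, 1200],
                 [800, 52000]])
print(miou_mpa(conf))
```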

Cite this article:

ZHAO Runmao, HUANG Jiatao, MAN Zhongxian, LUO Xiwen, HU Lian, HE Jie, WANG Pei, HUANG Peikui. Research on Lightweight Image Segmentation Model for Grain Tank of an Unmanned Grain Cart in Rice Harvesting[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(6): 196-204.

History
  • Received: 2025-05-03
  • Published online: 2025-06-10