Lettuce Phenotype Estimation Using Integrated RGB-Depth Image Synergy

Authors: LU Shenglian, LI Yiyang, LI Guo, JIA Xiaoze, JU Qingqing, QIAN Tingting

Fund projects: National Natural Science Foundation of China (61762013), Shanghai Agricultural Science and Technology Innovation Project (2023-02-08-00-12-F04621), and Open Project of the Key Laboratory of Smart Agricultural Technology in the Yangtze River Delta, Ministry of Agriculture and Rural Affairs (KSAT-YRD2023011)

Abstract:

Accurate, automated measurement of phenotypic traits during plant growth is of great significance for applications such as breeding and cultivation. To meet the need for non-destructive, precise detection of phenotypic traits in factory-grown lettuce, RGB and depth images captured by a depth camera were fused: an improved DeepLabv3+ model segmented the images, and a dual-modal regression network estimated the phenotypic traits of the lettuce. In the segmentation model, the Xception backbone was replaced with MobileViTv2 to strengthen global perception and overall performance; in the regression network, a convolutional multi-modal feature fusion module (CMMCM) was proposed for estimating the phenotypic traits. Experimental results on a public dataset covering four lettuce varieties showed that the method estimated five phenotypic traits (fresh weight, dry weight, canopy diameter, leaf area, and plant height) with coefficients of determination of 0.922 2, 0.931 4, 0.862 0, 0.935 9, and 0.887 5, respectively. Compared with ResNet-10 (dual-modal), the RGB-depth baseline for phenotypic parameter estimation without the CMMCM and SE modules, the improved model raised the coefficients of determination by 2.54%, 2.54%, 1.48%, 2.99%, and 4.88%, respectively, while detection took 44.8 ms per image. These results indicate that the method offers high accuracy and real-time performance for non-destructive extraction of lettuce phenotypic traits through dual-modal image fusion.
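The abstract describes the pipeline only at a high level, so below is a minimal PyTorch sketch of the general idea, assuming illustrative module names (SEBlock, ConvFusionBlock, DualModalRegressor) and layer sizes: two convolutional branches encode the segmented RGB image and the aligned depth map, a 1×1 convolutional fusion block with SE-style channel attention merges the two feature streams, and a regression head predicts the five traits. This is not the paper's CMMCM implementation or its ResNet-10 backbone; it only illustrates the dual-modal fusion-and-regression structure the abstract refers to.

```python
# Hypothetical sketch of a dual-branch RGB-depth regression network.
# Module names and layer widths are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> channel weights
        return x * w[:, :, None, None]    # re-weight feature maps


class ConvFusionBlock(nn.Module):
    """Fuse RGB and depth feature maps with a 1x1 convolution plus SE attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock(channels)

    def forward(self, rgb_feat, depth_feat):
        return self.se(self.fuse(torch.cat([rgb_feat, depth_feat], dim=1)))


class DualModalRegressor(nn.Module):
    """RGB branch (3 ch) + depth branch (1 ch) -> fusion -> 5 phenotypic traits."""
    def __init__(self, width: int = 64, n_traits: int = 5):
        super().__init__()

        def branch(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )

        self.rgb_branch = branch(3)
        self.depth_branch = branch(1)
        self.fusion = ConvFusionBlock(width)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(width, n_traits),  # fresh weight, dry weight, canopy diameter, leaf area, height
        )

    def forward(self, rgb, depth):
        return self.head(self.fusion(self.rgb_branch(rgb), self.depth_branch(depth)))


if __name__ == "__main__":
    model = DualModalRegressor()
    rgb = torch.randn(2, 3, 256, 256)    # segmented RGB crop
    depth = torch.randn(2, 1, 256, 256)  # aligned depth map
    print(model(rgb, depth).shape)       # torch.Size([2, 5])
```

In this kind of design, concatenating the RGB and depth feature maps before a 1×1 convolution lets the network learn per-channel mixing weights, and the SE block then re-weights the fused channels; this is one common way to combine complementary modalities for regression.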

Citation:

LU Shenglian, LI Yiyang, LI Guo, JIA Xiaoze, JU Qingqing, QIAN Tingting. Lettuce Phenotype Estimation Using Integrated RGB-Depth Image Synergy[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(1): 84-91, 101.

History
  • Received: 2024-10-31
  • Published online: 2025-01-10