Automatic Identification and Counting Method of Caged Hens and Eggs Based on Improved YOLO v7

Authors: ZHAO Chunjiang, LIANG Xuewen, YU Helong, WANG Haifeng, FAN Shijie, LI Bin

Funding: Beijing Pinggu District Doctoral Farm Project; National Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2021ZD0113804); Beijing Academy of Agriculture and Forestry Sciences Reform and Development Special Project; Beijing Academy of Agriculture and Forestry Sciences Research Innovation Platform Construction Project (PT2022-34); Beijing Postdoctoral Research Foundation Project (2022-ZZ-18)


Abstract:

In the cage-rearing mode, the culling and death of laying hens change the number of hens and eggs in each cage, so the per-cage count of laying hens needs to be updated in a timely manner; automatic identification and counting of hens and eggs also plays an important role in identifying low-productivity hens and in the intelligent management of henhouses. Traditional machine vision methods recognize poultry by morphology or color, but their detection accuracy is low in complex scenarios such as uneven lighting in the house, hens occluded by cage wire, and adhering eggs. Therefore, based on deep learning and image processing, a lightweight network named YOLO v7-tiny-DO was proposed on the basis of YOLO v7-tiny for hen and egg detection, and an automated per-cage counting method was designed.

First, a JRWT1412 distortion-free camera and inspection equipment were used to build an automated data acquisition platform, and a total of 2146 images of caged hens and eggs were acquired as the dataset. Then the exponential linear unit (ELU) activation function was applied to the YOLO v7-tiny network to reduce training time; the regular convolutions in the efficient layer aggregation network (ELAN) were replaced with depthwise convolutions to reduce the number of model parameters, and on this basis a depthwise over-parameterized depthwise convolutional layer (DO-DConv) was constructed by adding a depthwise over-parameterizing component (itself a depthwise convolution) to extract deep features of hens and eggs. At the same time, a coordinate attention mechanism (CoordAtt) was embedded into the feature fusion module to improve the model's perception of the spatial location of hens and eggs.

The results showed that the average precision (AP) of YOLO v7-tiny-DO was 96.9% for hens and 99.3% for eggs, 3.2 and 1.4 percentage points higher, respectively, than that of YOLO v7-tiny. The improved model occupied 5.6 MB of memory, 6.1 MB less than the original model, making it suitable for deployment on inspection robots with limited computing power. YOLO v7-tiny-DO achieved high-precision detection and localization under partial occlusion, motion blur, and egg adhesion, and outperformed the other models in dim lighting, showing strong robustness. Its F1 scores were 97.0% for hens and 98.4% for eggs; compared with the mainstream object detection networks Faster R-CNN, SSD, YOLO v4-tiny, and YOLO v5n, the F1 score for hens was 21.0, 4.0, 8.0, and 1.5 percentage points higher, respectively, the F1 score for eggs was 31.4, 25.4, 6.4, and 4.4 percentage points higher, respectively, and the frame rate was 95.2, 34.8, 18.4, and 8.4 f/s higher, respectively.

Finally, the algorithm was deployed on an NVIDIA Jetson AGX Xavier edge computing device, and 30 cages were selected for a counting test in a real production scenario over three consecutive days. Across the three test batches, the mean counting accuracy was 96.7% for hens and 96.3% for eggs, and the mean absolute error per cage was 0.13 hens and 0.09 eggs, respectively, which can provide a reference for the intelligent management of large-scale farms.
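The abstract names the DO-DConv construction but gives no implementation; the PyTorch sketch below shows one plausible reading, assuming the layer follows the DO-Conv idea of Cao et al.: a base depthwise kernel W is over-parameterized during training by a per-channel linear factor D (the extra "depthwise component"), and the two are folded into a single depthwise kernel before convolving, so run-time cost matches a plain depthwise convolution. The class name `DODConv`, the identity initialization of `D`, and the BN + ELU usage example are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DODConv(nn.Module):
    """Sketch of a depthwise over-parameterized depthwise convolution.

    Training carries two factors per channel: a k*k base kernel w_c and a
    (k*k) x (k*k) matrix D_c. They fold into one depthwise kernel
    w'_c = D_c^T w_c, so inference is an ordinary depthwise convolution.
    """

    def __init__(self, channels: int, k: int = 3, stride: int = 1):
        super().__init__()
        self.channels, self.k, self.stride = channels, k, stride
        # Base depthwise kernel W: one k*k filter per channel, stored flat.
        self.W = nn.Parameter(torch.empty(channels, k * k))
        nn.init.kaiming_uniform_(self.W, a=5 ** 0.5)
        # Over-parameterizing factor D, initialized to identity so training
        # starts exactly at the plain depthwise convolution (an assumption).
        self.D = nn.Parameter(torch.eye(k * k).repeat(channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fold the factors: w'_c[i] = sum_j D_c[j, i] * W_c[j].
        w = torch.einsum('cji,cj->ci', self.D, self.W)
        w = w.view(self.channels, 1, self.k, self.k)
        return F.conv2d(x, w, stride=self.stride,
                        padding=self.k // 2, groups=self.channels)


# Hypothetical ELAN-style sub-block pairing DO-DConv with batch norm and the
# ELU activation that the paper swaps in for the network's default activation.
block = nn.Sequential(DODConv(64), nn.BatchNorm2d(64), nn.ELU())
print(block(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])
```

Because the extra factor folds away, the over-parameterization adds training capacity without adding inference-time parameters, consistent with the small 5.6 MB deployed model.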
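CoordAtt itself is a published module (Hou et al., 2021), so its structure can be sketched with more confidence: global pooling is factorized into two directional poolings, one along width and one along height, so the resulting attention weights retain positional information along each axis. This is what the abstract refers to as improving the model's perception of spatial location. A condensed version, keeping the original's h-swish activation, is given below.

```python
import torch
import torch.nn as nn


class CoordAtt(nn.Module):
    """Condensed coordinate attention: directional average pooling keeps the
    positional information that plain global pooling (as in SE blocks) loses."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                  # pool along width  -> (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).transpose(2, 3)  # pool along height -> (n, c, w, 1)
        # Shared 1x1 transform over the concatenated directional descriptors.
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))  # (n, c, 1, w)
        return x * a_h * a_w  # broadcast attention over both spatial axes
```

In the improved network this module sits in the feature fusion (neck) stage; exactly which fusion layers receive it is detailed in the paper body, not the abstract.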
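The abstract reports per-cage counting accuracy and mean absolute error without defining them; the sketch below uses assumed, conventional definitions (accuracy as one minus the relative count error, averaged over cages; MAE as the mean absolute per-cage error) to make the reported figures concrete. The function names and example counts are hypothetical.

```python
from typing import Sequence


def counting_accuracy(pred: Sequence[int], true: Sequence[int]) -> float:
    """Mean per-cage counting accuracy (assumed form: 1 - |error| / truth).

    Cages with a ground-truth count of zero would need special handling.
    """
    return sum(1 - abs(p - t) / t for p, t in zip(pred, true)) / len(true)


def mean_absolute_error(pred: Sequence[int], true: Sequence[int]) -> float:
    """Mean absolute per-cage count error (e.g. 0.13 hens per cage)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)


# Hypothetical example: detector counts vs. ground truth for three cages.
pred_hens, true_hens = [4, 5, 3], [4, 5, 4]
print(f"accuracy: {counting_accuracy(pred_hens, true_hens):.3f}")   # 0.917
print(f"MAE:      {mean_absolute_error(pred_hens, true_hens):.3f}") # 0.333
```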

Cite this article:

ZHAO Chunjiang, LIANG Xuewen, YU Helong, WANG Haifeng, FAN Shijie, LI Bin. Automatic Identification and Counting Method of Caged Hens and Eggs Based on Improved YOLO v7[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(7): 300-312.

History:
  • Received: 2022-12-01
  • Published online: 2023-07-10