Multi-task Visual Perception Method in Dragon Fruit Orchards Based on OrchardYOLOP
Abstract:

In the face of challenges such as complex terrain, fluctuating lighting, and unstructured environments, modern orchard robots must efficiently process a vast amount of environmental information. Traditional algorithms that execute multiple single tasks sequentially are limited by computational power and cannot meet these demands. To address the requirements for real-time performance and accuracy in multi-task autonomous driving robots within dragon fruit orchard environments, this work builds upon YOLOP: a focus attention convolution module was introduced, C2F and SPPF modules were employed, and the loss function for the segmentation tasks was optimized, culminating in OrchardYOLOP. Experiments demonstrated that OrchardYOLOP achieved a precision of 84.1% on the target detection task, an mIoU of 89.7% on the drivable-area segmentation task, and an mIoU of 90.8% on the fruit-tree region segmentation task, with an inference speed of 33.33 frames per second and a parameter count of only 9.67×10⁶. Compared with the YOLOP algorithm, it not only meets the real-time requirement in terms of speed but also significantly improves accuracy, addressing key issues in multi-task visual perception in dragon fruit orchards and providing an effective solution for multi-task autonomous driving visual perception in unstructured environments.
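The abstract names the C2F and SPPF modules adopted into the YOLOP backbone. For reference, below is a minimal PyTorch sketch of these two blocks as they are conventionally defined in the YOLO family (YOLOv8-style); the exact variants used in OrchardYOLOP may differ, and all class names and hyperparameters here are illustrative rather than taken from the paper's code.

```python
# Sketch of YOLOv8-style SPPF and C2f blocks; assumed, not the paper's code.
import torch
import torch.nn as nn


class Conv(nn.Module):
    """Standard convolution block: Conv2d -> BatchNorm -> SiLU."""

    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class SPPF(nn.Module):
    """Spatial Pyramid Pooling - Fast: three chained max-pools whose
    outputs are concatenated; equivalent in effect to SPP with 5/9/13
    kernels but cheaper, since each pool reuses the previous output."""

    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = Conv(c_in, c_hidden, 1)
        self.cv2 = Conv(c_hidden * 4, c_out, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))


class Bottleneck(nn.Module):
    """Residual bottleneck used inside C2f."""

    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1 = Conv(c, c, 3)
        self.cv2 = Conv(c, c, 3)
        self.add = shortcut

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y


class C2f(nn.Module):
    """C2f: split the features, run n bottlenecks, and concatenate every
    intermediate output for richer gradient flow than the older C3 block."""

    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = Conv(c_in, 2 * self.c, 1)
        self.cv2 = Conv((2 + n) * self.c, c_out, 1)
        self.m = nn.ModuleList(Bottleneck(self.c) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))
        for m in self.m:
            y.append(m(y[-1]))
        return self.cv2(torch.cat(y, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(C2f(64, 64, n=2)(x).shape)  # torch.Size([1, 64, 80, 80])
    print(SPPF(64, 64)(x).shape)      # torch.Size([1, 64, 80, 80])
```

Both blocks preserve spatial resolution, so in a YOLOP-style multi-task network they can be dropped into the shared encoder without altering the detection and segmentation head interfaces.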

History
  • Received: June 04, 2024
  • Online: November 10, 2024