Semantic Segmentation Algorithm Based on Multi-headed Self-attention for Tea Picking Points
    Abstract:

Tea picking point localization is one of the key technologies for selective tea picking. In tea tree picking scenes, picking points are small in scale, background interference is strong, and lighting conditions are complex, which makes accurate segmentation of tea picking points difficult. A semantic segmentation model, RMHSA-NeXt, based on a multi-headed self-attention mechanism combined with multi-scale feature fusion, was constructed for accurate segmentation of picking points in tea garden scenes. An attention module based on residual connections and the multi-headed self-attention mechanism was constructed to focus the model's attention on the segmentation target and enhance the representation of important features. Features at different scales were fused by a multi-scale structure (atrous spatial pyramid pooling, ASPP), in which strip pooling was applied during fusion to suit the elongated shape of picking points and reduce useless information. Finally, the information was decoded by convolution and upsampling to obtain the segmentation results. The experimental results showed that the model could segment picking points effectively in the tea garden environment: the pixel accuracy reached 75.20%, the average region overlap was 70.78%, and the running speed reached 8.97 f/s. The model offers high accuracy, fast inference and a small number of parameters, and balances the accuracy and speed indexes well compared with other models. The research results can provide an effective and reliable reference for pinpointing tea picking points.
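The abstract names two distinctive components: an attention module combining residual connections with multi-headed self-attention, and strip pooling inside the multi-scale fusion. The following is a minimal NumPy sketch of those two operations only, with randomly initialized projection weights and illustrative tensor sizes; it is an assumption-laden illustration of the general techniques, not the authors' RMHSA-NeXt implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads=4, seed=0):
    """Residual multi-head self-attention over flattened spatial positions.

    x: (N, C) array -- a feature map flattened to N positions with C channels.
    Projection weights are random here purely for illustration.
    """
    n, c = x.shape
    assert c % num_heads == 0
    d = c // num_heads
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((c, c)) * 0.02 for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv

    def split_heads(t):  # (N, C) -> (heads, N, d)
        return t.reshape(n, num_heads, d).transpose(1, 0, 2)

    qh, kh, vh = split_heads(q), split_heads(k), split_heads(v)
    # Scaled dot-product attention per head: (heads, N, N)
    attn = softmax(qh @ kh.transpose(0, 2, 1) / np.sqrt(d), axis=-1)
    out = (attn @ vh).transpose(1, 0, 2).reshape(n, c)
    return x + out  # residual connection around the attention output

def strip_pooling(fmap):
    """Horizontal and vertical strip pooling on a (C, H, W) feature map.

    Long, narrow pooling windows capture elongated structures (such as
    tea stems) better than square pooling; the strips are broadcast back
    onto the map as a simple fusion.
    """
    horiz = fmap.mean(axis=2, keepdims=True)  # (C, H, 1): pool along width
    vert = fmap.mean(axis=1, keepdims=True)   # (C, 1, W): pool along height
    return fmap + horiz + vert

# Toy feature map: 8 channels on a 4x4 spatial grid.
C, H, W = 8, 4, 4
fmap = np.random.default_rng(1).standard_normal((C, H, W))
tokens = fmap.reshape(C, H * W).T             # (16 positions, 8 channels)
attended = multi_head_self_attention(tokens, num_heads=2)
fused = strip_pooling(fmap)
print(attended.shape, fused.shape)            # (16, 8) (8, 4, 4)
```

Shapes are preserved by both operations, so either block can be dropped into an encoder-decoder segmentation pipeline between convolution stages.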

History
  • Received:February 24,2023
  • Online: September 10,2023