Abstract: To address the challenges of accurate obstacle detection in complex cotton field environments, where occlusion is common and edge devices impose tight computational limits, a field obstacle detection method based on an improved YOLO 11n model was proposed. First, the lightweight StarNet was adopted as the backbone feature extraction network, and a dynamic position bias attention (DBA) block was introduced to reconstruct the convolutional block with parallel spatial attention (C2PSA), enhancing multi-scale feature interaction. Second, Kolmogorov-Arnold generalized network convolution (KAGNConv) replaced the bottleneck structure in the cross-stage partial module with kernel size 2 (C3k2) of the baseline model, enabling fine-grained feature extraction while improving model flexibility and interpretability. Finally, the separated and enhancement attention module (SEAM) was integrated into the detection head to strengthen detection under occlusion. Experimental results showed that, compared with the baseline model, the improved YOLO 11n-SKS increased precision, recall, mAP50, and mAP50-95 by 2.3, 2.1, 1.3, and 1.4 percentage points, reaching 91.7%, 88.3%, 91.9%, and 62.3%, respectively. The model's floating-point operations were reduced to only 4.4×10⁹ FLOPs, and the number of parameters was decreased by 17.1%. The proposed model achieves a favorable balance between detection performance and computational complexity, meeting the real-time detection requirements of cotton harvesting operations while lowering the computational demands for deployment on edge devices, thereby providing technical support for the autonomous and safe operation of cotton pickers.
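To make the occlusion-oriented head modification more concrete, the sketch below shows a simplified, SEAM-style attention block in PyTorch: a separated (depthwise then pointwise) convolution with a residual connection, followed by an exponential channel-enhancement gate applied to the features entering a detection head. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name `SEAMLite` and parameters such as `reduction` are invented for the example.

```python
# Illustrative sketch only: a simplified SEAM-style block (separated convolution
# plus exponential channel enhancement). Names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class SEAMLite(nn.Module):
    """Simplified separated-and-enhancement attention for an occlusion-aware head."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # "Separated" part: depthwise conv captures per-channel spatial responses,
        # pointwise conv then mixes information across channels.
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()
        # "Enhancement" part: squeeze to a channel descriptor, expand back,
        # and use an exponential gate to re-weight responses weakened by occlusion.
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.norm(self.pointwise(self.depthwise(x)))) + x  # residual
        gate = torch.exp(self.fc(y))   # exponential channel enhancement
        return x * gate                # re-weighted features passed to the head


if __name__ == "__main__":
    feats = torch.randn(1, 256, 40, 40)   # a feature map entering the detection head
    print(SEAMLite(256)(feats).shape)     # -> torch.Size([1, 256, 40, 40])
```

In this sketch the block is shape-preserving, so it can be dropped in front of each detection-head branch without altering the rest of the network.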