Abstract: To address the difficulty of extracting body measurement points efficiently and accurately in the automatic body size measurement of group-raised pigs, an automatic body size measurement method based on an improved YOLO v5-pose was proposed. Firstly, the convolutional block attention module (CBAM) was integrated into the YOLO v5-pose backbone network to better capture features relevant to the measurement points. Then, the conventional C3 module in the Neck layer was replaced with the lightweight C3Ghost module to reduce the number of model parameters and the memory footprint. Finally, the dynamic head (DyHead) detection head was introduced in the Head layer to strengthen the model's ability to represent the positions of the measurement points. The results showed that the average precision of the improved model was 92.6%, with 6.890×10⁶ parameters and a memory footprint of 14.1 MB. Compared with the original YOLO v5-pose model, the average precision was increased by 2.1 percentage points, while the number of parameters and the memory footprint were reduced by 2.380×10⁵ and 0.4 MB, respectively. Compared with the widely used YOLO v7-pose, YOLO v8-pose, RTMPose (real-time multi-person pose estimation based on MMPose) and CenterNet models, the proposed model achieved a higher recall rate and average precision while being more lightweight. Body size measurement experiments were conducted on a dataset of 2400 images of group-raised pigs. The average absolute errors of the body length, body width, hip width, body height and hip height measured by this method were 4.61 cm, 5.87 cm, 6.03 cm, 0.49 cm and 0.46 cm, respectively, and the average relative errors were 2.69%, 11.53%, 12.29%, 0.90% and 0.76%, respectively. In summary, the proposed method improved the detection accuracy of body size measurement points, reduced model complexity, and produced more accurate body size measurements, providing an effective technical means for the automatic body size measurement of pigs in group-raising environments.
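To illustrate the first modification named above, the following is a minimal PyTorch sketch of a CBAM block of the kind inserted into the YOLO v5-pose backbone. It is not the authors' implementation; the reduction ratio, spatial kernel size, and the test feature-map shape are illustrative assumptions.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the average-pooled and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx) * x  # channel-wise reweighting


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate channel-wise mean and max maps, then produce a spatial mask
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return mask * x


class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in the original CBAM design."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # Feature map shaped like one backbone stage output (batch, channels, H, W); shape is assumed
    feats = torch.randn(1, 256, 20, 20)
    print(CBAM(256)(feats).shape)  # torch.Size([1, 256, 20, 20])

In a YOLO v5-style configuration such a block would typically be registered as an extra module and inserted after selected backbone stages, leaving the feature-map shape unchanged so the rest of the network is unaffected.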