Abstract: The behavior of beef cattle reflects their health status, so recognizing and tracking individual cattle in a real breeding environment is essential for perceiving their behavior. Wearable devices have limited accuracy in sensing motion behavior and are easily damaged, whereas monitoring devices are widely used on farms and have a long service life. Based on an improved YOLO v3 algorithm (LSRCEM-YOLO), surveillance video was used to achieve real-time tracking of beef cattle in a real breeding environment. MobileNet v2 was adopted as the backbone network for object detection. To address the uneven spatial distribution of beef cattle and the large variation in target scale, a long-short range context enhancement module (LSRCEM) was proposed for multi-scale feature fusion, and it was combined with the Mudeep ReID model to achieve multi-target tracking of beef cattle. The experimental results showed that, for beef cattle object detection, LSRCEM-YOLO reached an mAP of 92.3% with only 10% of the parameters of YOLO v3, a parameter count also 31.34% lower than that of YOLO v3-tiny. For beef cattle re-identification (ReID), the Mudeep model with adjusted receptive fields captured richer multi-scale features and reached a Rank-1 accuracy of 96.5%. Compared with the original DeepSORT algorithm, the MOTA of multi-target tracking increased from 32.3% to 45.2%, and the number of ID switches decreased by 69.2%. This method can provide a technical reference for real-time tracking and behavior perception of beef cattle in real environments.