Research on Limb Motion Command Recognition Technology of Lifting Robot
Abstract:

To overcome the limited monitoring distance of the Kinect for limb recognition, a network camera with a large zoom range was adopted and a CNN-BP fusion network for human behavior recognition was constructed to train and recognize nine groups of robot lifting commands. First, 18 skeleton joints were extracted with OpenPose to generate an RGB skeleton map and a skeleton vector. Then, using transfer learning, the InceptionV3 network extracted deep abstract features from the skeleton map, and the training set was augmented by rotation, translation, scaling and affine transformation to prevent overfitting. A BP neural network extracted shallow geometric features, such as point, line and plane relations, from the skeleton vector. The outputs of the InceptionV3 and BP networks were fused and passed to a Softmax classifier to obtain the limb classification result. Finally, the recognition result was fed into the robot-assisted hoisting control system, where a double-verification control mode was established to complete the robot-assisted hoisting operation. Test results showed that the method maintained real-time operation, with a real-time recognition accuracy of 0.99, greatly improving long-distance human-robot interaction.
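The abstract describes a two-branch architecture: an InceptionV3 branch (transfer learning) over the rendered RGB skeleton map and a BP (fully connected) branch over the skeleton vector, fused and classified with Softmax over the nine lifting commands. The following is a minimal sketch of such a fusion model in Keras; the framework choice, input sizes, and hidden-layer widths are assumptions for illustration only, not the authors' implementation.

    # Hedged sketch of a CNN-BP fusion classifier (Keras assumed).
    # Only the 9 output classes and 18 OpenPose joints come from the abstract;
    # all other sizes are illustrative placeholders.
    import tensorflow as tf
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import InceptionV3

    NUM_CLASSES = 9      # nine robot lifting commands
    NUM_JOINTS = 18      # OpenPose skeleton joints

    # CNN branch: InceptionV3 pretrained on ImageNet, applied to the RGB skeleton map.
    image_in = layers.Input(shape=(299, 299, 3), name="skeleton_map")
    backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
    backbone.trainable = False                      # freeze during initial transfer learning
    deep_features = backbone(image_in)

    # BP branch: shallow features from the skeleton vector
    # (assumed here to be 2-D joint coordinates).
    vector_in = layers.Input(shape=(NUM_JOINTS * 2,), name="skeleton_vector")
    x = layers.Dense(128, activation="relu")(vector_in)
    shallow_features = layers.Dense(64, activation="relu")(x)

    # Fuse both branches and classify with Softmax.
    merged = layers.Concatenate()([deep_features, shallow_features])
    merged = layers.Dense(256, activation="relu")(merged)
    output = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    model = Model(inputs=[image_in, vector_in], outputs=output)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

In practice, the frozen backbone would be fine-tuned after the fusion head converges, and the data augmentation mentioned in the abstract (rotation, translation, scaling, affine) would be applied only to the skeleton-map branch.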

History
  • Received:November 16,2018
  • Online: June 10,2019