Research on Limb Motion Command Recognition Technology of Lifting Robot
Supported by: National Natural Science Foundation of China (51575219) and Fujian Province Marine Economy Innovation and Development Regional Demonstration Project (2014FJPT03)




Abstract:

    In view of the limited monitoring distance of the Kinect camera for limb recognition, a large-zoom network camera was adopted and a CNN-BP fusion network was constructed for limb motion recognition; nine groups of robot lifting commands were used for training and recognition. First, the coordinates of 18 skeleton nodes were extracted with OpenPose to generate an RGB skeleton map and a skeleton vector. Then, by transfer learning, an InceptionV3 network was used to extract deep abstract features from the RGB skeleton map, and the training set was augmented by rotation, translation, scaling, and affine transformation to enlarge the data and prevent overfitting. Next, a BP neural network was used to extract shallow features such as points, lines, and surfaces from the skeleton vector. Finally, the outputs of the InceptionV3 and BP networks were fused, and a Softmax solver produced the limb recognition result. The recognition result was then fed into the robot-assisted hoisting control system, where a double-verification control method was established to complete the robot-assisted hoisting operation. Experimental results showed that the method ensured both the accuracy and the timeliness of the model, with real-time recognition accuracy above 0.99, greatly improving long-distance human-robot interaction capability.
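The fusion step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature dimensions (2048 for the InceptionV3 branch, 64 for the BP branch), the random stand-in features, and the single dense layer are all assumptions made for the example; only the concatenation-then-Softmax structure over 9 command classes follows the abstract.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-ins for the two branch outputs (random here for illustration):
deep_feat = rng.standard_normal(2048)   # e.g. InceptionV3 global-pooled features
shallow_feat = rng.standard_normal(64)  # e.g. BP-network features from the 18-node skeleton vector

# Fusion: concatenate the two feature vectors, then one dense layer + Softmax
fused = np.concatenate([deep_feat, shallow_feat])
W = rng.standard_normal((fused.size, 9)) * 0.01  # 9 lifting commands (assumed layer shape)
b = np.zeros(9)
probs = softmax(fused @ W + b)   # class probabilities over the 9 commands
pred = int(np.argmax(probs))     # index of the recognized command
```

In practice the weights `W`, `b` would be trained jointly with both branches rather than drawn at random, and the class probabilities would feed the double-verification step of the hoisting controller.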

Cite this article:

NI Tao, ZOU Shaoyuan, LIU Haiqiang, HUANG Lingtao, CHEN Ning, ZHANG Hongyan. Research on limb motion command recognition technology of lifting robot[J]. Transactions of the Chinese Society for Agricultural Machinery, 2019, 50(6): 405-411, 426.

History
  • Received: 2018-11-16
  • Published online: 2019-06-10