OpenPose Human Pose Estimation (Body Keypoint Detection) with OpenCV

落日映苍穹つ 2021-11-11



OpenPose is an open-source human pose recognition library developed by Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning using Caffe as its framework. It can estimate body movements, facial expressions, and finger motion; it works for both single and multiple people and is highly robust. It was the world's first real-time, deep-learning-based multi-person 2D pose estimation system, and applications built on it have sprung up rapidly.

Its theoretical basis is the CVPR 2017 paper Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, by Zhe Cao (http://people.eecs.berkeley.edu/~zhecao/#top), Tomas Simon, Shih-En Wei, and Yaser Sheikh of the CMU Perceptual Computing Lab.

Human pose estimation has broad application prospects in sports and fitness, motion capture, 3D virtual fitting, and public-opinion monitoring; the application most people are familiar with is Douyin's dance-battle mini-game.

OpenPose's results here are not especially good; I strongly recommend my post "2D Pose real-time human keypoint detection (Python/Android/C++ Demo)" (2D Pose人体关键点实时检测(Python/Android /C++ Demo)_pan_jinquan的博客-CSDN博客), which provides C++ inference code and an Android demo.

Human keypoint detection also requires a person detector; see my other blog post: 2D Pose人体关键点实时检测(Python/Android /C++ Demo)_pan_jinquan的博客-CSDN博客

(Demo GIFs: real-time keypoint detection results)

OpenPose project on GitHub: https://github.com/CMU-Perceptual-Computing-Lab/openpose

OpenCV demo code: https://github.com/PanJinquan/opencv-learning-tutorials/blob/master/opencv_dnn_pro/openpose-opencv/openpose_for_image_test.py


1. How It Works

An input image is passed through a convolutional network to extract features, producing a set of feature maps. The network then splits into two branches, with separate CNNs predicting the Part Confidence Maps and the Part Affinity Fields (PAFs).

(Figure: two-branch network pipeline)
With these two outputs, bipartite matching from graph theory is used to solve the part association problem, connecting the keypoints that belong to the same person; because PAFs are vector fields, the resulting matches are highly accurate, and the parts are finally assembled into a complete skeleton for each person.
Multi-person parsing is then performed on top of the PAFs: the multi-person parsing problem is converted into a graph matching problem and solved with the Hungarian algorithm.
(The Hungarian algorithm is the most common algorithm for bipartite graph matching; its core idea is the search for augmenting paths, which it uses to find a maximum matching in a bipartite graph.)
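The assignment step above can be sketched in plain Python. The affinity numbers below are made-up illustrative values, not real PAF scores, and the greedy pairing shown is the simplification OpenPose actually uses in place of a full optimal bipartite solver:

```python
# Made-up affinity scores between 3 candidate "necks" (rows) and
# 3 candidate "left shoulders" (cols), as a {(neck, shoulder): score} dict.
affinity = {
    (0, 0): 0.9, (0, 1): 0.1, (0, 2): 0.2,
    (1, 0): 0.2, (1, 1): 0.8, (1, 2): 0.1,
    (2, 0): 0.1, (2, 1): 0.3, (2, 2): 0.7,
}

def greedy_match(scores, min_score=0.05):
    """Pair joints in order of decreasing affinity, skipping any joint
    that has already been assigned to another pair."""
    used_a, used_b, pairs = set(), set(), []
    for (a, b), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s < min_score or a in used_a or b in used_b:
            continue
        used_a.add(a)
        used_b.add(b)
        pairs.append((a, b))
    return sorted(pairs)

print(greedy_match(affinity))  # [(0, 0), (1, 1), (2, 2)]
```

Each neck ends up connected to its highest-affinity shoulder; with real PAF scores the same procedure keeps limbs from different people apart.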


2. Network Architecture

(Figure: OpenPose two-branch, multi-stage network architecture)

Stage 1: The first 10 layers of VGGNet create feature maps for the input image.

Stage 2: A two-branch, multi-stage CNN follows. The first branch predicts a set of 2D confidence maps (S) of body-part locations (e.g. elbows, knees). The figure below shows the confidence map for the left-shoulder keypoint.

(Figure: confidence map for the left shoulder)

The second branch predicts a set of 2D vector fields of part affinities (L), which encode the degree of association between parts. The figure below shows the part affinity between the neck and the left shoulder.

(Figure: part affinity field between the neck and the left shoulder)
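The association score a PAF enables can be sketched as a line integral: sample the predicted 2D vector field along the segment joining two candidate keypoints and average its dot product with the segment's unit direction. This is a simplified illustration over a synthetic field, not the paper's exact implementation:

```python
import numpy as np

def paf_score(paf_x, paf_y, p1, p2, n_samples=10):
    """Average dot product between PAF vectors sampled along the p1->p2
    segment and the segment's unit direction. Higher values mean the two
    candidate points are more likely limbs of the same person."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    norm = np.linalg.norm(d)
    if norm < 1e-6:
        return 0.0
    u = d / norm
    total = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        x, y = (p1 + t * d).astype(int)  # points are (x, y); fields are [y, x]
        total += paf_x[y, x] * u[0] + paf_y[y, x] * u[1]
    return total / n_samples

# Toy field pointing purely in +x: a horizontal candidate pair scores ~1,
# a vertical pair scores ~0.
h, w = 20, 20
paf_x, paf_y = np.ones((h, w)), np.zeros((h, w))
print(paf_score(paf_x, paf_y, (2, 5), (15, 5)))  # 1.0
print(paf_score(paf_x, paf_y, (5, 2), (5, 15)))  # 0.0
```

Because the score depends on the field's direction agreeing with the segment, a limb drawn between two different people's joints tends to score low even when both endpoints are confident detections.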

Stage 3: The confidence maps and affinity fields are parsed by greedy inference to produce the 2D keypoints of every person in the image.
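As a rough sketch of the parsing input (a minimal illustration on a toy heatmap, not the library's code): extracting all local maxima of a confidence map yields one candidate per person for each joint, whereas the OpenCV demo later in this post keeps only the single global maximum and is therefore limited to one person.

```python
import numpy as np

def find_peaks(heatmap, threshold=0.1):
    """Return (x, y, score) for every interior local maximum above
    threshold, so several people can each contribute a candidate for
    the same joint type."""
    peaks = []
    h, w = heatmap.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = heatmap[y, x]
            if v > threshold and v == heatmap[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((x, y, float(v)))
    return peaks

# Toy heatmap with two separated peaks -> two candidate joints.
hm = np.zeros((10, 10))
hm[3, 3] = 0.9
hm[7, 6] = 0.8
print(find_peaks(hm))  # [(3, 3, 0.9), (6, 7, 0.8)]
```

In practice the heatmap is smoothed first and non-maximum suppression merges adjacent peaks, but the idea is the same.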


3. OpenCV-OpenPose Inference Code

```python
# -*-coding: utf-8 -*-
"""
@Project: python-learning-notes
@File   : openpose_for_image_test.py
@Author : panjq
@E-mail : pan_jinquan@163.com
@Date   : 2019-07-29 21:50:17
"""
import os
import glob

import cv2 as cv

BODY_PARTS = {"Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
              "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
              "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
              "LEye": 15, "REar": 16, "LEar": 17, "Background": 18}

POSE_PAIRS = [["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
              ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
              ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
              ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
              ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"]]


def detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.2):
    net = cv.dnn.readNetFromTensorflow(model_path)
    frame = cv.imread(image_path)
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    scalefactor = 2.0
    net.setInput(
        cv.dnn.blobFromImage(frame, scalefactor, (inWidth, inHeight),
                             (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    # MobileNet output is [1, 57, H, W]; only the first 19 channels are keypoint heatmaps.
    out = out[:, :19, :, :]
    assert len(BODY_PARTS) == out.shape[1]
    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Ideally we would find all local maxima; to keep the sample simple we
        # take the global one, so only a single pose can be detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence exceeds the threshold.
        points.append((int(x), int(y)) if conf > threshold else None)
    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    cv.imwrite(os.path.join(out_dir, os.path.basename(image_path)), frame)
    cv.imshow('OpenPose using OpenCV', frame)
    cv.waitKey(0)


def detect_image_list_key_point(model_path, image_dir, out_dir, inWidth=480, inHeight=480, threshold=0.3):
    image_list = glob.glob(image_dir)
    for image_path in image_list:
        detect_key_point(model_path, image_path, out_dir, inWidth, inHeight, threshold)


if __name__ == "__main__":
    model_path = "pb/graph_opt.pb"
    # image_path = "body/*.jpg"
    out_dir = "result"
    # detect_image_list_key_point(model_path, image_path, out_dir)
    image_path = "./test.jpg"
    detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.05)
```

(Figure: detection result on the test image)

References:

[1]. Python+OpenCV+OpenPose实现人体姿态估计(人体关键点检测)_不脱发的程序猿-CSDN博客
