Contents

1. Introduction
2. Human Keypoint Detection Methods
(1) Top-Down method
(2) Bottom-Up method
3. Human Keypoint Detection Model Training
4. Deploying the Human Keypoint Detection Model on Android
(1) Convert the Pytorch model to an ONNX model
(2) Convert the ONNX model to a TNN model
(3) Deploy the model on Android
(4) Android test results
(5) App crashes on launch: dlopen failed: library "libomp.so" not found
5. Android Project Source Code Download
6. Human Keypoint Detection in C++
1. Introduction
Human keypoint detection (Human Keypoints Detection), also known as human pose estimation (2D Pose), is a relatively fundamental task in computer vision and serves as a prerequisite for human action recognition, behavior analysis, human-computer interaction, and similar applications. It is commonly subdivided into single-person/multi-person keypoint detection and 2D/3D keypoint detection; some algorithms also track the keypoints after detecting them, which is referred to as human pose tracking.
This project implements a human keypoint detection algorithm: YOLOv5 is used for person detection (Person Detection), and HRNet, LiteHRNet, and Mobilenet-v2 are used for human keypoint detection. To simplify later productization and Android deployment, the project supports training and testing the high-accuracy HRNet model as well as the lightweight LiteHRNet and Mobilenet models, and provides Python, C++, and Android versions. The project is split into chapters covering the dataset, model training, and C++/Android deployment. This post, part of the series "Human Keypoint Detection (Human Pose Estimation)", covers the Android implementation: it explains how to port the Python-trained model to the Android platform and build a real-time human keypoint detection Android demo that supports multi-person keypoint detection.
The lightweight Mobilenet-v2 model achieves real-time detection on an ordinary Android phone, roughly 50ms per frame on the CPU (4 threads) and roughly 30ms on the GPU, which basically meets typical performance requirements. The table below lists the compute cost, parameter count, and detection accuracy of HRNet and of the lightweight LiteHRNet and Mobilenet models.
| Model | input-size | params(M) | FLOPs(M) | AP |
|-------|------------|-----------|----------|-----|
| HRNet-w32 | 192×256 | 28.48 | 5734.05 | 0.7585 |
| LiteHRNet18 | 192×256 | 1.10 | 182.15 | 0.6237 |
| Mobilenet-v2 | 192×256 | 2.63 | 529.25 | 0.6181 |
[Please respect the original work; credit the source when reposting] https://blog.csdn.net/guyuealian/article/details/134881797
Android human keypoint detection APP demo (download): https://download.csdn.net/download/guyuealian/88610359
Android human keypoint detection APP demo (video preview omitted)
For more articles in the series "Human Keypoint Detection (Human Pose Estimation)", see:
- Human Keypoint Detection 1: Human pose estimation datasets (with download links) https://blog.csdn.net/guyuealian/article/details/134703548
- Human Keypoint Detection 2: Human keypoint detection (human pose estimation) in Pytorch, with training code and datasets https://blog.csdn.net/guyuealian/article/details/134837816
- Human Keypoint Detection 3: Human keypoint detection (human pose estimation) on Android, with source code, real-time capable https://blog.csdn.net/guyuealian/article/details/134881797
- Human Keypoint Detection 4: Human keypoint detection (human pose estimation) in C/C++, with source code, real-time capable https://blog.csdn.net/guyuealian/article/details/134881797
- Hand Keypoint Detection 1: Hand keypoint (hand pose estimation) datasets (with download links) https://blog.csdn.net/guyuealian/article/details/133277630
- Hand Keypoint Detection 2: Hand detection with YOLOv5 (with training code and datasets) https://blog.csdn.net/guyuealian/article/details/133279222
- Hand Keypoint Detection 3: Hand keypoint detection (hand pose estimation) in Pytorch, with training code and datasets https://blog.csdn.net/guyuealian/article/details/133277726
- Hand Keypoint Detection 4: Hand keypoint detection (hand pose estimation) on Android, with source code, real-time capable https://blog.csdn.net/guyuealian/article/details/133931698
- Hand Keypoint Detection 5: Hand keypoint detection (hand pose estimation) in C++, with source code, real-time capable https://blog.csdn.net/guyuealian/article/details/133277748
2. Human Keypoint Detection Methods
Mainstream human keypoint detection (human pose estimation) methods fall into two categories: Top-Down methods and Bottom-Up methods.
(1) Top-Down method
Top-Down methods separate person detection from keypoint detection (pose estimation): a person detector first localizes each person in the image, each detected person is then cropped, and the keypoints are estimated on the crop. These methods are usually slower but give higher pose estimation accuracy. Representative models include CPN, Hourglass, CPM, Alpha Pose, and HRNet.
(2) Bottom-Up method
Bottom-Up methods first estimate all human keypoints in the image and then group them into individual person instances, so they are usually faster at inference time but somewhat less accurate. A typical example is Open Pose, winner of the COCO 2016 human keypoint detection challenge.
In general, Top-Down methods are more accurate and Bottom-Up methods are faster; so far, Top-Down methods have also been studied more widely and achieve higher accuracy than Bottom-Up methods. This project adopts the Top-Down approach: YOLOv5 is first used for person detection, and HRNet is then used for human keypoint detection (pose estimation). A minimal sketch of this two-stage flow is given below.
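To make the Top-Down flow concrete, here is a minimal Python sketch of the two-stage inference loop. The `person_detector` and `keypoint_estimator` objects and their method names are hypothetical placeholders for illustration only; they do not correspond to the project's actual API.

```python
# Minimal sketch of a top-down pose pipeline.
# NOTE: person_detector / keypoint_estimator and their methods are hypothetical
# placeholders; they do not correspond to the project's real classes.
import cv2


def topdown_pose(image_bgr, person_detector, keypoint_estimator, input_size=(192, 256)):
    """Detect persons first, then estimate keypoints inside each person box."""
    results = []
    w, h = input_size
    boxes = person_detector.detect(image_bgr)  # [(x1, y1, x2, y2, score), ...]
    for (x1, y1, x2, y2, score) in boxes:
        crop = image_bgr[int(y1):int(y2), int(x1):int(x2)]  # crop the person region
        crop = cv2.resize(crop, (w, h))                      # resize to the model input size (w, h)
        keypoints = keypoint_estimator.predict(crop)         # [(kx, ky, kscore), ...] in crop coordinates
        # map keypoints from the resized crop back to the original image
        sx, sy = (x2 - x1) / float(w), (y2 - y1) / float(h)
        keypoints = [(x1 + kx * sx, y1 + ky * sy, ks) for (kx, ky, ks) in keypoints]
        results.append({"box": (x1, y1, x2, y2, score), "keypoints": keypoints})
    return results
```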
This project builds on the open-source HRNet; see the HRNet project on GitHub:
HRNet: https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
3. Human Keypoint Detection Model Training
This project adopts the Top-Down approach: YOLOv5 is used for person detection, and human keypoint detection (human pose estimation) is implemented based on the open-source HRNet. To simplify later productization and Android deployment, the project also supports training and testing the lightweight LiteHRNet and Mobilenet models, and provides Python, C++, and Android versions. The lightweight Mobilenet-v2 model achieves real-time detection on an ordinary Android phone, roughly 50ms on the CPU (4 threads) and roughly 30ms on the GPU, which basically meets typical performance requirements.
This post only covers the Android model deployment; it does not include the Python training code or the datasets. For the training procedure and dataset details, see:
Human Keypoint Detection 2: Human keypoint detection (human pose estimation) in Pytorch, with training code and datasets https://blog.csdn.net/guyuealian/article/details/134837816
The table below lists the compute cost, parameter count, and detection accuracy (AP) of HRNet and of the lightweight LiteHRNet and Mobilenet models. The high-accuracy HRNet-w32 reaches an AP of 0.7585, but its parameter count and compute cost are too large for mobile deployment. LiteHRNet18 and Mobilenet-v2 have far fewer parameters and FLOPs and are suitable for mobile deployment; although LiteHRNet18 has lower theoretical FLOPs and fewer parameters than Mobilenet-v2, Mobilenet-v2 runs faster in practice. The lightweight Mobilenet-v2 model achieves real-time detection on an ordinary Android phone, roughly 50ms on the CPU (4 threads) and roughly 30ms on the GPU, which basically meets typical performance requirements.
| Model | input-size | params(M) | FLOPs(M) | AP |
|-------|------------|-----------|----------|-----|
| HRNet-w32 | 192×256 | 28.48 | 5734.05 | 0.7585 |
| LiteHRNet18 | 192×256 | 1.10 | 182.15 | 0.6237 |
| Mobilenet-v2 | 192×256 | 2.63 | 529.25 | 0.6181 |
HRNet-w32 is too large in parameters and compute for Android deployment, so the Android version of this project only supports the LiteHRNet and Mobilenet-v2 models; the C++ version supports HRNet-w32, LiteHRNet, and Mobilenet-v2.
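As a side note, parameter and FLOP counts like those in the table above can be reproduced approximately with a profiling tool such as thop. The sketch below uses torchvision's classification MobileNetV2 purely as a stand-in network, so its numbers will not match the project's keypoint models exactly.

```python
# Rough sketch of measuring params/FLOPs for a 192x256 input with thop.
# torchvision's MobileNetV2 classifier is only a stand-in; the project's
# keypoint models will report different numbers.
import torch
from thop import profile
from torchvision.models import mobilenet_v2

model = mobilenet_v2()
dummy = torch.randn(1, 3, 256, 192)             # NCHW, input-size 192x256 (w x h)
macs, params = profile(model, inputs=(dummy,))  # thop reports multiply-accumulate ops
print(f"MACs: {macs / 1e6:.2f}M, Params: {params / 1e6:.2f}M")
```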
4. Deploying the Human Keypoint Detection Model on Android
There are several ways to deploy CNN models, including TNN, MNN, NCNN, and TensorRT; this project uses TNN for the Android deployment. The deployment pipeline has four steps: train the model -> convert it to an ONNX model -> convert the ONNX model to a TNN model -> deploy the TNN model on Android.
(1) Convert the Pytorch model to an ONNX model
After training the Pytorch model, it first needs to be exported to ONNX for the subsequent deployment steps.
- The original Python project provides a conversion script; you only need to set model_file and config_file to your own model paths.
- convert_torch_to_onnx.py converts a Pytorch model to an ONNX model:
python libs/convert_tools/convert_torch_to_onnx.py
"""
This code is used to convert the pytorch model into an onnx format model.
"""
import os
import torch.onnx
from pose.inference import PoseEstimation
from basetrainer.utils.converter import pytorch2onnx


def load_model(config_file, model_file, device="cuda:0"):
    pose = PoseEstimation(config_file, model_file, device=device)
    model = pose.model
    config = pose.config
    return model, config


def convert2onnx(config_file, model_file, device="cuda:0", onnx_type="kp"):
    """
    :param model_file:
    :param input_size:
    :param device:
    :param onnx_type:
    :return:
    """
    model, config = load_model(config_file, model_file, device=device)
    model = model.to(device)
    model.eval()
    model_name = os.path.basename(model_file)[:-len(".pth")]
    onnx_file = os.path.join(os.path.dirname(model_file), model_name + ".onnx")
    # dummy_input = torch.randn(1, 3, 240, 320).to("cuda")
    input_size = tuple(config.MODEL.IMAGE_SIZE)  # w,h
    input_shape = (1, 3, input_size[1], input_size[0])
    pytorch2onnx.convert2onnx(model,
                              input_shape=input_shape,
                              input_names=['input'],
                              output_names=['output'],
                              onnx_file=onnx_file,
                              opset_version=11)
    print(input_shape)


if __name__ == "__main__":
    config_file = "../../work_space/person/mobilenet_v2_17_192_256_custom_coco_20231124_090015_6639/mobilenetv2_192_192.yaml"
    model_file = "../../work_space/person/mobilenet_v2_17_192_256_custom_coco_20231124_090015_6639/model/best_model_158_0.6181.pth"
    convert2onnx(config_file, model_file)
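After the export, it can be worth sanity-checking the ONNX file before moving on to TNN. The following is a minimal sketch assuming onnx and onnxruntime are installed; the file path is illustrative, and the input/output names match the 'input'/'output' names used by the export script above.

```python
# Minimal sanity check of an exported ONNX model (file path is illustrative).
import numpy as np
import onnx
import onnxruntime as ort

onnx_file = "best_model_158_0.6181.onnx"        # produced by convert2onnx above
onnx.checker.check_model(onnx.load(onnx_file))  # structural validity check

sess = ort.InferenceSession(onnx_file, providers=["CPUExecutionProvider"])
dummy = np.random.randn(1, 3, 256, 192).astype(np.float32)  # NCHW, 192x256 (w x h)
outputs = sess.run(["output"], {"input": dummy})
print("output shape:", outputs[0].shape)        # typically a heatmap tensor, e.g. (1, 17, 64, 48)
```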
(2) Convert the ONNX model to a TNN model
As noted above, this project uses TNN for the Android deployment, so the exported ONNX model needs to be converted into a TNN model.
TNN conversion tools:
- (1) To convert an ONNX model to a TNN model, follow the official TNN documentation: TNN/onnx2tnn.md at master · Tencent/TNN · GitHub
- (2) One-click online converter for Caffe, ONNX, TensorFlow to NCNN, MNN, Tengine (there may be version issues, and the TNN model produced by this tool may be incompatible; it is recommended to build the converter from source yourself; tested working on September 25, 2022). An optional ONNX pre-simplification sketch follows below.
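Before running the TNN converter, it can help to simplify the ONNX graph first. The sketch below assumes the onnx-simplifier (onnxsim) package is installed and uses an illustrative file path; it is an optional pre-step, not part of the project's scripts.

```python
# Optional pre-step before onnx2tnn: simplify the ONNX graph.
# File path is illustrative; onnx-simplifier (onnxsim) is assumed to be installed.
import onnx
from onnxsim import simplify

onnx_file = "best_model_158_0.6181.onnx"
model_simp, ok = simplify(onnx.load(onnx_file))
if not ok:
    raise RuntimeError("onnx-simplifier could not validate the simplified graph")
onnx.save(model_simp, onnx_file.replace(".onnx", "_sim.onnx"))
```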
(3) Deploy the model on Android
The project implements an Android demo for person detection and human keypoint detection. The deployment framework is TNN, with multi-threaded CPU and GPU-accelerated inference, and it runs in real time on an ordinary phone. In the Android source code, the core algorithms are implemented in C++ and called from the Java layer through a JNI interface.
If you want to deploy your own trained model in this Android demo, convert your Pytorch model to ONNX, then to TNN, and replace the bundled TNN model files with yours.
HRNet-w32 is too large in parameters and compute for Android deployment, so the Android version of this project only supports the LiteHRNet and Mobilenet-v2 models; the C++ version supports HRNet-w32, LiteHRNet, and Mobilenet-v2.
- Java part of the Android project's JNI interface:
package com.cv.tnn.model;

import android.graphics.Bitmap;

public class Detector {

    static {
        System.loadLibrary("tnn_wrapper");
    }

    /***
     * Initialize the detection models
     * @param dets_model: detection model name (without file extension)
     * @param pose_model: keypoint model name (without file extension)
     * @param root: root directory of the model files, placed under the assets folder
     * @param model_type: model type
     * @param num_thread: number of threads to use
     * @param useGPU: whether to enable GPU acceleration
     */
    public static native void init(String dets_model, String pose_model, String root, int model_type, int num_thread, boolean useGPU);

    /***
     * Return detection and keypoint results
     * @param bitmap: input image (Bitmap) in ARGB_8888 format
     * @param score_thresh: confidence threshold
     * @param iou_thresh: IOU threshold
     * @param pose_thresh: keypoint threshold
     * @return
     */
    public static native FrameInfo[] detect(Bitmap bitmap, float score_thresh, float iou_thresh, float pose_thresh);
}
- C++ part of the Android project's JNI interface:
#include <jni.h>
#include <string>
#include <fstream>
#include "src/yolov5.h"
#include "src/pose_detector.h"
#include "src/Types.h"
#include "debug.h"
#include "android_utils.h"
#include "opencv2/opencv.hpp"
#include "file_utils.h"

using namespace dl;
using namespace vision;

static YOLOv5 *detector = nullptr;
static PoseDetector *pose = nullptr;

JNIEXPORT jint JNI_OnLoad(JavaVM *vm, void *reserved) {
    return JNI_VERSION_1_6;
}

JNIEXPORT void JNI_OnUnload(JavaVM *vm, void *reserved) {}

extern "C"
JNIEXPORT void JNICALL
Java_com_cv_tnn_model_Detector_init(JNIEnv *env,
                                    jclass clazz,
                                    jstring dets_model,
                                    jstring pose_model,
                                    jstring root,
                                    jint model_type,
                                    jint num_thread,
                                    jboolean use_gpu) {
    if (detector != nullptr) {
        delete detector;
        detector = nullptr;
    }
    std::string parent = env->GetStringUTFChars(root, 0);
    std::string dets_model_ = env->GetStringUTFChars(dets_model, 0);
    std::string pose_model_ = env->GetStringUTFChars(pose_model, 0);
    string dets_model_file = path_joint(parent, dets_model_ + ".tnnmodel");
    string dets_proto_file = path_joint(parent, dets_model_ + ".tnnproto");
    string pose_model_file = path_joint(parent, pose_model_ + ".tnnmodel");
    string pose_proto_file = path_joint(parent, pose_model_ + ".tnnproto");
    DeviceType device = use_gpu ? GPU : CPU;
    LOGW("parent : %s", parent.c_str());
    LOGW("useGPU : %d", use_gpu);
    LOGW("device_type: %d", device);
    LOGW("model_type : %d", model_type);
    LOGW("num_thread : %d", num_thread);
    YOLOv5Param model_param = YOLOv5s05_320;  // detector model parameters
    detector = new YOLOv5(dets_model_file,
                          dets_proto_file,
                          model_param,
                          num_thread,
                          device);
    PoseParam pose_param = POSE_MODEL_TYPE[model_type];  // keypoint model type
    pose = new PoseDetector(pose_model_file,
                            pose_proto_file,
                            pose_param,
                            num_thread,
                            device);
}

extern "C"
JNIEXPORT jobjectArray JNICALL
Java_com_cv_tnn_model_Detector_detect(JNIEnv *env, jclass clazz, jobject bitmap,
                                      jfloat score_thresh, jfloat iou_thresh, jfloat pose_thresh) {
    cv::Mat bgr;
    BitmapToMatrix(env, bitmap, bgr);
    int src_h = bgr.rows;
    int src_w = bgr.cols;
    // by default the detection region is the whole image
    FrameInfo resultInfo;
    // run person detection
    if (detector != nullptr) {
        detector->detect(bgr, &resultInfo, score_thresh, iou_thresh);
    } else {
        ObjectInfo objectInfo;
        objectInfo.x1 = 0;
        objectInfo.y1 = 0;
        objectInfo.x2 = (float) src_w;
        objectInfo.y2 = (float) src_h;
        objectInfo.label = 0;
        resultInfo.info.push_back(objectInfo);
    }
    int nums = resultInfo.info.size();
    LOGW("object nums: %d\n", nums);
    if (nums > 0) {
        // run keypoint detection on the detected persons
        pose->detect(bgr, &resultInfo, pose_thresh);
        // visualization code
        //classifier->visualizeResult(bgr, &resultInfo);
    }
    //cv::cvtColor(bgr, bgr, cv::COLOR_BGR2RGB);
    //MatrixToBitmap(env, bgr, dst_bitmap);
    auto BoxInfo = env->FindClass("com/cv/tnn/model/FrameInfo");
    auto init_id = env->GetMethodID(BoxInfo, "<init>", "()V");
    auto box_id = env->GetMethodID(BoxInfo, "addBox", "(FFFFIF)V");
    auto ky_id = env->GetMethodID(BoxInfo, "addKeyPoint", "(FFF)V");
    jobjectArray ret = env->NewObjectArray(resultInfo.info.size(), BoxInfo, nullptr);
    for (int i = 0; i < nums; ++i) {
        auto info = resultInfo.info[i];
        env->PushLocalFrame(1);
        //jobject obj = env->AllocObject(BoxInfo);
        jobject obj = env->NewObject(BoxInfo, init_id);
        // set bbox
        //LOGW("rect:[%f,%f,%f,%f] label:%d,score:%f \n", info.rect.x,info.rect.y, info.rect.w, info.rect.h, 0, 1.0f);
        env->CallVoidMethod(obj, box_id, info.x1, info.y1, info.x2 - info.x1, info.y2 - info.y1,
                            info.label, info.score);
        // set keypoint
        for (const auto &kps : info.keypoints) {
            //LOGW("point:[%f,%f] score:%f \n", lm.point.x, lm.point.y, lm.score);
            env->CallVoidMethod(obj, ky_id, (float) kps.point.x, (float) kps.point.y,
                                (float) kps.score);
        }
        obj = env->PopLocalFrame(obj);
        env->SetObjectArrayElement(ret, i, obj);
    }
    return ret;
}
(4) Android test results
The Android demo runs in real time on an ordinary phone's CPU/GPU: roughly 50ms on the CPU (4 threads) and roughly 30ms on the GPU, which basically meets typical performance requirements.
Android human keypoint detection APP demo (download):
https://download.csdn.net/download/guyuealian/88610359
(5) App crashes on launch: dlopen failed: library "libomp.so" not found
Reference solution:
Fixing dlopen failed: library "libomp.so" not found (PKing666666's blog, CSDN)
The related Android SDK and NDK version information is shown in the original post's screenshot (omitted here).
5. Android Project Source Code Download
Android project source code download:
Android human keypoint detection APP demo (download): https://download.csdn.net/download/guyuealian/88610359
The complete Android project source code includes:
- Android demo source code with YOLOv5 person detection
- Android demo source code with lightweight LiteHRNet and Mobilenet-v2 human keypoint detection (human pose estimation)
- The Android demo runs in real time on an ordinary phone's CPU/GPU, roughly 50ms on the CPU and roughly 30ms on the GPU
- The Android demo supports image, video, and camera input for testing
- All dependencies are already configured, so the project can be built and run directly; if the app crashes on launch, see the fix for dlopen failed: library "libomp.so" not found.
6. Human Keypoint Detection in C++
- Human Keypoint Detection 4: Human keypoint detection (human pose estimation) in C/C++, with source code, real-time capable https://blog.csdn.net/guyuealian/article/details/134881797