YOLOv8 + Binocular Stereo Distance Measurement
- 1. Environment Setup
- 2. Ranging Pipeline and Principle
- 2.1 Ranging Pipeline
- 2.2 Ranging Principle
- 3. Code Walkthrough
- 3.1 Camera parameters: stereoconfig.py
- 3.2 Distance measurement
- 3.3 Main script: yolov8-stereo.py
- 4. Experimental Results
- 4.1 Distance measurement
- 4.2 Distance measurement + tracking
- 4.3 Distance measurement + tracking + segmentation
- 4.4 Video demo
Related articles
1. YOLOv5 + stereo distance measurement (Python)
2. YOLOv7 + stereo distance measurement (Python)
If you are using a ZED camera, you can visit my homepage 👇👇👇 and call the camera's built-in parameters directly; the accuracy is much better than plain stereo distance measurement.
https://blog.csdn.net/qq_45077760
Download link (a STAR would be appreciated): https://github.com/up-up-up-up/YOLOv8-stereo
1. Environment Setup
For details, see: Windows + YOLOv8 Environment Setup
2. Ranging Pipeline and Principle
2.1 Ranging Pipeline
Overall pipeline: stereo calibration → stereo rectification → stereo matching → integration with YOLOv8 → depth measurement (a sketch of the matching and reprojection step appears at the end of this subsection).
- Locate the code in the object-detection source that outputs the bounding-box coordinates.
- Locate the code in the stereo-ranging source that computes object depth.
- Combine step 2 with step 1 to obtain the depth of the object inside each detection box.
- Locate the code in the detection network that draws the obstacle class label, and append the depth value there for display.
Note: my tests were all within 20 m and I have not quantified the error. A smaller calibration error yields better accuracy; lighting and brightness also affect the results, and the usable detection range depends heavily on the quality of the camera.
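As a minimal sketch of the stereo-matching and reprojection step (the names left_rect/right_rect and the SGBM parameter values here are illustrative assumptions, not the repository's exact code):

```python
import cv2
import numpy as np

def depth_from_rectified(left_rect, right_rect, Q):
    # left_rect / right_rect: rectified left/right images (hypothetical inputs);
    # Q: 4x4 reprojection matrix produced by cv2.stereoRectify
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be a multiple of 16
        blockSize=5,
        P1=8 * 3 * 5 ** 2,
        P2=32 * 3 * 5 ** 2,
    )
    # compute() returns fixed-point disparity scaled by 16
    disparity = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0
    # Reproject disparity into (x, y, z) coordinates in the left-camera frame
    return cv2.reprojectImageTo3D(disparity, Q)
```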
2.2 Ranging Principle
If you want to understand the theory behind binocular stereo ranging, please see this article: Stereo 3D distance measurement (Python)
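In brief: after rectification the two image planes are coplanar, and the depth of a matched point follows from similar triangles:

$$Z = \frac{f \cdot B}{d}$$

where $f$ is the focal length in pixels, $B$ is the baseline (about 119.99 mm for the camera in section 3.1), and $d$ is the disparity between the left and right views.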
3. Code Walkthrough
3.1 Camera parameters: stereoconfig.py
The smaller the stereo calibration error, the better: mine is 0.1, and you should try to keep it below 0.2.
```python
import numpy as np

# Stereo camera parameters
class stereoCamera(object):
    def __init__(self):
        # Left camera intrinsic matrix
        self.cam_matrix_left = np.array([[1101.89299, 0, 1119.89634],
                                         [0, 1100.75252, 636.75282],
                                         [0, 0, 1]])
        # Right camera intrinsic matrix
        self.cam_matrix_right = np.array([[1091.11026, 0, 1117.16592],
                                          [0, 1090.53772, 633.28256],
                                          [0, 0, 1]])
        # Distortion coefficients (k1, k2, p1, p2, k3)
        self.distortion_l = np.array([[-0.08369, 0.05367, -0.00138, -0.0009, 0]])
        self.distortion_r = np.array([[-0.09585, 0.07391, -0.00065, -0.00083, 0]])
        # Rotation from the left camera to the right camera
        self.R = np.array([[1.0000, -0.000603116945856524, 0.00377055351856816],
                           [0.000608108737333211, 1.0000, -0.00132288199083992],
                           [-0.00376975166958581, 0.00132516525298933, 1.0000]])
        # Translation vector (mm); the x component is the baseline
        self.T = np.array([[-119.99423], [-0.22807], [0.18540]])
        self.baseline = 119.99423
```
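For reference, the rectification maps consumed in section 3.2 can be derived from these parameters with OpenCV. This is only a sketch of what getRectifyTransform plausibly does, not the repository's actual implementation:

```python
import cv2

def get_rectify_transform(height, width, config):
    # Rectification rotations (R1, R2), projections (P1, P2) and reprojection matrix Q
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        config.cam_matrix_left, config.distortion_l,
        config.cam_matrix_right, config.distortion_r,
        (width, height), config.R, config.T, alpha=0)
    # Per-camera undistortion + rectification lookup maps
    map1x, map1y = cv2.initUndistortRectifyMap(
        config.cam_matrix_left, config.distortion_l, R1, P1, (width, height), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(
        config.cam_matrix_right, config.distortion_r, R2, P2, (width, height), cv2.CV_32FC1)
    return map1x, map1y, map2x, map2y, Q
```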
3.2 Distance measurement
This part runs the depth computation in a separate thread to speed things up and computes the depth at the center point of each detection box.
```python
config = stereoconfig_040_2.stereoCamera()
map1x, map1y, map2x, map2y, Q = getRectifyTransform(720, 1280, config)
thread = MyThread(stereo_threading, args=(config, im0, map1x, map1y, map2x, map2y, Q))
thread.start()
results = model.predict(im0, save=False, conf=0.5)
annotated_frame = results[0].plot()
boxes = results[0].boxes.xywh.cpu()
for i, box in enumerate(boxes):
    x_center, y_center, width, height = box.tolist()
    x1 = x_center - width / 2
    y1 = y_center - height / 2
    x2 = x_center + width / 2
    y2 = y_center + height / 2
    if 0 < x2 < 1280:
        thread.join()
        points_3d = thread.get_result()
        # Box-center coordinates converted from mm to metres
        a = points_3d[int(y_center), int(x_center), 0] / 1000
        b = points_3d[int(y_center), int(x_center), 1] / 1000
        c = points_3d[int(y_center), int(x_center), 2] / 1000
        distance = (a ** 2 + b ** 2 + c ** 2) ** 0.5
```
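MyThread is not shown in the excerpt above; judging by how it is called (start(), join(), get_result()), it is a thin wrapper around threading.Thread that captures the target function's return value. A minimal sketch consistent with that usage:

```python
import threading

class MyThread(threading.Thread):
    """Thread wrapper that stores the target function's return value."""
    def __init__(self, func, args=()):
        super().__init__()
        self.func = func
        self.args = args
        self.result = None

    def run(self):
        # Runs in the worker thread; keep the result for later retrieval
        self.result = self.func(*self.args)

    def get_result(self):
        # Valid after join() has returned
        return self.result
```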
3.3 Main script: yolov8-stereo.py
(1) Multithreading is used to speed up processing.
(2) To use a live camera instead of a video file, change cap = cv2.VideoCapture('a1.mp4') to cap = cv2.VideoCapture(0).
```python
import cv2
import torch
import argparse
from ultralytics import YOLO
from stereo import stereoconfig_040_2
from stereo.stereo import stereo_40
from stereo.stereo import stereo_threading, MyThread
from stereo.dianyuntu_yolo import preprocess, undistortion, getRectifyTransform, \
    draw_line, rectifyImage, stereoMatchSGBM


def main():
    cap = cv2.VideoCapture('ultralytics/assets/a1.mp4')
    model = YOLO('yolov8n.pt')
    cv2.namedWindow('00', cv2.WINDOW_NORMAL)
    cv2.resizeWindow('00', 1280, 360)  # display window size
    out_video = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), 30, (2560, 720))
    while True:
        ret, im0 = cap.read()
        if not ret:
            print("Video frame is empty or video processing has been successfully completed.")
            break
        # Start the depth computation in a worker thread while YOLO runs on the main thread
        config = stereoconfig_040_2.stereoCamera()
        map1x, map1y, map2x, map2y, Q = getRectifyTransform(720, 1280, config)
        thread = MyThread(stereo_threading, args=(config, im0, map1x, map1y, map2x, map2y, Q))
        thread.start()
        results = model.predict(im0, save=False, conf=0.5)
        annotated_frame = results[0].plot()
        boxes = results[0].boxes.xywh.cpu()
        for i, box in enumerate(boxes):
            x_center, y_center, width, height = box.tolist()
            x1 = x_center - width / 2
            y1 = y_center - height / 2
            x2 = x_center + width / 2
            y2 = y_center + height / 2
            if 0 < x2 < 1280:
                thread.join()
                points_3d = thread.get_result()
                # Box-center coordinates converted from mm to metres
                a = points_3d[int(y_center), int(x_center), 0] / 1000
                b = points_3d[int(y_center), int(x_center), 1] / 1000
                c = points_3d[int(y_center), int(x_center), 2] / 1000
                distance = (a ** 2 + b ** 2 + c ** 2) ** 0.5
                if distance != 0:
                    text_dis_avg = "dis:%0.2fm" % distance
                    cv2.putText(annotated_frame, text_dis_avg, (int(x2 + 5), int(y1 + 30)),
                                cv2.FONT_ITALIC, 1.2, (0, 255, 255), 3)
        cv2.imshow('00', annotated_frame)
        out_video.write(annotated_frame)
        # waitKey returns an int, so compare against ord('q'), not the string 'q'
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    out_video.release()
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov8n.pt', help='model.pt path(s)')
    parser.add_argument('--svo', type=str, default=None, help='optional svo file')
    parser.add_argument('--img_size', type=int, default=416, help='inference size (pixels)')
    parser.add_argument('--conf_thres', type=float, default=0.4, help='object confidence threshold')
    opt = parser.parse_args()
    with torch.no_grad():
        main()
```
4. Experimental Results
Distance measurement, tracking, and segmentation are all supported; switching between these modes only requires modifying a few lines of code, as detailed in this article and illustrated by the sketch below.
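As a rough illustration of the kind of change involved (the linked article has the author's exact versions; this snippet assumes the standard Ultralytics API):

```python
import cv2
from ultralytics import YOLO

im0 = cv2.imread('ultralytics/assets/bus.jpg')  # any test frame

# Distance measurement only (as in section 3.3)
model = YOLO('yolov8n.pt')
results = model.predict(im0, save=False, conf=0.5)

# Distance measurement + tracking: call track() instead of predict()
results = model.track(im0, persist=True, conf=0.5)

# Distance measurement + tracking + segmentation: load a segmentation checkpoint
seg_model = YOLO('yolov8n-seg.pt')
results = seg_model.track(im0, persist=True, conf=0.5)
```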