This sample runs a state-of-the-art object detection model with the ZED SDK, with inference accelerated by the TensorRT framework. Internally, the ZED SDK takes its images, runs inference on them to obtain 2D box detections, then extracts 3D information (localization, 3D bounding boxes) and performs tracking.
It demonstrates how to pass your own custom YOLO-like ONNX model to the ZED SDK.
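As a rough, non-authoritative sketch of that pipeline, the snippet below enables the SDK's internal custom detector on an ONNX file. It assumes a ZED SDK 4.1+ Python API where `sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS` and the `custom_onnx_file` field are available; check your SDK version's documentation for the exact names, and see the full sample for the complete logic.

```python
import pyzed.sl as sl

# Open the camera (error handling and SVO playback are simplified here).
zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Positional tracking is required so detections can be localized and tracked in 3D.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

# Assumed API: hand the YOLO-like ONNX model to the SDK, which optimizes it
# with TensorRT and runs inference internally on the camera images.
obj_params = sl.ObjectDetectionParameters()
obj_params.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS
obj_params.custom_onnx_file = "yolov8m.onnx"
obj_params.enable_tracking = True
zed.enable_object_detection(obj_params)
```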
A custom detector can be trained with the same architecture. These tutorials walk you through the workflow of training a custom detector:
- YOLOv8: https://docs.ultralytics.com/modes/train
- YOLOv6: https://github.com/meituan/YOLOv6/blob/main/docs/Train_custom_data.md
- YOLOv5: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
- Get the latest ZED SDK and pyZED Package
- Check the Documentation
This sample expects an ONNX file exported using the original YOLO export code. Please refer to the section corresponding to the YOLO version you are using.
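For instance, a YOLOv8 model can be exported to ONNX with the Ultralytics Python API, as sketched below; the exact export options (image size, opset) depend on your training setup, so check the Ultralytics documentation.

```python
from ultralytics import YOLO

# Export a trained (or pretrained) YOLOv8 checkpoint to ONNX.
model = YOLO("yolov8m.pt")
model.export(format="onnx")  # writes yolov8m.onnx next to the checkpoint
```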
python custom_internal_detector.py --custom_onnx yolov8m.onnx # [--svo path/to/file.svo]
- The camera point cloud is displayed in a 3D OpenGL view
- 3D bounding boxes around detected objects are drawn
- Object classes and confidence thresholds can be adjusted (see the sketch below)
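The sketch below shows one way to read those outputs back through the pyzed API once detection is enabled; the attribute names follow the standard object detection interface (`object_list`, `bounding_box`, `confidence`), and newer SDKs may expose dedicated runtime parameters for the custom detector, so treat this as illustrative rather than the sample's exact code.

```python
import pyzed.sl as sl

# Assumes `zed` was opened and custom object detection enabled as sketched earlier.
objects = sl.Objects()
runtime_params = sl.ObjectDetectionRuntimeParameters()
runtime_params.detection_confidence_threshold = 40  # raise or lower the confidence cutoff

while zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_objects(objects, runtime_params)
    for obj in objects.object_list:
        # obj.bounding_box: 8 corners of the 3D box, obj.position: 3D centroid,
        # obj.confidence: detection score, obj.id: tracking identifier.
        print(obj.id, obj.raw_label, obj.confidence, obj.position)
```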
If you need assistance, visit our Community site at https://community.stereolabs.com/