YOLO11 Model Conversion

1. Jetson Orin YOLO11 (benchmark)

The YOLO11 benchmark data comes from the Ultralytics team, which tested the models in multiple export formats (the data is for reference only).


2. Enable optimal performance of the board

2.1. Enable MAX power mode

Enabling MAX Power Mode on Jetson will ensure that all CPU and GPU cores are turned on:
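This is done with NVIDIA's `nvpmodel` tool. A minimal sketch is shown below; the MAXN mode index is usually 0 on Orin, but it varies by module and JetPack version, so query the available modes first:

```shell
# Query the currently active power mode
sudo nvpmodel -q

# Switch to MAX power mode (mode 0 is MAXN on most Orin configurations;
# the board may prompt for a reboot after switching)
sudo nvpmodel -m 0
```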

2.2. Enable Jetson clocks

Enabling Jetson Clocks will ensure that all CPU and GPU cores run at maximum frequency:
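The `jetson_clocks` script ships with JetPack; a minimal usage sketch:

```shell
# Show the current clock configuration without changing anything
sudo jetson_clocks --show

# Lock CPU, GPU, and memory clocks to their maximum frequencies
sudo jetson_clocks
```

Note that `jetson_clocks` does not persist across reboots, so re-run it after each boot.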

3. Model conversion

From the benchmark results for the different model formats published by the Ultralytics team, we can see that inference performance is best with TensorRT!

3.1. CLI: pt → onnx → engine

Convert the PyTorch model to TensorRT. The conversion process automatically generates an ONNX model as an intermediate step.
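A minimal CLI sketch, using the `yolo11n.pt` weights as an example (substitute your own model file):

```shell
# Export the PyTorch weights to a TensorRT engine.
# The intermediate ONNX file (yolo11n.onnx) is generated automatically.
yolo export model=yolo11n.pt format=engine

# Optional: build an FP16 engine for faster inference on Jetson
yolo export model=yolo11n.pt format=engine half=True
```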


3.2. Python: pt → onnx → engine

Convert the PyTorch model to TensorRT in Python. The ONNX model is generated automatically as an intermediate step.

Note: The converted model files are saved in the same directory as the original .pt model.
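The export described above can be sketched in Python as follows, again using `yolo11n.pt` as an example model name:

```python
from ultralytics import YOLO

# Load the PyTorch weights (yolo11n.pt is used here as an example)
model = YOLO("yolo11n.pt")

# Export to TensorRT; the intermediate ONNX model is generated automatically,
# and both files are written next to the original .pt file.
model.export(format="engine")  # produces yolo11n.onnx and yolo11n.engine

# The exported engine can then be loaded like any other model
trt_model = YOLO("yolo11n.engine")
```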


4. Model prediction

CLI usage

The CLI currently supports only USB cameras. CSI camera users can directly modify the earlier Python code to load the onnx and engine models instead!
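A minimal prediction sketch via the CLI, assuming the exported `yolo11n.onnx` and `yolo11n.engine` files from the previous step and a USB camera at index 0:

```shell
# Run inference with the ONNX model; source=0 selects the first USB camera
yolo predict model=yolo11n.onnx source=0 show=True

# Run inference with the TensorRT engine
yolo predict model=yolo11n.engine source=0 show=True
```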


Frequently Asked Questions

ERROR: onnxslim


Solution: Install onnxslim from the terminal.
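The export step depends on the onnxslim package, which can be installed with pip:

```shell
# Install onnxslim, then re-run the export command
pip install onnxslim
```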


References

https://docs.ultralytics.com/guides/nvidia-jetson/

https://docs.ultralytics.com/integrations/tensorrt/