Edge AI has never been hotter. As computer vision advances, it is increasingly important to deploy models that can run inference in real time on affordable edge devices.
In this post, our edge AI model is YOLOv5s and our selected hardware is the NVIDIA Jetson Xavier NX. Let's get cracking.
Training Your Custom YOLOv5 Model
In this post, we will abstract away most of the model training steps. Thankfully, the Roboflow blog has great documentation on how to train YOLOv5 to recognize custom objects. If you don't yet have your trained model then I recommend checking that blog post out first.
Once you have your model trained and saved in .pt format, you are ready to move to the Jetson Xavier NX. Store the .pt weights file in the cloud or on a USB drive so you can access it from the NVIDIA device.
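If your Jetson is already on your network, one convenient option is to copy the weights over with scp. This is just a sketch; the hostname, username, and paths below are placeholders for your own setup:

```shell
# Copy trained weights from your training machine to the Jetson.
# "user@jetson.local" and the destination path are placeholders --
# substitute your device's username, hostname/IP, and preferred directory.
scp best.pt user@jetson.local:/home/user/weights/best.pt
```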
Booting up the Jetson NX
NVIDIA makes it easy to get the Jetson NX started with the NVIDIA JetPack installation guide.
You will need your own microSD card to flash the NVIDIA JetPack and Ubuntu installation onto your device.
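On Linux, one way to flash the card is to write the downloaded SD card image directly with dd. This is a sketch only: the image filename and the /dev/sdX device node are placeholders, and dd will destroy whatever device you point it at, so verify the device with lsblk first:

```shell
# Identify your microSD card's device node (e.g. /dev/sdb) -- be careful!
lsblk

# Stream the JetPack SD card image onto the card.
# Both the zip filename and /dev/sdX are placeholders for your own values.
unzip -p jetson-nx-developer-kit-sd-card-image.zip | sudo dd of=/dev/sdX bs=1M status=progress

# Flush writes before removing the card.
sync
```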
Once set up is complete, you can move forward to deploying YOLOv5.
The NVIDIA JetPack 4.4 PyTorch Container
We will be deploying YOLOv5 in its native PyTorch runtime environment. That means we will need to install PyTorch on our NVIDIA Jetson Xavier NX. Getting this installation right by hand could cost you a week.
Thankfully, NVIDIA publishes JetPack 4.4 PyTorch Docker containers for exactly this purpose. Docker encapsulates the install process so you don't have to fight through it on your own machine.
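A minimal sketch of pulling and entering NVIDIA's l4t-pytorch container from NGC follows. The tag shown targets JetPack 4.4 (L4T r32.4.3); check the NGC catalog for the tag matching your JetPack release:

```shell
# Run the NGC l4t-pytorch container with GPU access (--runtime nvidia),
# host networking, and the current directory mounted at /workspace.
# The image tag is an assumption for JetPack 4.4 -- adjust to your release.
sudo docker run -it --rm --runtime nvidia --network host \
  -v $(pwd):/workspace \
  nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3
```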
After exec-ing into the PyTorch container, go ahead and clone the YOLOv5 repository and install its dependencies from requirements.txt. You may also need to install OpenCV 4.4.0 separately (as I did using this link).
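Inside the container, the clone-and-install steps look roughly like this:

```shell
# Clone the YOLOv5 repository and install its Python dependencies.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip3 install -r requirements.txt
```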
Running Inference on the NVIDIA Jetson NX
Once you have the YOLOv5 environment configured on your NVIDIA Jetson NX, then you are ready to start making inferences. Download an image, a video, or expose your webcam port to the model and kick off an inference session with:
python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4
Substitute your custom model's weights for the default yolov5s.pt.
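A couple of handy source variants for the same script (the video path is a placeholder, and device index 0 is the usual default webcam, though yours may differ):

```shell
# Inference on a video file -- the path is a placeholder.
python detect.py --source ./inference/videos/test.mp4 --weights best.pt --conf 0.4

# Inference on a live webcam stream (device 0).
python detect.py --source 0 --weights best.pt --conf 0.4
```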
Here are some printouts of my custom YOLOv5s model running inference at 30 FPS!
Making YOLOv5 Run Even Faster
Of course, we didn't give away all of the keys to the kingdom in this post. You may want to explore making your YOLOv5s model even smaller to speed up inference.
It is also worthwhile to look into TensorRT to speed things up even further.
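As a starting point toward TensorRT, newer versions of the YOLOv5 repository ship an export script that converts .pt weights to ONNX, which TensorRT can then build an engine from. The exact flags vary by repo version, so treat this as a sketch:

```shell
# Export PyTorch weights to ONNX as an intermediate step toward TensorRT.
# Note: in older versions of the repo this script lives at models/export.py
# and the --include flag may not exist.
python export.py --weights best.pt --include onnx
```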
Congratulations! You have learned how to deploy YOLOv5 all the way to an edge device, the Jetson Xavier NX, running inference in real time at 30 FPS.
Given the flexibility of the YOLO model to learn custom object detection problems, this is quite the skill to have.
We hope you enjoyed and as always, happy detecting.