To follow along with this tutorial, you will need a Raspberry Pi 4 or Raspberry Pi 400 running the 64-bit Ubuntu operating system.
Roboflow supports deploying custom computer vision models to Raspberry Pi devices with a performance-optimized Docker container. The Raspberry Pi is a popular edge device thanks to its price point, size, power consumption, offline capability, and more. With those benefits in mind, this guide will show you how to deploy a custom computer vision model to your Raspberry Pi.
This deployment method will only work for models with model weights available for Roboflow Deploy.
Gather Data and Train a Model
To get started, create a Roboflow account, then create a workspace and project. To add data to your project, you can do any or all of the following:
In this example, we'll use the Construction Site Safety dataset and pre-trained model found on the Roboflow Universe "Construction" dataset page. Compatible dataset versions have a green check mark and denote the model type used in training.
To install Docker on Raspberry Pi, you first need to check that you have a compatible system, and then install the prerequisites.
You'll need a Raspberry Pi 4 (or Raspberry Pi 400) running the 64-bit version of Ubuntu. To verify that you're running a compatible system, type arch into your Raspberry Pi's command line and confirm that it outputs aarch64.
If you haven't yet installed the 64-bit Ubuntu OS for Raspberry Pi, here's how to get it set up on your SD Card or Boot Drive with Raspberry Pi Imager:
- Download Raspberry Pi Imager
- Install the Raspberry Pi Imager on your system
- Ensure you have added your SD Card or Boot Drive to the appropriate removable storage location on your computer or laptop
- Start up Raspberry Pi Imager
- Click Choose OS, select Raspberry Pi OS (Other), and finally select Raspberry Pi OS (64-Bit)
- Click Choose Storage and select the appropriate storage device
- Click Write and wait for the imaging process to complete
- Remove your SD Card or Boot Drive from the removable storage location on your computer or laptop
Next, follow the instructions here to complete setup of Raspberry Pi OS (64-bit) on your Raspberry Pi 4 (or Raspberry Pi 400).
Installation Process - Option 1 (Recommended):
This is the fastest way to get Docker installed on your Raspberry Pi.
Install Docker on your Raspberry Pi with Docker's convenience script:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Installation Process - Option 2:
This method is a longer process. First, install the prerequisite packages by running the command:
sudo apt-get install apt-transport-https ca-certificates software-properties-common -y
Now, download and add the GPG key for the official Docker repository to your system. You can do this with the command:
curl -fsSL https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -
Next, add the Docker repository to the Raspberry Pi's sources list. You can do this with the command:
echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list
Finally, update the package database and install Docker. You can do this by running the commands:
sudo apt-get update
sudo apt-get install docker-ce
After installation is complete, you can start using Docker on your Raspberry Pi.
Install the Roboflow Inference Server
The Inference API Server is available as a Docker container optimized and configured for the Raspberry Pi. To install the Inference API Server, pull the container:
sudo docker pull roboflow/inference-server:cpu
This command automatically detects your Raspberry Pi's CPU and pulls down the correct deployment container for your device.
Running the Roboflow Inference Server
To start the inference server, run:
sudo docker run --net=host roboflow/inference-server:cpu
Your Raspberry Pi is now ready to serve predictions. The Hosted Inference API documentation includes pre-written code snippets in several languages to help accelerate your development.
The first inference call to the model will take a few seconds to download and initialize your model weights. Once an initial inference call is successfully made, subsequent predictions will be processed faster.
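As an illustration, a direct HTTP call to the local server can be sketched in Python. This is a minimal sketch that assumes the container mirrors the hosted API's route shape (host, port 9001, model ID, version, and API key); the model ID, version number, and key below are placeholders from this example, not values the container guarantees.

```python
import base64

def infer_url(model_id: str, version: int, api_key: str,
              host: str = "localhost", port: int = 9001) -> str:
    # Build the inference endpoint URL for the local container
    # (assumed to follow the hosted API's /<model>/<version> route).
    return f"http://{host}:{port}/{model_id}/{version}?api_key={api_key}"

def encode_image(path: str) -> bytes:
    # Base64-encode an image file for the POST body.
    with open(path, "rb") as f:
        return base64.b64encode(f.read())

url = infer_url("construction-site-safety", 25, "YOUR_PRIVATE_API_KEY")

# With the container running, the request could be sent like this (not run here):
# import requests
# resp = requests.post(url, data=encode_image("YOUR_IMAGE.jpg"),
#                      headers={"Content-Type": "application/x-www-form-urlencoded"})
# print(resp.json())
```

The pre-written snippets in the Hosted Inference API documentation handle these details for you; the sketch above only shows what such a call looks like under the hood.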
Using the Python SDK
As noted above, you have the option to use the Hosted Inference API to receive model predictions.
Another available option is using the Roboflow Python SDK.
To install the Roboflow Python package, activate your Python environment or virtual environment, and enter:
pip install roboflow
Add the following code snippet to a Python (.py) file. You can create the file in a code editor such as Vim or VSCode, or in a plain-text editor like Notepad, as long as you save it with a .py extension.
# import the Roboflow Python package
from roboflow import Roboflow

# instantiate the Roboflow object and authenticate with your credentials
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")

# load/connect to your project
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")

# load/connect to your trained model
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model

# perform inference on an image file
prediction = model.predict("YOUR_IMAGE.jpg")

# print prediction results in JSON
print(prediction.json())
Update the placeholders in the snippet as follows:
- YOUR_PRIVATE_API_KEY: replace with your Private API Key. API Keys for models on Roboflow Universe are available on the API Docs tab.
- YOUR_WORKSPACE: replace with the project's workspace ID, roboflow-universe-projects in this example
- YOUR_PROJECT: replace with the project's project ID, construction-site-safety in this example
- VERSION_NUMBER: replace with the integer corresponding to the trained model's version number, 25 in this example
Save and run the .py file after making the necessary updates. For example, if the file is named infer.py, run it with:
python infer.py
The model can also be run in a client-server context, with another machine on the network sending images to the Raspberry Pi for inference. To do this, simply replace localhost in the local URL passed to project.version() with the Raspberry Pi's local IP address.
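The swap can be sketched as a small helper that builds the local URL for either case; the 192.168.1.50 address below is a placeholder for your Raspberry Pi's actual local IP, not a value from this tutorial.

```python
def inference_server_url(host: str, port: int = 9001) -> str:
    # Base URL passed as the `local` argument to project.version().
    return f"http://{host}:{port}/"

print(inference_server_url("localhost"))     # running on the Pi itself
print(inference_server_url("192.168.1.50"))  # from another machine (placeholder IP)

# Used with the SDK (not run here):
# model = project.version(VERSION_NUMBER,
#                         local=inference_server_url("192.168.1.50")).model
```

Everything else in the snippet stays the same; only the host portion of the URL changes.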
Pi Performance Expectations
We saw about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so some minor network latency is involved) and with a 416x416 model.
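For planning purposes, that throughput translates to per-frame latency as a back-of-the-envelope calculation (not an additional benchmark):

```python
# Rough per-frame latency implied by the observed throughput.
fps = 1.3
ms_per_frame = 1000 / fps
print(f"about {ms_per_frame:.0f} ms per frame")  # roughly 769 ms
```

If your application needs lower latency, running inference directly on the Pi (avoiding the network round trip) or using a smaller input resolution may help.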
The model container will continue processing predictions even after the Raspberry Pi is disconnected from the internet. However, going offline removes the ability to run predictions on hosted images.
Turning off the Raspberry Pi shuts down the container. If power to the Raspberry Pi is lost, or you shut down your Pi, repeat the steps outlined above, beginning with the Running the Roboflow Inference Server section, to once again process model predictions.
Congratulations! Following this tutorial, you are now able to deploy a custom computer vision model to your Raspberry Pi for edge inference.
A benefit of training and deploying models with Roboflow is the ability to change models or deploy targets with one line of code. Now that you've installed the Docker container on your device, you'll be able to quickly deploy the best version of any future models to your production application.