To follow along with this tutorial, you will need a Raspberry Pi 4 or Raspberry Pi 400 running a 64-bit operating system; this guide uses the 64-bit version of Raspberry Pi OS.

Roboflow supports deploying custom computer vision models to Raspberry Pi devices with a performance-optimized Docker container. The Raspberry Pi is a popular edge device because of its price point, size, power consumption, and offline capability. With those benefits in mind, this guide will show you how to deploy a custom computer vision model to your Raspberry Pi.

This deployment method will only work for models with model weights available for Roboflow Deploy.

Gather Data and Train a Model

To get started, create a Roboflow account, then create a workspace and project. To add data to your project, you can upload your own images and annotations, or import images from a dataset on Roboflow Universe.

Then, generate a dataset version and train your model. We are moving quickly through these steps, but for a step-by-step tutorial on creating your dataset, you can use the Roboflow Quick Start guide.

In this example, we'll use the Construction Site Safety dataset and pre-trained model from the Roboflow Universe "Construction" dataset page. Compatible dataset versions have a green check mark and list the model type used in training.

Install Docker

To install Docker on Raspberry Pi, you first need to check that you have a compatible system, and then install the prerequisites.

You'll need a Raspberry Pi 4 (or Raspberry Pi 400) running a 64-bit operating system. To verify that you're running a compatible system, type arch into your Raspberry Pi's command line and confirm that it outputs aarch64.

If you haven't yet installed the 64-bit Raspberry Pi OS, here's how to get it set up on your SD Card or Boot Drive with Raspberry Pi Imager:

Raspberry Pi Imager
  1. Download Raspberry Pi Imager
  2. Install the Raspberry Pi Imager on your system
  3. Ensure you have added your SD Card or Boot Drive to the appropriate removable storage location on your computer or laptop
  4. Start up Raspberry Pi Imager
  5. Click Choose OS --> Raspberry Pi OS (Other), and finally select Raspberry Pi OS (64-Bit)
  6. Click Choose Storage and select the appropriate storage device
  7. Click Write
  8. Remove your SD Card or Boot Drive from the removable storage location on your computer or laptop

Next, follow the instructions here to complete setup of Raspberry Pi OS (64-bit) on your Raspberry Pi 4 (or Raspberry Pi 400).

Installation Process - Option 1:

This is the fastest way to get Docker installed on your Raspberry Pi. Install Docker with Docker's convenience script:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Install Docker

Installation Process - Option 2:

This method is a longer process. First, install the prerequisite packages:

sudo apt-get install apt-transport-https ca-certificates software-properties-common -y

Now, download and add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Next, add the Docker repository to the Raspberry Pi's sources list. Because the 64-bit Raspberry Pi OS is Debian-based and runs on arm64, use the arm64 Debian repository rather than the 32-bit armhf raspbian one:

echo "deb [arch=arm64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

Finally, update the package database and install Docker:

sudo apt-get update
sudo apt-get install docker-ce -y

After installation is complete, you can start using Docker on your Raspberry Pi.

Install the Roboflow Inference Server

The Inference API Server is available as a Docker container optimized and configured for the Raspberry Pi. To install the Inference API Server, pull the container:

sudo docker pull roboflow/inference-server:cpu

Docker automatically detects your Raspberry Pi's CPU architecture and pulls down the correct deployment container for your device.

💡
Roboflow Inference, used in this guide to deploy a model, is now available as an open source project.

Inference works across devices, from ARM CPU devices like the Raspberry Pi to NVIDIA Jetson. See the Roboflow Inference documentation for more information about Inference.

Running the Roboflow Inference Server

sudo docker run --net=host roboflow/inference-server:cpu
Inference Server Installation: Success

Your Raspberry Pi is now ready to make predictions. The Hosted Inference API documentation includes pre-written code snippets in several languages to help accelerate your development.

The first inference call to the model will take a few seconds to download and initialize your model weights. Once an initial inference call is successfully made, subsequent predictions will be processed faster.
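
For example, if you prefer calling the server over HTTP rather than using the Python SDK shown below, a request sketch might look like this. It assumes the local inference server accepts the same base64 POST format as the Hosted Inference API, and it uses the construction-site-safety model (version 25) from this example:

# sketch: send a base64-encoded image to the local inference server,
# assuming it mirrors the Hosted Inference API request format
import base64
import requests

with open("YOUR_IMAGE.jpg", "rb") as f:
    img_str = base64.b64encode(f.read())

resp = requests.post(
    "http://localhost:9001/construction-site-safety/25",
    params={"api_key": "YOUR_PRIVATE_API_KEY"},
    data=img_str,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(resp.json())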

Using the Python SDK

As noted above, one option for receiving model predictions is to send requests in the Hosted Inference API format.

Another option is to use the Roboflow Python SDK.

To install the Roboflow Python package, activate your Python environment or virtual environment and run pip install roboflow.

Installing the Roboflow Python package in a virtual environment
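
To confirm the package installed correctly, a quick sanity check with the standard library (not a Roboflow-specific API) should print the installed version:

# sanity check: print the installed roboflow package version
from importlib.metadata import version

print(version("roboflow"))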

Add the following code snippet to a Python (.py) file. You can create the file with any text editor, such as Notepad, or with a code editor such as Vim or VS Code.

# import the Roboflow Python package
from roboflow import Roboflow

# instantiate the Roboflow object and authenticate with your credentials
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
# load/connect to your project
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")
# load/connect to your trained model, served by the local inference server
model = project.version(VERSION_NUMBER, local="http://localhost:9001/").model

# perform inference on an image file
prediction = model.predict("YOUR_IMAGE.jpg")
# print prediction results in JSON
print(prediction.json())

Replace YOUR_PRIVATE_API_KEY with your Private API Key, then fill in the remaining placeholders; a fully filled-in example follows the list below.

API Keys for models on Roboflow Universe are available on the API Docs tab:

Construction Site Safety (Roboflow Universe API Docs)
  • YOUR_WORKSPACE: replace with the project's workspace_id, roboflow-universe-projects in this example
  • YOUR_PROJECT: replace with the project's project_id, construction-site-safety in this example
  • VERSION_NUMBER: replace with the integer corresponding to the trained model's version number, 25 in this example
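
Filled in for this example's Universe model, the snippet looks like the following (you still need to supply your own Private API Key and image path):

# connect to the Construction Site Safety model from Roboflow Universe
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")
project = rf.workspace("roboflow-universe-projects").project("construction-site-safety")
# version 25, served by the local inference server on this Pi
model = project.version(25, local="http://localhost:9001/").model

prediction = model.predict("YOUR_IMAGE.jpg")
print(prediction.json())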

Save and run the .py file after making the necessary updates. For example, if the file is named infer.py, run it with: python3 infer.py.

First Inference: Downloading and Initializing Model Weights
Example model predictions on one image

You can also run the model in a client-server context and send images to the Raspberry Pi for inference from another machine on the network.

To do this, simply replace localhost in the local parameter with the Raspberry Pi's local IP address.
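
For instance, with a hypothetical Pi address of 192.168.1.50, the model line would become:

# 192.168.1.50 is a placeholder; substitute your Pi's actual local IP address
model = project.version(25, local="http://192.168.1.50:9001/").model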

Pi Performance Expectations

We saw about 1.3 frames per second on the Raspberry Pi 400. These results were obtained while operating in a client-server context (so some minor network latency was involved) with a 416x416 model.
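
If you want to estimate throughput on your own setup, a rough timing loop over the model object from the earlier snippet might look like this sketch (the repetition count and image path are arbitrary):

# rough throughput estimate: time repeated predictions on one image
# assumes `model` is the Roboflow model object from the snippet above
import time

N = 20  # arbitrary number of repeated predictions
start = time.time()
for _ in range(N):
    model.predict("YOUR_IMAGE.jpg")
elapsed = time.time() - start
print(f"~{N / elapsed:.2f} frames per second")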

The model container will continue processing predictions even after the Raspberry Pi is disconnected from the internet. However, going offline removes the ability to run predictions on hosted images.

Turning off the Raspberry Pi will shut down the container. If power to the Raspberry Pi is lost, or you shut down your Pi, repeat the steps outlined above, beginning with the Running the Roboflow Inference Server section, to once again process model predictions.

Conclusion

Congratulations! Following this tutorial, you are now able to deploy a custom computer vision model to your Raspberry Pi for edge inference.

A benefit of training and deploying models with Roboflow is the ability to change models or deploy targets with one line of code. Now that you've installed the Docker container on your device, you'll be able to quickly deploy the best version of any future models to your production application.