When you are deploying computer vision solutions in areas where people are present, you may want to blur people to protect their privacy in an image or video. To do so, you can use a person detection API to find each person, then use the supervision Python package to blur every region where a person is present. This can be done in a few lines of code.

In this guide, we are going to walk through how to blur people in images and videos. The approach is slightly different for each. To blur people in images, we will use the Roboflow Hosted API and supervision. To blur people in videos, we will use the Roboflow Video Inference API, which is purpose-built for video processing, and supervision.

Here is an example of a video in which people have been blurred:

Without further ado, let’s get started!

Blur People API: Images

To blur people in an image, we need to know the pixel coordinates that correspond to each person. For that, we can use a person detection computer vision model. In this guide, we are going to use the People Detection model hosted on Roboflow Universe. This model can be used out of the box, without any training required.

We can query this model using a hosted API.
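If you are curious what the hosted API returns, below is a minimal sketch of querying the endpoint directly with the requests library. The URL follows Roboflow's standard hosted inference pattern, and the image URL is a hypothetical placeholder; the roboflow Python SDK we use in the rest of this guide wraps this call for you.

import requests

# A sketch of calling the Roboflow Hosted API directly, assuming the standard
# detect.roboflow.com URL pattern. The image URL below is a placeholder.
response = requests.post(
    "https://detect.roboflow.com/people-detection-o4rdr/7",
    params={
        "api_key": "API_KEY",  # your Roboflow API key
        "image": "https://example.com/people.jpg",  # hypothetical image URL
        "confidence": 40,
        "overlap": 30,
    },
)

print(response.json())  # bounding boxes for each detected person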

Let’s blur all of the people in this image:

First, we need to install the Roboflow and supervision Python packages, which we will use to call the people detection API and to blur people, respectively:

pip install roboflow supervision

Next, create a new Python file and add the following code:

from roboflow import Roboflow
import supervision as sv
import cv2

# authenticate with Roboflow and load the people detection model
rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("people-detection-o4rdr")
model = project.version(7).model

# run inference on our image
results = model.predict("people.jpg", confidence=40, overlap=30).json()

# load the model predictions into a supervision Detections object
predictions = sv.Detections.from_inference(results)

image = cv2.imread("people.jpg")

# keep only predictions with class_id == 0, which maps to the class "person"
people = predictions[predictions.class_id == 0]

blur_annotator = sv.BlurAnnotator()

annotated_frame = blur_annotator.annotate(
    scene=image.copy(),
    detections=people
)

sv.plot_image(annotated_frame)

Above, replace API_KEY with your Roboflow API key. Learn how to retrieve your Roboflow API key. Then, run the code. The image you are processing will appear on screen, with all people blurred:

We successfully blurred the people in the image.
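
If you would rather save the blurred image to disk than display it on screen, you can swap the plot_image() call for OpenCV's imwrite (the output filename here is our own choice):

# save the blurred image instead of displaying it
cv2.imwrite("people-blurred.jpg", annotated_frame)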

Blur People API: Videos

To blur people in videos, we can use the hosted Roboflow Video Inference API. This API is optimized for running inference on videos. We can send a video to the API and specify that we want to use a pre-trained people detection model. Then, we can retrieve predictions for each frame. We can use those predictions to blur people in a video.

Create a new Python file and add the following code:

import json

from roboflow import Roboflow
import numpy as np
import supervision as sv
import cv2

PROJECT_NAME = "people-detection-o4rdr"
VIDEO_FILE = "people.mp4"

rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project(PROJECT_NAME)
model = project.version(7).model

# submit the video for batch processing on the Roboflow Video Inference API
job_id, signed_url, expire_time = model.predict_video(
    VIDEO_FILE,
    fps=5,
    prediction_type="batch-video",
)

# poll until the results are ready, then save them to a file
results = model.poll_until_video_results(job_id)

with open("results.json", "w") as f:
    json.dump(results, f)

frame_offset = results["frame_offset"]
model_results = results[PROJECT_NAME]

blur_annotator = sv.BlurAnnotator()

def callback(scene: np.ndarray, index: int) -> np.ndarray:
    # Inference ran at a lower FPS than the source video, so most frames do
    # not have their own predictions. Use the predictions for the exact frame
    # when available; otherwise, fall back to the nearest processed frame.
    if index in frame_offset:
        detections = sv.Detections.from_inference(
            model_results[frame_offset.index(index)])
    else:
        nearest = min(frame_offset, key=lambda x: abs(x - index))
        detections = sv.Detections.from_inference(
            model_results[frame_offset.index(nearest)])

    return blur_annotator.annotate(scene=scene, detections=detections)

sv.process_video(
    source_path=VIDEO_FILE,
    target_path="output.mp4",
    callback=callback,
)

Above, replace API_KEY with your Roboflow API key. Learn how to retrieve your Roboflow API key. We recommend setting fps to 5 as a starting point, which means inference will run five times for every second of video. For example, an eight-second clip processed at fps=5 results in roughly 40 inference calls. The higher the FPS, the more expensive inference will be.

When you run the code above, your video will be sent for processing. The amount of time it takes for inference to run on your video depends on how many frames are in your video. The “poll_until_video_results” function will poll the video API every 60 seconds until a result is available. We will then save that result to a file.
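
Because we saved the raw results to results.json, you can inspect them before rendering the final video. Here is a small sketch, based on the structure used in the script above, where "frame_offset" lists the processed frame indices and the project name key holds one set of predictions per processed frame:

import json

with open("results.json") as f:
    results = json.load(f)

# the number of frames that were sent for inference
print(f"Frames processed: {len(results['frame_offset'])}")

# the predictions for the first processed frame
print(results["people-detection-o4rdr"][0])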

You will see the following output as processing starts:

loading Roboflow workspace...
loading Roboflow project...
Checking for video inference results for job 15be7ab5-232a-4e02-aa47-864ff8a2581b every 60s

The video inference API returns the pixel coordinates of people in each frame. We then use the supervision Python package to blur the regions where people are present.
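
Note that the callback above blurs every detection the model returns. If you want to mirror the image example and blur only the "person" class, you can filter the detections inside the callback before annotating, assuming as before that class_id 0 maps to "person":

# inside callback(), after building detections:
detections = detections[detections.class_id == 0]  # keep only "person" boxes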

When the script above has finished running, the results will be saved to a file called “output.mp4”. Here is an example of the output of the video blur code above:

We successfully blurred people in the video.

Conclusion

When you are recording video that contains people who are not relevant to your computer vision project, blurring the regions where they appear is recommended. Doing so preserves the privacy of everyone in frame.

In this guide, we walked through how to blur people in images and videos using the Roboflow image and video inference APIs and the supervision Python package. We used the People Detection model hosted on Roboflow Universe to detect people in images and videos, then supervision to blur the regions where they appeared.