Launch: Run Vision Models on Multiple Streams

Roboflow Inference, a high-performance computer vision inference server developed by Roboflow, now supports running computer vision models on multiple streams. You can deploy any model hosted on Roboflow with Inference. This feature allows you to provide one or more video sources and run inference on each of the sources in real time.

This feature is ideal for large-scale deployments where you want a central server to process several video sources. For example, consider a scenario where you have a CCTV system looking for people outside a factory at night. You could connect to the RTSP streams of several cameras at once and run your vision model on one server.

In this guide, we are going to walk through how to run a computer vision model on multiple streams on the same device.

Here is an example of Inference running on multiple streams:

[Video: Inference running on multiple streams, 0:08]

Without further ado, let’s get started!

Deploy Vision Models on Multiple Streams

InferencePipeline in Roboflow Inference allows you to pass in one or more video sources and process each stream using a callback function that you define.

InferencePipeline supports the following input sources:

  1. RTSP streams
  2. Device webcams
  3. Video files (MP4)

If you are a Roboflow Enterprise customer, we have additional offerings for deploying models on cameras commonly used in manufacturing facilities for machine vision. Contact the Roboflow sales team to learn more about these offerings.

Step #1: Install Inference

To use this function, first install Roboflow Inference:

pip install inference
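To confirm the installation succeeded, you can inspect the installed package metadata:

pip show inference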

Step #2: Authenticate with Roboflow

Next, you need to set a Roboflow API key. This key will allow you to authenticate with Roboflow and retrieve any private model on your account or access any of the 50,000+ public models available on Roboflow Universe.

To authenticate, run:

export ROBOFLOW_API_KEY="your_api_key"

Learn how to find your Roboflow API key.
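If you would rather configure the key from Python (for example, in a notebook), one option is to set the environment variable in code before initializing the pipeline; the InferencePipeline we set up in the next step also accepts an api_key argument if you prefer to pass the key explicitly. A minimal sketch, using a placeholder key:

import os

# Placeholder for illustration; substitute your actual Roboflow API key.
os.environ["ROBOFLOW_API_KEY"] = "your_api_key"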

Step #3: Initialize a Pipeline

Next, create a new Python file and add the following code:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    video_reference=["your_video.mp4", "your_other_video.mp4"],  # one entry per stream
    model_id="yolov8n-640",  # public YOLOv8n model trained on COCO
    on_prediction=render_boxes,  # built-in sink that draws predictions on each frame
)
pipeline.start()  # begin processing all streams
pipeline.join()  # block until processing finishes

In this code, we initialize an inference pipeline using the InferencePipeline.init() method. We initialize this pipeline with:

  1. A list of input sources;
  2. The ID of the model we want to run on our video sources; and
  3. A callback function that processes predictions from the model.

If you want to use an RTSP stream, include the RTSP stream URL in the video_reference list. If you want to use a webcam, pass in the device ID of each webcam whose stream you want to run inference on.
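As a sketch, a pipeline mixing an RTSP camera and a local webcam could be initialized like this (the RTSP URL and credentials below are placeholders):

pipeline = InferencePipeline.init(
    video_reference=[
        "rtsp://user:password@192.168.1.10:554/stream1",  # placeholder RTSP URL
        0,  # device ID of the first webcam attached to this machine
    ],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
)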

We then start our pipeline using the start() and join() methods.

Above, we used the default YOLOv8n model for testing. You can configure the pipeline to use any model deployed on Roboflow: any of the private models on your account, or any of the 50,000+ public models available on Roboflow Universe.

To use a custom model, you will need a model ID. Learn how to find your model ID.
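A model ID combines the project's URL slug with a version number, in the form project-slug/version. For example, with a hypothetical project called safety-vests at version 3, you would swap in:

model_id="safety-vests/3"  # hypothetical project slug and version number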

We are using the default render_boxes callback function available in Inference. This will display the predictions from our model on each stream so we can visualize the results of our system. With that said, you can set your own callback function to implement additional processing logic. To learn how to write your own callback function, refer to the InferencePipeline callback signature documentation.
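To give a rough idea, the sketch below assumes the common single-frame callback signature, in which Inference passes each sink a dictionary of predictions and a VideoFrame object; check the callback signature documentation for the exact contract in your version, including how frames are batched across multiple streams.

from inference.core.interfaces.camera.entities import VideoFrame

def count_people(predictions: dict, video_frame: VideoFrame) -> None:
    # Keep only detections labeled "person" and log a per-frame count.
    people = [p for p in predictions.get("predictions", []) if p.get("class") == "person"]
    print(f"Frame {video_frame.frame_id}: {len(people)} person detection(s)")

# Pass the function as the sink when initializing the pipeline:
# on_prediction=count_people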

With our pipeline configured, we can now run a pipeline on multiple streams:

[Video: the pipeline running on multiple streams, 0:08]

Our pipeline is successfully identifying classes from our YOLOv8n checkpoint trained on the COCO dataset. This model can identify 80 classes. The above video shows our model identifying several classes, including people.

Follow our step-by-step guide on processing multiple camera streams for real-time traffic analytics if you’re interested in trying this with real data.

Conclusion

You can use InferencePipeline in Roboflow Inference to run models on multiple streams. With InferencePipeline, you can provide a list of RTSP streams, webcam streams, or video files on which to run inference concurrently.

You can use your own model hosted on Roboflow. This model is downloaded onto your device for optimal performance. You can also pass in a custom callback function to process predictions from your model. If you are deploying vision models for a commercial application, the Roboflow Field Engineering team can help you architect your system to achieve your business goals. To learn more about our Field Engineering offerings, contact the Roboflow sales team.