Broadcast Computer Vision Predictions with Apache Kafka
Apache Kafka allows you to send and store data from different sources on a network. You can use Apache Kafka to receive predictions from computer vision models deployed in your facility.
This allows you to integrate your models directly into your existing Kafka event streaming infrastructure. For example, if a product on an assembly line does not meet quality standards, metadata about the product – assembly line position, the defect detected, where the defect is on the product – can be stored in a centralized system.
In this guide, we are going to show how to broadcast computer vision predictions with Apache Kafka. We will use an open source project, Roboflow Inference, to deploy a computer vision model, then send predictions to a Kafka consumer.
This guide assumes you already have a Kafka receiver (broker) set up. If you do not, refer to the official Apache Kafka quickstart for setup instructions.
Without further ado, let’s get started!
Step #1: Prepare a Model
We will need a computer vision model to use to build our Kafka streaming logic. For this guide, we are going to deploy a model using Roboflow Inference, a computer vision inference server.
You can deploy any model trained or uploaded to Roboflow on your hardware using Roboflow Inference. To learn about training a model on Roboflow, refer to the Roboflow getting started guide.
We will deploy a model that verifies the integrity of a bottle cap. The model can identify if a bottle cap is correctly sealed, loose, or missing. This model could be used on an assembly line to assure the quality of drink bottles. The full dataset and model are available on Roboflow Universe.
To get started, install Inference:
pip install inference
Then, create a new Python file and add the following code:
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes
pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/7",
    video_reference=0,  # path to a video, device ID (int, usually 0 for built-in webcams), or RTSP stream URL
    on_prediction=render_boxes,
)

pipeline.start()
pipeline.join()
In this code, we use InferencePipeline to run our model on a video stream. The value 0 is the device ID of the default webcam on a device. You can also specify an RTSP stream URL if you have an RTSP stream on which you want to run inference.
Above, replace:
- bottle-cap-integrity/7 with your Roboflow model ID. Learn how to retrieve your model ID. You can also leave the ID as-is to test with the bottle cap model that Roboflow has made.
- 0 with your webcam ID. Your default webcam should have the ID 0. You can also pass in an RTSP stream URL or a video file name.
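To make the accepted forms of video_reference concrete, here is a quick sketch; the RTSP address and file name below are hypothetical placeholders, not values from this project:

```python
# Accepted forms for InferencePipeline's video_reference parameter:
video_reference = 0  # device ID of a webcam (0 is usually the built-in camera)
video_reference = "rtsp://192.0.2.10:554/stream"  # an RTSP stream URL (placeholder address)
video_reference = "assembly_line.mp4"  # path to a local video file (placeholder name)
```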
To run the code above, you will need to set your Roboflow API key in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY=""
Learn how to retrieve your API key.
When you run this script, your video feed will appear on screen. Detections from your model will be annotated with bounding boxes in the feed.
Now that we have a model set up, we can add logic to publish predictions to a Kafka topic.
Step #2: Publish to a Kafka Topic
You can send messages to a Kafka topic in Python. Before you can send messages, you will need to create a topic. How you do this will depend on the Kafka broker you are using.
Create a topic with a descriptive name. This topic will receive messages from your model.
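For example, using the scripts that ship with the Kafka distribution, you can create a topic from the command line. A sketch, assuming a broker running locally on the default port and that you run the command from your Kafka install directory:

```shell
# Create a topic named after the model; adjust the path to your Kafka install
bin/kafka-topics.sh --create \
  --topic bottle-cap-integrity \
  --bootstrap-server localhost:9092
```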
Then, install the kafka-python package:
pip install kafka-python
This package has utilities you can use to send messages using the Kafka protocol.
Next, create a new Python file and add the following code:
import json
from kafka import KafkaProducer
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
TOPIC_NAME = "bottle-cap-integrity"
def on_prediction(predictions, video_frame):
    # Draw bounding boxes on the frame for local display
    render_boxes(predictions=predictions, video_frame=video_frame)
    # Serialize the predictions to JSON, then encode to bytes for Kafka
    predictions_as_bytes = json.dumps(predictions).encode('utf-8')
    producer.send(TOPIC_NAME, predictions_as_bytes)
pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/7",
    video_reference=0,
    on_prediction=on_prediction,
    confidence=0.3,
)

pipeline.start()
pipeline.join()
Above, replace:
- localhost:9092 with the host name and port on which your Kafka receiver is running.
- The value of TOPIC_NAME with the topic to which you want to send messages.
- bottle-cap-integrity/7 with your Roboflow model ID. Learn how to retrieve your model ID.
- video_reference=0 with the video stream on which you want to run your model. Your default webcam should have the ID 0. You can also pass in an RTSP stream URL or a video file name.
The code above connects to your Kafka broker as a producer. Then, the specified model is run on frames from the incoming video stream. Each set of predictions is serialized to JSON, encoded as bytes, and sent to the Kafka receiver.
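The serialization step can be illustrated in isolation. A minimal sketch, using a simplified payload whose field names are illustrative rather than Inference's exact output schema:

```python
import json

# A simplified prediction payload (field names are illustrative)
prediction = {
    "predictions": [
        {"class": "loose", "confidence": 0.91, "x": 320, "y": 240}
    ]
}

# Producer side: serialize to a JSON string, then encode to bytes,
# since Kafka message values are raw bytes
payload = json.dumps(prediction).encode("utf-8")

# Consumer side: decode the bytes back into a Python dict
decoded = json.loads(payload.decode("utf-8"))
assert decoded == prediction
```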
When you are ready, run the code above.
You should see:
- A window open up which shows predictions from your model, and;
- Predictions coming in to your Kafka receiver.
If predictions are not coming into your server, check to make sure you have created your topic, you have specified the right topic name, and that you are subscribed to the correct topic in the window where you want to see output from your server.
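On the consuming side, you can read these messages back with kafka-python's KafkaConsumer. A minimal sketch, assuming the broker address and topic name used above:

```python
import json


def decode_prediction(raw: bytes) -> dict:
    # Reverse the producer's encoding: bytes -> JSON string -> dict
    return json.loads(raw.decode("utf-8"))


def consume():
    # Requires a running broker at localhost:9092 and the topic created earlier
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "bottle-cap-integrity",
        bootstrap_servers=["localhost:9092"],
        auto_offset_reset="earliest",
    )
    for message in consumer:
        print(decode_prediction(message.value))
```

Call consume() once your broker is running; each printed dict matches what the producer script sent.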
Conclusion
You can use Apache Kafka to send computer vision predictions from a device to a server. This has many applications in computer vision deployments.
For example, you could deploy NVIDIA Jetsons to run computer vision models on an assembly line, then collect all results from each model for further processing in a central server. You can send and receive the model results using Kafka.
In this guide, we walked through how to broadcast computer vision model predictions using Apache Kafka. We showed how to deploy a model with Roboflow Inference, then how to use the kafka-python package to send predictions to a Kafka receiver.