One common question we get is "Can I use my Roboflow model on a video?" The answer is yes! A video is really just a sequence of images, so your model can make predictions on each frame just as it does on a single image.

The process is simple: 1) split your video into frames, 2) get a prediction from your model for each frame, and 3) reconstruct the predicted frames back into a video.
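If you'd like to see what those three steps look like under the hood, here is a minimal sketch using ffmpeg and Roboflow's hosted inference endpoint. The model ID, version number, and file names are placeholders for illustration; the actual infer.sh script may differ in its details.

```shell
#!/bin/sh
# Hypothetical sketch of the three steps; assumes ffmpeg and curl are
# installed and ROBOFLOW_KEY is set. MODEL and VERSION are placeholders.
MODEL="your-model"
VERSION="1"

mkdir -p frames out

# 1) Split the video into frames (sampling 3 frames per second here).
ffmpeg -i input.mov -vf fps=3 frames/frame_%04d.jpg

# 2) Get a prediction for each frame. The format=image parameter asks the
#    hosted API to return the frame with predictions drawn on it.
for f in frames/*.jpg; do
  base64 "$f" | curl -s -d @- \
    "https://detect.roboflow.com/$MODEL/$VERSION?api_key=$ROBOFLOW_KEY&format=image" \
    -o "out/$(basename "$f")"
done

# 3) Reconstruct the annotated frames back into a video (12 fps playback).
ffmpeg -framerate 12 -i out/frame_%04d.jpg annotated.mp4
```

Sampling at a lower fps_in than the source video keeps the number of API calls manageable, and a higher fps_out speeds up playback of the result.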

Watch our full video-inference tutorial on YouTube.

For your convenience, we've created a simple open source shell script you can use to perform these three steps with your Roboflow Train models.

roboflow-ai/video-inference
Example showing how to do inference on a video file with Roboflow Infer
Note: Your model will perform much better on video if you've trained it on video frames as well; Roboflow Pro can ingest video files as training data. Learn more about uploading video data to Roboflow in our docs.

In just a few moments, you'll be able to visualize your model's output on a video file and it will look a little something like this:

Example output of infer.sh

That gif was generated with this command (and a model I trained in Roboflow):

ROBOFLOW_KEY=xxxxxxxx ./infer.sh rf-aquarium-merged--3 IMG_3203.mov fish.gif --fps_in 3 --fps_out 12 --scale 4

When you're ready to build your model's predictions directly into your application, check out our inference docs for example code in your preferred programming language.