
Roboflow supports GPT-5, the latest series of OpenAI models (GPT-5, GPT-5 Mini, and GPT-5 Nano). All users can access these models with their own OpenAI API key, or by using Roboflow credits with the Roboflow Managed API Key.
These models are now the best OpenAI has to offer for vision tasks. According to OpenAI, GPT-5 “excels across a range of multimodal benchmarks, spanning visual, video-based, spatial, and scientific reasoning.” Read our analysis of testing GPT-5 on 80+ vision tasks to see how it performs in practice.
If you haven’t explored using GPT-5 for vision tasks yet, you can access GPT-5 in the Roboflow Model Playground to experiment with your own data, or visit the vision-language model leaderboard to see its results across multiple real-world tasks.
How to use GPT-5 with Roboflow
GPT-5 is a powerful model to add to your vision pipelines and applications. The top ways Roboflow users build applications with models like GPT-5 are:
- Piloting new use cases. Large vision models are useful without any fine-tuning and help you prototype new applications quickly. This can help bring a project to life and secure the internal support or funding needed to invest in building a custom application.
- Multi-model pipelines. GPT-5 is an excellent model when you can focus it on specific and narrow tasks. This is useful for later stages of a vision pipeline where you can crop smaller portions of your images and pass them to GPT-5 for tasks like OCR, captioning, classification, and open prompting.
- Open-ended general questions. Response times for vision tasks can be 10-25 seconds, which makes GPT-5 a good fit when you sample data at a lower frequency. A common use case is to sample single frames, ask questions like “Is the machine running?”, “Is there an accident?”, “Are there any hazards?”, or “Are people in the frame?”, and then route the frames to other fine-tuned models when appropriate, as in the sketch after this list.
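Here is a minimal sketch of that sampling-and-routing pattern using the `openai` Python SDK. The camera source, sampling interval, prompt, and routing logic are assumptions you would adapt to your own application.

```python
# Minimal sketch: sample one frame every N seconds and ask GPT-5 a routing question.
# Assumptions: OPENAI_API_KEY is set, and `openai` and `opencv-python` are installed.
import base64
import time

import cv2
from openai import OpenAI

client = OpenAI()
SAMPLE_INTERVAL_S = 15  # sample slowly, since vision responses can take 10-25 seconds

capture = cv2.VideoCapture(0)  # placeholder source: webcam index or RTSP URL

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break

    # Encode the sampled frame as a base64 JPEG for the API request.
    _, buffer = cv2.imencode(".jpg", frame)
    image_b64 = base64.b64encode(buffer).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Are there any hazards in this frame? Answer yes or no."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    answer = response.choices[0].message.content or ""

    if answer.strip().lower().startswith("yes"):
        pass  # route this frame to a fine-tuned model or alerting logic here

    time.sleep(SAMPLE_INTERVAL_S)

capture.release()
```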
How to Build a Vision Workflow with GPT-5
In this guide, we are going to walk through how to use GPT-5 in Roboflow Workflows.
We will build an application that detects a book, crops the cover, and returns the title of the book. A fine-tuned detection model will detect the book, a crop step will isolate the cover, then GPT-5 will return the title. By cropping the region we pass to GPT-5, we save on token cost by removing background information and gain the ability to map a specific region of an image to a book title.
Here is an example of the output from the Workflow we will build:
You can try the Workflow here:
Let’s get started!
Step #1: Add a Detection Model
To get started, create a free Roboflow account. Then, click “Workflows” in the left sidebar and create a new Workflow.
When asked to choose a model, select “Object Detection”:
Click “Add Model”.
You will then be asked to choose a model. For this guide, we are going to use an open source book cover detection model hosted on Roboflow Universe.
Click “Public Models” and set the model ID to “book_inventory/3”:
Click “Add Model”.
Your model will then be added to your Workflow:
Next, click “Add Block” and search for the “dynamic crop” block:
This block will crop the book in the image. We can then pass the cropped book to OpenAI for OCR. The block will be automatically configured to use the bounding box regions from your detection model to determine what regions to crop.
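For intuition, here is a minimal sketch of what a dynamic crop does conceptually, assuming pixel-space bounding boxes from a detection model. This is purely illustrative; the Workflow block handles all of this for you.

```python
# Minimal sketch of a dynamic crop: cut each detected bounding box out of the
# source image so only the book cover is sent to GPT-5.
# Assumptions: `image` is an H x W x 3 numpy array and `boxes` holds pixel-space
# (x_min, y_min, x_max, y_max) coordinates from a detection model.
import numpy as np

def crop_detections(image: np.ndarray, boxes: list[tuple[int, int, int, int]]) -> list[np.ndarray]:
    crops = []
    height, width = image.shape[:2]
    for x_min, y_min, x_max, y_max in boxes:
        # Clamp coordinates so a box that overflows the frame still crops cleanly.
        x_min, y_min = max(0, int(x_min)), max(0, int(y_min))
        x_max, y_max = min(width, int(x_max)), min(height, int(y_max))
        crops.append(image[y_min:y_max, x_min:x_max].copy())
    return crops
```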
Here is what your Workflow should look like:
Step #2: Add an OpenAI Block
So far, our Workflow can find the location of one or more books in an image and crop the region where the book is in the image. We have one more step before our application is complete: using GPT-5 to read the title of the book.
Click “Add Block” and search for “OpenAI”:
Add an OpenAI block to your Workflow.
A configuration pane will open in which we can set up our OpenAI block.
First, we need to set our prompt. We are going to use the prompt:

> Return the title of the book.
Next, we need to set our API key. You can choose whether to use your own OpenAI key or the Roboflow Managed API Key.
If you use the Roboflow Managed API Key, OpenAI requests will be made using Roboflow’s API key. Your Roboflow credits will be used to cover the cost of the OpenAI requests. This means you can pay for your whole vision application through Roboflow rather than also maintaining a separate OpenAI billing account.
Next, scroll down to the “Model Version” form field. Set the version to either “gpt-5”, “gpt-5-mini”, or “gpt-5-nano”, depending on the model you want to use:
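For reference, the request this block makes for each crop is roughly equivalent to the sketch below using the `openai` Python SDK. The file name is a placeholder for a cropped book cover, and the model string should match the Model Version you selected.

```python
# Rough equivalent of the OpenAI block's request for a single crop.
# Assumptions: OPENAI_API_KEY is set and "book_crop.jpg" is a placeholder path
# to a cropped book cover produced by the dynamic crop step.
import base64

from openai import OpenAI

client = OpenAI()

with open("book_crop.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-5-mini",  # or "gpt-5" / "gpt-5-nano", matching your Model Version setting
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Return the title of the book."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```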
Step #3: Test and Deploy the Workflow
With a Workflow ready, we can start testing our project.
Click “Test Workflow” in the top left corner, then drag and drop an image that contains a book into the image field:
Here is an example of an image we can use:
Click “Run” to run your Workflow.
Our object detection model will find the book in the image, crop the region where the book is, then run OCR with GPT-5 to read the title of the book.
Here is the result:
We can see the “output” in the OpenAI response has the title of the book:
> AI and Machine Learning for Coders
The model successfully read the title of the book.
We can also see the cropped region of our input image.
When you are ready to deploy your Workflow, click “Deploy Workflow”. You will then see several options for running your Workflow. By default, you can deploy your Workflow in the Roboflow Cloud. This lets you use Roboflow’s infrastructure to run your detection model, with an OpenAI call made for the GPT-5 step.
On the “Deploy Workflow” tab, you will see a code snippet you can use to call your Workflow:
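The exact snippet comes from your Deploy Workflow tab. As a reference, calling a Workflow from Python with the `inference-sdk` package looks roughly like the sketch below; the API URL, workspace name, workflow ID, and image path are placeholders you should replace with the generated values.

```python
# Minimal sketch of calling a deployed Workflow with the inference-sdk package
# (`pip install inference-sdk`). Replace the placeholders with the values shown
# on your own "Deploy Workflow" tab.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",  # copy the URL from your generated snippet
    api_key="YOUR_ROBOFLOW_API_KEY",
)

result = client.run_workflow(
    workspace_name="your-workspace",   # placeholder
    workflow_id="your-workflow-id",    # placeholder
    images={"image": "book.jpg"},      # local path or URL to your test image
)

# The GPT-5 response (the book title) appears under the output name configured
# in your Workflow (for example, "output").
print(result)
```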
Conclusion
GPT-5 is the new flagship GPT model developed by OpenAI and is integrated into the Roboflow ecosystem. This model has both advanced reasoning capabilities and multimodal support: you can send an image to OpenAI and ask a question that the model will then answer.
You can use GPT-5, GPT-5 Mini, and GPT-5 Nano in Roboflow Workflows. This feature is available to all users. You can use your own OpenAI key to pay for OpenAI requests, or use the Roboflow Managed API Key to pay for your OpenAI requests using your Roboflow credits.
To learn more about building with Workflows, check out the Roboflow Workflows documentation.
Cite this Post
Use the following entry to cite this post in your research:
James Gallagher. (Aug 14, 2025). Launch: Use GPT-5 in Roboflow. Roboflow Blog: https://blog.roboflow.com/gpt-5-roboflow-integration/