Get a Hosted API for your Object Detection Model
Roboflow provides tools for labeling and organizing your images and training a computer vision model on them. Once you finish running one of the Jupyter notebooks from our Computer Vision tutorials, you can download a weights file optimized for making predictions on your custom dataset.
The Current State of Affairs
But what next? How do you use those weights?
That's one of the most common questions we receive. Previously, the answer for many use-cases was: "stand up a server, write a script to accept images, run them through your model, and return the results."
But this can be tricky and expensive. If you're not careful, you may end up paying for expensive GPU instances to sit idle waiting to handle inbound requests. And you'll either need to write logic to automatically scale up your infrastructure when traffic increases or resign yourself to queueing requests and potentially letting your service go down under high load.
Enter Roboflow Infer
Luckily, there's now a better way. With Roboflow Train and Roboflow Infer, you can click a button and we will train a custom, state-of-the-art object detection model on your dataset and give you a hosted API for inference.
We handle hosting the model and scaling your infrastructure up and down with demand. And best of all, unlike other AutoML services, you'll pay based only on the number of inferences you use (not for idle server time).
How to Use
Roboflow Infer is part of Roboflow Pro. Once you're onboarded, you'll receive an access token you can use to query your deployed models with just a few lines of code in any major programming language. We even have a playground you can use to build a custom front-end right in your browser.
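For example, a call from Python might look something like the sketch below. The endpoint URL, token, and response shape shown here are illustrative placeholders; the exact snippet for your model appears in your Roboflow dashboard.

```python
import base64
import requests

# Placeholders -- replace with the values from your Roboflow dashboard.
API_KEY = "YOUR_ACCESS_TOKEN"
MODEL_ENDPOINT = "https://infer.roboflow.com/your-model/1"

# Read a local image and base64-encode it for the POST body.
with open("example.jpg", "rb") as f:
    img_str = base64.b64encode(f.read()).decode("ascii")

# Send the image to the hosted model and print the returned predictions.
response = requests.post(
    MODEL_ENDPOINT,
    params={"api_key": API_KEY},
    data=img_str,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())  # e.g. {"predictions": [{"x": ..., "y": ..., "class": ...}, ...]}
```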
What's Next
Server-side inference solves many use-cases. We already have customers using Roboflow Infer in the wild to power features in their mobile and web applications. When combined with our upload API, you can use it as part of an active learning workflow to continually improve your model as it is exposed to more real-world edge cases.
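As one illustration of that workflow, a simple active learning loop might run each new image through the hosted model and re-upload the ones it is least confident about so they can be labeled and added to your dataset. The endpoint URLs, parameter names, and confidence threshold in this sketch are assumptions for illustration, not the exact API; check the Roboflow docs for the real upload call.

```python
import base64
import requests

# Placeholder values -- substitute your own token, model, and dataset endpoints.
API_KEY = "YOUR_ACCESS_TOKEN"
INFER_ENDPOINT = "https://infer.roboflow.com/your-model/1"
UPLOAD_ENDPOINT = "https://api.roboflow.com/dataset/your-dataset/upload"
CONFIDENCE_THRESHOLD = 0.5  # tune for your application

def encode_image(path):
    """Base64-encode a local image file for the request body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def process(image_path):
    """Run inference; re-upload the image for labeling if the model was unsure."""
    img_str = encode_image(image_path)
    preds = requests.post(
        INFER_ENDPOINT,
        params={"api_key": API_KEY},
        data=img_str,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    ).json().get("predictions", [])

    # Low-confidence (or empty) predictions are good candidates for human review.
    if not preds or any(p.get("confidence", 1.0) < CONFIDENCE_THRESHOLD for p in preds):
        requests.post(
            UPLOAD_ENDPOINT,
            params={"api_key": API_KEY, "name": image_path},
            data=img_str,
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
    return preds
```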
But some projects require deploying your model in an application that doesn't have reliable Internet access or needs to process a real-time video feed with very low latency (for example, an augmented reality mobile application). For those cases, we're working on on-device deployment with a select few early partners. If you're interested in deploying your Roboflow Train model to the edge, get in touch!