The Joys of Sharing Models on OpenCV's Modelplace
Today marks the official release of OpenCV's Modelplace, a marketplace for AI models.
In computer vision today, a general AI that solves every task for us is still far away. Models are typically handcrafted team by team for specific use cases, using domain-specific datasets. They are trained and deployed across a variety of machine learning frameworks and tucked away in isolated codebases.
As a result, the work of dataset assembly, training, and deployment is replicated time and time again for the same tasks, e.g. detecting license plates, tracking people, or spotting text.
If we could all share the models we create and deploy, the whole computer vision community would benefit. Modelplace is a big step in that direction.
At Roboflow, we were excited to participate in sharing an inaugural model on Modelplace, and we learned a good amount about the marketplace along the way.
Let's dive into how it works!
The Layout of Modelplace
Modelplace is designed with two actors in mind:
- The model creator
- The model consumer
Model creators assemble new datasets and build new models, publishing their model for consumption on Modelplace.
Note: In the initial release, all models on Modelplace are provided free of charge.
Consuming Models from Modelplace
If you find a model on Modelplace that you think would be good for your application, you can test the model on Modelplace's web interface by simply dragging and dropping a photo in.
Once you've decided that a model may be of use for you, you can pick from a few deployment options (for most models):
- Posting images to a web hosted API
- On-Premise Python Wheel
- Deploying to an edge device - OAK devices
For most use cases, the web hosted API is preferable. You will need an on-premise or on-device deployment if your model must run without an internet connection, or if it needs to run inference in real time at more than 20 FPS.
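To give a feel for the hosted API path, here is a minimal sketch of posting an image with Python's `requests` library. The endpoint URL, auth header, and field names below are placeholders, not Modelplace's documented API; the model's page on Modelplace provides the actual request format and your API token.

```python
# Minimal sketch of calling a web hosted model API.
# The URL, auth scheme, and field name are placeholders; check the model's
# page on Modelplace for the real values.
import requests

API_URL = "https://api.modelplace.ai/v3/predict"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"                      # placeholder token

with open("example.jpg", "rb") as image_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        files={"upload_data": image_file},
    )

response.raise_for_status()
print(response.json())  # typically a list of detections with boxes and scores
```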
Publishing Models to Modelplace
Model creators can share their model on Modelplace and monetize their model's popularity.
To share your model, you will first need to have a model that has been trained and infers according to your standards.
As of now, you will then reach out to Modelplace, and they will provide you with instructions to package your model. You will build a Python wheel to publish and share your model on their backend. They support many frameworks, including onnxruntime, openvino, pytorch, and tensorflow. You must implement some basic tests to ensure that your model runs properly, and your model's output will need to conform to a few basic standards.
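As a rough illustration of what goes into such a wheel, the sketch below wraps an ONNX model behind a small class with a sanity test. The class and method names are assumptions for the example only; Modelplace's packaging instructions define the actual interface your wheel must expose.

```python
# Illustrative model wrapper of the kind you might package into a wheel.
# Uses onnxruntime as the example backend; the interface here is hypothetical.
import numpy as np
import onnxruntime as ort


class DetectionModel:
    def __init__(self, model_path: str = "model.onnx"):
        self.session = ort.InferenceSession(model_path)
        self.input_name = self.session.get_inputs()[0].name

    def preprocess(self, image: np.ndarray) -> np.ndarray:
        # Resize/normalize as your model expects; shown here as a simple cast.
        return image.astype(np.float32)[None, ...]

    def predict(self, image: np.ndarray):
        tensor = self.preprocess(image)
        return self.session.run(None, {self.input_name: tensor})


# A basic sanity test (e.g. run with pytest): the model loads and returns output.
def test_model_produces_output():
    model = DetectionModel()
    dummy = np.zeros((224, 224, 3), dtype=np.uint8)
    outputs = model.predict(dummy)
    assert len(outputs) > 0
```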
Creating New Models Not Yet on Modelplace
If you don't see the model you want yet on Modelplace, you can consider building your own with Roboflow. If your model can be public facing, you can share it with the community on Modelplace so the next developer can move all the faster.
For the inaugural Modelplace release, we provided the American Sign Language model as an example of a custom trained model from Roboflow.
To create this model, we started from the public ASL dataset. We then used Roboflow's tools to curate and version our dataset. We also used training and deployment tools available with Roboflow Train.
Oftentimes, you will find a given model failing in scenarios specific to your deployment. The process of gathering new images from these failure scenarios and retraining your model to generalize to them is called active learning. Active learning is a key step in iteratively improving your private model or the model you are hosting on Modelplace.
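Here is a hedged sketch of what the collection half of that loop can look like: frames where the model is unsure get saved for labeling and a future retraining run. The `run_model` helper and the confidence threshold are placeholders for your own deployment.

```python
# Sketch of an active learning collection loop. Low-confidence frames are
# written out so they can be labeled and added to the next training set.
import cv2

CONFIDENCE_THRESHOLD = 0.5  # tune for your model and use case


def run_model(frame):
    """Placeholder for your deployed model's inference call."""
    raise NotImplementedError


capture = cv2.VideoCapture(0)  # e.g. a live camera in your deployment
frame_id = 0
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    detections = run_model(frame)
    # If nothing is found, or every detection is low confidence, keep the frame.
    if not detections or max(d["confidence"] for d in detections) < CONFIDENCE_THRESHOLD:
        cv2.imwrite(f"hard_cases/frame_{frame_id:06d}.jpg", frame)
    frame_id += 1
capture.release()
```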
In Sum
OpenCV's Modelplace is an exciting new marketplace for computer vision models.
You can use pre-existing Modelplace models for inference and deployment, avoiding the cost of collecting your own dataset, training your own model, and building your own deployment pipeline.
You can share your existing models on Modelplace to monetize the hard work you have done to bring your model to life.
As always, happy training and more importantly, happy inferencing!