We're proud to share that Roboflow has entered into a partnership agreement with Ultralytics, the creators of YOLOv5, and that Roboflow is now the official dataset management and annotation tool for YOLOv5 with a focus on enabling active learning.
Over the past year, YOLOv5 has gained broad adoption and is now one of the fastest growing AI repositories on GitHub. Roboflow's mission is to improve every industry by democratizing computer vision, and powerful, easy-to-use, open-source models are central to enabling any developer to give their software the sense of sight. To that end, our guide on How to Train YOLOv5 on Custom Data was released in June 2020.
We're excited to further support users building with YOLOv5 through a deeper partnership with Ultralytics.
This partnership means deeper integration between the Roboflow Python package and the YOLOv5 training and inference scripts, so that you can bootstrap your projects with public datasets, seamlessly train custom models, and continuously improve performance by iteratively sampling real-world data and adding it back to your dataset for re-training.
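That train-sample-retrain cycle is the core of active learning. As a rough sketch of the loop (all function names here are illustrative placeholders, not Roboflow or YOLOv5 APIs):

```python
# Hypothetical sketch of the active-learning loop described above.
# train() and sample_hard_examples() are stubs standing in for real
# training and inference code.

def train(dataset):
    """Train a model on the current dataset (stub)."""
    return {"trained_on": len(dataset)}

def sample_hard_examples(model, stream, confidence_threshold=0.5):
    """Keep the real-world images the model is least confident about (stub)."""
    return [img for img, conf in stream if conf < confidence_threshold]

def active_learning_loop(dataset, stream, rounds=3):
    model = None
    for _ in range(rounds):
        model = train(dataset)                       # 1. train on current data
        hard = sample_hard_examples(model, stream)   # 2. sample real-world data
        dataset.extend(hard)                         # 3. add it back for re-training
    return model, dataset

# Toy usage: (image, mock confidence score) pairs from production
stream = [("img_a.jpg", 0.9), ("img_b.jpg", 0.3), ("img_c.jpg", 0.4)]
model, dataset = active_learning_loop(["seed_1.jpg"], stream, rounds=1)
```

The key design point is that low-confidence predictions are exactly the examples most likely to improve the model when labeled and added back to the training set.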
Roboflow's goal is to simplify the computer vision process so developers can spend more time on their domain-specific problems instead of machine learning minutiae and computer vision infrastructure. By integrating the data organization, preparation, and processing pipeline more closely with the training and inference code, we can make computer vision even more accessible.
A Focus on Custom Training
Most computer vision research centers on models' performance on the COCO dataset. This provides a great benchmark for comparing models head-to-head under standardized conditions, but for end users, ease of use and accuracy on real-world datasets (which are often smaller and less robust than COCO) matter far more. There is a gap between research and engineering that we aim to bridge.
This is something that Ultralytics and YOLOv5 have always emphasized, and we plan to contribute code back to the YOLOv5 repository to further improve custom training performance and reduce friction for real-world use cases. (Stay tuned; what we're announcing today is only the beginning.)
How to Get Started
To try out the new changes, check out the new YOLOv5 custom training Colab notebook. The flow will automatically connect to your custom datasets in Roboflow via our pip package. Simply "export" your project from Roboflow in your desired format and we'll generate the code snippet you need for custom training.
The only thing you'll need to do to get the notebook training on your custom model is to copy/paste this snippet from your account into the right spot in the notebook.
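The generated snippet looks roughly like the following (the workspace, project, and version values below are placeholders; your actual snippet, with real values filled in, comes from the Roboflow export dialog):

```python
# Illustrative shape of a Roboflow export snippet for YOLOv5.
# All argument values are placeholders; copy the real snippet from your account.

def download_dataset(api_key, workspace, project_name, version):
    # Import inside the function so this sketch stays importable even
    # where the `roboflow` pip package is not installed.
    from roboflow import Roboflow

    rf = Roboflow(api_key=api_key)
    project = rf.workspace(workspace).project(project_name)
    # "yolov5" tells Roboflow to export in the directory layout and
    # label format the YOLOv5 training scripts expect.
    return project.version(version).download("yolov5")
```

Pasting the generated version of this into the notebook's dataset cell is all that's needed; the rest of the training flow picks up the downloaded dataset automatically.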
Resources to Support Open Source
Supporting open source means not only improving integrations within our product, but also contributing financially to the further development and improvement of the YOLOv5 repo, including additional compute resources for experiments that advance the state of the art in object detection. We think it's important to give back to the open source ecosystem that underpins the advancement of our field, and we're excited to help YOLOv5 continue to thrive.
Supporting the Whole Ecosystem
The improvements to our pip package and custom training integration will also find their way into our notebooks for other models like EfficientDet, YOLOv4 (and the rest of the YOLO family of models), and future models. We'll be updating our model library over the coming weeks to include many of the same improvements we're contributing to the YOLOv5 flow.
We'd love to support more open source projects that share our goal of adding computer vision to every developer's toolbelt. If you're working on an innovative open source project lowering the barriers to access, please reach out if there are ways we can help!