Developing, deploying, and optimizing computer vision models used to be a cumbersome, painful process. With Roboflow, we sought to democratize this technology, which (first and foremost) meant knocking down the barriers to entry.
OpenAI's CLIP model (Contrastive Language-Image Pre-Training) is a powerful zero-shot classifier that leverages its knowledge of the English language to classify images without having to be trained on the target classes. (Demo featuring rock, paper, scissors.)
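The core zero-shot mechanism can be illustrated without the real model: CLIP embeds the image and each candidate text prompt into a shared vector space, then picks the label whose prompt embedding is most similar to the image embedding. Below is a minimal NumPy sketch of that idea; the embeddings are toy stand-ins, not actual CLIP encoder outputs, and `zero_shot_classify` is a hypothetical helper, not part of the CLIP API.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_classify(image_embedding, text_embeddings, labels):
    # Choose the label whose text embedding is closest to the image embedding.
    scores = [cosine_similarity(image_embedding, t) for t in text_embeddings]
    return labels[int(np.argmax(scores))]

# Toy embeddings standing in for CLIP's image/text encoder outputs.
labels = ["rock", "paper", "scissors"]
text_embeddings = np.eye(3)                   # pretend each prompt maps to a unit axis
image_embedding = np.array([0.1, 0.9, 0.2])   # most similar to the "paper" axis
print(zero_shot_classify(image_embedding, text_embeddings, labels))  # prints "paper"
```

Because the label set lives entirely in the text prompts, you can swap in new classes at inference time with no retraining, which is what makes zero-shot classification so flexible.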
When training computer vision models, data augmentation can improve performance using only an existing image dataset. Image augmentation increases the size and variability of a dataset, improving the model's ability to generalize.
Using transfer learning to initialize your computer vision model from pre-trained weights, rather than starting from scratch (initializing randomly), has been shown to increase performance and decrease training time.
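The pattern can be sketched with a toy two-layer "model": the backbone weights are copied from a pretrained checkpoint when one is available, while the task-specific head is always freshly initialized. This is a simplified illustration with made-up names (`init_model`, `backbone`, `head`), not a real framework API; in practice you would load checkpoints through your training library.

```python
import numpy as np

def init_model(pretrained=None, seed=0):
    """Build a toy two-layer model. If a pretrained checkpoint is given,
    reuse its backbone weights (transfer learning); otherwise initialize
    everything randomly (training from scratch)."""
    rng = np.random.default_rng(seed)
    model = {
        "backbone": rng.normal(size=(4, 4)),
        "head": rng.normal(size=(4, 3)),  # the task head is always re-initialized
    }
    if pretrained is not None:
        model["backbone"] = pretrained["backbone"].copy()  # reuse learned features
    return model

pretrained = {"backbone": np.ones((4, 4))}
model = init_model(pretrained)
print((model["backbone"] == pretrained["backbone"]).all())  # prints True
```

Starting from weights that already encode useful visual features means the optimizer only has to adapt them to the new task, which is why transfer learning typically converges faster than random initialization.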
IBM recently announced they are shutting down IBM Visual Inspection, their product for creating custom computer vision models for classification and object detection. No new instances can be created.
Machine learning, the software discipline of mapping inputs to outputs without explicitly programmed relationships, requires substantial computational resources. Traditionally, this has limited machine learning models to running on very powerful hardware.
Excitement is building in the artificial intelligence community around MIT's recent release of liquid neural networks. The breakthroughs that Hasani and team have made are incredible. In this post, we will discuss the new liquid neural networks and what they might mean for the vision field.
Computer vision technology continues to expand its use cases in healthcare and medicine. In this post, we will touch on some exciting example use cases and provide resources for getting started applying vision to these problems.