Microsoft Research and Roboflow Partner for CVPR 2023 Workshop and Challenge

We are excited to announce Roboflow’s participation in the 2nd Workshop on Computer Vision in the Wild (CVinW). At the event, we will host the Roboflow 100 (RF100) object detection challenge and speak about RF100 at the workshop.

State-of-the-art computer vision systems are typically trained to predict a fixed, predetermined set of object categories. This restricted form of supervision limits their generality and usability, since additional labeled data is needed to specify any other visual concept.

The workshop and its related challenges are spearheaded by Microsoft Research at CVPR. They gather a community in support of advancing computer vision and multimodal tasks, with the ultimate goal of developing transferable foundation models and systems that adapt effortlessly to a wide range of visual tasks in the wild.

CVinW 2023 Workshop

The workshop takes place on June 19, continuing from the ECCV 2022 CVinW Workshop. The call for papers is open now through April 21, 2023. Speakers and panelists from Google, Apple, Roboflow, Google Brain, and more are invited to share talks on recent research in the CVinW domain. Jacob Solawetz from the Roboflow team will be present at the event.

CVinW 2023 Challenges

Along with workshop paper submissions, the challenges accept entries through June 2, 2023 and introduce new benchmarks for evaluating the task-level transfer ability of pre-trained vision models. The benchmarks span a diverse set of downstream visual recognition tasks and datasets, measuring pre-trained models both on detection accuracy and on how efficiently they transfer to a new task, in terms of training examples and trainable parameters.
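
Trainable parameters are one of the two efficiency axes mentioned above. As a minimal sketch (the layer names, shapes, and frozen/trainable split below are purely illustrative, not taken from any real model or from the benchmark's actual tooling), counting trainable parameters amounts to summing the element counts of the parameters left unfrozen:

```python
from math import prod

def count_trainable(params):
    """Sum element counts of parameters marked trainable.

    `params` maps name -> (shape tuple, trainable flag). Names and
    shapes here are hypothetical, for illustration only.
    """
    return sum(prod(shape) for shape, trainable in params.values() if trainable)

# Hypothetical adapter-style fine-tune: backbone frozen, small adapter trainable.
params = {
    "backbone.conv1": ((64, 3, 7, 7), False),
    "backbone.fc":    ((1000, 2048), False),
    "adapter.down":   ((64, 2048), True),
    "adapter.up":     ((2048, 64), True),
}
print(count_trainable(params))  # 262144 — far fewer than full fine-tuning
```

Under this kind of accounting, a method that reaches the same accuracy while updating fewer parameters scores as more transfer-efficient.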

Two new challenges are being introduced in 2023, including the Roboflow 100 object detection challenge. Submissions are evaluated under zero-, few-, and full-shot settings, across an academic track and an industry track. Participation is encouraged regardless of attendance at CVPR 2023!
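
The zero-, few-, and full-shot settings differ only in how many labeled training examples per class a method may see. A standard way to construct such splits (a sketch of the general technique, not the challenge's official sampling code) is to subsample at most k example indices per class:

```python
import random
from collections import defaultdict

def k_shot_subset(labels, k, seed=0):
    """Return at most k training-example indices per class.

    k=0 corresponds to zero-shot (no training examples), small k to
    few-shot, and a large k to the full training set. `labels` is a
    list of per-example class ids; this is illustrative only.
    """
    by_class = defaultdict(list)
    for idx, cls in enumerate(labels):
        by_class[cls].append(idx)
    rng = random.Random(seed)  # fixed seed so splits are reproducible
    chosen = []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        chosen.extend(idxs[:k])
    return sorted(chosen)

labels = ["cat", "dog", "cat", "bird", "dog", "cat"]
print(k_shot_subset(labels, k=1))  # one index per class: 3 indices total
```

Evaluating the same pre-trained model across increasing k then traces out how quickly it adapts as labeled data grows.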

Conclusion

We are excited to support the development of new benchmarks for image classification, object detection, and segmentation to measure the task-level transferability of models and methods over diverse real-world datasets.