How to Label Outdoor Surveillance Data for Computer Vision Models
Computer vision has many applications for outdoor surveillance footage. For example, you can train a model to track wildlife activity across your backyard throughout the day, or to identify the different modes of transportation people take during their morning commutes. Regardless of your use case, outdoor surveillance data can be tricky to label, particularly when objects are captured far away from your camera.
When labeling outdoor surveillance data, there are several considerations to keep in mind, from drawing tight bounding boxes to properly managing your ontology. In this guide, we discuss how to label outdoor surveillance data so that you can curate a high-quality dataset for training a computer vision model. Let’s get started!
Label objects of interest with tight bounding boxes
Computer vision models are built to recognize the patterns of pixels that correspond to the objects they have been trained on. With outdoor surveillance data, it is critical to label objects of interest tightly so that the model learns the precise pixels that make up each object, particularly when objects are far away from the camera.
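To make the idea concrete, here is a minimal sketch of deriving the tightest possible box from a binary object mask. The tight_bbox_from_mask helper and the toy mask are hypothetical and not tied to any particular labeling tool; the point is simply that a tight box hugs the outermost object pixels on every side.

```python
import numpy as np

def tight_bbox_from_mask(mask: np.ndarray) -> tuple:
    """Return the tightest (x_min, y_min, x_max, y_max) box around a binary object mask.

    Hypothetical helper for illustration only.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("mask contains no object pixels")
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)

# Toy example: an object occupying rows 2-4 and columns 3-6 of a 10x10 frame
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True
print(tight_bbox_from_mask(mask))  # (3, 2, 7, 5)
```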
Label occluded objects
Objects are considered occluded when they are partially blocked by another object or partially out of frame. It is a best practice to label occluded objects as if they were fully visible. For example, if several hogs overlap in a frame, label each hog’s full extent, even where another hog blocks the view.
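As a rough illustration, the hypothetical annotation record below contrasts a box around only the visible pixels with the full-extent box you actually want to draw. The field names and coordinates are made up for this example.

```python
# Hypothetical annotation record: the coordinates and field names are illustrative.
annotation = {
    "class": "hog",
    "occluded": True,
    # Box around only the pixels you can actually see:
    "visible_bbox": [120, 210, 180, 260],  # x_min, y_min, x_max, y_max
    # Box around where the whole hog would be if nothing blocked it;
    # this full-extent box is the one to draw when labeling:
    "bbox": [120, 210, 230, 260],
}
print(annotation["bbox"])
```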
Use thoughtful class names
If you are looking to capture multiple attributes of an object, the naming convention of your classes will play a key role in making your data usable. For example, let’s say you are labeling people on skateboards as person_skateboard and people on scooters as person_scooter. These classes allow you to train a model that distinguishes people on skateboards from people on scooters. However, if you just want to train a model to identify people in transit, irrespective of the mode of transportation, you can merge the classes via the Modify Classes preprocessing step to override both class names to person_moving.
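If it helps to see the idea outside the platform, here is a minimal sketch of the same kind of class remap applied to a list of annotation records. The field names and remap table are hypothetical; this illustrates the concept of merging classes rather than Roboflow’s implementation of Modify Classes.

```python
# Hypothetical remap table: both transit classes collapse into one merged class.
CLASS_REMAP = {
    "person_skateboard": "person_moving",
    "person_scooter": "person_moving",
}

def remap_classes(annotations):
    """Return annotations with class names overridden per CLASS_REMAP."""
    return [
        {**ann, "class": CLASS_REMAP.get(ann["class"], ann["class"])}
        for ann in annotations
    ]

annotations = [
    {"class": "person_skateboard", "bbox": [34, 50, 80, 120]},
    {"class": "person_scooter", "bbox": [200, 48, 260, 130]},
]
print(remap_classes(annotations))
# Both records now carry the merged class "person_moving".
```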
Ensure your training data is similar to the data you will be capturing in production
For a computer vision model to perform well in production, it must run on visual inputs similar to the data it was trained on. For example, if you plan to deploy a model on a camera mounted three feet off the ground but the model was trained on footage from a camera 30 feet off the ground, the model will not perform as well as it could.
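One lightweight way to sanity-check this is to compare basic image statistics between your training set and a sample of production frames. The sketch below compares mean grayscale brightness; the folder paths are hypothetical placeholders, and in practice you would also want to compare resolution, viewpoint, and time of day.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def brightness_stats(folder: str, limit: int = 100):
    """Mean and standard deviation of grayscale brightness across up to `limit` images."""
    values = []
    for path in sorted(Path(folder).glob("*.jpg"))[:limit]:
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        values.append(img.mean())
    if not values:
        raise ValueError(f"no .jpg images found in {folder}")
    return float(np.mean(values)), float(np.std(values))

# Hypothetical folder paths; replace with your own training and production samples.
train_mean, train_std = brightness_stats("data/train_images")
prod_mean, prod_std = brightness_stats("data/production_frames")
print(f"train: {train_mean:.1f} +/- {train_std:.1f}, production: {prod_mean:.1f} +/- {prod_std:.1f}")
# A large gap between the two suggests the training data may not match deployment conditions.
```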
For additional considerations when annotating computer vision data more broadly, explore our guide on labeling best practices.
Curate Datasets with Roboflow’s Professional Labelers
Through Roboflow’s Outsource Labeling service, you can work directly with professional labelers to annotate projects of all sizes. Roboflow manages workforces of experts who are trained in using Roboflow’s platform, helping you curate datasets faster and at lower cost.
The first step in getting started with Outsource Labeling is to fill out the intake form with your project’s details and requirements. From there, you will be connected with a team of labelers to work with directly on your labeling project(s).
When working with professional labelers, clearly documenting your instructions is an essential part of the process. The most successful labeling projects are typically those in which well-documented instructions are provided up front, feedback is exchanged with the labelers on an initial batch of images, and only then is labeling volume significantly ramped up. Read our guide to writing labeling instructions for more information on how to write informative instructions.
As part of the Outsource Labeling service, you will also work with a member of the Roboflow team who helps guide your labeling strategy and project management, ensuring you curate the highest quality dataset possible.