This post is part of Dataset Day in Roboflow Launch Week 2023, the day during which we announced many new additions to our data management and annotation solutions. To see all of the announcements from launch week, check out the Roboflow Launch Week page.

The Roboflow platform is designed to empower everyone on your team to contribute toward building computer vision models. We have a web interface and APIs through which you can annotate and manage data, then train, evaluate, test, and deploy your computer vision models.

To help developers leverage these features – and reduce the time it takes to perform various actions on Roboflow – we've released new API endpoints through which you can build Roboflow into your computer vision pipelines and applications.

Today, we are announcing the launch of five new endpoints to our REST API. These new endpoints are designed to provide developers with the ability to automate tasks on our platform and integrate Roboflow more closely into computer vision pipelines run by your team.

In this post, we’re going to talk through each of the new endpoints and how you can use them. Without further ado, let’s get started!

To learn how to retrieve a key for use with our API, check out our Obtaining Your API Key guide.

Search Datasets

Datasets in the Roboflow platform are queryable through our semantic search feature, allowing you to find images that meet a query. For example, in a hard hat dataset, you can find all images of hard hats that were taken indoors.

This feature is now available through the Roboflow API. This endpoint returns both the file names of the images that meet your search query and high-level information about the images.

You can search by providing:

  1. A plain-text query, or;
  2. The name of an image.

If you provide the name of an image, the search endpoint will return images related to that image. This is useful if you need to find samples that are semantically similar to an image in your dataset.

In the following request, we send a JSON payload to the /search endpoint associated with a workspace. We request up to 125 images that match the query “hard hat” and that are in our final dataset. Images that are unannotated, or annotated but not yet added to our dataset, will not be returned because we have specified the "in_dataset": true flag.

curl -X POST "https://api.roboflow.com/my-workspace/my-project-name/search?api_key=$ROBOFLOW_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
    "query": "hard hat",
    "in_dataset": true,
    "limit": 125
}'
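
If you are working in Python rather than the command line, the same request can be made with the requests library. The sketch below mirrors the curl call above; the workspace and project names are placeholders:

import requests

API_KEY = "YOUR_ROBOFLOW_API_KEY"
url = f"https://api.roboflow.com/my-workspace/my-project-name/search?api_key={API_KEY}"

# Same payload as the curl example: up to 125 images matching "hard hat"
# that are already in the dataset
payload = {"query": "hard hat", "in_dataset": True, "limit": 125}

response = requests.post(url, json=payload)
response.raise_for_status()

# The response contains the matching file names and high-level image metadata
print(response.json())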

To retrieve your workspace and project ID, check out our guide on retrieving your workspace and project ID.

To learn more about how to use this feature, check out the Search REST API documentation.

Upload Videos

Users can upload videos to our platform for use in building new versions of a dataset. Through the web interface, you can upload a video and select a frame rate at which frames will be sampled and added to your dataset. For example, if you upload a video and choose a frame rate of 2 FPS, two images will be taken per second of video data.

To upload video data to the Roboflow platform, check out our Video Upload Colab Notebook. This notebook shows how to use ffmpeg, the Roboflow Python package, and Python file utilities to upload frames from a video to the Roboflow platform.
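
As a rough sketch of that approach, the example below samples two frames per second with ffmpeg and uploads them with the Roboflow Python package. The video file, workspace, and project names are placeholders, and you would adjust the fps filter to match your chosen frame rate:

import subprocess
from pathlib import Path

from roboflow import Roboflow  # pip install roboflow

VIDEO_PATH = "site-footage.mp4"  # placeholder video file
FRAMES_DIR = Path("frames")
FRAMES_DIR.mkdir(exist_ok=True)

# Sample two frames per second of video with ffmpeg
subprocess.run(
    ["ffmpeg", "-i", VIDEO_PATH, "-vf", "fps=2", str(FRAMES_DIR / "frame_%04d.jpg")],
    check=True,
)

# Upload each extracted frame to a Roboflow project
rf = Roboflow(api_key="YOUR_ROBOFLOW_API_KEY")
project = rf.workspace("my-workspace").project("my-project-name")

for frame in sorted(FRAMES_DIR.glob("*.jpg")):
    project.upload(str(frame))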

Upload Files via CLI

The Roboflow CLI provides various functionalities from our API in a command line interface. You can upload one or more files using the Roboflow CLI. To get started, first install the CLI:

npm i roboflow-cli

Then, authenticate using the following command:

roboflow login

Your browser will open and you will be guided through a flow to grant the CLI access to one or more workspaces. You will then receive a token. Copy the token from the web interface into the CLI and press Enter.

Now you are ready to upload files via the CLI.

To upload files, use the “roboflow upload” command. For example, to upload all of the images with the “.jpeg” extension in an “images” directory, you could use this command:

roboflow upload images/*.jpeg

You can upload images in a “batch” (a group that you can assign to an annotator via the API) by specifying the --batch flag with the name of the batch you want to create and with which the images should be associated:

roboflow upload images/*.jpeg --batch=camera-1

This command creates a batch of images with the batch label “camera-1”.

To learn more about the upload CLI feature, use the -h flag. This flag opens a help menu with further information on usage:

roboflow upload -h

Create an Annotation Job

You can now assign batches to annotators using the API by providing a batch ID with which your images are associated (you can specify one when uploading images via the REST API or when using the CLI).

To create an annotation job, make a POST request to the “/jobs” API endpoint that contains the batch ID, the number of images in the batch, and the emails of the labeler (the person who will annotate the images) and the reviewer (the person responsible for verifying the quality of the annotations):

curl --location --request POST "https://api.roboflow.com/${WORKSPACE}/${PROJECT}/jobs?api_key=${ROBOFLOW_API_KEY}" \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "Job created by API",
    "batch": "<BATCH_ID>",
    "num_images": 10,
    "labelerEmail": "lenny@roboflow.com",
    "reviewerEmail": "lenny@roboflow.com"
}'
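
If you would rather create the job from Python, here is a minimal sketch using the requests library that mirrors the curl call above; the workspace, project, batch ID, and emails are placeholders:

import requests

API_KEY = "YOUR_ROBOFLOW_API_KEY"
WORKSPACE = "my-workspace"
PROJECT = "my-project-name"

url = f"https://api.roboflow.com/{WORKSPACE}/{PROJECT}/jobs?api_key={API_KEY}"

payload = {
    "name": "Job created by API",
    "batch": "<BATCH_ID>",  # the batch whose images you want annotated
    "num_images": 10,
    "labelerEmail": "lenny@roboflow.com",
    "reviewerEmail": "lenny@roboflow.com",
}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())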

A 200 response indicates the annotation job was created successfully. You can see the JSON payload returned in the full /jobs example in the Roboflow API documentation.

Conclusion

The Roboflow API is built to be extendable and provide you with the features you need to integrate Roboflow into your project workflow. With our API and CLI, you can automate many common actions – from creating dataset batches to assigning annotation jobs to labelers – so you can build a more efficient computer vision pipeline.

In addition to our web API and CLI, we provide a Python package through which you can interface with our API. This package lets you upload images, run inference, and more. Read the Python package documentation to learn more about how the package can help you in building computer vision applications.
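
As a minimal sketch of what the Python package looks like in practice, the example below uploads an image and runs inference against a trained model version. The workspace, project, version number, and file name are placeholders, and the confidence and overlap arguments apply to object detection models:

from roboflow import Roboflow  # pip install roboflow

rf = Roboflow(api_key="YOUR_ROBOFLOW_API_KEY")
project = rf.workspace("my-workspace").project("my-project-name")

# Upload a local image to the project
project.upload("hard-hat-example.jpg")

# Run inference with a trained model version
model = project.version(1).model
prediction = model.predict("hard-hat-example.jpg", confidence=40, overlap=30)
print(prediction.json())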