Segment Anything 2 (SAM 2), released by Meta AI in July 2024, is an image and video segmentation model. You can use Segment Anything 2 to identify the precise location of an object in an image, and to track an object across frames in a video.

When the first version of Segment Anything was released in 2023, Roboflow integrated the model into product offerings within the first week, eager to allow customers to use state-of-the-art technologies to solve business problems.

Following the success of using Segment Anything to power our automated polygon labeling tool and its use in computer vision applications, we are excited to announce that we have integrated Segment Anything 2 extensively into the Roboflow ecosystem.

We have enhanced the following features with SAM 2:

  1. Use SAM 2 as a label assistant to annotate polygons in an image with a single click;
  2. Segment regions of an image with SAM 2 as part of a multi-step computer vision workflow in Roboflow Workflows;
  3. Spin up your own SAM 2 API with Dedicated Deployments to which you can send inference requests.

In this guide, we are going to walk through the ways you can use SAM 2 with Roboflow.

Without further ado, let’s get started!

Segment Anything 2-Powered Data Labeling

Labeling images is one of the most tedious parts of a computer vision project. With that said, label quality is essential: good labels make or break a model.

Last year, we added Segment Anything-powered Smart Polygon into our product. This tool allows you to click on an object in an image and calculate a segmentation mask for the selected object. You can use these masks to generate polygon annotations for training object detection and segmentation models.

This feature is now backed by SAM 2, which is faster and more accurate than the previous model.

To use the feature, navigate to any image in a dataset you own in Roboflow. Click the Smart Polygon cursor icon in the right task bar and select the “Enhanced” option powered by SAM 2.

You can then hover over any object in an image to preview the polygon SAM 2 would add to your image. To add a polygon, click on the region of the image you want to annotate. This will add the polygon annotation to your dataset.


With this feature, you can label images in Roboflow faster than ever, and with greater precision than the previous version of Smart Polygon.

Deploy a Segment Anything 2 API

You can deploy SAM 2 as an API on your own hardware or in a Dedicated Deployment in the cloud.

To run SAM 2 on your own hardware, you can use Roboflow Inference, an open source, high-throughput computer vision inference server. Inference supports models ranging from CLIP to YOLOv10 to SAM 2.

You can load the tiny, small, large, and `b_plus` models with Inference. You can compute the most prominent mask associated with a point and submit negative prompts to refine predictions.

To learn how to deploy SAM 2 on your own hardware, refer to the Roboflow Inference SAM 2 documentation.
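To illustrate the shape of a point-prompt request, here is a minimal sketch for a self-hosted Inference server. The endpoint route and payload field names below are assumptions for illustration; consult the Roboflow Inference SAM 2 documentation for the exact request schema.

```python
import base64

# Sketch: building a point-prompt request body for a self-hosted Roboflow
# Inference server. The /sam2/segment_image route and field names are
# assumptions -- check the Inference SAM 2 docs for the exact schema.

def build_sam2_point_request(image_bytes: bytes, x: int, y: int, api_key: str) -> dict:
    """Encode an image and a single positive point prompt as a JSON-ready dict."""
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "image": {"type": "base64", "value": image_b64},
        "prompts": [{"points": [{"x": x, "y": y, "positive": True}]}],
        "api_key": api_key,
    }

# To send the request to a server on localhost:9001 (requires `requests`):
# import requests
# with open("image.jpg", "rb") as f:
#     body = build_sam2_point_request(f.read(), x=320, y=240, api_key="YOUR_KEY")
# masks = requests.post("http://localhost:9001/sam2/segment_image", json=body).json()
```

To refine a prediction with a negative prompt, as described above, you would append another point with `"positive": False` to the same points list.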

In addition, you can deploy SAM 2 on a dedicated cloud deployment with Roboflow Dedicated Deployments. With Dedicated Deployments, you can provision CPU or GPU servers that you can use exclusively for your inference use cases. This removes the need to go through the complexity of setting up a cloud server or a local GPU to run your models.

You can use a GPU Dedicated Deployment to run SAM 2 as an API. Dedicated Deployments run Roboflow Inference, so the same endpoints available when self-hosting are available in the cloud.

To learn how to deploy a Dedicated Deployment, see our launch post.
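Because a Dedicated Deployment runs the same Inference server, switching from local testing to the cloud can be as simple as changing the base URL. In the sketch below, the route string and the deployment URL are placeholders; use the URL shown in your Roboflow dashboard after provisioning.

```python
# Sketch: the same request path works locally and against a Dedicated
# Deployment; only the base URL changes. The route string is an assumption --
# confirm it in the Inference SAM 2 docs.

SAM2_ROUTE = "/sam2/segment_image"

def sam2_endpoint(base_url: str) -> str:
    """Join a server base URL with the SAM 2 segmentation route."""
    return base_url.rstrip("/") + SAM2_ROUTE

local_url = sam2_endpoint("http://localhost:9001")
# Placeholder deployment URL -- copy the real one from your dashboard.
cloud_url = sam2_endpoint("https://my-deployment.example.roboflow.cloud")
```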

Use Segment Anything 2 with Roboflow Workflows

Roboflow Workflows allows you to build complex computer vision applications from a web-based application builder. If you run a Workflow with a Dedicated Deployment, you can use SAM 2 as part of your Workflow.

You could use SAM 2 in Workflows to build an active learning pipeline for a segmentation model. Such a pipeline could use an object detection model to detect objects, then SAM 2 to generate segmentation masks for objects in an image.

You could use the labeled images to train a fine-tuned segmentation model that runs faster than SAM 2. This is a common use case for foundation models: use them to auto-label data as part of an active learning workflow, then train a fine-tuned model that can run faster in production.
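The detect-then-segment idea can be sketched in plain Python. The function names here are illustrative stand-ins, not the Workflows API; in practice, each step would be a block you configure in the Workflow builder.

```python
# Sketch: an auto-labeling step that pairs detector boxes with SAM 2 masks.
# `detect` and `segment` are illustrative callables standing in for the
# object detection and SAM 2 Workflow blocks.

def auto_label(image, detect, segment):
    """Run a detector, then refine each detected box into a segmentation mask."""
    labels = []
    for box in detect(image):
        mask = segment(image, box)  # SAM 2 prompted with the detector's box
        labels.append({"box": box, "mask": mask})
    return labels

# Example with stub models standing in for real inference calls:
stub_detect = lambda img: [(10, 20, 110, 220)]  # one fake detection box
stub_segment = lambda img, box: [[box]]         # fake mask for that box
print(auto_label("image.jpg", stub_detect, stub_segment))
```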

To use SAM 2 with Roboflow Workflows, create a new Workflow then add a “Segment Anything 2” block to your project:

Below is an example of a Workflow that uses SAM 2 with a YOLOv8 model:

This Workflow uses a YOLOv8 model to detect objects, then uses SAM 2 to segment objects. The results are then visualized and returned. Here is an example of the workflow running on an image that contains common objects which our model is trained to identify:

You can try this Workflow with your own images below:

This Workflow is built to identify common objects (e.g. cars, cups, cell phones, and cats), so it will perform best on images that contain those kinds of objects.

To learn more about building workflows with Roboflow, refer to the Roboflow Workflows documentation.

Conclusion

Segment Anything 2 is a state-of-the-art image segmentation model. You can use Segment Anything 2 to generate segmentation masks for objects in an image.

Roboflow offers extensive support for using SAM 2 in your computer vision workflows and applications. With Roboflow, you can:

  1. Use SAM 2 to speed up your image labeling process.
  2. Provision a SAM 2 API with Dedicated Deployments for use with image segmentation workflows.
  3. Build computer vision applications that use SAM 2 as part of the application logic.

If you are interested in learning more about SAM 2 and how it works, check out our What is Segment Anything 2? guide.