What is YOLO11? An Introduction

YOLO11 is a computer vision model architecture developed by Ultralytics, the creators of the YOLOv5 and YOLOv8 models. YOLO11 supports object detection, segmentation, classification, keypoint detection, and oriented bounding box (OBB) detection.

In this guide, we are going to discuss what YOLO11 is, how it performs, and how you can train and deploy YOLO11 models to your own hardware.

Without further ado, let’s get started!

What is YOLO11?

YOLO11 is a series of computer vision models developed by Ultralytics. As of its launch, YOLO11 is the most accurate of all Ultralytics' models.

The YOLO11x model, the largest in the series, reportedly achieves a 54.7% mAP score when evaluated against the Microsoft COCO benchmark. The smallest model, YOLO11n, reportedly achieves a 39.5% mAP score when evaluated against the same dataset. See how YOLO11 compares to other object detection models in the object detection model leaderboard.

YOLO11 has support for the same task types as YOLOv8 (a loading sketch for each follows the list below):

  • Object detection
  • Classification
  • Image segmentation
  • Keypoint detection
  • Oriented Bounding Box (OBB)
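
If you use the ultralytics pip package, each task has its own pretrained checkpoint. Here is a minimal loading sketch, assuming the checkpoint names Ultralytics publishes (shown for the "n" model size):

from ultralytics import YOLO

# each task uses a dedicated pretrained checkpoint
detection_model = YOLO("yolo11n.pt")           # object detection
classification_model = YOLO("yolo11n-cls.pt")  # classification
segmentation_model = YOLO("yolo11n-seg.pt")    # image segmentation
pose_model = YOLO("yolo11n-pose.pt")           # keypoint detection
obb_model = YOLO("yolo11n-obb.pt")             # oriented bounding boxes

# run any of the models on an image
results = detection_model("image.jpeg")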

Getting Started with YOLO11

To get started applying YOLO11 to your own use case, check out our guide on how to train YOLO11 on a custom dataset.
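
As a quick preview, here is a minimal training sketch using the ultralytics pip package; the dataset path, epoch count, and image size below are illustrative values you should adjust for your project:

from ultralytics import YOLO

# start from pretrained YOLO11 detection weights
model = YOLO("yolo11n.pt")

# data.yaml describes your dataset; epochs and imgsz are placeholder values
model.train(data="data.yaml", epochs=100, imgsz=640)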

To see what others are doing with YOLO11, browse Roboflow Universe for other YOLO11 models, datasets, and inspiration.

Deploy YOLO11 Models to the Edge

In addition to using the Roboflow hosted API for deployment, you can use Roboflow Inference, an open source inference solution. Inference works on both CPUs and GPUs, giving you immediate access to a range of devices, from NVIDIA Jetsons (e.g. the Jetson Nano or Orin) to ARM CPU devices.

With Roboflow Inference, you can self-host and deploy your model on-device.

Step #1: Upload Model Weights to Roboflow

Once you have finished training a YOLO11 model, you will have a set of trained weights ready for use with a hosted API endpoint. You can upload your model weights to Roboflow Deploy with the deploy() function in the Roboflow pip package to use your trained weights in the cloud.

To upload model weights, first create a new project on Roboflow, upload your dataset, and create a project version. Check out our complete guide on how to create and set up a project in Roboflow. Then, write a Python script with the following code:

import roboflow

# authenticate with your Roboflow account
roboflow.login()
rf = roboflow.Roboflow()

# retrieve your project, then upload the trained weights to the chosen version
project = rf.workspace().project(PROJECT_ID)
project.version(DATASET_VERSION).deploy(model_type="yolo11", model_path=f"{HOME}/runs/detect/train/")

Replace PROJECT_ID with the ID of your project and DATASET_VERSION with the version number associated with your project. Learn how to find your project ID and dataset version number.

Shortly after running the above code, your model will be available for use in the Deploy page on your Roboflow project dashboard.

Step #2: Install Inference

You can deploy applications using the Inference Docker containers or the pip package. Let's use the pip package. First run:

pip install inference
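
If you are deploying to a device with a CUDA-enabled GPU, you can instead install the GPU build of the package:

pip install inference-gpu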

Step #3: Run Inference on an Image

Then, create a new Python file and add the following code:

from inference import get_model
import supervision as sv
import cv2

image_file = "image.jpeg"
image = cv2.imread(image_file)
# replace "model-id" with your model ID; if your Roboflow API key is not set
# as an environment variable, you can pass it with the api_key argument
model = get_model(model_id="model-id")

# run inference on our chosen image, image can be a url, a numpy array, a PIL image, etc.
results = model.infer(image)[0]

# load the results into the supervision Detections api
detections = sv.Detections.from_inference(results)

# create supervision annotators
bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

# annotate the image with our inference results
annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

# display the image
sv.plot_image(annotated_image)

Above, set your Roboflow workspace ID, model ID, and API key if you want to use a custom model you have trained in your workspace.

Also, set the path to the image on which you want to run inference. This can be a local file path or an image URL.
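
For example, with the script above you could point infer() at a hosted image instead of the local file (the URL below is a placeholder):

# infer() also accepts image URLs and PIL images directly
results = model.infer("https://example.com/image.jpeg")[0]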

Here is an example of an image running through the model:

The model successfully detected a person in the image, indicated by the purple bounding box on the image.

You can also run inference on a video stream. To learn more about running your model on video streams – from RTSP to webcam feeds – refer to the Inference video guide.
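
As a minimal sketch, Inference's InferencePipeline interface can run a model on a video source; the model ID and webcam index below are assumptions you should replace with your own values:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

# video_reference can be a webcam index, a video file path, or an RTSP URL
pipeline = InferencePipeline.init(
    model_id="model-id",
    video_reference=0,
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()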

YOLO11 FAQs

Does YOLO11 have a published paper?

YOLO11 does not have a published or pre-print academic paper.

Under what license is YOLO11 covered?

YOLO11 is covered under an AGPL-3.0 license. If you deploy YOLO11 models with Roboflow, you automatically get a commercial license to use the model.

Where is the YOLO11 source code?

You can find the YOLO11 source code in the Ultralytics GitHub repository.

What classes can the base YOLO11 weights identify?

The base YOLO11 weights that were released with the model were trained on the Microsoft COCO dataset. You can see a full list of the COCO classes here.
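
If you want to check the class list programmatically, the ultralytics package exposes class names on a loaded model:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# maps class indices to COCO class names, e.g. {0: 'person', 1: 'bicycle', ...}
print(model.names)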

Conclusion

Announced and launched in September 2024, YOLO11 is the latest series of computer vision models developed by Ultralytics. The model architecture has support for all the same vision tasks as YOLOv8, while offering improved accuracy when evaluated against the COCO dataset benchmark.

In this guide, we walked through the basics of YOLO11: what the model is, what you can do with it, and how to deploy the model on your own device. To learn more about training YOLO11 models, refer to our YOLO11 training guide.