The YOLO family continues to grow with its newest model: YOLOX. In this post, we will walk through how you can train YOLOX on a custom object detection dataset for your own use case.

We use a public blood cells object detection dataset for the purpose of this tutorial. However, you can import your own data into Roboflow and export it to train this model to fit your own needs. The YOLOX notebook used for this tutorial can be downloaded here.

Thanks to the Megvii team for publishing the underlying repository that formed the foundation of our notebook.

In this guide, we take the following steps:

  • Install YOLOX dependencies
  • Download custom YOLOX object detection data via Roboflow
  • Download Pre-Trained Weights for YOLOX
  • Run YOLOX training
  • Evaluate YOLOX performance
  • Run YOLOX inference on test images
  • Export saved YOLOX weights for future inference

What's New in YOLOX?

YOLOX is the latest model in the YOLO family, pushing the limits of both speed and accuracy. YOLOX recently won the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021).

YOLOX evaluation relative to other YOLO detection networks

The biggest modeling changes include the removal of box anchors (improves the portability of the model to edge devices) and the decoupling of the YOLO detection head into separate feature channels for box classification and box regression (improves training convergence time and model accuracy).

The decoupled head in YOLOX
Faster convergence by epochs in YOLOX. We suspect individual YOLOv5 epochs run faster, though we have yet to run any direct head-to-head tests.
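To make the decoupling concrete, here is a toy, shape-only sketch in NumPy. This is not the real YOLOX head (which stacks convolutions per FPN level); it only illustrates that classification, regression, and objectness each get their own output branch instead of sharing one tensor:

```python
import numpy as np

# Hypothetical FPN feature map: batch 1, 256 channels, 20x20 grid.
feat = np.random.rand(1, 256, 20, 20)

def conv1x1(x, out_ch):
    # Stand-in for a 1x1 convolution: random weights, just to show shapes.
    w = np.random.rand(out_ch, x.shape[1])
    return np.einsum("oc,bchw->bohw", w, x)

# Decoupled branches: each maps the shared features to its own channels.
cls_out = conv1x1(feat, 80)  # per-location class scores
reg_out = conv1x1(feat, 4)   # per-location box offsets
obj_out = conv1x1(feat, 1)   # per-location objectness
print(cls_out.shape, reg_out.shape, obj_out.shape)
```

A coupled head would instead emit all 85 channels from a single branch; separating them is what the paper credits for faster convergence.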

Many other exciting training and inference considerations are included in the paper. You can dive deeper in the YOLOX paper or in this video.

Install YOLOX Dependencies

To setup our development environment, we will first clone the base YOLOX repository and download the necessary requirements:

!git clone https://github.com/Megvii-BaseDetection/YOLOX.git
%cd YOLOX
!pip3 install -U pip && pip3 install -r requirements.txt
!pip3 install -v -e .
!pip uninstall -y torch torchvision torchaudio
# May need to change in the future if Colab no longer uses CUDA 11.0
!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

We will also install NVIDIA Apex and PyCocoTools to make this repository work as intended:

%cd /content/
!git clone https://github.com/NVIDIA/apex
%cd apex
!pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

!pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

Download Custom YOLOX Object Detection Data

Before we get started, you will want to create a Roboflow account. We will be using this blood cells dataset, but you are welcome to use any dataset, whether it is your own dataset loaded into Roboflow or another public dataset.

***Using Your Own Data***

To export your own data for this tutorial, sign up for Roboflow and make a public workspace, or make a new public workspace in your existing account. If your data is private, you can upgrade to a paid plan for export to use external training routines like this one or experiment with using Roboflow's internal training solution.

For this notebook, we will need to apply some preprocessing steps to ensure that the data will work with YOLOX. To get started, create a Roboflow account if you haven't already and fork the dataset:

Fork the BCCD Dataset

After forking the dataset, you will want to add one preprocessing step: resizing all of the images to 640 x 640:

Resize Images to 640 x 640
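Roboflow performs the resize for you, but if you prefer to preprocess locally, here is a minimal sketch with Pillow. The image below is a stand-in created in code; for real data you would use `Image.open("your_image.jpg")`:

```python
from PIL import Image

# Stand-in image; for real data, use Image.open("your_image.jpg") instead.
im = Image.new("RGB", (480, 360))

# Stretch-resize to the 640 x 640 input size used in this tutorial.
resized = im.resize((640, 640))
print(resized.size)  # (640, 640)
```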

Then simply generate a new version of the dataset and export it in the "Pascal VOC" format. You will receive a Jupyter notebook command that looks something like this:

Copy the command, and replace the line below in the notebook with the command provided by Roboflow:

***Note*** Data download has switched over to the Roboflow PIP package.

!pip install roboflow

from roboflow import Roboflow
rf = Roboflow(api_key="YOUR_API_KEY", model_format="voc")
project = rf.workspace().project("mushrooms-slim")
dataset = project.version("YOUR_VERSION").download("voc")

Labeling Your Data

If you are bringing your own dataset, you can annotate your images in Roboflow.

Download Pre-Trained Weights for YOLOX

YOLOX comes with pretrained weights that allow the model to train faster and reach higher accuracy. Weights are available for several model sizes; in this tutorial we use the small YOLOX model (YOLOX-S). We can download the weights as follows:

%cd /content/
# The release URL may change as new YOLOX versions come out
!wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth
%cd /content/YOLOX/

Run YOLOX training

To train the model, we can run the tools/train.py file:

!python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 16 --fp16 -o -c /content/yolox_s.pth

The arguments for running this command include:

  • Experiment File (-f): this file allows us to change certain aspects of the base model to apply during training
  • Devices (-d): the number of GPUs to train with; we pass 1 since Colab provides a single GPU
  • Batch Size (-b): the number of images in each batch
  • Pretrained Weights (-c): the path to the weights to start from; this can be the weights we downloaded or an earlier checkpoint of your model
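For reference, the experiment file is just a Python class that overrides defaults from the base experiment. A sketch of the kind of overrides it typically contains is below; the names follow the YOLOX repo's VOC example, but treat the exact values as assumptions and check the actual file in the repository:

```python
# Sketch of an experiment file such as exps/example/yolox_voc/yolox_voc_s.py.
# Values shown are illustrative assumptions; verify against the repo.
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 3   # BCCD has 3 classes: RBC, WBC, Platelets
        self.depth = 0.33      # YOLOX-S depth multiplier
        self.width = 0.50      # YOLOX-S width multiplier
        self.max_epoch = 300   # total training epochs
```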

After about 90 epochs of training, we get the following APs.

Evaluate YOLOX performance

To evaluate YOLOX performance we can use the following command:

MODEL_PATH = "/content/YOLOX/YOLOX_outputs/yolox_voc_s/latest_ckpt.pth.tar"

!python3 tools/eval.py -n yolox-s -c {MODEL_PATH} -b 64 -d 1 --conf 0.001 -f exps/example/yolox_voc/yolox_voc_s.py

After running the evaluation, we get the following results:

Evaluation for YOLOX Model

The model achieves quite high performance!
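The AP numbers above are driven by the IoU (intersection over union) between predicted and ground-truth boxes. To make the metric concrete, here is a minimal self-contained IoU helper; this is a hypothetical sketch, not code from the YOLOX codebase:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); returns intersection over union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 region: 25 / (100 + 100 - 25)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```

A detection counts as correct only when its IoU with a ground-truth box exceeds the evaluation threshold, which is why AP is reported at IoU levels like 0.50 and 0.75.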

Run YOLOX Inference on Test Images

We can now run YOLOX on a test image and visualize the predictions. To run YOLOX on a test image:

TEST_IMAGE_PATH = "/content/valid/BloodImage_00057_jpg.rf.1ee93e9ec4d76cfaddaa7df70456c376.jpg"

!python tools/demo.py image -f /content/YOLOX/exps/example/yolox_voc/yolox_voc_s.py -c {MODEL_PATH} --path {TEST_IMAGE_PATH} --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
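The --conf flag drops low-scoring boxes and --nms sets the IoU threshold for non-maximum suppression. Here is a minimal pure-Python NMS sketch (a hypothetical helper, not YOLOX's implementation) showing how overlapping detections are pruned:

```python
def nms(boxes, scores, iou_thresh=0.45):
    # boxes: list of (x1, y1, x2, y2); returns indices of kept boxes,
    # highest score first, suppressing boxes that overlap a kept box.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda c: (c[2] - c[0]) * (c[3] - c[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the weaker of the two overlapping boxes is dropped
```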

To visualize the predictions on the image:

from PIL import Image
OUTPUT_IMAGE_PATH = "/content/YOLOX/YOLOX_outputs/yolox_voc_s/vis_res/2021_07_31_00_31_01/BloodImage_00057_jpg.rf.1ee93e9ec4d76cfaddaa7df70456c376.jpg"
Image.open(OUTPUT_IMAGE_PATH)
Looks like the model works as intended!

Export saved YOLOX weights for future inference

Finally we can export the model into our Google Drive as follows:

from google.colab import drive
drive.mount('/content/gdrive')

%cp {MODEL_PATH} /content/gdrive/My\ Drive


YOLOX is an incredibly powerful, state-of-the-art object detection model. In this tutorial you were able to learn how to:

  • Prepare the YOLOX Environment
  • Download Custom Object Detection Data using Roboflow
  • Run the YOLOX Training Process
  • Use your trained YOLOX model for inference
  • Export your model to Google Drive

Happy training!