Roboflow provides everything you need to turn images into information. We build all the tools necessary to start using computer vision, even if you're not a machine learning expert (and to supercharge your pipeline if you are).

In this guide, we're going to train a computer vision model that identifies pieces on a chess board. You can use the information you learn in this guide to build a computer vision model for your use case. We also have a quickstart video below that shows building a coin counter.

As you get started, reference our Knowledge Base or Community Forum if you have feedback, suggestions, or questions. We are here to help!

Quickstart Tutorial (6 Minutes)

Full Tutorial (20 Minutes)

Adding Data

Let’s walk through a tutorial on managing images for a chess piece detection problem. To get started, create an account using your email or GitHub account.

After reviewing and accepting the terms of service, you’ll land on your project's homepage:

Go back to the Roboflow dashboard and click "Create New Project". Choose the "Guided Tour" option:

Next, we need to set a name for our dataset and specify what object we will be detecting. In this project, we will identify chess pieces on a board.

Leave the project type as the default "Object Detection" option since our model will be identifying chess piece objects.

Click “Create Public Project” to continue.

For this walkthrough, we’ll use the Roboflow-provided sample chess dataset. A link to the chess dataset will appear on the next page:

Now, unzip the sample file we just downloaded to your computer, sampleChessDataset.zip. Click and drag the folder called “chess-tutorial-dataset” from your local machine onto the highlighted upload area.

As an aside, feel free to poke around the contents of the chess-tutorial-dataset on your computer so you can see what’s inside:

MacOS Finder Screenshot: annotations folder (containing xml files), img folder (containing JPG images)
We’ve provided 12 chess images and VOC XML annotations. While these annotations are in VOC XML format, note that Roboflow supports most annotation formats.
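
If you're curious what one of those VOC XML files contains, the short sketch below parses a single annotation with Python's standard library and prints each labeled object with its bounding box. The file path is just an illustrative placeholder for one of the files in the annotations folder.

```python
import xml.etree.ElementTree as ET

# Parse one VOC XML annotation (the path is an illustrative placeholder).
tree = ET.parse("chess-tutorial-dataset/annotations/example.xml")

# Each <object> element holds a class name and a pixel-coordinate bounding box.
for obj in tree.getroot().iter("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin = int(float(box.find("xmin").text))
    ymin = int(float(box.find("ymin").text))
    xmax = int(float(box.find("xmax").text))
    ymax = int(float(box.find("ymax").text))
    print(f"{name}: ({xmin}, {ymin}) -> ({xmax}, {ymax})")
```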

Once you drop the chess-tutorial-dataset folder into Roboflow, the images and annotations are processed so you can see them overlaid.

If any of your annotations have errors, Roboflow alerts you. For example, if an annotation improperly extends beyond the frame of an image, Roboflow intelligently crops the edge of the annotation to line up with the edge of the image and drops erroneous annotations that lie fully outside the image frame.

At this point, our images have not yet been uploaded to Roboflow. We can verify that all the images are, indeed, the ones we want to include in our dataset and that our annotations are being parsed properly. Any image can be deleted upon mousing over it and selecting the trash icon.

Everything now looks good, so click “Save and Continue” in the upper right-hand corner! (This upload step goes faster with a quicker internet connection.)

Note that one of our images is marked as "Not Annotated" on the dashboard. We'll annotate this image in the next section.

You will be asked to select a train/test split for your images; the default of 70% training, 20% validation, and 10% testing is usually a safe bet.
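
If you ever need to reproduce a split like that outside of Roboflow, a minimal illustrative sketch (not Roboflow's implementation) could look like this:

```python
import random

# Illustrative 70/20/10 split over placeholder image filenames.
images = [f"img_{i:03}.jpg" for i in range(100)]
random.seed(0)       # fix the seed so the split is reproducible
random.shuffle(images)

n_train = int(0.7 * len(images))
n_valid = int(0.2 * len(images))
train = images[:n_train]
valid = images[n_train:n_train + n_valid]
test = images[n_train + n_valid:]
print(len(train), len(valid), len(test))  # 70 20 10
```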

You can upload videos as well as images. We already have enough images for our project, but we'll talk through this process in case you want to add video data to your projects in the future.

To upload a video, go to the Upload tab in the Roboflow sidebar and drag in a video. You can also paste in the URL of a YouTube video you want to upload.

When you upload a video, you will be asked how many images should be captured per second. If you choose "1 frame / second", an image will be taken every second from the video and saved to Roboflow.

When you have selected an option, click "Choose Frame Rate". This will begin the process of collecting images from the video.
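
If you prefer to sample frames yourself before uploading, the rough equivalent of the "1 frame / second" option with OpenCV looks something like the sketch below (it assumes opencv-python is installed; the video filename is a placeholder).

```python
import os
import cv2

# Save roughly one frame per second from a local video (the filename is a placeholder).
os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("chess_game.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30   # fall back if the FPS metadata is missing
step = int(round(fps))                    # frames to skip for ~1 frame/second

frame_index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:04}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} frames")
```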

Annotating Images

One of the images in the sample dataset is not yet annotated. You will use Roboflow Annotate to add a box around the unlabeled chess piece in the image. Annotations are the "answer key" your computer vision model learns from. The more annotated images we add, the more information our model has to learn what each class is (in our case, more images will help our model identify what is a rook, what is a pawn, etc.).

Follow the instructions on screen to assign the unannotated image to yourself. You'll be taken to a page where you can annotate the image:

Use your cursor to drag a box around the area on the chess board you want to annotate. A box will appear in which you can enter the label to add. In the example below, we will add a box around a white bishop, and assign the corresponding class:

We have just added a "bounding box" annotation to our image. This means we have drawn a box around the object of interest. Bounding boxes are a common means of annotation for computer vision projects. We also support polygon annotations, which are useful for segmentation projects. Polygon annotations let you draw a precise outline of an object of interest.
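
To make the distinction concrete, the tiny sketch below (with made-up coordinates) shows both geometries: a polygon is a list of vertices outlining an object, and its bounding box is the smallest axis-aligned rectangle containing those vertices.

```python
# A polygon annotation is a list of (x, y) vertices outlining the object.
polygon = [(120, 80), (160, 75), (175, 140), (130, 150)]  # illustrative coordinates

# Its bounding box is the tightest axis-aligned rectangle around those vertices.
xs = [x for x, _ in polygon]
ys = [y for _, y in polygon]
bounding_box = (min(xs), min(ys), max(xs), max(ys))  # (xmin, ymin, xmax, ymax)
print(bounding_box)  # (120, 75, 175, 150)
```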

💡
Pro Tip: With Roboflow Label Assist, you can use previous versions of your model to annotate future versions. Label Assist uses another model to draw annotations on images for you, which means you can spend less time annotating and get a model ready for production faster than ever.

You can use publicly available models hosted on Roboflow Universe, our dataset community, for label assist, too.

Bounding boxes are fine for our project, but let's quickly review how to create a polygon annotation as it may be handy as you start building your own models.

To make a polygon annotation, press "P" on your keyboard or click the polygon annotation tool in the sidebar. Click around the object that you want to annotate, then press Enter when you are ready to complete the annotation:

We have a tool called Smart Polygon that can assist you in drawing the outline for your polygon. With Smart Polygon, you can draw a bounding box and the tool will create an estimated outline of the object. You can then click inside or outside the object to shrink or expand the polygon region:

Annotation Comments and History

Need help from a team member on an annotation? Want to leave yourself a note for later on a particular image? We have you covered. Click the speech bubble icon on the sidebar of the annotation tool. Then, click the place on the image you want to leave a comment.

If you have multiple people working with you on a project, you can tag them by using the @ sign, followed by their name. They will get a notification that you have commented and requested their assistance.

We can see the history of our annotated image in the sidebar:

To view a project at a previous point in the history, hover over the state in history that you want to preview.

Add Image to Dataset

Once you have annotated your image, you need to add it into your dataset for use in training a model. To do so, exit out of the annotation tool and click "Annotate" in the sidebar. Then, click on the card in the "Annotating" section that shows there is one annotated image for review:

Next, click "Add 1 Image to Dataset" to add your image to the dataset:

You can search the images in your dataset by clicking "Dataset" in the sidebar. The search bar runs a semantic search on your dataset to find images related to your query. Because we only have a few images, search is not especially useful right now. But if we had images of both outdoor and indoor chess games, for example, we could use search to find each.

Preprocessing and Augmentations

After you're done annotating and have added your annotated image to your dataset, continue to generate a new version of your dataset. This creates a point-in-time snapshot of your images processed in a particular way (think of it a bit like version control for data).

From here, we can apply any preprocessing and augmentation steps that we want to our images. Roboflow seamlessly makes sure all of your annotations correctly bound each of your labeled objects -- even if you resize, rotate, or crop.

💡
By default, Roboflow opts you into two preprocessing steps: auto-orient and resize. Auto-orient ensures your images are stored on disk the same way your applications open them for you. (If you’re unfamiliar, this can be a silent killer of computer vision models.) Resize creates a consistent size for your images (in this case, smaller, to expedite training).
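
If you handle preprocessing yourself, the auto-orient step is roughly equivalent to applying an image's EXIF orientation tag before anything else. A minimal sketch with Pillow (the filename is a placeholder, and 416x416 matches the resize used later in this walkthrough):

```python
from PIL import Image, ImageOps

# Apply the EXIF orientation tag so the stored pixels match what image viewers display,
# then resize to a consistent training resolution. The filename is a placeholder.
image = Image.open("chess-board.jpg")
image = ImageOps.exif_transpose(image)  # auto-orient
image = image.resize((416, 416))        # consistent size for training
image.save("chess-board-processed.jpg")
```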

You can also choose to augment your images, which generates multiple variations of each source image to help your model generalize better.

Roboflow supports auto-orient corrections, resizing, grayscaling, contrast adjustments, random flips, random 90-degree rotations, random 0 to N-degree rotations, random brightness modifications, Gaussian blurring, random shearing, random cropping, random noise, and much more. To better understand these options, refer to our documentation.
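
If you later reproduce similar augmentations in your own training code, a library such as Albumentations can transform bounding boxes along with the pixels, which mirrors the way Roboflow keeps annotations in sync. A small sketch under that assumption (the image path, box, and label are placeholders):

```python
import albumentations as A
import cv2

# Augmentation pipeline that keeps Pascal VOC boxes in sync with the transformed pixels.
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.Rotate(limit=15, p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = cv2.imread("chess-board.jpg")   # placeholder path
boxes = [[120, 75, 175, 150]]           # [xmin, ymin, xmax, ymax], illustrative
labels = ["white-bishop"]

augmented = transform(image=image, bboxes=boxes, class_labels=labels)
print(augmented["bboxes"])  # boxes adjusted to match the augmented image
```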

💡
We recommend starting with one or two augmentations that suit your dataset and adding more later if necessary. Adding more augmentations does not necessarily boost the performance of your model.

Preparing Data for Training

For our walkthrough, we’ll leave the settings as they are: auto-orient, resize to 416x416, and some basic augmentations. To generate this version of your dataset, select “Generate” at the end of the workflow.

After optionally naming the version, select Export. Roboflow is now preparing each of your images and annotations for download.

Once this finishes, you’ll be shown the final result (note the menu on the left now includes an archive of this specific version). You can train within Roboflow (explained in detail later in this guide) or Export to train with your own model.

For this walkthrough, let’s use the “Pascal VOC” export format. When you click "Export", your dataset is zipped and you are given several options for using it.

Click the "your dataset" link to download your zipped export to your computer. It contains an export folder with your images and annotations and a README describing the transformations you provided to your data.

Downloading an exported dataset version.

Evaluating the Dataset

Understanding the data you have in your dataset, and using the right amount of data for all of the classes you want to identify, is the key to building a robust model. Our Health Check feature identifies potential issues in your dataset to help you maximize the performance of your model.

To access this feature, click "Health Check" in the sidebar of your project.

This feature will point out class imbalances, meaning classes that are over- or underrepresented in your dataset. In this example, we have many more pawns than other chess pieces. As a result, our model may be able to identify pawns well but struggle with classes that are not as well represented, such as the queens, bishops, and white rook.

Other issues such as the number of unannotated images you have in your dataset will appear, too. In this case, we have no missing annotations.

We can use the insights on this page to maximize the performance of our trained model. For instance, we could add more annotations of white and black queens to help our model learn how to identify those objects if it struggles to identify them correctly.

Furthermore, this page shows general insights about your dataset such as the size distribution of images, a heat map of where the annotations for each class appear in your images, and more.
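
If you want to reproduce the class-balance check on your own machine, a small illustrative sketch that counts labels across the VOC XML files from earlier looks like this (the annotations path is a placeholder):

```python
import glob
import xml.etree.ElementTree as ET
from collections import Counter

# Count how many annotations each class has across the dataset (the path is a placeholder).
counts = Counter()
for path in glob.glob("chess-tutorial-dataset/annotations/*.xml"):
    for obj in ET.parse(path).getroot().iter("object"):
        counts[obj.find("name").text] += 1

# Classes near the bottom of this list may need more examples.
for name, count in counts.most_common():
    print(f"{name}: {count}")
```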

For a production project, we would evaluate these results in more detail and take action. But for this chess piece project, we can go ahead and train our model!

Training a Computer Vision Model

Roboflow Train, a one-click autoML training product, offers two model types that you can train and host using Roboflow. We handle the GPU costs and also give you access to out-of-the-box optimized deployment options which we will cover later in this guide.

Your trained model can now be used in a few powerful ways:

  • Model-assisted labeling speeds up labeling and annotation for adding more data into your dataset
  • Rapid prototyping or testing your model on real-world data to test model performance (explained in the next section)
  • Deploying to production with out-of-the-box options that are optimized for your model to run on multiple different devices (explained in the next section)

After you've generated your dataset, you'll see the option to start training, along with the number of training credits you have available.

Start Training screen with Roboflow Train

Once you click Start Training, you'll be prompted to choose which model you'd like to train: Fast or Accurate. We explain the differences in detail but for this example, we will use Fast.

When training a model, you can Train from Scratch or Start from a Checkpoint. In most cases, we advise taking advantage of transfer learning by choosing Start from a Checkpoint to help improve model performance.

You can use any model you have built as a checkpoint, models you have starred from Universe, or a checkpoint trained on Microsoft COCO, a benchmark dataset covering 80 common object classes. We recommend starting with the COCO checkpoint for your first training job.

When you click Start Training, the training process will begin. This will take between a few minutes and a day depending on how many images are in your dataset. Because our chess dataset contains a dozen or so images, we can expect the training process will not take too long.

You can view the status of your model training from the Roboflow dashboard. When you first start training, you will see a message that says the training machine is starting:

When a machine has been assigned to train a model on your dataset, a graph will appear on the page. This graph shows how your model is learning in real time. As your model trains, the numbers may jump up and down. Over the long term, the lines should reach higher points on the chart.

💡
Precision measures "when your model guesses, how often does it guess correctly?" Recall measures "has your model guessed every time that it should have guessed?" Consider an image that has 10 red blood cells. A model that finds only one of these ten but correctly labels it as "RBC" has perfect precision (every guess it makes, just the one, is correct) but imperfect recall (only one of the ten cells has been found).

The mAP, precision, and recall statistics tell us about the performance of our model. You can learn more about what these statistics tell you in our guide to mAP, precision, and recall.
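
Using the red blood cell example above, a quick illustrative calculation shows why one correct detection out of ten objects yields perfect precision but poor recall:

```python
# Illustrative numbers from the red blood cell example above.
true_positives = 1    # the one cell the model found and labeled correctly
false_positives = 0   # no incorrect guesses
false_negatives = 9   # the nine cells the model missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=1.00, recall=0.10
```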

You'll receive an email once your model has finished training. The email contains the training results for you to see how the model performed. If you need to improve performance, we have recommendations for you on how to do that.

View your training graphs for a more detailed view of model performance.

Roboflow Train metrics for you to see model performance

If you plan to train a model outside of Roboflow, you may want to select a notebook from our model library, which contains ready-to-go Jupyter notebooks for frameworks like PyTorch, Keras, and TensorFlow that you can run for free right within Google Colab. A good next step is to try our YOLOv8 tutorial.

Deploying a Computer Vision Model

Your trained model, hosted by Roboflow, is optimized and ready to be used across multiple deployment options. If you're unsure where might be best to deploy your model, we have a guide that lists a range of deployment methods and when each is useful to help you navigate that decision.

Out-of-the-box deployment options available for your model

The fastest ways to immediately use your new model and see the power of computer vision:

From this screen, you can also deploy using options that make it easy to use your model in a production application whether via API or on an edge device:

Regardless of your deployment method, we suggest taking a data-centric approach to improving your model over time by using active learning. Our pip package makes this easy across deployment options.
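
As a sketch of what that can look like, the snippet below uses the pip package to run a prediction against a hosted model and upload an image the model was unsure about back to the project for labeling. The API key, workspace, project name, version, image path, and the "no predictions" heuristic are all placeholders; adapt them to your own active learning criteria.

```python
from roboflow import Roboflow

# Run hosted inference, then send uncertain images back for labeling (active learning).
# The API key, workspace, project name, version, and image path are placeholders.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("chess-pieces")
model = project.version(1).model

prediction = model.predict("new-board.jpg", confidence=40, overlap=30).json()
print(prediction["predictions"])

# Simple placeholder heuristic: if nothing was detected, queue the image for annotation.
if not prediction["predictions"]:
    project.upload("new-board.jpg")
```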

The benefit of using Roboflow Train and Roboflow Deploy is that we make it easy for you to test deployment options, change deployment options, or use multiple deployment options in your application.

As always, reference our Knowledge Base or Community Forum if you have feedback, suggestions, or questions.

We’re excited to see what you build with Roboflow!