Danger Monitoring for Cyclists using Raspberry Pi and Object Detection

This is a guest post authored by Herberto Warner and lightly edited by the Roboflow team. Herberto's GitHub hosts all of the images used in his project.


Increasing dangers and accidents for cyclists

Cyclists face dangers like cars on the street, other cyclists, and poorly developed bike lanes. In the last few years, the numbers of cyclists and cars have increased, which has made the structuring of urban areas a challenging problem for city planners. (Related: check out our post on reducing traffic with computer vision.) The consequences include:

  • an increasing number of cycling accidents,
  • cyclists feeling uncomfortable riding in urban areas, and
  • drivers feeling uncomfortable around cyclists on the road.

Using computer vision – specifically object detection – on the street can open new opportunities for danger prediction and avoidance.

My proposed solution is a low-cost prototype danger monitoring application that detects cars and bicycles, then warns the cyclist with an LED light.

The prototype is composed of:

  • An LED traffic light that signals the different danger states (a minimal control sketch follows this list).
  • A Raspberry Pi 4 that drives the LED light, plus a camera used for object detection.
  • A power bank that supplies power to the Raspberry Pi.
  • A Google Coral TPU stick connected to the Raspberry Pi to accelerate inference.
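
As a minimal sketch of how such an LED traffic light can be driven from the Pi, the snippet below uses the gpiozero library. The GPIO pin numbers are assumptions for illustration, not the wiring of the actual prototype.

```python
# Minimal sketch of driving a three-colour LED "traffic light" from the Pi's
# GPIO pins with gpiozero. Pin numbers 17, 27, and 22 are assumptions,
# not the wiring used in the original prototype.
from gpiozero import LED

red = LED(17)     # high danger
yellow = LED(27)  # medium danger
green = LED(22)   # low danger / all clear

def show_danger(level: str) -> None:
    """Light exactly one LED for the given danger level."""
    red.off()
    yellow.off()
    green.off()
    if level == "high":
        red.on()
    elif level == "medium":
        yellow.on()
    else:
        green.on()

show_danger("low")
```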

Data preprocessing and collection

To train the danger monitoring system, three different datasets were used. All datasets contain bounding box labels for the classes car and bicycle.

  1. The first image dataset was shot with the Raspberry Pi camera. These 410 images were collected by using the helmet application on the street; they represent real-life data from a cyclist's viewpoint and were labeled with LabelImg.
  2. The next dataset consisted of 6,000 pre-labeled images of cars and bicycles downloaded from Google Open Images.
  3. The third dataset was a mix of the Raspberry Pi camera dataset and Google Open Images dataset.

During the data collection process, Roboflow automatically split my images into training, validation, and testing sets. I used their Dataset Health Check to show the frequency of each label and to look for errors in the data. I also preprocessed my data with Roboflow, including resizing the images.
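
As a hypothetical illustration of what that preprocessing involves (Roboflow handled it for me), resizing an image while rescaling its bounding boxes could look like the sketch below; the file name and box values are placeholders.

```python
# Illustrative sketch only: Roboflow performed the resizing for this project.
# Resizes an image to a fixed square input size and rescales its bounding
# boxes accordingly. File name and boxes are placeholders.
import cv2

TARGET = 300  # e.g. the 300x300 input expected by MobileNet SSD

def resize_with_boxes(image, boxes):
    """boxes are (xmin, ymin, xmax, ymax) in pixels of the original image."""
    h, w = image.shape[:2]
    resized = cv2.resize(image, (TARGET, TARGET))
    sx, sy = TARGET / w, TARGET / h
    scaled = [(x1 * sx, y1 * sy, x2 * sx, y2 * sy) for x1, y1, x2, y2 in boxes]
    return resized, scaled

img = cv2.imread("frame_0001.jpg")   # placeholder path
boxes = [(120, 80, 340, 260)]        # placeholder car box
img_small, boxes_small = resize_with_boxes(img, boxes)
```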

During training, a problem with the Google Open Images dataset slowed my progress: many bounding boxes were missing or out of bounds. Roboflow showed me every image with bad bounding box coordinates so I could fix them. Once my data was prepared, I exported my images as TFRecords.
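
For readers curious what that export contains, the sketch below shows the minimal set of fields the TensorFlow Object Detection API expects in each TFRecord example. In this project Roboflow generated the records; the paths, sizes, and box values here are placeholders.

```python
# Sketch of the TFRecord format used by the TensorFlow Object Detection API.
# The records in this project were generated by Roboflow's export; this only
# illustrates the fields. Paths, sizes, and box values are placeholders.
import tensorflow as tf

def make_example(jpeg_bytes, width, height, boxes, class_names, class_ids):
    """boxes are (xmin, ymin, xmax, ymax) normalised to [0, 1]."""
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        "image/format": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"jpeg"])),
        "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] for b in boxes])),
        "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=[b[1] for b in boxes])),
        "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=[b[2] for b in boxes])),
        "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=[b[3] for b in boxes])),
        "image/object/class/text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[n.encode("utf-8") for n in class_names])),
        "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=class_ids)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("train.record") as writer:       # placeholder output path
    jpeg = open("frame_0001.jpg", "rb").read()              # placeholder image
    example = make_example(jpeg, 640, 480, [(0.2, 0.3, 0.6, 0.8)], ["car"], [1])
    writer.write(example.SerializeToString())
```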

Using Roboflow for data preparation and generating TFRecords accelerated the project significantly.

Model and training

I used the pretrained MobileNet SSD model because it is known for fast inference and is widely used in embedded, low-power applications. The same model architecture was trained on each of the datasets so the resulting models could be compared. For the evaluation, I compared precision and recall between the models using the COCO evaluation metrics.
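
For reference, the same COCO-style precision and recall numbers can be computed with pycocotools once the ground truth and detections are exported as COCO JSON; the file names below are placeholders, not files from the project.

```python
# Rough sketch of computing COCO-style precision/recall with pycocotools.
# "ground_truth.json" and "detections.json" are placeholder file names.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("ground_truth.json")            # COCO-format annotations
coco_dt = coco_gt.loadRes("detections.json")   # model detections in COCO format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP / AR at the standard COCO thresholds
```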

After training, I converted the model to a TFLite file and compiled it for the Edge TPU on the Coral stick. Using Python and OpenCV, each detected object was displayed with a bounding box around it, along with a confidence score showing how confident the model was in that prediction.
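
A stripped-down version of that inference loop might look like the sketch below. The model file name, label order, and output-tensor order are assumptions for illustration rather than details from the project.

```python
# Minimal inference sketch, assuming an Edge-TPU-compiled SSD model named
# "model_edgetpu.tflite" and the class order ["car", "bicycle"]; names, paths,
# and the output-tensor order are assumptions and may differ from the project.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

LABELS = ["car", "bicycle"]   # assumed class order

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

frame = cv2.imread("street.jpg")   # placeholder; the prototype read frames from the Pi camera
resized = cv2.resize(frame, (in_w, in_h))
interpreter.set_tensor(input_details[0]["index"], np.expand_dims(resized, axis=0))
interpreter.invoke()

# Typical SSD TFLite output order: boxes, class ids, scores (may vary by export).
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

h, w = frame.shape[:2]
for box, cls, score in zip(boxes, classes, scores):
    if score < 0.5:
        continue
    ymin, xmin, ymax, xmax = box
    p1 = (int(xmin * w), int(ymin * h))
    p2 = (int(xmax * w), int(ymax * h))
    cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
    label = f"{LABELS[int(cls)]}: {score:.2f}"
    cv2.putText(frame, label, (p1[0], p1[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("annotated.jpg", frame)
```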

The LED traffic light switched between colors depending on the level of danger. Specifically, the distance, size, and predicted label of each object controlled the LED traffic light logic. (For example, predicting a car that is far away turns the LED light "green" for low danger, while predicting a closer car yields "yellow" for increased danger.)
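
The exact thresholds used by the prototype aren't reproduced here, but as an illustration, a decision rule that uses the relative size of a detected box as a rough proxy for distance could look like this:

```python
# Illustrative decision rule only: the real thresholds and distance logic of
# the prototype are not reproduced here. The box area relative to the frame
# is used as a rough proxy for distance.
def danger_level(label: str, box_area_ratio: float) -> str:
    """Map a detection to a danger level driving the LED traffic light."""
    if label == "car":
        if box_area_ratio > 0.25:      # car fills much of the frame -> close
            return "high"
        if box_area_ratio > 0.08:
            return "medium"
        return "low"
    if label == "bicycle":
        return "medium" if box_area_ratio > 0.15 else "low"
    return "low"

# Example: a far-away car occupying ~3% of the frame -> "low" -> green LED.
print(danger_level("car", 0.03))
```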

Testing the prototype on the streets and results

To use this application, the cyclist attaches the LED traffic light to their wrist; with that in place, the prototype could be tested on the street. When evaluating the models by comparing average precision and recall, the model trained on the Raspberry Pi camera dataset performed best. Bigger objects like cars were easily detected as dangers; smaller objects like bicycles were missed more often. Thanks to Roboflow, the data preprocessing was hugely accelerated, allowing me more time to build my object detection application.

Looking to the future and further development

The prototype still has flaws: namely, detection errors and limited functionality.

  • To address detection errors, I will continue to gather data, especially on smaller objects (like bicycles) where my model currently underperforms. I'll also explore additional models.
  • Regarding functionality, as small single-board computers like the Raspberry Pi continue to develop over the coming months and years, it will become possible to shrink the application into a smaller, lower-cost package.

Moving forward, I'd like to see danger monitoring used in smart cities – perhaps with cyclists exchanging data such as the danger status of a certain area – in order to make life in cities and urban areas safer.