Enhancing Child Safety with Computer Vision
Published Sep 20, 2023 • 9 min read

This article was contributed to the Roboflow blog by Abirami Vina.

Introduction

Child safety is a priority for parents, caregivers, and society at large. While traditional safety measures are indispensable, technology, and computer vision in particular, is opening up new avenues for keeping children safe in a wide range of environments.

One standout capability of computer vision is real-time monitoring, which adds an extra layer of security that is especially valuable for keeping an eye on kids.

In this article, we'll explore computer vision applications aimed at child safety. We'll also walk through a comprehensive tutorial on harnessing the power of computer vision to make your pool area a safer space for little ones. Let's get started!

Computer Vision and In-Home Safety

First, let's understand what object detection is and why it's relevant for toddler safety. Object detection is a specialized technique within the expansive field of computer vision. It employs machine learning algorithms to identify specific objects - in this case, toddlers - in digital images and videos. This technology offers real-time tracking capabilities, adding a crucial layer of security that can be a game-changer in child safety scenarios.
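To make this concrete, a detector's output for a single frame is essentially a list of labeled bounding boxes with confidence scores. As a rough illustration (field names vary from library to library):

detections = [
    {"class": "toddler", "confidence": 0.91, "x": 210, "y": 180, "width": 64, "height": 120},
]

Each box tells you what was found, how confident the model is, and where in the frame it sits, which is exactly the information a safety alert needs.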

Safety Applications

Object detection can be fine-tuned to serve as a guardian in various contexts. For example, you can set up a system that sends immediate alerts if a child gets too close to a swimming pool, dramatically reducing the risk of drowning. But it doesn't stop there.

Object detection can also monitor areas that are off-limits to children, like workshops filled with hazardous tools, and send you real-time alerts if a boundary is crossed. And, for those concerned about the dangers of traffic, systems can be installed near driveways or busy streets to notify caregivers if a child steps into these high-risk zones.

Detecting whether a child has entered a zone marked as dangerous. Source

Convenience Applications

Object detection isn't just about safety; it also offers a level of convenience that can make life easier. Imagine automated baby gates that open only when an adult approaches or a smart crib monitoring system that sends text or phone alerts for unusual activity, like a baby trying to climb out. These IoT convenience features can simplify daily routines and offer parents a breather.

An example of detecting a baby falling out of its crib. Source

Well-being Applications

Beyond safety and convenience, this technology can also be employed for child activity tracking, offering valuable data that can be useful for developmental milestones. Additionally, sleep monitoring systems can be set up to provide insights into a child's sleep patterns, helping parents understand sleep quality and identify any potential issues.

Monitoring a baby's movements and detecting when and how long it takes them to fall asleep. Source

Applying Object Detection for Pool Area Monitoring

Let’s use a trained object detection model to detect children and analyze an image of a kid playing in the backyard near a pool. In this guide, we’ll focus on how to apply an object detection model rather than how to train an object detection model to detect children. For more information on creating your own object detection model, take a look at our guide on custom training with YOLOv8.

A Trained Object Detection Model

We'll be using a trained toddler object detection model from Roboflow Universe, a hub for open-source computer vision datasets and models featuring more than 200,000 datasets and 50,000 ready-to-use models. To get started, create a Roboflow account and head over to the page where the model is deployed, as indicated below.

Upon scrolling down, you’ll see a piece of sample code that shows how to call the API for this model, as shown below. Be sure to note down the model ID and version number from the third and fourth lines of the sample code. In this case, the model ID is “toddler-final,” and we’ll use the sixth version of the model. This information will come in handy when we assemble our inference script.
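The exact snippet shown on the model page may differ, but hosted-API sample code on Universe generally follows this shape (a sketch using the roboflow Python package; note where the model ID and version number appear):

from roboflow import Roboflow

rf = Roboflow(api_key="ROBOFLOW_API_KEY")
project = rf.workspace().project("toddler-final")  # model ID
model = project.version(6).model  # version number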

Code Walk-through for Monitoring Kids Near a Pool

Our objective is to create a boundary around the pool that marks a danger zone; if a child is detected within this boundary, an alert should be displayed warning that the kid is near the pool.

I’ve downloaded a relevant image (as shown below) from the internet to illustrate monitoring kids playing near a pool. You can do the same or use your own relevant images.

Source

We'll use the Roboflow Inference Server, a microservice interface that operates over HTTP, for executing our inference operations. This service offers both a Python library and a Docker interface. We'll go for the Python library, as it's more streamlined and ideal for projects centered around Python.
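For reference, if you do prefer the Docker route, the CPU version of the inference server can typically be started with a command along these lines (assuming Docker is installed):

docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu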

Step 1: Setting up Roboflow Inference

For CPU-based installation of Roboflow Inference, execute the following command:

pip install inference

For a GPU-based setup, use this command instead:

pip install inference-gpu
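Either way, you can quickly confirm the package installed correctly by importing it from the command line:

python -c "import inference; print('inference imported successfully')"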

Step 2: Defining Boundaries for the Pool Area

Using the OpenCV library, we can designate specific areas as 'danger zones' for children, such as the pool area in our example. The code snippet provided below enables us to interactively draw points to form a polygon directly on a frame. By doing so, we can outline the pool area or any other region we wish to monitor.

Once the polygon is drawn, the code will calculate the maximum and minimum values for both the x and y coordinates of the polygon points. These calculated values will then be used to draw a rectangular boundary around the designated pool area, marking it as a danger zone for children.

import cv2
import numpy as np

# Read the image from a file path
path = "test_kid.png"
img = cv2.imread(path)
copy = img.copy()
place_holder = img.copy()

done = False
points = []
current = (0, 0)
prev_current = (0, 0)


# Mouse callback for collecting polygon points
def on_mouse(event, x, y, buttons, user_param):
    global done, points, current, place_holder

    if done:
        return
    if event == cv2.EVENT_MOUSEMOVE:
        # Update the current mouse position
        current = (x, y)
    elif event == cv2.EVENT_LBUTTONDOWN:
        # Left click adds a point
        print(x, y)
        cv2.circle(img, (x, y), 5, (255, 0, 0), -1)
        points.append([x, y])
        place_holder = img.copy()
    elif event == cv2.EVENT_RBUTTONDOWN:
        # Right click finishes the boundary
        print("Boundary complete")
        done = True

cv2.namedWindow("Draw_Boundary")
cv2.setMouseCallback("Draw_Boundary", on_mouse)

while not done:
    # Keep redrawing the image as points are added
    if len(points) > 1:
        if current != prev_current:
            # The mouse moved: restore the clean image before redrawing
            img = place_holder.copy()
            prev_current = current

        cv2.polylines(img, [np.array(points)], False, (0, 255, 0), 1)
        # Preview what the next line segment would look like
        cv2.line(img, (points[-1][0], points[-1][1]), current, (0, 0, 255))

    # Update the window
    cv2.imshow("Draw_Boundary", img)

    if cv2.waitKey(50) == ord('d'):  # press 'd' (done) to finish
        done = True

# Final drawing
img = copy.copy()

if len(points) > 0:
    cv2.fillPoly(img, np.array([points]), (255, 0, 0))
    # Use names that don't shadow Python's built-in max/min
    max_pt = np.amax(np.array([points]), axis=1)
    min_pt = np.amin(np.array([points]), axis=1)

    # Print the extremes of the polygon; these define the rectangle drawn later
    print("xmax:", max_pt[0][0])
    print("ymax:", max_pt[0][1])
    print("xmin:", min_pt[0][0])
    print("ymin:", min_pt[0][1])

# And show it
cv2.imshow("Draw_Boundary", img)
# Waiting for the user to press any key
cv2.waitKey(0)
cv2.destroyWindow("Draw_Boundary")

Here’s a GIF that shows what the process of defining the boundary by clicking points around the pool looks like:

The output that is displayed after the boundary is drawn is shown below.
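Keep in mind that the rectangle built from the max/min values is a coarse approximation of the polygon you drew. If your pool boundary is irregular, you can test points against the exact polygon instead using OpenCV's pointPolygonTest. Here's a minimal sketch, assuming a polygon captured in the drawing step (the coordinates below are hypothetical):

import cv2
import numpy as np

# Hypothetical polygon points collected in the drawing step
polygon = np.array([[23, 84], [533, 84], [533, 328], [23, 328]], dtype=np.int32)

# pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside
point = (250.0, 200.0)
inside = cv2.pointPolygonTest(polygon, point, False) >= 0
print("Point inside polygon:", inside)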

Step 3: Detecting Children in the Image

The following code helps us run inference tasks using the trained toddler object detection model.

import numpy as np
import cv2
import base64
import io
from PIL import Image
from inference.core.data_models import ObjectDetectionInferenceRequest
from inference.models.yolov5.yolov5_object_detection import (
    YOLOv5ObjectDetectionOnnxRoboflowInferenceModel,
)


model = YOLOv5ObjectDetectionOnnxRoboflowInferenceModel(
    model_id="toddler-final/6", device_id="my-pc",
    # Replace ROBOFLOW_API_KEY with your Roboflow API key
    api_key="ROBOFLOW_API_KEY"
)


# Read your input image from your local files
frame = cv2.imread("test_kid.png")

# Encode the frame as base64, the format the inference request expects
retval, buffer = cv2.imencode('.jpg', frame)
img_str = base64.b64encode(buffer)

request = ObjectDetectionInferenceRequest(
    image={
        "type": "base64",
        "value": img_str,
    },
    confidence=0.4,
    iou_threshold=0.5,
    visualization_labels=False,
    visualize_predictions=True
)

results = model.infer(request)
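Before wiring up the danger-zone check, it's worth printing what the model actually returned. Each prediction carries a class name and a center-based bounding box (plus a confidence score, per the standard Roboflow prediction schema); a quick sketch:

# Inspect each detection: class, confidence, and center-based box
for prediction in results.predictions:
    print(prediction.class_name, prediction.confidence,
          prediction.x, prediction.y, prediction.width, prediction.height)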

Step 4: Checking if the Detected Children Are Inside the Danger Zone

The final piece of code uses the defined boundary coordinates and the bounding box of the detected children to check if any kids are inside the danger zone.

# To be placed right after the code in the previous step.
# Take in a base64 string and return an OpenCV image
def stringToRGB(base64_string):
    # Decode the base64 payload into raw image bytes before opening it
    img_data = base64.b64decode(base64_string)
    img = Image.open(io.BytesIO(img_data))
    # PIL images are RGB; swap channels to OpenCV's BGR convention
    opencv_img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    return opencv_img

output_img = stringToRGB(results.visualization)

for prediction in results.predictions:
    if "baby" in prediction.class_name:
        print("Child detected")
        x = prediction.x
        y = prediction.y
        width = prediction.width
        height = prediction.height

        # Calculate the bounding box corners from the center point and size
        x1 = int(x - width / 2)
        x2 = int(x + width / 2)
        y1 = int(y - height / 2)
        y2 = int(y + height / 2)

        # The pool boundary coordinates printed in Step 2
        xmin = 23
        ymin = 84
        xmax = 533
        ymax = 328

        # Check whether the child's bounding box lies fully inside the danger
        # zone (the comparisons must be combined with "and", not "or")
        if (x1 > xmin) and (x2 < xmax) and (y1 > ymin) and (y2 < ymax):
            print("Child inside unsafe area")
            # output_img from stringToRGB is already in OpenCV's BGR order
            frame = cv2.putText(output_img, 'Unattended Child Near Pool!', (40, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2, cv2.LINE_AA)
            frame = cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

            cv2.imwrite("output.jpg", frame)

            # Break as soon as any child is found in the danger zone
            break

The output is displayed as follows:

Conclusion

In this article, we've illustrated the power of object detection with respect to child safety. We've seen how this technology can be a game-changer, offering real-time monitoring capabilities that can significantly enhance our ability to keep children safe. The applications are diverse and impactful, from pool area monitoring to restricted zones and traffic safety.

With this post, we've only scratched the surface. The potential for computer vision to revolutionize child safety is immense. Whether it's predictive analytics for potential hazards or real-time alerts for caregivers, the possibilities are endless. We encourage you to dive deeper, explore these technologies, and consider implementing them in your own safety measures. After all, when it comes to the safety of our youngest, every extra layer of protection counts.

Cite this Post

Use the following entry to cite this post in your research:

Contributing Writer. (Sep 20, 2023). Enhancing Child Safety with Computer Vision. Roboflow Blog: https://blog.roboflow.com/enhancing-child-safety-with-computer-vision/

Discuss this Post

If you have any questions about this blog post, start a discussion on the Roboflow Forum.

Written by

Contributing Writer