The post below is a guest post written by Joo chan Kim, a PhD student at Luleå University of Technology in Sweden. He is developing an object detection application for Android devices that can identify specific IoT sensors with a custom model. The resulting application will be used on the Societal development through Secure IoT and Open Data (SSiO) platform.
Overview of Problem
Nowadays, the Internet of Things is common in our daily lives. The "things" include smart lights, smart wall plugs, door/window sensors, motion sensors, and more. Every sensor is connected to the Internet, and we, as users, can check the data collected from those sensors on devices like smartphones, tablets, and traditional PCs. However, checking the collected data requires extra steps, such as opening a specific app and navigating to the correct menu, and these steps can be an obstacle for people who are unfamiliar with this kind of technology.
A more intuitive way of interacting with the system is needed, and we see potential in object detection techniques to simplify the process of finding data and to build a more intuitive interface. With an object detection interface, the user does not need to navigate a menu to find the data they want: simply looking at the object through the camera brings up the data collected from the targeted sensor.
Background on Data
To build a model that recognizes four different IoT sensors, we collected 328 source images from six different indoor environments where our application will mainly be used. Notably, two of these backgrounds were white, close to the white color of the IoT sensors themselves. We expected the white backgrounds to hurt detection performance; however, performing well on white backgrounds is important because white walls and surfaces are common in indoor settings.
We also captured each target object from various angles and distances to ensure sufficient variety in the source images.
All 328 source images were labelled using LabelImg.
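By default, LabelImg saves each annotation as a Pascal VOC XML file alongside the image. As a rough illustration (the file name and class name below are hypothetical), such a file can be read in Python like this:

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Parse a LabelImg-style Pascal VOC XML file into (class name, box) pairs."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        bndbox = obj.find("bndbox")
        box = tuple(
            int(float(bndbox.find(tag).text))
            for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        boxes.append((name, box))
    return boxes

# Hypothetical example:
# read_voc_annotation("annotations/motion_sensor_001.xml")
# -> [("motion_sensor", (120, 45, 310, 290))]
```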
We then used Roboflow's Dataset Health Check to examine the quality of our dataset.
The health check enabled us to identify images with missing annotations. It also let us confirm how many source images had been collected for each target object. Once we identified which classes were underrepresented, we addressed the "imbalanced classes" problem by adding more source images for those classes. We collected additional data until we believed our model could achieve the desired level of performance.
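Roboflow's Dataset Health Check reports class balance automatically. Purely as an illustration, a rough local equivalent (assuming Pascal VOC annotations and the `read_voc_annotation` helper above) could count boxes per class like this:

```python
from collections import Counter
from pathlib import Path

def count_labels(annotation_dir):
    """Count how many bounding boxes exist per class across all XML files."""
    counts = Counter()
    for xml_file in Path(annotation_dir).glob("*.xml"):
        for name, _box in read_voc_annotation(xml_file):
            counts[name] += 1
    return counts

# Classes with noticeably lower counts are candidates for more source images.
print(count_labels("annotations/"))
```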
Background on Data Prep
After we labelled all source images, we used Roboflow to improve the quality of the data. This included applying image preprocessing steps like auto-orientation and augmenting our images with several transformations, including rotation and blur. This process increased the quantity and improved the diversity of our images. After augmentation, we had 557 images.
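We applied these steps in the Roboflow UI, but the idea is easy to illustrate in code. The sketch below uses the albumentations library (not something we used in the project; shown only to make the transformations concrete) and keeps the bounding boxes in sync with the transformed image:

```python
import albumentations as A

# Illustrative augmentation pipeline: small random rotations and mild blur,
# with Pascal VOC bounding boxes transformed alongside the image.
augment = A.Compose(
    [
        A.Rotate(limit=15, p=0.5),
        A.Blur(blur_limit=3, p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

# augmented = augment(image=image, bboxes=bboxes, class_labels=labels)
# augmented["image"], augmented["bboxes"]  # new image plus adjusted boxes
```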
Background on Modeling
Since our prototype’s target platform is smartphones, we used mobile-friendly detection models for training. We started with SSD MobileNet V2. While we were satisfied with the model's speed, we needed better accuracy. We searched the TensorFlow 2 Detection Model Zoo and selected the SSD MobileNet V2 FPNLite 320x320 model, which has similar speed but higher accuracy.
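Each model in the TF2 Detection Model Zoo ships with a pipeline.config file that has to be adapted to the dataset before training. A minimal sketch of the typical edits, using the Object Detection API's config utilities (file paths, batch size, and record names below are placeholders):

```python
from object_detection.utils import config_util

# Load the pipeline.config that ships with SSD MobileNet V2 FPNLite 320x320.
configs = config_util.get_configs_from_pipeline_file("pipeline.config")

# Point the model at our four IoT sensor classes and our dataset.
configs["model"].ssd.num_classes = 4
configs["train_config"].batch_size = 16
configs["train_config"].fine_tune_checkpoint = "checkpoint/ckpt-0"
configs["train_config"].fine_tune_checkpoint_type = "detection"
configs["train_input_config"].label_map_path = "label_map.pbtxt"
configs["train_input_config"].tf_record_input_reader.input_path[:] = ["train.record"]

# Write the modified config back to disk for training.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, ".")
```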
We ran the model training on Google Colab. Since we had no prior experience using TensorFlow 2 on Google Colab, the Roboflow model library helped us easily train our own model. We also read another blog post to understand how TensorFlow training can be done on Google Colab.
Background on Model Impact
We used the official TensorFlow Lite Object Detection Android demo to deploy our custom model on Android devices.
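To run inside the Android demo, the trained checkpoint first has to be exported as a TFLite-compatible SavedModel (the Object Detection API provides export_tflite_graph_tf2.py for this) and then converted to a .tflite file. A minimal sketch of the conversion step (the export directory name is a placeholder):

```python
import tensorflow as tf

# Convert the exported SavedModel into a .tflite file that the Android demo
# can load from its assets folder.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open("detect.tflite", "wb") as f:
    f.write(tflite_model)
```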
We wanted the model to run in constrained environments (e.g., low-memory devices such as IoT hardware), so we adjusted the parameter values in the Android demo app accordingly.
To identify our model's input size and which options needed adjustment to run on Android, we used Netron.
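Netron shows this information graphically; the same details can also be read programmatically with the TFLite interpreter, which is a quick way to confirm the expected input size (320x320 for this model):

```python
import tensorflow as tf

# Inspect the converted model's input tensors to confirm size and data type.
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail["dtype"])
# Expected output includes a shape like [1 320 320 3] for SSD MobileNet V2 FPNLite 320x320.
```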
The next step was to evaluate the model's performance in certain environments. For example, the model struggled to recognize objects when there was too much backlight in the view. Moreover, even though the training data included images with white backgrounds, the model still performed worse on white backgrounds than on others. We expect to resolve these issues by:
- increasing the number of source images and
- using Roboflow to augment our images with random noise, exposure, and brightness transformations (sketched below).
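These augmentations will again be applied in Roboflow. Purely for illustration, and continuing the earlier albumentations sketch (with exposure approximated as a brightness shift), the planned transformations look roughly like this:

```python
import albumentations as A

# Illustrative versions of the planned augmentations: random noise plus
# brightness/contrast shifts, intended to make the model more robust to
# backlight and white backgrounds.
robustness_augment = A.Compose(
    [
        A.GaussNoise(p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.2, p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)
```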
Next Steps
Ultimately, we will improve this application for use on the Societal development through Secure IoT and Open Data (SSiO) platform. SSiO provides a secure and scalable architecture that allows users to add IoT devices, sensors, and open data for service development. Our plan is to use our application to receive data collected from IoT sensors.
Cite this Post
Use the following entry to cite this post in your research:
Matt Brems. (Dec 21, 2020). Tackling the Internet of Things with Roboflow: Object Detection Apps on Android. Roboflow Blog: https://blog.roboflow.com/tackling-iot-object-detection-application-android/
Discuss this Post
If you have any questions about this blog post, start a discussion on the Roboflow Forum.