In this entry of the AI Hazard Detection on Android series, we create a project that will be used for real-time hazard detection for a driver and prepare a detection model for use with TensorFlow Lite.
Many modern vehicles are equipped with cameras that can detect hazards on the road ahead and alert the driver. But what if you’re driving an older vehicle, and all you’ve got is an Android device? Can you create an app that detects hazards as they appear and alerts you before you drive into them? This series will show you how to create a hazard detector using an Android device. The completed project will give audio alerts and highlight obstacles when they are detected within dangerous zones.
The Tech Stack
To get started, we'll need Android Studio, an AI model, and the Python interpreter. Optionally, you may also want Netron, a free tool for inspecting trained network models. As our first step in development, we will have the solution work on photographs stored on the device, so you will want pictures of driving areas with some obstacles in view. Later, we will have the application use live video from the camera.
We'll be creating our Android app using Kotlin. If you've done much Android development, you're probably accustomed to using Java. So why use Kotlin if Java still works well? In short, Kotlin is now Google's preferred language for Android development. New Android APIs will be designed with Kotlin in mind; while they'll still be usable from Java, it may be awkward to do so.
But if you're a Java veteran, don't worry! Kotlin is designed to feel very familiar to Java developers, and it also has a great Java interop story. You can load any Java library and use it seamlessly in a Kotlin app.
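For example, a Kotlin file can use a standard Java class directly, with no wrapper or binding layer in between:

import java.io.File

fun main() {
    // java.io.File is a plain Java class, called from Kotlin as-is
    val model = File("yolov4.tflite")
    println("Model file present: ${model.exists()}")
}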
The AI Model
The AI model will enable the detection of objects within a visual scene. Rather than build a model from the ground up, you may want to use an existing model as a starting point. You can find trained models in the ONNX Model Zoo or on TensorFlow Hub. ONNX is a format for representing trained networks, and ONNX models can be converted to other representations; we will convert one for use with TensorFlow Lite. After looking through the various models that are available, I've decided to use a YOLO model from the ONNX Model Zoo. YOLO is a relatively easy-to-use model that detects a variety of object types. Some image analysis models can only report that certain objects are present somewhere in a picture; YOLO also returns information on where each object is.
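That location information is what makes hazard zones possible. As a rough illustration (this is a hypothetical structure of our own, not part of any library), each detection the app works with can be thought of as a label, a confidence score, and a bounding box:

// Hypothetical shape of a single detection; the names are illustrative
data class Detection(
    val label: String,      // e.g. "car" or "person"
    val confidence: Float,  // model's score, 0.0 to 1.0
    val left: Float,        // bounding box edges in image coordinates
    val top: Float,
    val right: Float,
    val bottom: Float
)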
The models from the ONNX Model Zoo must be converted, and it is easiest to do this from a script. To perform the conversion, we need Python and the TensorFlow package for Python. At the time of this writing, you would want to use Python 3.8: while 3.9 is available, TensorFlow only supports versions up to 3.8. Once Python is installed, you can install the TensorFlow package and the ONNX-to-TensorFlow conversion package using the following commands.
pip3 install tensorflow
pip3 install onnx-tf
Converting our model from ONNX to TensorFlow Lite is a two-phase process: the file must be converted from ONNX to TensorFlow, and then from TensorFlow to TensorFlow Lite. To convert the model from ONNX to TensorFlow, use the following command.
onnx-tf convert -i yolov4.onnx -o yolo.pb
For the second conversion, we need a short Python script. Save the following to a file named convert.py, with the variable saved_model_dir set to the path where you saved the converted model.
import tensorflow as tf

# Directory containing the model converted from ONNX in the previous step
saved_model_dir = '/dev/projects/models'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tf_lite_model = converter.convert()

# Write the TensorFlow Lite model to disk
with open('yolov4.tflite', 'wb') as f:
    f.write(tf_lite_model)
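With the path adjusted, run the script with python3 convert.py; it writes yolov4.tflite to the working directory, and that is the file we will add to the Android project.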
Our Android Project
For the Android project, create a new application with an empty activity. This application will eventually be a full-screen application using the live view from the camera. It will need permissions for camera access and location access, and to send emergency alerts, it needs SMS permission too. Add the following permissions to your AndroidManifest.xml.
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.SEND_SMS" />
The application will need to request these permissions in code. In the interest of staying focused on functionality specific to this application, some fairly common calls are not demonstrated here but are within the example project; this includes the calls to requestPermissions for SMS, location, and camera access, though a minimal sketch of such a request appears below.
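As a rough sketch only (the request code is a hypothetical constant of our own, and the handling of onRequestPermissionsResult is omitted), a runtime permission request might look like this:

import android.Manifest
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat

class MainActivity : AppCompatActivity() {
    companion object {
        // Hypothetical request code; any value unique within the app works
        private const val REQUEST_CODE_PERMISSIONS = 10
    }

    private fun requestAppPermissions() {
        // Ask for the permissions declared in the manifest; the results
        // arrive in onRequestPermissionsResult, not shown here
        ActivityCompat.requestPermissions(
            this,
            arrayOf(
                Manifest.permission.CAMERA,
                Manifest.permission.ACCESS_COARSE_LOCATION,
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.SEND_SMS
            ),
            REQUEST_CODE_PERMISSIONS
        )
    }
}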
The Android project requires a few configuration changes to prepare it for TensorFlow Lite. References to the TensorFlow Lite libraries must be added, and the project should be set to not compress TensorFlow model files. To add the libraries, add these three lines to the bottom of the dependencies section of the module's build.gradle file.
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-support:0.0.0-nightly'
Within the same file, within the android section, the following must be added to prevent the models from being compressed.
aaptOptions {
    noCompress "tflite"
}
The project is now ready to handle TensorFlow Lite models. In the next article of this series, we will add the TensorFlow Lite model to the project and prepare it for processing.
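As a preview of where we're headed, once the converted model is bundled with the app, loading it is short work. The sketch below assumes yolov4.tflite has been copied into the module's assets folder; it uses the FileUtil helper from the tensorflow-lite-support library we just referenced:

import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

// Sketch only: map the bundled model from assets and build an interpreter.
// "yolov4.tflite" assumes the converted model was placed in src/main/assets.
fun loadModel(context: Context): Interpreter {
    val modelBuffer = FileUtil.loadMappedFile(context, "yolov4.tflite")
    return Interpreter(modelBuffer)
}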