
DashCam Android Application

4 Jun 2019

The project is designed to utilize the Qualcomm® Neural Processing SDK, which allows you to tune the performance of AI applications running on Qualcomm® Snapdragon™ mobile platforms. The Qualcomm Neural Processing SDK is used to convert the trained models from Caffe, Caffe2, ONNX, and TensorFlow to the Snapdragon supported format (.dlc format), allowing developers to enable their AI applications with optimized on-device inference.

Objective

The main objective of this project is to develop an Android Application that uses a built-in camera to capture the objects on a road and use a Machine Learning model to get the prediction and location of the respective objects.

Parts used

Below are the items used in this project.

Project components for the DashCam Android Application project

  1. Mobile Display with QC Dash Cam app
  2. Snapdragon 835 Mobile Hardware Development Kit
  3. External camera setup

Deploying the project

  1. Download code from the GitHub Repository.
  2. Compile the code and run the application from Android Studio to generate the application (APK) file.

How does it work?

The QC_DashCam Android application opens a camera preview, collects the preview frames, and converts each frame to a bitmap. The network is built via the Neural Network builder by passing caffe_mobilenet.dlc as the input. Each bitmap is then passed to the model for inference, which returns the prediction and localization of the respective objects.
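
Putting those steps together, the per-frame flow looks roughly like the sketch below (inferenceOnBitmap and the constants are taken from the snippets later in this article; the overlay call is a hypothetical placeholder):

// Per-frame pipeline: preview frame -> bitmap -> inference -> overlay
Bitmap frame = mTextureView.getBitmap(Constants.BITMAP_WIDTH, Constants.BITMAP_HEIGHT);
Map<String, FloatTensor> outputs = inferenceOnBitmap(frame); // defined below
if (outputs != null) {
    // decode the output tensors into boxes and draw them on the preview
    mOverlayView.drawBoxes(outputs); // hypothetical overlay view
}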

Prerequisites for Camera Preview

Permission to obtain camera preview frames is granted in the following file:

AIML-DashCam-App/app/src/main/AndroidManifest.xml

<uses-permission android:name="android.permission.CAMERA" />

In order to use the camera2 APIs, add the following feature:

<uses-feature android:name="android.hardware.camera2" />
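
Note that on Android 6.0 and above, CAMERA is a dangerous permission, so the manifest entry alone is not enough: the application must also request it at runtime. A minimal sketch of such a check inside an Activity (the request code value is arbitrary; ContextCompat and ActivityCompat come from the AndroidX core library):

private static final int REQUEST_CAMERA_PERMISSION = 200;

private void requestCameraPermissionIfNeeded() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA},
                REQUEST_CAMERA_PERMISSION);
    }
}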

Loading Model

Code snippet for neural network connection and loading model:

final SNPE.NeuralNetworkBuilder builder = new SNPE.NeuralNetworkBuilder(mApplicationContext)
        // Allows selecting a runtime order for the network.
        // In the example below, use the DSP and fall back, in order, to GPU then CPU,
        // depending on which of the runtimes are available.
        .setRuntimeOrder(DSP, GPU, CPU)
        // Loads the model from a DLC file
        .setModel(new File("<model-path>"));
// Build the network
network = builder.build();
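
Building the network can take a noticeable amount of time, so it is usually done off the UI thread. A minimal sketch under that assumption, reusing the builder from the snippet above (onNetworkLoaded is a hypothetical callback):

new Thread(() -> {
    try {
        final NeuralNetwork network = builder.build();
        runOnUiThread(() -> onNetworkLoaded(network)); // hypothetical callback on the UI thread
    } catch (Exception e) {
        // The model file may be missing/invalid, or no runtime may be available
        e.printStackTrace();
    }
}).start();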

Capturing Preview Frames

TextureView is used to render the camera preview. TextureView.SurfaceTextureListener is an interface that can be used to get notified when the surface texture associated with the texture view becomes available.

@Override
public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int i, int i1) {
    Logger.d(TAG, "OnSurfaceTextureAvailable");
    try {
        // ids[0] indicates the rear camera
        mCameraManager.openCamera(ids[0], mCameraCallback, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
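
TextureView.SurfaceTextureListener declares three more callbacks that must also be implemented. A minimal sketch of them (no-op implementations suffice here, since the preview size is fixed):

@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int width, int height) {
    // No-op: the preview size is fixed in this application
}

@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surfaceTexture) {
    return true; // returning true lets the TextureView release the SurfaceTexture
}

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) {
    // Invoked every time there is a new preview frame
}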

Camera Callbacks

The camera callback, CameraDevice.StateCallback, is used for receiving updates about the state of a camera device. In the overridden method below, a surface texture is created to capture the preview and obtain the frames.

@Override
public void onOpened(@NonNull CameraDevice cameraDevice) {
    Logger.d(TAG, "onOpened()");
    mCameraDevice = cameraDevice;
    mSurfaceTexture = mTextureView.getSurfaceTexture();
    Surface mSurface = new Surface(mSurfaceTexture);
    try {
        mCameraDevice.createCaptureSession(Arrays.asList(mSurface), new CameraCapture(), null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }

    try {
        builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(mSurface);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
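
The CameraCapture instance passed to createCaptureSession() above is the session state callback. A sketch of what it might look like, assuming it extends CameraCaptureSession.StateCallback and that mCaptureCallback is the capture callback shown in the next section: once the session is configured, it starts the repeating preview request built in onOpened().

private class CameraCapture extends CameraCaptureSession.StateCallback {
    @Override
    public void onConfigured(@NonNull CameraCaptureSession session) {
        try {
            // Start streaming preview frames with the request built in onOpened()
            session.setRepeatingRequest(builder.build(), mCaptureCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
        Logger.d(TAG, "onConfigureFailed()");
    }
}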

Getting a Bitmap from the TextureView

A bitmap of fixed width and height can be obtained from the TextureView in the onCaptureCompleted callback using TotalCaptureResult. That bitmap can then be compressed and sent to the model as input.

@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
    super.onCaptureCompleted(session, request, result);
    if (mNetworkLoaded) {
        Bitmap mBitmap = mTextureView.getBitmap(Constants.BITMAP_WIDTH, Constants.BITMAP_HEIGHT);
        // mBitmap is then handed to the inference code shown below
    }
}

Object Inference

The bitmap image is converted to an RGBA byte buffer, which is then pre-processed into a float tensor whose shape matches the input expected by the model (300*300*3 here). The prediction API requires a tensor of type Float and returns the object prediction and localization in a Map<String, FloatTensor> object.

private Map<String, FloatTensor> inferenceOnBitmap(Bitmap inputBitmap) {
    final Map<String, FloatTensor> outputs;
    try {
        if (mNeuralNetwork == null || mInputTensorReused == null || inputBitmap.getWidth() != getInputTensorWidth() || inputBitmap.getHeight() != getInputTensorHeight()) {
            return null;
        }
        // [0.3ms] Bitmap to RGBA byte array (size: 300*300*3 (RGBA..))
        mBitmapToFloatHelper.bitmapToBuffer(inputBitmap);
        // [2ms] Pre-processing: Bitmap (300,300,4 ints) -> Float Input Tensor (300,300,3 floats)
        mTimeStat.startInterval();
        final float[] inputFloatsHW3 = mBitmapToFloatHelper.bufferToNormalFloatsBGR();
        if (mBitmapToFloatHelper.isFloatBufferBlack())
            return null;
        mInputTensorReused.write(inputFloatsHW3, 0, inputFloatsHW3.length, 0, 0);
        mTimeStat.stopInterval("i_tensor", 20, false);
        // [31ms on GPU16, 50ms on GPU] execute the inference
        mTimeStat.startInterval();
        outputs = mNeuralNetwork.execute(mInputTensorsMap);
        mTimeStat.stopInterval("nn_exec ", 20, false);
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
    return outputs;
}
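
For reference, the pre-processing step above (bufferToNormalFloatsBGR) essentially unpacks each ARGB pixel into normalized BGR floats. A simplified sketch of that conversion (the helper name, mean, and scale below are assumptions; the normalization must match what the model was trained with):

private float[] pixelsToNormalFloatsBGR(Bitmap bitmap) {
    final int w = bitmap.getWidth(), h = bitmap.getHeight();
    final int[] pixels = new int[w * h];
    bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    final float[] floats = new float[w * h * 3];
    for (int i = 0; i < pixels.length; i++) {
        final int p = pixels[i];
        // ARGB int -> normalized BGR floats (assumed mean 128, scale 1/128)
        floats[i * 3]     = ((p & 0xFF) - 128f) / 128f;        // B
        floats[i * 3 + 1] = (((p >> 8) & 0xFF) - 128f) / 128f; // G
        floats[i * 3 + 2] = (((p >> 16) & 0xFF) - 128f) / 128f; // R
    }
    return floats;
}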

Object Localization

The model returns the respective FloatTensors, from which the shape of each object and its name can be inferred. A Canvas is then used to draw a rectangle around each predicted object.

private void computeFinalGeometry(Box box, Canvas canvas) {
    final int viewWidth = getWidth();
    final int viewHeight = getHeight();
    float y = viewHeight * box.left;
    float x = viewWidth * box.top;
    float y1 = viewHeight * box.right;
    float x1 = viewWidth * box.bottom;
    // draw the text
    String textLabel = (box.type_name != null && !box.type_name.isEmpty()) ? box.type_name : String.valueOf(box.type_id + 2);
    canvas.drawText(textLabel, x + 10, y + 30, mTextPaint);
    // draw the box
    mOutlinePaint.setColor(colorForIndex(box.type_id));
    canvas.drawRect(x, y, x1, y1, mOutlinePaint);
}
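
How the Box objects are obtained from the Map<String, FloatTensor> output depends on the output layers of the converted model. As a rough sketch only (the output layer name, the per-detection layout, the confidence threshold, and Box having a no-arg constructor are all assumptions; SSD-style detectors commonly emit one fixed-size record per detection):

private List<Box> decodeBoxes(Map<String, FloatTensor> outputs) {
    final List<Box> boxes = new ArrayList<>();
    final FloatTensor tensor = outputs.get("detection_out"); // hypothetical layer name
    if (tensor == null) return boxes;
    final float[] raw = new float[tensor.getSize()];
    tensor.read(raw, 0, raw.length);
    // Assumed layout: 7 floats per detection: [batch, class, score, left, top, right, bottom]
    for (int i = 0; i + 6 < raw.length; i += 7) {
        if (raw[i + 2] < 0.5f) continue; // assumed confidence threshold
        final Box box = new Box();       // Box is the class used in computeFinalGeometry()
        box.type_id = (int) raw[i + 1];
        box.left = raw[i + 3];
        box.top = raw[i + 4];
        box.right = raw[i + 5];
        box.bottom = raw[i + 6];
        boxes.add(box);
    }
    return boxes;
}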

Sample screenshot of the application with model prediction

Usage Instructions

How to install the Android Application

  • You will need an Android phone running Android 7.0 or above.
  • ADB must be installed on your Windows / Linux system; follow the instructions at https://developer.android.com/studio/command-line/adb.html to install the ADB tools.
  • Install the application using the ADB tool and the following command: adb install qc_dashCam.apk
  • Run the application on the phone.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
