
Deploying Multiple Intel® Distribution of OpenVINO™ Toolkit Workloads with Microsoft Azure on the IEI Tank AIoT Developer Kit

24 Jan 2019
This article will go over setting up an Intel Distribution of OpenVINO toolkit module in Azure and explore the considerations for running multiple modules on CPU and GPU on the IEI Tank AIoT Developer Kit.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.

Introduction

Deploying a containerized Intel® Distribution of OpenVINO™ toolkit solution as a Microsoft Azure* module can help manage your application, and Azure can even deploy multiple modules to the same device. This article will go over setting up an Intel Distribution of OpenVINO toolkit module in Azure and explore the considerations for running multiple modules on CPU and GPU on the IEI Tank* AIoT Developer Kit.

The module will run the object detection sample on GitHub*, which does inference on a video using a MobileNet-SSD Caffe model to detect cars in a video. The GitHub* repository has all the code and instructions on how to convert the model, build the sample, and link to download a sample video.

As multiple modules will be deployed to the same machine, the Intel Distribution of OpenVINO toolkit R4 will be installed on the IEI Tank itself, and the containers will access it through bind mounts. This way each container does not need the toolkit installed inside it, making builds faster and taking up less space.

Hardware

IEI Tank* IoT Developer Kit with Ubuntu* 16.04 as Edge Device

Set Up the IEI Tank* AIoT Developer Kit

Install the Intel Distribution of OpenVINO toolkit R4 on the IEI Tank following these instructions.

Build the samples and move the files shown below into "/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build/intel64/Release/lib/". Otherwise, the paths in the GitHub makefile and main.cpp will need to be updated to point to the new location.

Figure 1. Compiled sample libs location

Setting up Microsoft Azure*

If Microsoft Azure* is not already set up to deploy modules, follow the quick start guide to create a standard-tier IoT Hub, then register and connect the IEI Tank to it as an IoT Edge device. In addition, create a registry to store the container images.

Docker*

The Intel Distribution of OpenVINO toolkit containers will be built on the IEI Tank with Linux* OS (another Linux machine could be used) using Docker* and then pushed up to the Microsoft Azure registry before being deployed back down to the IEI Tank.

Install Docker onto the IEI Tank.

sudo apt-get install docker.io

The Dockerfile to build the container is below. It will install Intel Distribution of OpenVINO toolkit dependencies, including those for interfacing with the GPU.

FROM ubuntu:16.04

RUN apt-get update -y

RUN apt-get install pciutils wget sudo kmod curl lsb-release cpio udev python3-pip libcanberra-gtk3-module libgtk2.0-0 libpng12-dev libcairo2-dev libpango1.0-dev libglib2.0-dev libgtk2.0-dev libswscale-dev libavcodec-dev libavformat-dev libgstreamer1.0-0 gstreamer1.0-plugins-base libgflags-dev build-essential cmake libusb-1.0-0-dev -y

# Install OpenCL driver and runtime for GPU
RUN apt -y install libnuma1 ocl-icd-libopencl1 wget
RUN wget https://github.com/intel/compute-runtime/releases/download/18.28.11080/intel-opencl_18.28.11080_amd64.deb
RUN dpkg -i intel-opencl_18.28.11080_amd64.deb
RUN rm intel-opencl_18.28.11080_amd64.deb

COPY . /

COPY run_traffic_demo.sh run_traffic_demo.sh

RUN chmod 743 run_traffic_demo.sh

CMD ./run_traffic_demo.sh

Code block 1. Dockerfile to build the container

The COPY . / instruction copies whatever is in the same directory as the Dockerfile into the container; in this case we want a folder called ‘traffic_demo_files’ to be there. It should contain the compiled object detection sample application, the video of cars driving by in MP4 format, and the converted MobileNet-SSD files from the GitHub repository.

Figure 2. ‘traffic_demo_files’ folder
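The expected layout can be sketched as below. The file names are taken from this article; note that the Model Optimizer emits a .bin weights file alongside the .xml, and it must sit next to the .xml even though only the .xml is passed on the command line.

```shell
# Sketch of the folder the Dockerfile expects next to it.
# File names assumed from the article; placeholder files only.
mkdir -p traffic_demo_files
touch traffic_demo_files/tutorial1 \
      traffic_demo_files/cars.mp4 \
      traffic_demo_files/mobilenet-ssd.xml \
      traffic_demo_files/mobilenet-ssd.bin
ls traffic_demo_files
```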

Also in the same location is the script run_traffic_demo.sh, which runs the object detection program on the video file in a continuous loop with the optimized model file. To run it on the GPU, change CPU to GPU in the -d option.

#!/bin/bash
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
cd /traffic_demo_files
while true
do
	./tutorial1 -i cars.mp4 -m mobilenet-ssd.xml -d CPU
done

Code block 2. run_traffic_demo.sh

So the Dockerfile, the ‘traffic_demo_files’ folder, and run_traffic_demo.sh should all be in the same location. Now the container is ready to be built.

Build and push the Docker container to Microsoft Azure:

sudo docker build -t <container> <path to Dockerfile directory>
sudo docker login <registryname>.azurecr.io -u <username> -p <password>
sudo docker tag <container> <registryname>.azurecr.io/openvino/traffic_monitor
sudo docker push <registryname>.azurecr.io/openvino/traffic_monitor
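The fully qualified image reference follows the Azure Container Registry naming convention: the registry login server, then the repository path. A small sketch of how the pieces compose, using a hypothetical registry name:

```shell
# 'myregistry' is a hypothetical registry name for illustration;
# substitute your own registry from the Azure portal.
REGISTRY=myregistry
IMAGE="${REGISTRY}.azurecr.io/openvino/traffic_monitor"
echo "$IMAGE"
# prints: myregistry.azurecr.io/openvino/traffic_monitor
```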

The traffic_monitor module should now be in the registry in Microsoft Azure.

Figure 3. traffic_monitor container in Microsoft Azure* Repository

Deploy

To deploy the module to the IEI Tank, go to the IoT Hub in Microsoft Azure. Scroll down to Automatic Device Management and click on IoT Edge. In the pane that opens on the right, click on the Device ID of the IEI Tank.

Figure 4. Go to the Device in Microsoft Azure

At the top, click on Set modules

Figure 5. Click on Set modules

First, link the container registry to the device so it can access it. Add the name, address, user name, and password.

Figure 6. Configure the Registry

Now you can add a Deployment Module by clicking on +Add and selecting IoT Edge Module.

Figure 7. Add an IoT Edge Module

Now it is time to configure the module with a name and the image URI, which is where you originally pushed it with Docker. Refer to Figures 8 and 9 for how the configuration will look.

As the container doesn’t have the Intel Distribution of OpenVINO toolkit installed, it needs bind mounts to access the installation on the IEI Tank itself. The Privileged option allows the container to access the GPU. Add the below to the ‘Container Create Options’ field.

{
  "HostConfig": {
    "Binds": [
      "/tmp/.X11-unix:/tmp/.X11-unix",
      "/opt/intel/computer_vision_sdk/:/opt/intel/computer_vision_sdk/",
      "/opt/intel/computer_vision_sdk_2018.4.420/:/opt/intel/computer_vision_sdk_2018.4.420/"
    ],
    "Privileged": true
  }
}

Code block 3. Container Create Options

Choose a Restart Policy.

Note: During testing, Microsoft Azure sometimes declared the configuration invalid when ‘On-Failed’ was selected. It is recommended to select another option.

Finally, you need to define one environment variable for the X11 forwarding to work: DISPLAY, with the value :0

Note: X11 forwarding can differ per machine. Make sure X11 forwarding is enabled on your device, and run the command xhost + in a terminal, or otherwise manage how the container will connect to your machine. The value of DISPLAY may also be different.

Figure 8. Part 1 of Module configuration

Figure 9. Part 2 of Module configuration

Click Save, and click Next and Submit until the module is deployed to the device.
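Behind the scenes, the settings entered in the portal end up in the device’s deployment manifest. A rough sketch of what the module entry might look like is below; the field values are assumed from the steps above, the createOptions string is abbreviated, and the exact schema can vary by IoT Edge version.

```json
"modules": {
  "trafficMonitor": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "<registryname>.azurecr.io/openvino/traffic_monitor",
      "createOptions": "{\"HostConfig\":{\"Binds\":[\"/tmp/.X11-unix:/tmp/.X11-unix\"],\"Privileged\":true}}"
    },
    "env": {
      "DISPLAY": { "value": ":0" }
    }
  }
}
```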

Add more modules as desired. Remember to update run_traffic_demo.sh with the -d GPU option and push the new image to Microsoft Azure under a new name like <registryname>.azurecr.io/openvino/traffic_monitor_gpu.
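The CPU-to-GPU change is a one-word edit, so it can be scripted with sed before rebuilding the image. A small self-contained sketch; the heredoc recreates run_traffic_demo.sh from the article so the example runs on its own:

```shell
# Recreate run_traffic_demo.sh (contents taken from the article).
cat <<'EOF' > run_traffic_demo.sh
#!/bin/bash
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
cd /traffic_demo_files
while true
do
	./tutorial1 -i cars.mp4 -m mobilenet-ssd.xml -d CPU
done
EOF

# Switch the inference device from CPU to GPU in place,
# then rebuild and push the image under the new name.
sed -i 's/-d CPU/-d GPU/' run_traffic_demo.sh
grep 'tutorial1' run_traffic_demo.sh
```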

Useful commands

List the modules installed:

iotedge list

Remove the container:

sudo docker rm -f trafficMonitor

Look at the logs of the container:

sudo iotedge logs trafficMonitor -f

Figure 10. Modules deployed to IEI Tank

Conclusion

Now that multiple Microsoft Azure modules are running on the IEI Tank, there are some considerations to take into account. First, containerizing the solution can make it run slower. The impact should be negligible for the average Intel Distribution of OpenVINO toolkit application; however, in one where milliseconds matter the cost could be significant. Also, modules running inference on the GPU still use the CPU for other computation and can impact the CPU modules quite heavily. The sample application from the GitHub repository used in this paper prints the times for preprocessing (ms), inference (ms/frame), and postprocessing (ms) in the logs; these can be used as a reference to see how the processing time increases with each added module.
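Those per-frame timings can be aggregated from the module logs with a small awk filter. A sketch; the log lines below are made-up examples of the inference timing output, and in practice you would pipe iotedge logs trafficMonitor through the same filter:

```shell
# Hypothetical sample of the timing lines the demo prints.
cat <<'EOF' > sample.log
inference time: 12.0 ms/frame
inference time: 14.0 ms/frame
inference time: 16.0 ms/frame
EOF

# Average the per-frame inference time across all logged frames.
awk '/ms\/frame/ { sum += $3; n++ } END { printf "avg %.1f ms/frame\n", sum/n }' sample.log
# prints: avg 14.0 ms/frame
```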


About the author

Whitney Foster is a software engineer at Intel in the core and visual computing group working on scale enabling projects for Internet of Things and computer vision.

References

Get Started with the Intel® Distribution of OpenVINO™ Toolkit and Microsoft Azure* IoT Edge

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here