Introduction
This tutorial covers how to deploy a containerized Intel® Distribution of OpenVINO™ toolkit application over Azure* IoT Edge.
Pre-requisites
First, follow the Azure IoT Edge getting started guide to set up your target device and familiarize yourself with the Azure IoT Edge service: https://software.intel.com/en-us/articles/set-up-microsoft-azure-iot-edge-on-an-up-squared-board
Pre-Installed Intel® Distribution of OpenVINO™ toolkit on target
If you know the device you’re deploying to already has the Intel® Distribution of OpenVINO™ toolkit installed (an Intel® IoT Developer Kit, for example), you can load the toolkit libraries into the container using bind mounts. This greatly reduces the container’s size and makes it significantly faster to build. It works well for deployment over cloud platforms, as long as the target device has the toolkit installed. For instructions on installing the Intel® Distribution of OpenVINO™ toolkit, see https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
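As a rough sketch of what this looks like when run outside IoT Edge (the install path and image name below are assumptions; substitute your own), the snippet composes a docker run invocation that mounts the host's toolkit directory into the container at the same path. It prints the command rather than executing it:

```shell
# Sketch: bind-mount a host OpenVINO install into the container at run time.
# OPENVINO_DIR and DEMO_IMAGE are placeholders; adjust to your setup.
OPENVINO_DIR=/opt/intel/computer_vision_sdk
DEMO_IMAGE=openvino-demo
DOCKER_ARGS="-v ${OPENVINO_DIR}:${OPENVINO_DIR} -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:0 --privileged"
echo "docker run --rm ${DOCKER_ARGS} ${DEMO_IMAGE}"
# On a device that actually has the toolkit installed, run the printed
# command directly instead of echoing it.
```

The key point is that the same host path is mounted to the same path inside the container, so scripts like setupvars.sh resolve identically in both places.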
Installing the Intel® Distribution of OpenVINO™ toolkit in the Container:
If you need the container to be more self-contained, or the device you’re deploying to does not have the toolkit installed, you will need to install the Intel® Distribution of OpenVINO™ toolkit in the container itself. This method was covered in a previous tutorial on creating an Intel® Distribution of OpenVINO™ toolkit container image; we will go over the differences here.
Constructing the Containers
Pre-Installed
To make a container with the Intel® Distribution of OpenVINO™ toolkit pre-installed, refer to that tutorial. In this container, however, we will not use USER root, since that does not appear to work with Azure IoT Edge. We will also not install Jupyter* Notebook, since it is unnecessary here. The bind mounts and environment variables for X11 forwarding will be defined on the command line (i.e., in the Azure portal).
The Dockerfile should look like this:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install pciutils wget sudo kmod curl lsb-release cpio udev python3-pip libcanberra-gtk3-module -y
RUN pip3 install opencv-python
COPY . /
COPY run_demo.sh run_demo.sh
RUN chmod 743 run_demo.sh
CMD ./run_demo.sh
run_demo.sh simply sources the environment variables, then runs the sample:
#!/bin/bash
source /opt/intel/computer_vision_sdk/bin/setupvars.sh
cd /opt/intel/computer_vision_sdk/deployment_tools/demo/
./demo_squeezenet_download_convert_run.sh
Using bind mounts:
When constructing the container using bind mounts, you need to specify both the bind mounts and the entrypoint of the container as command-line arguments. There is currently no good way to define bind mounts in a Dockerfile, and if the mounts are defined on the command line, the entrypoint command must be defined there as well; otherwise the entrypoint won’t have the bind mount available when it executes.
So this container just needs the Ubuntu* base image, the installed dependencies, the copied files, and the demo script ready to execute.
The Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install pciutils wget sudo kmod curl lsb-release cpio udev python3-pip libcanberra-gtk3-module -y
COPY . /
RUN chmod 743 run_demo.sh
This container will also use run_demo.sh shown above.
Pushing the Container to Microsoft Azure* Container Registry (ACR):
To deploy your custom container, push it to a private Azure Container Registry (it should also work from Docker Hub).
Refer to the documentation for pushing custom images to an Azure Container Registry: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli
After creating the registry, log into it on the device where you have the custom image:
az acr login --name {registry name}
When logging in, it is easiest to enable Admin User under Access Keys in the ACR and use the username and password generated there.
Log into Docker* using these credentials:
docker login {login server name} -u {Generated username} -p {Generated password}
Now you should be able to push and pull containers from your ACR.
Tag your custom image before pushing:
docker tag {image name} {login server name}/samples/{image name}:{version #}
Important note: if you have already pushed a container to the registry and then make changes to it, you will need to push it again under a different tag. If you push with the same tag, the ACR assumes no changes were made and skips the push.
To get around this, add a version number to the end of the tag and increment it each time you make an edit.
docker tag {image name} {login server name}/samples/{image name}:{version #}
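Putting this together, a tag-and-push cycle looks roughly like the following (the registry name, image name, and version number here are placeholders, and the docker commands are shown commented out as the step you would run on your build machine):

```shell
# Sketch of the versioned tag-and-push cycle; all names are placeholders.
LOGIN_SERVER=myregistry.azurecr.io
IMAGE=openvino-demo
VERSION=2
TAG="${LOGIN_SERVER}/samples/${IMAGE}:${VERSION}"
echo "${TAG}"
# docker tag "${IMAGE}" "${TAG}"   # apply the versioned tag
# docker push "${TAG}"             # push the tagged image to the ACR
```

Bumping VERSION on each edit guarantees the registry sees a new tag every time you push.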
The ACR is good about not duplicating data: I had three versions of a 2 GB Intel® Distribution of OpenVINO™ toolkit image in the ACR, each running a different demo, and the total space used was still only about 2 GB, since the shared layers are stored once.
Creating a Deployment Module
Now for the fun part: actually deploying a module you made!
Refer to the getting started guide above for setting up the IoT Edge Hub and IoT Edge device.
Once you’ve created an Edge Hub, and connected your device using the connection string, you’re ready to create a custom module.
Go to IoT Edge Devices, then click on the Edge device created earlier.
In that portal, click Set Modules.
Once there, you’ll have to provide the ACR credentials to the module; otherwise it will not be able to access the ACR and will return errors.
These credentials can be found in the Azure Container Registry portal, under Access Keys.
Under Access Keys, make sure Admin User is enabled; the remaining fields are easy to find there (refer to the pictures above under pushing the container to the ACR).
After you’ve done this, scroll to the bottom, click Add + under Deployment Modules, and select Add IoT Edge Module.
Give the module a name and provide the Image URI (the full tag you gave the image before pushing).
Container Create Options:
After you give the module a name and link the Image URI, you define the command-line arguments in the Container Create Options section. The documentation for these options is actually found on docs.docker.com rather than in Microsoft’s documentation.
That documentation gives examples of how to define the container create options. For either container, we need to define a bind mount for X11 forwarding and set Privileged to true. Under HostConfig you define things like bind mounts, ports, and Privileged mode; outside of it, you define things like the Entrypoint command.
It will look like this:
{
    "HostConfig": {
        "Binds": [
            "/tmp/.X11-unix:/tmp/.X11-unix"
        ],
        "Privileged": true
    }
}
For the container using bind mounts, we need to define an extra mount for the Intel® Distribution of OpenVINO™ toolkit library, and also define the Entrypoint command. It will look like this:
{
    "HostConfig": {
        "Binds": [
            "/tmp/.X11-unix:/tmp/.X11-unix",
            "/opt/intel/computer_vision_sdk_2018.3.343/:/opt/intel/computer_vision_sdk_2018.3.343/"
        ],
        "Privileged": true
    },
    "Entrypoint": [
        "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/demo/demo_security_barrier_camera.sh"
    ]
}
After the create options, you can define the restart policy. For this demo I chose on-failed; otherwise the demo re-runs every time the container restarts (which, by default, is always). It’s easier to restart the demo manually whenever you want to run it again.
The desired running status should be running, unless you want the module deployed but not started for some reason.
Finally, you need to define one environment variable for X11 forwarding to work:
DISPLAY = :0
Note: X11 forwarding can differ per machine. Make sure X11 forwarding is enabled on your device, and run xhost + (or otherwise manage how the container connects to your X server). The value of DISPLAY may also differ.
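On the target device, the host-side setup typically looks like the following sketch (":0" is a common default, but your DISPLAY value may differ; the xhost step is shown commented out since it must be run on the host's desktop session):

```shell
# Host-side X11 setup; the DISPLAY value varies per machine.
export DISPLAY="${DISPLAY:-:0}"
# xhost +   # allow the container to connect to the host's X server
echo "DISPLAY=${DISPLAY}"
```

The DISPLAY value set in the module's environment variables must match what the host's X server actually uses.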
Deploy the Deployment Modules
Click Save after finishing the settings. You can add as many IoT Edge Modules as you want in one deployment.
Then click Next twice (we won’t edit the routes for now) and hit Submit. The module will be deployed, and the demo will start running on the target device.
The container with the Intel® Distribution of OpenVINO™ toolkit pre-installed should come up much faster. The container using bind mounts may take noticeably longer (4-5 minutes), because it downloads additional components needed to run the demo (this could likely be optimized).
Run iotedge logs {container name} -f to see the container’s logs. You can have multiple containers in one deployment, so you’ll have to check each container’s logs separately.
Azure IoT Edge + Container Registry
This tutorial walks you through deploying the Intel® Distribution of OpenVINO™ toolkit module via Azure IoT Edge from end to end. Several steps are already documented in Microsoft tutorials, so those links are provided for extra reference.
Setting up Microsoft Azure* IoT Hub and IoT Edge
This tutorial heavily references https://docs.microsoft.com/en-us/azure/iot-edge/quickstart-linux
Use that link for the commands to install the Edge Agent and the security daemon.
To deploy any module, you will need to create an Azure IoT Hub, which keeps track of all the edge devices you can deploy to.
First create a resource group to hold all the resources:
az group create --name IoTEdgeResources --location westus
Then, create the Hub in that resource group:
az iot hub create --resource-group IoTEdgeResources --name {hub_name} --sku F1
After --sku: if you don’t already have a free hub created, keep the F1 flag; otherwise switch it to S1 (you will be charged accordingly).
After you’ve made the IoT Hub, you need to add an Edge Device to the Hub.
az iot hub device-identity create --hub-name {hub_name} --device-id {device name} --edge-enabled
The Edge device has a connection string, which is used later as a key to connect to Azure. Retrieve it and save it for future use:
az iot hub device-identity show-connection-string --device-id {device name} --hub-name {hub_name}
Next, install the IoT Edge Hub and Edge Agent on the device you plan to deploy the modules to. These are what Azure uses to manage containers and deployments.
- Refer to the commands listed in the URL above for installing these along with the security daemon.
- You'll also need a container runtime; Docker should work fine. (Moby, the open-source project Docker is built from, would not work on my IEI Tank* for some reason.)
After you’ve installed these tools, open the IoT Edge config file:
sudo nano /etc/iotedge/config.yaml
In this file, find the device_connection_string entry and replace its value with the connection string generated above.
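The relevant part of config.yaml looks roughly like this excerpt (the connection string shown is a placeholder built from the hub and device names used earlier; the key is elided):

```yaml
# /etc/iotedge/config.yaml (excerpt)
provisioning:
  source: "manual"
  device_connection_string: "HostName={hub_name}.azure-devices.net;DeviceId={device name};SharedAccessKey=..."
```

After editing, restart the IoT Edge daemon so it picks up the new connection string.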
This should link your device with Azure!