
Deploying YOLOv5 Model on Raspberry Pi with Coral USB Accelerator

1 Feb 2021 · CPOL · 5 min read
In this article we’ll deploy our YOLOv5 face mask detector on Raspberry Pi.

Introduction

In the previous article, we tested a face mask detector on a regular computer. In this one, we’ll deploy our detector solution on an edge device – Raspberry Pi with the Coral USB accelerator.

The hardware requirements for this part are:

  • Raspberry Pi 3 / 4 with an Internet connection (only for the configuration) running the Raspberry Pi OS (previously called Raspbian)
  • Raspberry Pi HQ camera (any USB webcam should work)
  • Coral USB accelerator
  • Monitor compatible with your Pi

The Coral USB accelerator is a hardware accessory designed by Google. It adds an Edge TPU coprocessor to your system, enabling it to run machine learning models at very high speeds. I suggest you have a look at its data sheet. You can plug it into virtually any device. In our case, we’ll pair it with our Pi because the board on its own can’t run detection at a usable frame rate.

A Raspberry Pi with an accelerator isn't the only portable option for this task. In fact, if you're working on a commercial solution, you'd be better off with a standalone Coral board, or something from NVIDIA's Jetson hardware series.

Both of these offer attractive options for scaling up toward mass production once you're done prototyping. For basic prototyping and experimentation, however, Raspberry Pi 4 with a Coral accelerator stick will do the job.

A final note: you don't have to deploy this solution to a Raspberry Pi if you don't want to. This model will run happily on a desktop or laptop PC if that's what you'd prefer to do.

Initial Steps on Raspberry Pi

Unplug the Coral stick if it’s already connected to the Pi. I’ll let you know when you can attach it. Before we can run YOLO models and take advantage of the Coral device, we need to install several libraries.

First of all, let’s update the Raspberry Pi board. Open up a terminal window and run:

Bash
sudo apt-get update
sudo apt-get upgrade

The above lines could take several minutes to complete. Check if the camera’s interface is active by clicking the top left Raspberry icon > Preferences > Raspberry Pi configuration > Interfaces tab. Select the camera’s Enable radio button and click OK. Reboot your Raspberry Pi board.

Preparing the Directory and Creating the Virtual Environment

Let’s start by creating an isolated environment to avoid dependency conflicts in the future. This is a practice worth adopting from now on, as it will save you from a lot of issues. To install the virtualenv package, run:

Bash
sudo pip3 install virtualenv

Once the installation is done, run these lines to get everything prepared:

Bash
mkdir FaceMaskDetection
cd FaceMaskDetection
git clone https://github.com/zldrobit/yolov5.git
cd yolov5
git checkout tf-android
git clone https://github.com/sergiovirahonda/FaceMaskDetectionOnPi.git
cd FaceMaskDetectionOnPi
mv customweights detect4pi.py requirements4pi.txt /home/pi/FaceMaskDetection/yolov5
cd ..

Alright, now it’s time to create the isolated environment in the working directory (the last cd .. put us back in the yolov5 folder) and activate it so the dependencies can be installed:

Bash
python3 -m venv tflite1-env
source tflite1-env/bin/activate

Finally, run the following:

Bash
pip install -r requirements4pi.txt
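
Before going further, it’s worth confirming that the Pi can actually see your camera. Below is a minimal sanity check using OpenCV; it assumes requirements4pi.txt pulls OpenCV into the environment (if it doesn’t, run pip install opencv-python first) and that your camera is exposed as device 0, which is also what the detection script relies on later via --source 0. Save it as camera_check.py (a file name of my choosing) and run it with python3 camera_check.py:

Python
# camera_check.py - grab a single frame from the default camera (device 0)
import cv2

cap = cv2.VideoCapture(0)   # open the first camera the OS exposes
ok, frame = cap.read()      # try to read one frame
cap.release()

if ok:
    print("Camera OK, frame size: {}x{}".format(frame.shape[1], frame.shape[0]))
else:
    print("Could not read from the camera - check the connection and the Interfaces tab")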

Installing TFLite Interpreter and PyTorch on Raspberry Pi

I previously mentioned that Raspberry Pi boards are not very good at running TensorFlow models; they simply don’t have the required processing power. To make this project viable, we need to take the TFLite route, and then install PyTorch because our script still relies on some of its methods. Note that we won’t run the model on top of PyTorch; we’ll only use a few of its utilities. Let’s start by determining your Pi’s processor architecture with the following command:

Bash
uname -a

If the output shows that your processor architecture is armv7l, install the corresponding TFLite version.

If your Pi has Python 3.6 installed:

Bash
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp36-cp36m-linux_armv7l.whl

If your Pi has Python 3.7 installed:

Bash
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl

If your Pi has any other Python version or a different processor architecture, I suggest that you check this guide and, in the "Install just the TensorFlow Lite interpreter" section, look for the Linux (ARM32 or AArch64) platform options.
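
Whichever wheel you installed, a quick import test tells you whether the interpreter is usable. This is just a sanity check of my own, not part of the detector; run it from the yolov5 folder so the model path we copied earlier resolves:

Python
# tflite_check.py - confirm the TFLite runtime loads and can open our model
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="customweights/best-fp16.tflite")
interpreter.allocate_tensors()
print("Input shape:", interpreter.get_input_details()[0]["shape"])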

Once that’s done, it’s time to install PyTorch – an open-source Python machine learning library used in multiple types of applications, including those that involve computer vision. Our Raspberry Pi doesn’t have the regular Intel x86 processor architecture. Instead, it has an ARM one; therefore, all Python packages that you want to install on the Pi must be compiled for this specific architecture.

There is no official PyTorch package for ARM processors. We can still install PyTorch from a pre-compiled wheel, but the right wheel depends on the processor version your Pi has. Explore this NVIDIA forum to find the proper version and installation instructions. I also found this repo that contains packages for some other ARM builds. I have a Raspberry Pi with an ARMv7l processor.

If your Raspberry Pi’s processor is the same, you can use the wheel I’ve used. You’ll find it available at my Git repo.

Let’s start by installing the PyTorch dependencies required for it to run smoothly:

Bash
sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools

Once that’s completed, browse to the Documents folder on your Raspberry Pi and issue these commands to get the .whl file:

Bash
cd /home/pi/Documents
git clone https://github.com/sergiovirahonda/PyTorchForRPi.git

Once that’s done, run this to begin the installation:

Bash
cd /home/pi/Documents/PyTorchForRPi
pip3 install torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl

The last command will take about two minutes to complete. If everything goes well, congratulations! You’re done with the hardest part.
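
Before moving on, verify that the wheel imports cleanly. This is a small check of my own:

Python
# torch_check.py - confirm PyTorch is importable on the Pi
import torch

print("PyTorch version:", torch.__version__)
print("Sanity check:", torch.ones(2, 2).sum().item())  # should print 4.0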

Installing the Coral USB Accelerator Dependencies on Raspberry Pi

This step is optional: you only need it if you want to use the Coral stick to accelerate detection.

In the same terminal window, navigate to the project’s folder. Once there, add Google’s Coral package repository to your apt sources with the following lines:

Bash
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

Now we can finally install the Edge TPU runtime, which is the only requirement for using the USB accelerator. Install the standard libedgetpu library (unlike the maximum-frequency version, it won’t make your Coral overheat) by running:

Bash
sudo apt-get install libedgetpu1-std

You can now plug the Coral USB accelerator into the Raspberry Pi (on a Pi 4, use a USB 3.0 port for the best performance).
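
With the stick plugged in, you can check that the Edge TPU runtime is visible from Python. load_delegate ships with the tflite_runtime package we installed earlier; this snippet is only a sanity check:

Python
# edgetpu_check.py - confirm the Edge TPU delegate can be loaded
from tflite_runtime.interpreter import load_delegate

try:
    load_delegate("libedgetpu.so.1")
    print("Edge TPU delegate loaded - the accelerator is ready")
except ValueError as err:
    print("Could not load the Edge TPU delegate:", err)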

Deploying the Detector on the Raspberry Pi Board

Make sure your Coral USB accelerator and webcam are plugged in. Open a terminal window, navigate to the yolov5 folder (where we created the virtual environment), and activate the environment:

Bash
cd /home/pi/FaceMaskDetection/yolov5
source tflite1-env/bin/activate

Install the project’s requirements by running:

Bash
pip3 install -r requirements4pi.txt

To initialize the detector without the Coral USB accelerator, issue:

Bash
python detect4pi.py --weights customweights/best-fp16.tflite --img 416 --conf 0.45 --source 0 --classes 0 1

To run the detector with the Coral accelerator instead, issue:

Bash
python detect4pi.py --weights customweights/best-fp16.tflite --img 416 --conf 0.45 --source 0 --classes 0 1 --edgetpu

Both options will open the webcam and initialize the detection as expected.
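
We won’t walk through detect4pi.py line by line, but the --edgetpu switch follows the standard TFLite pattern: attach the Edge TPU delegate when the interpreter is created. The sketch below illustrates that pattern; it is a simplified illustration of the technique, not the script’s actual code:

Python
# Sketch: creating a TFLite interpreter with or without the Edge TPU delegate
from tflite_runtime.interpreter import Interpreter, load_delegate

def make_interpreter(model_path, use_edgetpu=False):
    if use_edgetpu:
        # Offload supported operations to the Coral stick
        return Interpreter(model_path=model_path,
                           experimental_delegates=[load_delegate("libedgetpu.so.1")])
    # CPU-only fallback
    return Interpreter(model_path=model_path)

interpreter = make_interpreter("customweights/best-fp16.tflite")
interpreter.allocate_tensors()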

Next Step?

Actually, none. We’ve reached the end of this challenging project! I hope the final outcome is what you expected.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)