Check out a brief demo of voice-enabled sound control:
[Fig 1.] .GIF screencast of my GUI while I was prototyping ^.^
Introduction
Hello and welcome to my latest CodeProject article!
The last time I wrote an article here on CodeProject (October last year), I was still in high school. Since then, I've started studying Mathematics and IT at university, and have generally been too busy to write articles :-( However, when I finished my first semester a few weeks ago, I knew I had a few weeks' break before my second semester began, so I considered writing another CodeProject article. Then I checked CodeProject and saw that there was an IoT competition on! Feeling inspired by the release of Google Home and Amazon Alexa, I thought I'd give voice-enabled home automation a try - and thus this project was born :cool:
In this article, I'm going to show you how you too can build your own personal assistant. We'll start by configuring a Raspberry Pi as a WiFi router. After that, we will install Apache & Django, prototype our UI, and build our own wrappers around several APIs. We will then build an Android app that consumes the APIs we built, and add voice recognition and synthesis to our app by combining CMU Sphinx and Microsoft's Bing Voice API. I will then show you how to use another Raspberry Pi and a USB webcam to create a real-time security feed. We'll conclude the article with a demo of how to control your SONOS sound system with Python's SoCo library, and take a look at future expansion possibilities for this project.
Why home voice automation? I think voice automation is a cool area of research because it can be applied in a wide variety of ways. For example, the personal assistant that I have created (Zukuno) could easily be optimized to give people with disabilities (quadriplegia, blindness, etc.) greater control of their surroundings through their voice. It could also be extended in industrial environments to automate equipment and monitor IoT services. The innate flexibility of voice technologies made it somewhat challenging to choose a category for this article.
Anyway, enough introduction. Let's get coding!
What we'll need
To build our personal assistant, we will need:
- Two Raspberry Pis (one for the master WiFi router, one for controlling the security camera)
- Two 8GB or greater micro SD cards
- A USB speaker
- A USB camera
- Power supplies
- An available ethernet port (for the master wifi router)
- An Android-based smartphone or tablet (an old one will do - we use this to display our GUI and handle microphone input)
- An HDMI cable and an HDMI enabled screen (not essential, but very useful when troubleshooting your rPi's)
- If you're not using the latest model Raspberry Pi (model 3), you will need USB WiFi adaptors to complete this project.
Useful to have:
- An intermediate amount of programming experience
- Lots of patience
- A few weekends of spare time
Bird's Eye View
[Fig 2.] A bird's eye view of our project.
The core component of our entire system is a single Raspberry Pi 3. Connected to my main home network via its ethernet port, the Raspberry Pi 3 is both a WiFi router and an Apache/Django server, and provides a central API for Zukuno-optimized IoT devices to access. By utilizing Android tablets placed in strategic locations throughout the target space of a Zukuno installation, Zukuno is able to listen for commands and display visual output. A USB speaker attached to the Raspberry Pi provides Zukuno with centralized audio output, although the tablets/devices could be configured to provide localized output if needed.
The advantage of using wall-mounted tablets is that they provide adequate microphone quality, superb audio-visual output, as well as a local environment on which to perform CPU-intensive tasks such as voice recognition. If voice recognition were performed solely on the Pi, the CPU load on the Pi would dramatically increase with every room added to the system, eventually rendering the entire system unworkable.
Getting Started with the Raspberry Pi
Setting it up
First released in 2012, the Raspberry Pi is now in its third official iteration. Until May this year, I had never bought a Raspberry Pi, mainly because they seemed too limited. However, when the Raspberry Pi 3 arrived with inbuilt WiFi, I was finally motivated to purchase one. I'm glad I eventually did, because otherwise I would never have been able to write this article :-)
[Fig 3.] My Raspberry Pi 3, inside a case, with a ruler for scale
I highly recommend you get a case for your Raspberry Pi since they are very small and easily damaged by large objects.
Downloading & burning a Raspbian image
This is fairly straightforward to do. In fact, the simplicity of this process is one of the major selling points of the Raspberry Pi (and, for that matter, other IoT devices): if you ever mess up your installation, it is very easy to remove the SD card, re-image it, and start your Pi from scratch with a completely clean OS.
Head over to the official Raspbian download page to download Raspbian.
Once you've downloaded your Raspbian image, insert an 8GB or greater MicroSD card into your machine (use an adapter if necessary). Then, unzip the image file and use Win32DiskImager to burn the image to your SD card.
Booting for the first time
To boot your Pi for the first time, you should make sure you have a micro-USB power supply capable of delivering at least 2.5A at 5V (the official recommendation for the Pi 3). I have personally found that the Pi generally runs fine on slightly less power, such as that supplied by a phone charger, although it will sometimes brown out while running CPU-intensive tasks on a low power supply.
Begin by inserting your imaged SD card into the card slot underneath the Raspberry Pi. Optionally, connect your Pi to the internet by plugging an active ethernet cable into the ethernet port. If you have an HDMI-enabled monitor, grab an HDMI cable and plug your Pi into your monitor. Once all of that is done, insert the micro-USB charger into the power connector on the Pi. Your Raspberry Pi will automatically boot, and a stream of text output will flood your screen.
[Fig 4.] My Raspberry Pi 3 booting.
Changing your Password
The default login for your Raspberry Pi is "pi" (username, no quotes) and "raspberry" (password, no quotes).
I highly recommend changing your password to something memorable and secure before continuing any further. A lot of people skip this step, thinking "I'll be right", but I know of several people who have learnt the hard way why this step is important. Just do it.
To change your password, enter this command:
$ sudo passwd
You will be prompted to enter your current password (raspberry); after that, you will be able to choose a new password.
Once you're done, try entering
$ startx
to play with the GUI that comes with Raspbian. You'll need to plug a USB mouse into the Pi to use the GUI. Take a little while to familiarize yourself with the operating system.
[Fig 5.] The Raspbian GUI
Creating a WiFi access point
One of the coolest things you can do with your Pi without writing any code is configuring it to act as a WiFi router. It's a great way to whet your IoT appetite, and is a crucial step towards building our personal assistant.
Installing hostapd and dnsmasq
There are a number of tutorials online on how to set up your Raspberry Pi as a WiFi access point. Because they were written at different times, many of them conflict with each other, so I thought I'd share the method that worked for me on the Raspberry Pi 3, running Raspbian Jessie (using the inbuilt WiFi hardware).
Start by cracking open a terminal and installing these two packages: hostapd and dnsmasq
$ sudo apt-get update
$ sudo apt-get install hostapd dnsmasq
Configuration
Open up /etc/network/interfaces for editing:
$ sudo nano /etc/network/interfaces
Edit it so that it looks something like this:
[Fig 6.] Editing /etc/network/interfaces
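In case the figure is hard to read, the key part is giving wlan0 a static address. Here's a minimal sketch of the wlan0 stanza, assuming the 172.24.1.x range that dnsmasq will hand out later in this article (leave your other interfaces as they are):

allow-hotplug wlan0
iface wlan0 inet static
    address 172.24.1.1
    netmask 255.255.255.0
    network 172.24.1.0
    broadcast 172.24.1.255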
After that, create a new configuration file at /etc/hostapd/hostapd.conf:
$ sudo nano /etc/hostapd/hostapd.conf
Enter this content:
interface=wlan0
driver=nl80211
# Name of your new WiFi hotspot (hostapd treats everything after the = as the SSID, so don't put a comment on this line)
ssid=Pi3-AP
hw_mode=g
channel=6
ieee80211n=1
wmm_enabled=1
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase={YOUR DESIRED WIFI PASSWORD GOES HERE}
rsn_pairwise=CCMP
Make sure you choose a WiFi password and replace {YOUR DESIRED WIFI PASSWORD GOES HERE} with the correct value. I know it's terrible security practice to store passwords in plain text, but unfortunately this is the way hostapd works. Hostapd is an open-source project, so if you're concerned about this, you can always fork their code and implement proper password hashing.
Now run these commands in order:
$ sudo service dhcpcd restart
$ sudo ifdown wlan0; sudo ifup wlan0
$ sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf
At this point, you should be able to use your smartphone or laptop to see your new WiFi network. However, you won't be able to connect or access the internet, because we haven't configured dnsmasq yet. Open up /etc/dnsmasq.conf:
$ sudo nano /etc/dnsmasq.conf
Then change the contents of that file to this:
interface=wlan0
listen-address=172.24.1.1
bind-interfaces
# Use Google's DNS servers
server=8.8.8.8
domain-needed
bogus-priv
dhcp-range=172.24.1.50,172.24.1.150,24h
Now, to enable IPv4 forwarding, open up /etc/sysctl.conf:
$ sudo nano /etc/sysctl.conf
Uncomment the line that says net.ipv4.ip_forward=1 by removing the # from the start of the line. Then press Ctrl-X, y, Enter to save.
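To apply the change immediately without rebooting, you can also run:
$ sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"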
Now enter the following commands:
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
$ sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
$ sudo nano /etc/rc.local
Just above the line exit 0, add the following line of text:
iptables-restore < /etc/iptables.ipv4.nat
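For hostapd to start automatically on boot, you also need to point its init script at the configuration file we created. Open /etc/default/hostapd and set the DAEMON_CONF line like so (this step comes from the frillip.com tutorial referenced below):

DAEMON_CONF="/etc/hostapd/hostapd.conf"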
If you have any trouble with the above steps, try googling, or reference the articles I used to write this:
- http://raspberrypihq.com/how-to-turn-a-raspberry-pi-into-a-wifi-router/
- https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/
Check your connection
Reboot your Pi with:
$ sudo reboot
If all went well, once your Pi has finished loading, you should be able to connect other devices to your new WiFi network!
[Fig 7.] iPhone connected to my Pi3-AP network
Connecting via PuTTY
Now that you have set up your network, connect your PC to it. If you don't already have PuTTY, download it from the official site, then run it.
[Fig 8.] PuTTY
Enter the details for your Raspberry Pi (IP address: 172.24.1.1, Port: 22 - that's it!), then click "Open".
[Fig 9.] Using PuTTY to SSH into our new Raspberry Pi WiFi router
Now that we can access a terminal remotely from our desktop, we don't really need the Raspbian GUI anymore. If you wish, you can disconnect your Pi from your monitor and tuck it safely away in a nearby corner where it can access power and ethernet. Beginners should note that you do not use your WiFi password to access the Pi through PuTTY; instead, use the password for your Raspbian account that you set with passwd earlier in this article.
Setting up our Workflow
Git
Git is an immensely popular version control system, often associated with the independent service GitHub, although many other Git-based services exist, and many people run their own Git servers.
For the rest of this article, we will be using Git to push and deploy our code to our Raspberry Pi.
Visual Studio Code
Visual Studio Code is a brilliant modern text editor that I highly recommend using for this article. There are alternative text editors, which you can use if you wish, but I am a firm believer in learning to adapt to all sorts of text editors so that you can pick the best tool for each task with an open mind. For example, although I recommend VS Code for this article, I wouldn't use it to edit a 2GB .csv file. Something like Vim, Nano, or perhaps even Notepad++ on Windows would be better suited to that task.
Creating & using repositories
To create a new repository in Git on your Raspberry Pi, SSH into your Pi with PuTTY, and create a new directory called "GitTest".
$ mkdir GitTest
Then enter that directory.
$ cd GitTest
Now, initialize a new git repo. Note that we create a file before committing, since Git won't let you make a commit with nothing staged:
$ git init
$ touch README.md
$ git add .
$ git commit -m "First Commit"
To push to a remote repository, such as one on GitHub, do the following:
$ git remote add origin URL-of-your-remote-repository
$ git remote -v
$ git push -u origin master
After you've made local changes, you can simply push them to a repo through the following:
$ git push
Likewise, remote changes can be pulled down to a local repo:
$ git pull
Building our Backend
Zukuno is split into two parts: a UI served dynamically through Apache to participating tablets, and a Django-powered API consumed by the UI that makes everything tick. In this section, we will focus on building Zukuno's backend API.
Installing & Configuring Apache
Start by installing the Apache package:
$ sudo apt-get update
$ sudo apt-get install apache2
After installation completes, start the Apache service:
$ sudo service apache2 start
You can now test your server by opening Chrome on your computer and navigating to 172.24.1.1 (the IP address of your Raspberry Pi). You should be greeted by the Apache welcome page. Our WiFi router is now a server too :cool:
Files served through the Apache server are located at /var/www/ on your Raspberry Pi. Navigate to this location, create a git repository and push it to GitHub, then clone it to your desktop. From then on, whenever you make changes to your desktop repo, you can push them to GitHub with git push and load them onto your Raspberry Pi with git pull.
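For example, a full round trip looks something like this (the repo name zukuno-ui is just a hypothetical placeholder):

# On your desktop
$ git clone https://github.com/your-username/zukuno-ui.git
$ cd zukuno-ui
# ...edit some files...
$ git add .
$ git commit -m "Update UI"
$ git push

# Back on the Pi, inside the repo under /var/www/
$ git pull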
Installing & Configuring Django
We are slowly getting to the more advanced parts of this build. Installing Django alongside Apache is somewhat tricky, so I highly recommend following the official documentation for doing so, perhaps setting up a virtual environment to work in, etc. :-)
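If you just want Django itself for now (the tricky part is the Apache integration, not Django), it can be installed with pip:
$ sudo pip install django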
You can test your Django installation using the following command:
$ python -m django --version
Assuming you've got Django covered, let's create a new folder for our project:
$ cd ../
$ mkdir django
$ cd django
$ mkdir zukuno
$ cd zukuno
Now let's create a new Django project:
$ django-admin startproject zukuno
Now that we've done that, let's ensure that we own all the files that we just created:
$ sudo chown -R $USER:$USER .
We will now have a directory with the following structure:
zukuno/
    manage.py
    zukuno/
        __init__.py
        settings.py
        urls.py
        wsgi.py
Let's start our server. Navigate into the folder containing manage.py, then run the following command:
$ python manage.py runserver 0.0.0.0:8000
The 0.0.0.0:8000 on the end tells Django to accept external connections, using port 8000.
[Fig 10.] Running Django on our Raspberry Pi
Now, if you load http://172.24.1.1:8000/ on a device connected to the Raspberry Pi's network, you will see something like Fig 11 (taken from a very early build of our Android app, which we'll discuss later).
[Fig 11.] The default Django welcome page, running inside an early build of the Android app we'll cover later
To close the server gracefully, press Ctrl-C while the terminal is active. Unfortunately, I have found that sometimes Django fails to exit completely, blocking port 8000 so that the server cannot be restarted. If this happens to you, use this command to force-clear the port so you can restart the server:
$ sudo fuser -k 8000/tcp
Consuming a GTFS feed
Introduction to GTFS
According to Google's official documentation, GTFS Realtime is a "feed specification that allows public transportation agencies to provide realtime updates about their fleet to application developers". It is the realtime extension of the General Transit Feed Specification (GTFS).
GTFS, as a general protocol, also requires several static files for reference whenever real-time data is unavailable.
When building this project, I referenced GTFS data from the following sources:
- https://translink.com.au/about-translink/open-data
- https://gtfsrt.api.translink.com.au/
Installing Google's Python wrapper
[Fig 12.] Installing gtfs-realtime-bindings on Windows.
Fortunately for us, we don't need to spend ages building a feed parser for this protocol. Instead, we can use Google's pre-built python package.
Installing it takes literally one line:
$ pip install --upgrade gtfs-realtime-bindings
Trying it out
Create a new file called gtfs-test.py in your home directory on your Raspberry Pi:
$ nano gtfs-test.py
[Fig 13.] Using the GTFS library with the code below
Let's try something simple. Add the following code to the file:
from google.transit import gtfs_realtime_pb2
import urllib

# Download the realtime feed and parse the protocol buffer (this is Python 2)
feed = gtfs_realtime_pb2.FeedMessage()
response = urllib.urlopen('https://gtfsrt.api.translink.com.au/Feed/SEQ')
feed.ParseFromString(response.read())

# Collect each unique vehicle ID seen in the feed
vehicles = []
for entity in feed.entity:
    if entity.trip_update.vehicle.id not in vehicles:
        vehicles.append(entity.trip_update.vehicle.id)
        print entity.trip_update.vehicle.id

print "There are " + str(len(vehicles)) + " buses on the road"
Save and exit. Then run the file using
$ python gtfs-test.py
The last line of the generated output will be something along the lines of this:
There are 974 buses on the road
We have just collected the unique vehicle identifiers and used their total to calculate how many buses are active right now :cool:
Our code
To get started, stop the Django process if it's running, then navigate to the folder containing manage.py in your Django project. Once you're there, run the following:
$ python manage.py startapp gtfs
This will create a new "app", gtfs, which is like a submodule of a Django project. Open gtfs/views.py in nano and change it to the following:
from django.shortcuts import render
from django.http import HttpResponse, JsonResponse
from google.transit import gtfs_realtime_pb2
import urllib

def index(request):
    # Download and parse the realtime feed
    feed = gtfs_realtime_pb2.FeedMessage()
    response = urllib.urlopen('https://gtfsrt.api.translink.com.au/Feed/SEQ')
    feed.ParseFromString(response.read())
    trip_updates = []
    for entity in feed.entity:
        if entity.HasField('trip_update'):
            for stop_update in entity.trip_update.stop_time_update:
                if stop_update.stop_id == '<the stop you wish to use>':
                    # In GTFS Realtime, a positive delay means the vehicle is
                    # running late; a negative delay means it is ahead of schedule
                    delay = stop_update.arrival.delay
                    append = " late"
                    if delay < 0:
                        delay = delay * -1
                        append = " ahead"
                    # Round sub-minute delays to a nominal "30 seconds"
                    if delay < 60:
                        delay = "30 seconds" + append
                    else:
                        delay = str(delay / 60) + " minutes" + append
                    trip_updates.append({
                        'route_id': entity.trip_update.trip.route_id,
                        'time': stop_update.arrival.time,
                        'delay': delay,
                        'vehicle_id': entity.trip_update.vehicle.id
                    })
    # Sort upcoming services by arrival time
    trip_updates.sort(key=lambda t: t['time'])
    return JsonResponse(trip_updates, safe=False)
To find the stop outside your house, refer to the static files available as part of the GTFS standard. You should be able to look up your street name and find a matching stop.
The code above is basically a feed simplifier. It takes the huge protocol-buffer response from the real-time feed, extracts the tiny bit of information that we want, and returns it as a much smaller JSON structure.
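One thing the listing above doesn't show: the view needs to be wired into your project's URL configuration before http://172.24.1.1:8000/gtfs/ will resolve. Here's a minimal sketch for the Django 1.x versions current at the time of writing (the file layout assumes the default startproject structure):

# zukuno/urls.py
from django.conf.urls import url
from gtfs import views as gtfs_views

urlpatterns = [
    url(r'^gtfs/$', gtfs_views.index),
]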
Future Improvements
The code samples I have written above are only a very simple demonstration of what is possible here. For example, I could track this data for a few months, then apply machine-learning algorithms in an attempt to understand when transport delays happen, why they happen, and how one can avoid them. However, I only have so much time to work on this project, so I have to leave this as-is for now.
Using Microsoft's TTS engine
Why this engine?
I chose to use Microsoft's Speech API because:
- It has a reasonable free quota per month
- It is adequately accurate
- It was easier to use than other alternatives that I tried
Getting an API access key
To get an API keyset to use when compiling my sample code, or when building your own apps, head over to https://www.microsoft.com/cognitive-services/en-us/speech-api
Trying it out
First, we pull in the modules we'll need (these snippets are Python 2, hence httplib and urllib) and define some key parameters:
import httplib
import json
import urllib

clientId = "<insert>"
clientSecret = "<insert>"
ttsHost = "https://speech.platform.bing.com"
params = urllib.urlencode({'grant_type': 'client_credentials', 'client_id': clientId, 'client_secret': clientSecret, 'scope': ttsHost})
headers = {"Content-type": "application/x-www-form-urlencoded"}
AccessTokenHost = "oxford-speech.cloudapp.net"
path = "/token/issueToken"
After that, we contact Microsoft's servers to request an access token:
conn = httplib.HTTPSConnection(AccessTokenHost)
conn.request("POST", path, params, headers)
response = conn.getresponse()
data = response.read()
conn.close()
accesstoken = data.decode("UTF-8")
ddata=json.loads(accesstoken)
access_token = ddata['access_token']
Now that we have our access token, we can query their servers with the text we want dictated:
# text_to_dictate holds the string we want spoken
body = "<speak version='1.0' xml:lang='en-us'><voice xml:lang='en-us' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>" + text_to_dictate + "</voice></speak>"
headers = {"Content-type": "application/ssml+xml",
           "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
           "Authorization": "Bearer " + access_token,
           "X-Search-AppId": "99aab5c6fd784c0dac57d1fe01059c3c",
           "X-Search-ClientID": "099a0afb96044b298daeb6ce9cc131ce",
           "User-Agent": "TTSForPython"}
conn = httplib.HTTPSConnection("speech.platform.bing.com")
conn.request("POST", "/synthesize", body, headers)
response = conn.getresponse()
data = response.read()
conn.close()
Finally, we save the recording to our local disk:
file = open("/home/pi/response.wav", "wb")  # "wb": the WAV data is binary
file.write(data)
file.close()
Our code
The code that I used in my application is almost identical to the code above, except that I wrapped it in a function and created a class around it, so that my Django voice app could call it whenever necessary:
import subprocess
from voice import tts

response = "I've paused the music."
tts.getDictation(response)  # Use Microsoft's Speech API to dictate the input
player = subprocess.Popen(["omxplayer", "-o", "local", "/home/pi/response.wav"], stdin=subprocess.PIPE)
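The wrapper itself isn't shown above, but as a rough sketch it's just the earlier snippets bundled into a function. The getDictation name and the voice module come from the usage above; everything else here is illustrative:

# voice/tts.py - a minimal sketch of the wrapper (Python 2, matching the code above)
import httplib
import json
import urllib

def getDictation(text_to_dictate, out_path="/home/pi/response.wav"):
    # Request an access token, exactly as in the earlier snippet
    params = urllib.urlencode({'grant_type': 'client_credentials',
                               'client_id': "<insert>",
                               'client_secret': "<insert>",
                               'scope': "https://speech.platform.bing.com"})
    conn = httplib.HTTPSConnection("oxford-speech.cloudapp.net")
    conn.request("POST", "/token/issueToken", params,
                 {"Content-type": "application/x-www-form-urlencoded"})
    access_token = json.loads(conn.getresponse().read().decode("UTF-8"))['access_token']
    conn.close()

    # Synthesize the text and save the WAV locally
    body = ("<speak version='1.0' xml:lang='en-us'><voice xml:lang='en-us' "
            "xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice "
            "(en-US, ZiraRUS)'>" + text_to_dictate + "</voice></speak>")
    conn = httplib.HTTPSConnection("speech.platform.bing.com")
    conn.request("POST", "/synthesize", body, {
        "Content-type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
        "Authorization": "Bearer " + access_token,
        "User-Agent": "TTSForPython"})
    with open(out_path, "wb") as f:
        f.write(conn.getresponse().read())
    conn.close()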
Building our Frontend
My Approach to Frontend
I decided to build the frontend of Zukuno as a webapp. This had several positive and negative effects. One positive effect of this approach was that it was very easy to push updates to the GUI - all I had to do was update the files on the Raspberry Pi, and the changes would automatically flow to every device accessing those files. Unfortunately, one negative effect of this approach was that I was severely limited in terms of design and responsiveness, mainly due to the extremely poor performance of Android's WebView component.
Using Android Studio
If you want to build modern Android apps today, I recommend using Android Studio. Since the first stable release came out in December 2014, it has steadily been improving - and today it is by far the best IDE out there for developing native Android apps.
[Fig 14.] Android Studio
Using WebView
WebView is broken
As I mentioned earlier, our app uses Android's WebView component to display the Zukuno UI. Unfortunately, WebView is largely broken and unmaintained, and as a result can sometimes suffer performance issues and behave unexpectedly. In the next few sections, I'm going to explain a few workarounds I came up with.
Enabling touch events
One bizarre type of behaviour that I encountered while using WebView was that it was almost impossible to receive onClick events, or even onTouch events, from Javascript code within the webapp. I still have not found a clean workaround for this, although I did find the following hackish solution:
main_view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // Ignore multi-touch gestures
        if (event.getPointerCount() > 1) {
            return true;
        }
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN: {
                m_downX = event.getX();
                // Forward the tap to the page by simulating a click at the touch point
                main_view.loadUrl("javascript:$(document.elementFromPoint(" + event.getX() + ", " + event.getY() + ")).click()");
                break;
            }
            case MotionEvent.ACTION_MOVE:
            case MotionEvent.ACTION_CANCEL:
            case MotionEvent.ACTION_UP: {
                // Lock horizontal movement to the original down position
                event.setLocation(m_downX, event.getY());
                break;
            }
        }
        return false;
    }
});
Out-of-the-box, CSS animations and transforms are horrible in WebView. I found that the following settings helped a little. Animations are still far from perfect, but at least they are now watchable:
main_view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
main_view.getSettings().setAppCacheEnabled(false);
main_view.getSettings().setCacheMode(WebSettings.LOAD_NO_CACHE);
main_view.getSettings().setJavaScriptEnabled(true);
main_view.setVerticalScrollBarEnabled(false);
main_view.setHorizontalScrollBarEnabled(false);
Communicating through Javascript
One redeeming quality of the WebView component is how easily it supports Javascript communication. On other platforms I've used (CefSharp, the C# WebBrowser control, etc.), Javascript has been a pain to get working, but in Android's WebView it just works.
Java -> Javascript
main_view.loadUrl("javascript:alert('triggered')");
Javascript -> Java
Java:
public class HelloWorld {
    Context ctx;

    HelloWorld(Context c) {
        ctx = c;
    }

    @JavascriptInterface  // required on API 17+ for the method to be visible to Javascript
    public void fooBar() {
    }
}

main_view.addJavascriptInterface(new HelloWorld(this), "HelloWorldPortal");
Then, in Javascript:
HelloWorldPortal.fooBar();
Building our own GUI within WebView
Structure
The structure of our HTML is as follows:
<html>
<head>
    <title>Alex</title>
    <link href="https://fonts.googleapis.com/css?family=Roboto+Slab:400,300,700" rel="stylesheet" type="text/css">
    <link rel="stylesheet" href="style.css" />
</head>
<body>
    <div id="mic">
        <img src="img/mic.png" />
        <div class="spinner">
            <div class="bounce1"></div>
            <div class="bounce2"></div>
            <div class="bounce3"></div>
        </div>
    </div>
    <div id="rec-text">
        <span></span>
        <div class="circle-waiter"></div>
    </div>
    <div id="response"></div>
    <div id="shade"></div>
    <!--
    <div id="space">
        <div id="left-space">
            <img src="http://172.24.1.73:8081">
        </div>
        <div id="right-space"></div>
    </div>
    -->
    <div id="times">
        <h1>Upcoming Buses</h1>
        <div id="sched">
            <div class="entry">
                <span class="service">P205</span>
                <span class="time"><span>ETA</span>6:00am</span>
                <span class="data">
                    <div>
                        <span class="late">3 minutes late</span>
                        <span class="details">to City, George St.</span>
                    </div>
                </span>
            </div>
            ... lots of entries ...
        </div>
    </div>
    <script src="jquery.min.js"></script>
    <script src="main.js"></script>
</body>
</html>
Consuming the APIs we built
Playing activation/deactivation sound effects is very easy:
$.get("http://172.24.1.1:8000/voice/?p=up-listening");
$.get("http://172.24.1.1:8000/voice/?p=down-finished");
Sending other information is done like so:
$.get("http://172.24.1.1:8000/voice/?p=" + rec, function(result) {
showResponse(result);
setTimeout(function() {
StopRecognizing();
}, 2000);
});
The following code updates the live bus feed:
function getTripData() {
    is_collecting = true;
    console.log("Collecting data");
    $.getJSON("http://172.24.1.1:8000/gtfs/", function(result) {
        is_collecting = false;
        console.log("Parsing data");
        $('#sched').html('');
        data = result;
        $.each(result, function(i, field) {
            $('#sched').append(generateTripSchema(field.route_id, formatAMPM(field.time), field.delay));
        });
    });
}
function generateTripSchema(route_id, eta, delay) {
    var template = `
        <div class="entry">
            <span class="service">` + route_id + `</span>
            <span class="time"><span>ETA</span>` + eta + `</span>
            <span class="data">
                <div>
                    <span class="late">` + delay + `</span>
                    <span class="details">to City, George St.</span>
                </div>
            </span>
        </div>
    `;
    return template;
}
[Expanding our system] Using the SoCo Python Library
Using SoCo to control my home's Sonos system was amazingly simple.
Installation
$ pip install soco
Usage
import soco

# Discover every Sonos zone on the network and pause playback
zone_list = list(soco.discover())
for zone in zone_list:
    zone.pause()

# ...or start streaming a URI on every zone
zone_list = list(soco.discover())
for zone in zone_list:
    zone.play_uri('https://ia802300.us.archive.org/18/items/MozartCompleteWorksBrilliant170CD/Volume%201%28CD01%29%20Symphonies%20KV%2016-19-19A-22-43-45.mp3')
That's it! All I needed to do was call these functions in the voice Django app whenever a corresponding intent was detected.
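To give you an idea of the shape of that wiring, here's a hypothetical sketch - the function name and string matching are illustrative only; my real intent detection is more involved:

# Hypothetical sketch: mapping a recognized command to SoCo calls
import soco

def handle_music_intent(command):
    if "pause the music" in command.lower():
        # soco.discover() returns None when no zones are found
        for zone in soco.discover() or []:
            zone.pause()
        return "I've paused the music."
    return "Sorry, I didn't catch that."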
[Expanding our system] Setting up a security camera
One last extension I made to this project was adding a "security camera".
To do this, I used a second Raspberry Pi, a USB camera, a powered USB hub, and a package called Motion.
Installing Motion is easy:
$ sudo apt-get install motion
Once it has finished installing, open up /etc/motion.conf:
$ sudo nano /etc/motion.conf
Then hunt down the options referring to localhost connections and set them to off, so that you can access the stream from outside the host Pi. After that, reboot the Pi and you should be good to go.
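Depending on your Motion version, the relevant lines look something like this (option names vary between Motion releases - newer versions call them stream_localhost and webcontrol_localhost - so match whatever your motion.conf actually contains):

# /etc/motion.conf - allow access from other machines on the network
webcam_localhost off
control_localhost off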
[Fig 16.] Accessing our security camera through the browser
Conclusion
[Fig 17.] Close to the current version of my app
Thanks for reading to the end!
Zukuno is still very much a work in progress, and what I have presented here is only a skeleton of what it is capable of. Over the next few months, I'd like to gradually add more and more APIs into Zukuno's codebase, so that it can spread its wings and start fully living up to its goal of being the personal assistant for the Internet of Things. I learnt a lot by building what I've done so far, and I hope that this article inspires people to try building their own apps for the IoT. I personally believe that voice-enabled apps like Zukuno have enormous potential in helping disabled people (suffering from quadriplegia, blindness, etc.) take control of their surroundings and achieve greater independence.
In conclusion, I hope you enjoyed this article :-) Please leave any comments you may have in the forum below. I'm keen to hear your feedback.
History