
Stage 5-UROBI:The Ultimate Robotic Framework Over IoT For Arduino

This tutorial presents a step-by-step guide to building the ultimate robotic control and information-sinking system for Arduino-controlled robots over the Internet of Things.

Check out the demo Video:

 

 

1. Background
     1.1 Overview
     1.2 System Architecture

Part A: Mechatronics

2. Building The Robot
    2.1 OWI Robotic Arm Kit
    2.2 Building the Base
    2.3 Preparing Robotic Connections for Control System

3. Power Supply For Robot
    3.1 Building Power Supply Circuit
    3.2 Testing Robot With Power Supply
    3.3 Connection Through Battery

4.  Relay System for Robotic Control
     4.1 Overall Concept of Control Unit
     4.2 Voltage Line Selection
     4.3 Motor Control Relay Unit
     4.4 Connecting Robot with Relay Board
     4.5 Manual Testing of the Robot

Part B : IoT Integration

5.  Controlling the Robot 
     5.1 Control through Serial Port
           5.1.1 Using ArdOS
           5.1.2 Non-ArdOS Method
     5.2 Building Control Interface in .Net
     5.3 IoT Integration for Robotic Control
           5.3.1 Service Integration in C# Client
           5.3.2 IoT Test Client

6.  Remote Webcam Streaming for Robot Monitoring
    6.1 Objective and Design Issues
    6.2 Integrating EmguCV and FaceDetection
    6.3 Streaming WebCam images to LAN
    6.4 IoT .Net Client for Robotic Control
          6.4.1 MJPEG Stream Viewer
          6.4.2 Building IoT Remote Control Interface for Robotic Control
    6.5 Performance Tuning With BackgroundWorker

Part C: Control Modalities

7. Why Multimodality and What Different Modalities Are Available

8. Control Through Computer Vision
    8.1 Controlling Robot With Face Movement
    8.2 Control Through Laser Gesture

9. Integrating Speech Recognition And TTS 

10. Wireless Robotic Control with IR

Part D: Other Important IoT Services

11. Notification Services
      11.1 Understanding Notification System
       11.2 IFTTT (If This Then That)
              11.2.1 Introducing IFTTT
              11.2.2 Generating IFTTT-based Notification System for UROBI
       11.3 Sending a Mail Through Gmail on Alert
       11.4 Broadcasting Gmail Data to Other Channels through IFTTT
       11.5 Calling GmailSend Method From Serial Data Handler


12. Security Service
      12.1 Need of Security in IoT Context
      12.2 AES Encryption and Decryption
      12.3 Integrating Encryption-Decryption in UROBI Framework
      12.4 A Discussion about Other Services


13. Conclusion

1. Background

1.1 Overview

Throughout the span of the IoT Tutorial Contest, I have written articles on Arduino elaborating various aspects of hardware and software design for the IoT platform. This article is meant to be the 'Conclusion' of that effort. I was wondering what would make a good topic to end the series, something that could act as a capstone for the tutorials. Then I thought: why not a robot?

Robots are fun to make. Every DIY enthusiast's top agenda remains designing the coolest robot the world has ever seen, and there are robotic contests around the world to encourage robotic development. Robots are not restricted to hobby projects either: they provide a great platform for learning to control motors and actuators and to embed logic into an embedded system. Best of all, carefully designed robot control systems can be extended to much more complicated industrial control units.

There are numerous resources available on robotics, from building hobby kits to programming robots. Many tutorials give beginners a wonderful platform at both the hardware and the software level. But I am yet to come across a tutorial or robotic framework that integrates a basic robotic system with IoT while at the same time integrating multimodal control.

A few tutorials teach you to control a robot using small keypads, some explain robot control with IR, and so on. But as a DIY guy I always wanted to build a system that could be controlled in various ways: voice, body movement, remote control, mobile and every other cool control mechanism possible.

Composing this tutorial should be great fun for me, as I will be building the robot, coding it and integrating services simultaneously. I invite you to join me on this fun-filled journey.

1.2 System Architecture

To start with, a robot is literally built of motors. Different motors (stepper, DC, geared, servo) are used as joints for arm and leg structures to create a machine. When you start with robotics, several simple robot designs are available, and there are also some very good robotic kits to help you get started with construction. I prefer kits as they save time and are more robust. One kit that is good value for money is the OWI Robotic Kit. I am particularly fond of it because it offers a 5-degree-of-freedom arm, is relatively cheap in comparison to other kits of similar functionality, works on DC motors which do not require any encoders and, above all, offers the sheer fun of assembling and working with the robot.

So in this tutorial we are going to build a system to control the robotic arm shown in Figure 1.1.


Figure 1.1: OWI Robotic Arm Edge (pic courtesy: robotshop.com)

What is fascinating is that this particular kit comes with its own switch-based control box for controlling the robotic arm. We are going to hack the robot to bypass its own control system with our more sophisticated control mechanism, which will work over the Internet and LAN using our extended IoT framework. We will also alter the robot by assembling the arm kit on a moving rover to make a fascinating mobile robot, and we will put some sensors onboard so that the robot "performs" certain tasks.

Figure 1.2 presents a clearer picture of what we will do in this DIY project.


Figure 1.2: Proposed System Model

So we are first going to develop a relay-based control unit for controlling the motors of the robot, and thus the robot itself. This control unit is driven by an Arduino board which processes commands passed over the serial port. A C# serial communication client will acquire sensor data from the robotic unit through the Arduino board and push it to ThingSpeak for real-time data updates. The client will be extended with a camera interface that tracks the robot; the video stream will be made available remotely using a streaming server, so that a person operating the robot remotely can keep track of its position and other details.

The C# client will be connected to the cloud using our custom web service. It will continuously poll for commands for the robot, which can be generated remotely by a remote client.

The C# client will also extend the controlling facility with a local control mechanism: speech recognition and face detection will be combined with it for seamless multimodal control. The objective of combining different modalities into the control interface is to demonstrate their capabilities for generating commands for the robot.

Therefore, by the end of the tutorial, you should be able to integrate hardware, connected boards, services and input modalities into the Internet of Things.

So let us not waste further time and get started with the work.

Part A: Mechatronics

2. Building The Robot

2.1 OWI Robotic Arm Kit

The first step, obviously, is to buy yourself an OWI Robotic Arm Edge kit. Compare the prices in different stores before ordering one. The kit comes with a very good manual that shows how to assemble its parts. However, if you want to have a look at the system before you buy, you may check out this online manual.

This YouTube video resource is a good starting point for learning the assembly process. The robotic arm comes with 5 degrees of freedom, each degree of movement being provided by one gearbox driven by a DC motor. Figure 2.1 gives you a detailed view of the parts list.


Figure 2.1: Independent parts of the OWI Robotic Arm Kit

Obviously, once you get the kit, you need to complete the assembly process. At the beginning it is sufficient to know that a control system for this particular arm basically means a way of controlling the motors associated with the gearboxes. Each motor has a capacitor connected across its poles to help it start smoothly and without jerks. As all the degrees of freedom are balanced using appropriate gears, you do not have to worry about controlling the motors with PWM: the motors have to bear a high load and hence expect a 100% duty cycle. Thus controlling the arm literally means switching the motors on and off.

Coming back to the motors: every motor has a pair of wires coming out of it, terminated in a two-pin connector. You can see from page 24 of the online manual that all these wires are connected, using those connectors, to the switch controller that comes with the kit. We will hack the kit by bypassing its own controller and connecting the wires to our own control board.

Complete the robotic assembly, but there is no need to complete the switch board or to connect the wires from the motors to it. This part will be hacked, changed and modified to work with our control system. Once you successfully complete the robot, it should look like figure 1.1.

2.2 Building the Base

The biggest drawback of the OWI kit is that it is mainly an arm kit, which means no mobility. If a robot is not moving, it surely isn't much of a robot. So I decided to tweak the design a little and mount the arm on top of a chassis. There are plenty of chassis and tyres available in robotics shops. You can get yourself four tyres and a chassis over which you can mount the robot.

Here is one such link from ebay.in

In hindsight, you may also design a chassis of your own. While ordering, be careful about the length of the chassis, as you need to mount the arm on top of it. My chassis came with a bore at the bottom; I drilled into the base of the arm and screw-mounted the arm over it. You can see the finished assembly in figure 2.2.


Figure 2.2: Finished View of Robotic Arm Mounted on the Rover.

The chassis has to take a lot of load along with the arm, and therefore needs a geared motor. You can see in figure 2.2 a) that I have replaced one of the tyre shafts with a geared motor. If you want to move the rover left and right, you may prefer to have two motors. I selected a 100 RPM geared motor to keep the speed a little low, so that there are no jerks while starting and stopping. Figure 2.2 c) gives a clearer picture of the fitting, where you can see how the tyres can be attached to the shafts using screws.

 

2.3 Preparing Robotic Connections for Control System

We have six motors in total: five DC motors for the five gearboxes of the kit, and a sixth, geared motor for the rover. We need to bypass the switch that comes with the robotic kit completely. These motors provide two motions, forward and reverse, based on the polarity of the voltage applied to the motor poles. As our geared motor has a 12V-1A rating, I will use a 12V-1A power source and drive all the motors from it. Now the question is: how do we control these motors' switching and polarity using a 5V Arduino when the driving voltage is 12V?

If we did not need reverse motion, the motors could have been controlled using a simple transistor, as explained in this section of my tutorial on connected devices for Arduino.

However, when you change the polarity of the voltage applied to the motor, you are basically supplying -12V, so an NPN transistor would always be reverse biased: the base voltage can only ever be 5V, whereas the collector voltage would be -12V. Hence we have to use relays.

In order to understand how to control a motor using relays, please refer to the relay section of my tutorial on Arduino connected devices.

Before we get to the control part, we need a little modification to the wiring of the robotic unit. Once assembly is complete, you will find five pairs of wires coming out of the robotic arm. Each pair has one black and one colored wire: the black wire is ground and the colored wire is for applying voltage. As the relay tutorial should have made clear, to control any device through a relay and a microcontroller, the grounds of the microcontroller, the device and the relay must all be made common. So, to control the motors through our control system, the black wires from all the motors should be connected together.

For control we will build a circuit comprising relays, as we shall see at a later stage. The robot will be connected to the circuit board with a cable. So we now cut off the connector from every motor's wire pair and connect all the grounds together.

Now take a multi-color flat cable, say 1m long. The black wire of the cable should be connected to the common ground, and each remaining wire must be connected to the motor wire of the same color.

The following figure 2.3 will give you an idea of the hack we are talking about.


Figure 2.3 : Color Flat Cable Connection with Robot

As we need an appropriate power supply to drive the motors and the relay unit, we will first build a power supply before we come back to the control unit.

3. Power Supply For Robot

The Arm Edge motors typically operate in the 9V to 12V range with a maximum current of 1A, and the geared motor is rated 12V-1A. Thus it makes sense to use a 12V power supply, which will be able to drive both the geared motor and the other motors. We also have 12V relays available in the market. A 12V power supply is therefore the automatic choice for our robot.

I have worked with this robotic kit for a long time, designing and testing various DC-motor-based automations. From my experience I have observed that as soon as you put load on the arm (such as when it lifts an object heavier than about 150 g), the motor has to draw more current to maintain constant speed. The voltage therefore tends to drop, which results in very slow and unpleasant motor movement. So I suggest a 12V-1A power source for driving the motors of this kit. However, if your load is minimal, you can even work with a standard 9V battery.

Now you have two choices for the power supply: a) use a 12V battery, or b) build your own power supply.

I will be using a power supply here, as our robot is wire-connected, and a persistent power source helps maintain the constant, desired current to the robot. Secondly, we can readily get +12V and -12V from a power supply circuit; if we were to use batteries, we would need two of them to obtain +12V and -12V. However, as a side note, I will also show how to work with batteries so that you can select your preferred source while working with your robot.

3.1 Building Power Supply Circuit


Figure 3.1: Power Supply Unit For the Robot

We provide AC input to the power supply using a 12-0-12 center-tapped step-down transformer. The transformer reduces the AC voltage to 24V peak to peak (the EMF between the positive and negative peaks). The AC is rectified to DC using a bridge rectifier comprising four diodes. The advantage of a four-diode bridge rectifier over a simple two-diode full-wave rectifier is that the bridge has better current output. Whenever a power supply is expected to drive loads like motors, a bridge is always preferred over the simpler full-wave rectifier.

The rectified output is always impure DC containing AC components, due to the switching latency of the diodes. These AC components are removed using capacitors C1 and C2. The resulting DC will be about 15V (roughly 12V RMS × √2 ≈ 17V peak, minus the drop across two diodes). However, the voltage will tend to vary with the current drawn by the load. In order to maintain a constant output voltage, we need voltage regulators. The beauty of this circuit is that we can regulate and obtain both +12V and -12V from the same circuit. All we need is the 7812 voltage regulator for +12V regulation and the 7912 for -12V regulation. Their respective pinouts are presented in figure 3.1 b).

C3 and C4 help to further filter the output voltage. We use two LEDs as indicators to show that both voltages are available. Finally, ground, +12V and -12V are connected to a 3-pin port from where supply can be taken for the circuit.

I have mounted the circuit on a general-purpose circuit board for ease of connectivity and longevity. If you are not comfortable with soldering and mounting, you can mount the components on a breadboard too. Figure 3.2 shows the rear side of the PCB.


Figure 3.2: General Purpose PCB for the Power Supply

The reason for putting up figure 3.2 is to encourage you to start working with PCBs. It is not that tough: you just insert the pins of the components into the board, then strip the insulation from some connecting wire and solder each pin to the appropriate connection point using the bare thin wire. If you are going to do anything serious with DIY, you must learn to make circuit boards.

Once your supply is ready, it is always wise to measure the voltage with a multimeter before connecting the motors. Too high a voltage may burn your motor windings.

You have a three-slot port: one for +12V, one for -12V and one for ground. So measure the voltage between positive and ground, and between negative and ground. They must be +12V and -12V respectively (although in practice you may not get precisely 12V, it will be near that value). Figure 3.3 shows the voltage measurement of my power supply unit.


Figure 3.3: Verifying Voltage of Power Supply Unit

3.2 Testing Robot With Power Supply

Now you have both positive and negative voltages for driving the motors forward and in reverse. Let us first test the movement of the motors manually before we proceed to do it programmatically. Your robot's motors are now connected to a multi-color flat cable, with black connected to the motors' common ground wire. At this stage, connect the black wire of the flat cable to the ground point, then touch the respective colored wire first to the +12V point and then to the -12V point. You will see that part of the arm move in the forward or reverse direction depending upon the polarity of the supplied voltage. You should now be able to manually test all five degrees of freedom of the arm as well as the movement of the rover. You can also test by connecting multiple wires together to +12V/-12V. You will observe that as the number of motors the board drives increases, there is a severe current deficiency in the line and the LEDs on the power supply board start to flicker. A good power supply is therefore very important if you are to test complex projects.

In any eventuality, and just for fun, you may want to test the robot with a battery connection. The next subsection shows how to use a battery instead of the power supply for this specific robot model.

3.3 Connection Through Battery

For driving your robotic assembly and the relay control unit that you will design next, you need a 12V, 3.4AH battery. Be sure not to use too high an ampere rating, otherwise the motors may be burnt. The question is how to obtain both positive and negative voltage from the supply. Take two batteries and connect one's positive terminal to the other's negative terminal; this junction becomes the common ground. You are now left with the negative terminal of the first battery and the positive terminal of the second, which provide -12V and +12V respectively, as seen in figure 3.4.


Figure 3.4: Preparing Power Supply for Our Robot using Pair of Batteries

Do measure the current through the circuit before connecting your robot's motors. If you cannot find low-current batteries, use 7809 and 7909 ICs for regulating the positive voltage to +9V and the negative to -9V respectively. The pinouts of these ICs are the same as their 12V regulator counterparts, 7812 and 7912, shown in figure 3.1 b).

You can also design a dual power supply with a single relay that selects between the mains supply and the battery unit.

4.  Relay System for Robotic Control

4.1 Overall Concept of Control Unit

Please refer to this relay section for a clearer understanding of the concept of using relays with Arduino.

We need to control three states for each motor: a motor can be off, rotating forward (clockwise) or rotating in reverse (anticlockwise). So there are three states that the relays have to control. A relay can at most have two poles, i.e., two inputs to select from. But here we want relays to switch between three voltages: +12V, 0V and -12V. How is that possible?

We will use the concept of voltage line selection here. First, one relay acts as a polarity selection relay: it has +12V on its NC pole and -12V on its NO pole, and its output is provided to the NO pole of an independent relay for every motor. So when the polarity relay is off and any of the motor control relays is ON, the associated motor moves clockwise. When a motor control relay is OFF, the motor gets no supply, so the motor is off. When the polarity selection relay and a motor control relay are both ON, the motor moves anticlockwise, because the motor control relay has -12V supplied to its NO pole through the polarity selection relay.
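
The resulting behavior can be summarized as a small truth table. Here is a minimal sketch of that logic (the names are illustrative, not from the actual firmware):

C#
// Truth table of the two-relay scheme described above (illustrative).
// polarityOn: state of the polarity selection relay (ON routes -12V).
// motorOn:    state of that motor's own control relay.
static string MotorState(bool polarityOn, bool motorOn)
{
    if (!motorOn)
        return "OFF";                       // NC pole is grounded, no supply
    return polarityOn
        ? "REVERSE (-12V on the motor)"     // both relays ON
        : "FORWARD (+12V on the motor)";    // motor relay ON, polarity OFF
}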

4.2 Voltage Line Selection


Figure 4.1: Voltage Line Selection Relay Unit Circuit Diagram

It is clear from figure 4.1 that Arduino's pin 13 is used to control the polarity selection relay, which is coupled to the Arduino through the MCT2E optocoupler. When pin 13 is high, the LED on the input side is active, which turns on the phototransistor at the receiver side; this completes the output circuit and makes the 12V supplied at pin 5 available at pin 4. Pin 4 is connected to the Vs pin of the polarity relay, which triggers the relay, making it output -12V. When pin 13 is low, pin 4 of the optocoupler carries no voltage, as the output circuit is not completed. This leaves the relay in the OFF state, making +12V available at the output.

4.3 Motor Control Relay Unit


Figure 4.2: Circuit Diagram for Main Relay Unit

The difference between the polarity selection relay circuit and the motor control relay circuit is that here the normally connected (NC) terminal is grounded. Thus the motor gets no supply when the relay is not triggered. When this relay is triggered, it throws the voltage at NO, which is the output of the polarity relay. Therefore, based on the polarity relay's output, this relay drives the motor in either the forward or the reverse direction.

4.4 Connecting Robot with Relay Board

Figure 4.3 shows the overall circuit diagram for the robotic unit, from the transformer to the relay unit.


Figure 4.3: Complete Circuit Diagram of the Robotic Unit

The output of each relay is connected to the positive wire of its motor. All the motors' grounds are made common and connected to the power supply ground, which in turn is made common with the Arduino ground. Pin 13 is connected to the polarity selection relay. Six other relays are connected in serial order starting from pin 12, leaving out the PWM pins. Pin 5 of all the optocouplers is tied to the +12V port of the power supply. The grounds of all the optocouplers and relays are made common and connected to the common ground point. Note: even though it is not shown in the circuit, it is always advisable to connect the Arduino pins to the optocouplers through diodes, in order to avoid any back current into the microcontroller board.

I have mounted the relays on one board and the power supply on a different board, so that if any problem occurs I can debug them separately. Here is how the relay unit and the power supply unit look after being connected.


Figure 4.4: Relay Unit with Power Supply Unit

 

4.5 Manual Testing of the Robot

Before you power up the Arduino board and proceed with coding the robot, it is always advisable to test the robot manually. Every independent component of an IoT system must be separately debuggable, so we should check whether our circuit works. Power on your supply, take a screwdriver or any conductor such as a multimeter lead, and touch pin 4 and pin 5 of an optocoupler. Remember that pin 5 is the collector of the phototransistor at the output side of the optocoupler. When you short pin 4 and pin 5, you are bypassing the phototransistor and making 12V available at pin 4, which is connected to the relay. As soon as you connect these two pins, the corresponding relay triggers, which moves the corresponding joint of the robot.

When you short pins 4 and 5 of the polarity selection relay's optocoupler together with those of one more relay, that specific part of the robot will move in the reverse direction. See figure 4.5 to learn how to test your robot manually without coding.


Figure 4.5: Manual Testing With Relay Board

 

Part B : IoT Integration

5.  Controlling the Robot 

5.1 Control through Serial Port

Controlling the robot becomes easier once it is assembled and connected through the relay circuit. The only task here is to write an Arduino sketch that takes input from the serial port and activates the corresponding digital pin, which is connected to an optocoupler driving a relay.

While testing the robot by shorting pins 4 and 5 of the optocouplers, you might have noticed that the arm parts move very fast, because the speed of the DC motors is about 2200 RPM. So if you generate an ON command and leave it at that, the robotic part could get damaged by its restricted range of movement. The trick is to switch a motor on for some time and then turn it off. Typically a 50ms to 100ms delay is enough for good movement of the parts, though different parts may need different delays: for instance, the jaw can be opened and closed in 50ms, but the lower and upper arm need about 70-100ms for acceptable movement.

Therefore our Arduino program logic is:

a) Declare the pins connected to the optocouplers as OUTPUT.

b) Keep monitoring for serial commands.

c) Reserve a pair of commands for the polarity relay.

d) Reserve one command each for the six relays. When the board receives the serial command associated with a relay, turn the pin HIGH, wait for the DELAY period and then turn it LOW.

5.1.1 Using ArdOS

We will be using ArdOS for our coding, so if you are not well versed in working with ArdOS, I recommend you go through this tutorial on ArdOS first.

We are going to modify the SerialCommArduinoSketch code we developed for that tutorial so that it can control the robot.

C++
#include <kernel.h>
#include <queue.h>
#include <sema.h>

#define NUM_TASKS  2
#define SIZE 6

int pins[SIZE]={12,9,8,7,6,5};
void taskSerialRead(void *p)
{
  int val=0;
  
  int i=0;
  int polarity=13;

  int DIR=1;
   while(1)
  {

    if(Serial.available())
    {
      
    val=Serial.read() ;
    val=val-48;
   
    ////////////// Switching Logic///////////
    if(val>=0)
    { 
      if(val<SIZE)
      {
        Serial.println(pins[val]);
       digitalWrite(pins[val],HIGH);
       OSSleep(100);
       digitalWrite(pins[val],LOW);
      }
      if(val==7)
      {
       // 7 -> polarity relay ON: -12V line selected (reverse direction)
       digitalWrite(polarity,HIGH);
      }
      if(val==8)
      {
       // 8 -> polarity relay OFF: +12V line selected (forward direction)
       digitalWrite(polarity,LOW);
      }
     
     
     
    }
    
    OSSleep(100);
    }
  }
   
  
}

void setup()
{
 

  
  int polarity1=13;

  Serial.begin(115200);
  pinMode(polarity1, OUTPUT);
  digitalWrite(polarity1,LOW);

///////////// Making All the Pins Low Initially/////////
 for(int i1=0;i1<SIZE;i1++)
  {
    pinMode(pins[i1],OUTPUT);
    digitalWrite(pins[i1],LOW);
  }
  ////////////////////////////////////////////////
   OSInit(NUM_TASKS);
   OSCreateTask(0, taskSerialRead, NULL);    

   OSRun();
}

void loop()
{
  // Empty
}

ArdOS is still in beta and has several issues. One of them is that when you declare an array within a task, the sketch simply can't locate the array elements; therefore the pins array is declared globally. Conversely, if you declare a normal variable globally, the task refuses to identify it.

The rest of the logic is straightforward. Numbers 7 and 8 turn the polarity relay on and off, and numbers 0-5 activate the six relays driving the six motors. You can build and upload the code and test your robot's movement by giving inputs from 0-8.

5.1.2 Non-ArdOS Method

If you face problems with ArdOS, use this simple Arduino sketch to get your robot working.

C++
#define SIZE 6
int pins[SIZE]={12,9,8,7,6,5};
int i=0;
int polarity=13;
//7  -> polarity relay (pin 13) ON: reverse direction
//8  -> polarity relay (pin 13) OFF: forward direction
//17 -> all motor relays OFF
int DIR=1;
void setup()
{
  for(i=0;i<SIZE;i++)
  {
    pinMode(pins[i],OUTPUT);
    digitalWrite(pins[i],LOW);
  }
    pinMode(polarity,OUTPUT);
    digitalWrite(polarity,LOW);
  Serial.begin(115200);
}

void loop()
{
if(Serial.available()>0)
{
  int val=  Serial.read();
  val=val-48;
  Serial.println(val);
  if(val==7)
  {
    DIR=-1;
  digitalWrite(polarity,HIGH);
  }
  if(val==8)
  {
    DIR=1;
  digitalWrite(polarity,LOW);
  }
  if(val==17)
  {
  // 17 -> switch all the motor relays off
  for(i=0;i<SIZE;i++)
  {
    digitalWrite(pins[i],LOW);
  }
  }
  if(val>=0 && val<SIZE) // guard against stray characters such as newline
  {
    digitalWrite(pins[val],HIGH);
    delay(100);
   
    digitalWrite(pins[val],LOW);
  }
  
  
  
}
delay(20);  
}

One point of note: some developers have faced problems compiling ArdOS, getting a "Naked Function" error. This apparently is an IDE-specific problem; you can try an older version of the Arduino software if you want to stick with ArdOS.

5.2 Building Control Interface in .Net

The design issue with the C# client that bridges the underlying hardware with the Internet of Things is to interpret a set of commands and send the appropriate codes to the Arduino device through the serial interface. There are six distinct parts of the robot, corresponding to its different degrees of freedom, and all of them need to be addressable through commands: for instance JAW OPEN, JAW CLOSE, WRIST DOWN, WRIST UP, ARM UP, ARM DOWN, SHOULDER UP, SHOULDER DOWN, MOVE FORWARD and MOVE BACK.

As we want different control interfaces to be integrated with the client system, enabling the user to use different modalities to generate commands, we will wrap sending commands to the serial port in a set of methods.

First we shall build a simple UI to show the user which part of the robot is mapped to what, and provide a simple button interface so that we can test each part independently. As we already have the Arduino program that controls the relays, all we need to do is test which number belongs to which relay and prepare our methods accordingly. A method that makes a motor move clockwise must send command 8 followed by the command number for that specific relay; for the same motor's reverse movement, it should send 7 followed by that relay's number. This is because commands 7 and 8 control the polarity selection relay.

Before we proceed, I urge you to look into our tutorial on turning Arduino into an IoT node for an overall idea of the serial communication and IoT integration.

Do not forget to comment out the

C++
val=val-48;

line of the Arduino code and upload the new sketch before integrating the C# client. (The C# client sends raw byte values such as 8 rather than ASCII characters, so the conversion is no longer needed.)

Image 15

Figure 5.1: Panel in Form For Serial Communication and Robotic Control

The above screenshot contains a very simple button GUI that makes it easy for the user to understand which controls are available. I have used "I^" for "going up", "V" for "going down", ">>" for "clockwise" and "<<" for "anticlockwise" movement. Before getting into the .Net coding, test your Arduino serial command listing from section 5 and note down which number is associated with which command.

This simple UI helps new users immediately understand which part they want to control.

We will first build simple methods for each of the actions associated with the buttons shown in the above interface, and then simply invoke those methods from the button Click handlers.

C#
#region Control relay codes

static void AntiClockWise()
    {
        serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)5 }, 0, 1);

    }

    static void WristDown()
    {
        serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)3 }, 0, 1);
    }
    static void WristUp()
    {
        serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)3 }, 0, 1);
    }
    static void OpenJaw()
    {

        serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)1 }, 0, 1);

    }
    static void CloseJaw()
    {

        serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)1 }, 0, 1);
    }
  static  void Forward()
    {
        serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
        System.Threading.Thread.Sleep(50);
        serialPort1.Write(new byte[] { (byte)4 }, 0, 1);

    }
  static void Reverse()
  {

      serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)4 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)8 }, 0, 1);

  }
  static void ElbowUP()
  {

      serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)2 }, 0, 1);
     
      

  }
  static void ElbowDown()
  {

      serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)2 }, 0, 1);     
  }
  static void ShoulderUp()
  {
      serialPort1.Write(new byte[] { (byte)8 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)0 }, 0, 1);

  }
  static void ShoulderDown()
  {
      serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)0 }, 0, 1);

  }
  static void ClockWise()
  {
      serialPort1.Write(new byte[] { (byte)7 }, 0, 1);
      System.Threading.Thread.Sleep(50);
      serialPort1.Write(new byte[] { (byte)5 }, 0, 1);

  }
    #endregion

You might observe that all the methods are made static. That is because we will integrate several modalities into the client, such as voice recognition and face detection, so there will be cross-thread calls to these methods. In order to avoid using BeginInvoke for every call, I have preferred static methods.

Once these methods are static, the declaration of serialPort1 also needs to be changed to static, and its initialization must be done in the Form_Load() method instead of InitializeComponent().
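
A minimal sketch of that change (the port name here is an assumption; use whichever port your Arduino enumerates as):

C#
// Declared static so the static control methods above can use it directly.
static System.IO.Ports.SerialPort serialPort1;

private void Form1_Load(object sender, EventArgs e)
{
    // "COM3" is just an example; 115200 matches the Arduino sketch's baud rate.
    serialPort1 = new System.IO.Ports.SerialPort("COM3", 115200);
    serialPort1.Open();
}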

Here are button event handlers which call the above methods in order to perform the operations.

C#
private void btnJawOpen_Click(object sender, EventArgs e)
        {
            OpenJaw();
            
        }

        private void btnElbowUp_Click(object sender, EventArgs e)
        {
            ElbowUP();
        }

        private void btnRoverReverse_Click(object sender, EventArgs e)
        {
            Reverse();
        }

        private void btnRoverForward_Click(object sender, EventArgs e)
        {
            Forward();
        }

        private void btnBaseAnticlockwise_Click(object sender, EventArgs e)
        {
            AntiClockWise();
        }

        private void btnBaseClockwise_Click(object sender, EventArgs e)
        {
            ClockWise();
        }

        private void btnShoulderDown_Click(object sender, EventArgs e)
        {
            ShoulderDown();
        }

        private void btnShoulderUp_Click(object sender, EventArgs e)
        {
            ShoulderUp();
        }

        private void btnElbowDown_Click(object sender, EventArgs e)
        {
            ElbowDown();
        }

        private void btnWristUp_Click(object sender, EventArgs e)
        {
            WristUp();
        }

        private void btnWristDown_Click(object sender, EventArgs e)
        {
            WristDown();
        }

        private void btnJawClose_Click(object sender, EventArgs e)
        {
            CloseJaw();
        }

When you run the program and test it, you will see that you are now able to control the robot from the UI itself.

Figure 5.2 Demonstration of Controlling Robot From UI ( Clockwise Command)

 5.3 IoT Integration for Robotic Control

5.3.1 Service Integration in C# Client

Remember, we have already developed our own IoT middleware in our Arduino IoT integration tutorial and hosted it at:

http://grasshoppernetwork.com/IoTService.asmx

We are also reusing the C# client we developed in the Arduino IoT integration tutorial, where we discussed binding the client to the web service. All we need to do now is change the project ID and reduce the polling time. As robotic control should be much more responsive, we reduce the polling interval to around 100ms to give a real-time feel to the remote user.

Let us change the value of projId variable first.

C#
string projId = "UROBI";

Now change the timPollCommands timer's Interval property to 100.
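
If you prefer setting it in code rather than in the designer, a one-liner in Form_Load does the same (sketch):

C#
timPollCommands.Interval = 100; // poll the IoT service roughly ten times a second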

Now the user will send commands specific to robotic control instead of the LED ON and LED OFF commands we used in the previous tutorial, so you need to change the way remote commands are handled by the client.

Here is the event handler of timPollCommands. Note that because of the heavy load expected, we delete each command as soon as it is processed by the client.

C#
ArduinoSerial.ServiceReference1.IoTServiceSoapClient iotClient = new ArduinoSerial.ServiceReference1.IoTServiceSoapClient();
       private void timPollCommands_Tick(object sender, EventArgs e)
       {
           timPollCommands.Enabled = false;
           try
           {
               string[] result = iotClient.CommandToExecute(projId, ipAddress).Split(new char[] { '#' });
               if (result.Length < 3)
               {
                   timPollCommands.Enabled = true;
                   return;
               }
               string command = result[0].ToUpper();
               string status = result[1];
               if (command.Equals("FORWARD") && !status.Equals("OK"))
               {
                   try
                   {
                       Forward();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("REVERSE") && !status.Equals("OK"))
               {
                   try
                   {
                       Reverse();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("CLOCKWISE") && !status.Equals("OK"))
               {
                   try
                   {
                       ClockWise();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ANTI CLOCKWISE") && !status.Equals("OK"))
               {
                   try
                   {
                       AntiClockWise();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("OPEN JAW") && !status.Equals("OK"))
               {
                   try
                   {
                       OpenJaw();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("CLOSE JAW") && !status.Equals("OK"))
               {
                   try
                   {
                       CloseJaw();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ELBOW UP") && !status.Equals("OK"))
               {
                   try
                   {
                        ElbowUP();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ELBOW DOWN") && !status.Equals("OK"))
               {
                   try
                   {
                       ElbowDown();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("SHOULDER UP") && !status.Equals("OK"))
               {
                   try
                   {
                       ShoulderUp();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("SHOULDER DOWN") && !status.Equals("OK"))
               {
                   try
                   {
                       ShoulderDown();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }

           }
           catch
           {
           }
           timPollCommands.Enabled = true;

       }
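
As a side note, the long chain of if blocks above can be collapsed into a lookup table. Here is a hedged sketch of that refactoring (an alternative, not the code used in the project; it assumes System and System.Collections.Generic are imported):

C#
// Maps remote command strings to the static control methods defined earlier.
static readonly Dictionary<string, Action> Commands = new Dictionary<string, Action>
{
    { "FORWARD", Forward },         { "REVERSE", Reverse },
    { "CLOCKWISE", ClockWise },     { "ANTI CLOCKWISE", AntiClockWise },
    { "OPEN JAW", OpenJaw },        { "CLOSE JAW", CloseJaw },
    { "ELBOW UP", ElbowUP },        { "ELBOW DOWN", ElbowDown },
    { "SHOULDER UP", ShoulderUp },  { "SHOULDER DOWN", ShoulderDown }
};

// Inside the tick handler, the whole chain then reduces to:
Action action;
if (Commands.TryGetValue(command, out action) && !status.Equals("OK"))
{
    try
    {
        action();
        iotClient.DeleteCommand(projId, ipAddress);
    }
    catch
    {
        iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
    }
}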

5.3.2 IoT Test Client

We can simply reuse the ASP.Net TestClient.aspx client we developed in our Arduino IoT integration tutorial. But as we expect the response to be a little quicker, I have also added a DropDownList with all the commands, saving the user from typing them. By activating the AutoPostBack property of the DropDownList, we ensure that the selected command appears in the text box immediately after selection and can then be sent to the server through our IoTService.asmx middleware. Do not forget to change the project ID to UROBI in the test client. Figure 5.3 shows the design of the test client.


Figure 5.3: TestClient.aspx design

Following is the code-behind for the client.

C#
void Button1_Click(object sender, EventArgs e)
{
iics.IoTService c=new iics.IoTService();
 c.InsertCommand("UROBI",txtIp.Text,txtCommand.Text,"WebClient",DateTime.Now,"EXECUTE");
 Label4.Text="Command Sent";
}

void Button2_Click(object sender, EventArgs e)
{
iics.IoTService c=new iics.IoTService();
string s=c.FetchNotification("UROBI",txtIp.Text);
Label4.Text=s;

}

void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
txtCommand.Text=DropDownList1.SelectedItem.ToString();
}

Once you run the page in the browser and generate a command with the IP address of your C# client, you can control the robot remotely from this simple web interface. Figure 5.4 shows our robot being controlled from the IoT test client.


Figure 5.4: Result of Controlling Robot remotely using IoT Service

6. Remote Webcam Streaming for Robot Monitoring 

6.1 Objective and Design Issues

See figure 5.4. Now consider that you are controlling the robot from a remote location through TestClient.aspx in a browser. Would it be possible to control anything if you couldn't see the position of the robot? What command would you generate for the robot to lift something? It is impossible to know what to control, and by how much, if you can't see what is happening at the other end. Therefore a remote webcam stream is an integral part of remote device control (especially automation).

In order to achieve this, we first need to integrate live webcam frames into our form. We can then write our own streaming server to make the frames available remotely.

As we intend to integrate face detection into the control decisions, we will use EmguCV for detecting faces; it also comes with a set of APIs for acquiring camera frames, so we will use this technique from the word go. I also urge you to read this exceptional face detection tutorial by Sergio Andrés Gutiérrez Rojas, from which we will use the face detection module. EmguCV is a wrapper for OpenCV, which provides a wonderful set of real-time video/image processing APIs.

One of the reasons to use EmguCV is that it is relatively simple to convert EmguCV images to .Net images and vice versa. So in this section we will integrate the webcam with the application, enable face detection and then stream the webcam frames over the LAN. If the user is sitting at the C# client machine, he can use the camera interface to generate some cool commands for the robot; if he is away and controlling the robot remotely, he can see the robot from the remote location, which makes it much easier to control the machine.

Though we activate face detection here, we will not convert face movements into commands in this section; we will cover that control part as an entirely separate segment.

6.2 Integrating EmguCV and FaceDetection

All the essential DLL files needed to add Emgu to the project are provided in the bin folder of the C# ArduinoClient project folder. You will see a set of DLLs for OpenCV as well as Emgu-specific DLL files. You first need to add the Emgu.CV, Emgu.CV.GPU, Emgu.CV.ML, Emgu.CV.UI and Emgu.CV.Util DLL files by selecting Add Reference from the Project menu and browsing to the files in the bin directory.

Once the DLL files are added, add a PictureBox object for displaying the acquired frames. Now declare an object of the Emgu.CV.Capture class, let's say grabber.

Initialize the object from a button click event (or you may initialize it at form load if you choose). Add a handler for the Application.Idle event to capture a frame and process it.

Face detection uses a Haar cascade. A cascade is basically a combination of many weak classifiers. Emgu comes with a Haar cascade file, which is essentially a set of features obtained by training on several face images. The object detection module compares the features of each part of the current frame with the cascade features and locates the area best matching a face.

We initialize a HaarCascade object in the Start Camera button's event handler, followed by initializing grabber. In the FrameGrabber() method we capture a frame, try to detect a face in it, draw a rectangle around the face if one is present, and finally display the frame in the PictureBox. Here is the listing for the above concept.

C#
#region camera Utility and Face Detection Part
      Image<Bgr, Byte> currentFrame;
      Capture grabber;
      HaarCascade face;
      MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);
      Image<Gray, byte> result, TrainedFace = null;
      Image<Gray, byte> gray = null;
      void FrameGrabber(object sender, EventArgs e)
      {


          //Get the current frame form capture device
          currentFrame = grabber.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);

          //Convert it to Grayscale
          gray = currentFrame.Convert<Gray, Byte>();

          //Face Detector
          MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(
        face,
        1.2,
        10,
        Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
        new Size(20, 20));

          //Action for each element detected
          foreach (MCvAvgComp f in facesDetected[0])
          {

              result = currentFrame.Copy(f.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);

              //draw the face detected in the 0th (gray) channel with blue color
              currentFrame.Draw(f.rect, new Bgr(Color.Red), 2);




          }
          pictureBox1.Image = currentFrame.ToBitmap();


      }
      bool cameraStarted = false;
      private void button4_Click(object sender, EventArgs e)
      {

          if (!cameraStarted)
          {
              cameraStarted = true;
              button4.Text = "Stop Camera";
              face = new HaarCascade("haarcascade_frontalface_default.xml");
              grabber = new Capture();
              grabber.QueryFrame();
              //Initialize the FrameGraber event
              Application.Idle += new EventHandler(FrameGrabber);
              return;
          }
          else
          {
              Application.Idle -= new EventHandler(FrameGrabber);
              cameraStarted = false;
              button4.Text = "Start Camera";
          }

      }
      #endregion

Once you run the project, you should start seeing live frames in the picture box, with a red rectangle around your face. Figure 6.1 shows the result of one of the live sessions.


Figure 6.1: Result of Camera integration and FaceDetection

6.3 Streaming WebCam images to LAN

CodeProject is really like a Garden of Eden when it comes to finding a piece of code you need, especially if you want to do something in C#.Net. When I was looking to integrate streaming services into the project, I tried different techniques and code blocks before I finally discovered this gem of a tutorial on image streaming:

http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server

That tutorial shows how to stream the current desktop, or a set of images stored in a folder, so they can be viewed remotely from a browser without installing any specific software. The code is straightforward and the design very good, so we will utilize that tutorial and tweak the code a bit to facilitate streaming live webcam frames over the network.
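
Under the hood the technique is plain HTTP: the server answers a single GET request with a never-ending multipart response whose parts are individual JPEG frames. A simplified sketch of what the browser receives (for orientation only; the linked article's MjpegWriter handles the real details):

C#
// Shape of an MJPEG-over-HTTP response (simplified and illustrative).
const string ResponseHeader =
    "HTTP/1.1 200 OK\r\n" +
    "Content-Type: multipart/x-mixed-replace; boundary=--boundary\r\n";

// Then, for every frame, the server keeps writing a part like:
//   --boundary
//   Content-Type: image/jpeg
//   Content-Length: <frame size in bytes>
//
//   <raw JPEG bytes>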

First we include the ImageStreamingServer.cs and MjpegWriter.cs files in our project. The existing code either streams images from a directory or captures and streams the current screen; we want the server to stream the frames grabbed by our frame grabber. As the grabber and the streaming server are two entirely different threads, we need a shared variable that FrameGrabber() writes and the ImageStreamingServer thread reads. We declare an Image object called camImage:

C#
public static Image camImage;

Let us update the constructor to initialize the server with a method in the Screen class that streams the image being produced by FrameGrabber().

C#
public ImageStreamingServer()
           : this(Screen.WebCamStream())
       {

       }

Finally, the WebCamStream() method in the Screen class locks camImage while streaming so that there are no access violations, and yields the image in an IEnumerable<Image>, which the server sends as packets to the client.

C#
public static IEnumerable<Image> WebCamStream()
       {
           while (true)
           {

               lock (ImageStreamingServer.camImage)
               {

                   yield return ImageStreamingServer.camImage;
               }


           }



           yield break;
       }

That's all the modification the streaming server needs. Once you present the server with an IEnumerable<Image> object, it takes care of the rest.

I strongly urge you to read that tutorial for a more in-depth understanding of the streaming process.

Now, from our FrameGrabber() method, we can assign currentFrame.ToBitmap() to the camImage field of the ImageStreamingServer class.

C#
try
 {
     ImageStreamingServer.camImage = currentFrame.ToBitmap();
 }
 catch
 {
 }
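
One caution about the snippet above: it locks on camImage itself, but FrameGrabber() reassigns that very field on every frame (and it is null before the first frame), so the reader and writer may end up synchronizing on different objects. A dedicated lock object is a safer pattern; here is a sketch, assuming you add such a field to ImageStreamingServer:

C#
// A gate object whose identity never changes, unlike camImage itself.
public static readonly object camLock = new object();

// Writer side, in FrameGrabber():
lock (ImageStreamingServer.camLock)
{
    ImageStreamingServer.camImage = currentFrame.ToBitmap();
}

// Reader side, in WebCamStream():
lock (ImageStreamingServer.camLock)
{
    yield return ImageStreamingServer.camImage;
}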

I have set the server to run on port 8090, so you can view the stream by typing the IP address of the machine (as shown in the label) followed by a colon and the port number, e.g. http://192.168.1.10:8090 (the IP address here is just an example).

You can see the result in Figure 6.2.


Figure 6.2: Stream of the robot-attached machine viewed remotely

6.4 IoT .Net Client for Robotic Control

We have already seen how a robotic control system can be built over the IoT service we developed, and how the robot can be controlled remotely. We also understood the need to implement a webcam streaming service at the server to view the status of the robot remotely. What we really need now is a single interface that acts as a complete client for our UROBI system: the user should be able to both control and view the robot remotely from one interface.

That work is not that difficult considering what we have achieved so far! So let us create a new C# project (Windows Forms Application). We will first create an MJPEG client for viewing the remote stream, followed by a UI for remote robot control.

6.4.1 MJPEG Stream Viewer

First download the MJPEG Decoder from CodePlex and unzip the folder. It contains several DLL files, but all you have to do is reference MjpegProcessor.dll in your project. Having added the DLL file, building a client is rather simple.

Declare an object of  MjpegDecoder

C#
MjpegDecoder m_mjpeg;

Create a simple interface with a button for opening and closing the stream and a TextBox to specify the remote address. If no stream is open, initialize the m_mjpeg object, add an event handler for handling new frames, and call the ParseStream() method with the server Uri. Handle the case where the user has forgotten to provide the http:// prefix by concatenating it with the address string. The object opens the remote stream and triggers the event handler whenever a new frame is available; in the event handler, just assign the e.Bitmap object to the Image property of your PictureBox.

C#
private void button1_Click(object sender, EventArgs e)
{
    var b = (Button)sender;
    if (b.Text.Equals("Open Stream"))
    {
        m_mjpeg = new MjpegDecoder();

        m_mjpeg.FrameReady += new EventHandler<FrameReadyEventArgs>(m_mjpeg_FrameReady);
        if (!txtServerAddress.Text.StartsWith("http://"))
            m_mjpeg.ParseStream(new Uri("http://" + txtServerAddress.Text));
        else
            m_mjpeg.ParseStream(new Uri(txtServerAddress.Text));
        button1.Text = "Stop Stream";
        return;
    }
    else
    {
        button1.Text="Open Stream";
        m_mjpeg.FrameReady -= new EventHandler<FrameReadyEventArgs>(m_mjpeg_FrameReady);

    }

}

Run the C# Arduino client, start the camera and start the streaming server. Then run the UROBI client and open the stream. You can see the remote stream in your client, as shown in figure 6.3.


Figure 6.3: Viewing Remote Stream in UROBI Client

6.4.2 Building IoT Remote Control Interface for Robotic Control

We want to make the same control interface available at the client side so that it becomes easy for the user to know which button to click. But unlike the Serial Client, which sends commands directly to Arduino, the UROBI Client must push the commands to the IoT command stack through the web service.

We shall first copy the interface of the C# Serial Arduino client into the UROBI client. Add a service reference to our custom IoT service:

http://grasshoppernetwork.com/IoTService.asmx

We will change the button texts a little while keeping the exact commands that will be processed by the client connected to Arduino. All the buttons share a single click event handler, which can be wired up as in the sketch below.
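Wiring every command button to that one handler can be done in the form's constructor. A minimal sketch, assuming the buttons sit directly on the form (adjust the container, and exclude non-command buttons, as per your layout):

C#
// Route the Click event of every Button on the form to the shared handler.
foreach (Control c in this.Controls)
{
    if (c is Button && c != button1) // button1 toggles the stream, not a command
        c.Click += new EventHandler(AllButtonClickHandler);
}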

From the server address specified in the text box, we will separate only the ip address part and generate the command when the button is clicked.

C#
ServiceReference1.IoTServiceSoapClient iot = new ServiceReference1.IoTServiceSoapClient();
C#
string projId = "UROBI";

 private void AllButtonClickHandler(object sender, EventArgs e)
        {
            try
            {
                var b = (Button)sender;
                string command = b.Text;

                string ip = "";
                if (txtServerAddress.Text.StartsWith("http://"))
                {
                    ip = txtServerAddress.Text.Split(new string[] { "http://" }, StringSplitOptions.None)[1];
                }
                else
                {
                    ip = txtServerAddress.Text;
                }
                if (ip.Contains(":"))
                {
                    ip = ip.Split(new char[] { ':' }, StringSplitOptions.None)[0];
                }
                iot.InsertCommand(projId, ip, command, "UROBI Client", DateTime.Now, "PENDING");
            }
            catch
            {
            }
        }

Yes, it is that simple! Here is a screenshot of controlling the robot from the UROBI Client.

Image 22

Figure 6.4: Controlling Robot Remotely Using UROBI Client

While testing, I found that due to frequent polling, the C# Arduino Serial client often failed to fetch data from the web service because of timeouts. I therefore replaced the binding with a custom binding in the App.config of the C# Serial Client.

XML
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding name="IoTServiceSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
          </binding>
        </basicHttpBinding>
      </bindings>

      <client>
            <endpoint address="http://grasshoppernetwork.com/IoTService.asmx"
                binding="basicHttpBinding" bindingConfiguration="IoTServiceSoap"
                contract="ServiceReference1.IoTServiceSoap" name="IoTServiceSoap" />
        </client>
    </system.serviceModel>
</configuration>

6.5 Performance Tuning With BackgroundWorker

Even though we have succeeded in getting our IoT framework working and controlling the robot remotely, you might have noticed that with an increased polling rate, the GUI becomes less and less responsive. That is because we block the main thread for a remote call too often. Hence it becomes essential to separate the web method call from the main thread. We want to fork a background thread for every polling instance at the C# Arduino Serial client and every calling instance at the UROBI remote client. We can do this by creating a BackgroundWorker in timPollCommands's tick event handler and performing the web method call in the worker's DoWork method. Similarly, the remote call can be made within DoWork of a BackgroundWorker forked from AllButtonClickHandler in the UROBI Client.

Here is the updated code at the C# Arduino Serial Client

C#
private void timPollCommands_Tick(object sender, EventArgs e)
       {
           timPollCommands.Enabled = false;
           BackgroundWorker bw = new BackgroundWorker();
           bw.DoWork += new DoWorkEventHandler(bw_DoWork);
           bw.WorkerSupportsCancellation = true;
           bw.RunWorkerAsync();
           timPollCommands.Enabled = true;

       }

       void bw_DoWork(object sender, DoWorkEventArgs e)
       {
           //throw new NotImplementedException();
           try
           {
               string[] result = iotClient.CommandToExecute(projId, ipAddress).Split(new char[] { '#' });
               if (result.Length < 3)
               {
                   timPollCommands.Enabled = true;
                   return;
               }
               string command = result[0].ToUpper();
               string status = result[1];
               if (command.Equals("FORWARD") && !status.Equals("OK"))
               {
                   try
                   {
                       Forward();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("REVERSE") && !status.Equals("OK"))
               {
                   try
                   {
                       Reverse();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("CLOCKWISE") && !status.Equals("OK"))
               {
                   try
                   {
                        ClockWise();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ANTI CLOCKWISE") && !status.Equals("OK"))
               {
                   try
                   {
                       AntiClockWise();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("OPEN JAW") && !status.Equals("OK"))
               {
                   try
                   {
                       OpenJaw();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("CLOSE JAW") && !status.Equals("OK"))
               {
                   try
                   {
                       CloseJaw();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ELBOW UP") && !status.Equals("OK"))
               {
                   try
                   {
                        ElbowUP();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("ELBOW DOWN") && !status.Equals("OK"))
               {
                   try
                   {
                       ElbowDown();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("SHOULDER UP") && !status.Equals("OK"))
               {
                   try
                   {
                       ShoulderUp();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }
               if (command.Equals("SHOULDER DOWN") && !status.Equals("OK"))
               {
                   try
                   {
                       ShoulderDown();
                       iotClient.DeleteCommand(projId, ipAddress);

                   }
                   catch
                   {
                       iotClient.UpdateCommandStatus(projId, ipAddress, "FAILED");
                   }
               }

           }
           catch
           {
           }
       }

This noticeably improves the overall responsiveness of the UI. Similarly, at the UROBI Client we make the web method call from the DoWork method of a BackgroundWorker forked from the button click event handler.

C#
string projId = "UROBI";
        String command = "";
        string ip = "";
        private void AllButtonClickHandler(object sender, EventArgs e)
        {
            try
            {
                var b = (Button)sender;
                command = b.Text;

                
                if (txtServerAddress.Text.StartsWith("http://"))
                {
                    ip = txtServerAddress.Text.Split(new string[] { "http://" }, StringSplitOptions.None)[1];
                }
                else
                {
                    ip = txtServerAddress.Text;
                }
                if (ip.Contains(":"))
                {
                    ip = ip.Split(new char[] { ':' }, StringSplitOptions.None)[0];
                }
                BackgroundWorker bw = new BackgroundWorker();
                bw.DoWork += new DoWorkEventHandler(bw_DoWork);
                bw.RunWorkerAsync();
              
            }
            catch
            {
            }
        }

        void bw_DoWork(object sender, DoWorkEventArgs e)
        {
            try
            {
                iot.InsertCommand(projId, ip, command, "UROBI Client", DateTime.Now, "EXECUTE");
            }
            catch
            {
            }
            //throw new NotImplementedException();
        }

 

Part C: Control Modalities

7. Why Multimodality and What Different Modalities Are Available

Ideally, the tutorial on UROBI, our ultimate robotic control framework, could have ended with section 6, because by then we had built our robot, connected it to the internet through services, and extended the services with webcam sharing so that we have remote access to both the robot's visual stream and its controls.

IoT is basically one homogeneous environment which provides several services (local and web) to interact with the underlying machines in a seamless way. Therefore, how easily utility services can be added to an existing framework also plays an important part in determining whether the framework is well suited for real and complex hardware workflows.

Human beings are always inclined towards natural interaction: controlling things through voice, gesture and natural movement. We therefore need to build systems that can accommodate new techniques of communication, interaction and notification.

Part C of the tutorial is dedicated to integrating different modalities into the application, to show that it works well even when several input methods are integrated into the system. This also proves that the system is responsive enough to handle multiple inputs.

Sometimes such inputs play an important role; sometimes they are just fun to work with. Nonetheless, in this part we shall integrate different modalities to make the application more fun.

Some of the common modes of providing input to the systems are:

  • Face movement
  • Speech Recognition
  • Laser Gesture
  • Hand Gesture
  • Eye blink, Eye movement
  • GSM
  • RF remote control
  • IR remote controls

In this part we shall integrate these different modalities one by one. The objective is to make the framework more robust while keeping it real time.

8. Control Through Computer Vision

8.1 Controlling Robot With Face Movement

We have already detected faces while integrating the camera through EmguCV. I have previously written a tutorial on Converting Face Movement to Morse Code, where I elaborated how to detect simple face gestures such as UP, DOWN, LEFT and RIGHT. All we need to do is integrate that face-movement detection logic into our face detection code. Once a gesture is detected, we can take the appropriate action. Here I have mapped the LEFT and RIGHT gestures to Anti Clockwise and Clockwise movements, and the UP and DOWN gestures to Shoulder Up and Shoulder Down movements respectively.

C#
enum HeadState { UP,DOWN,LEFT,RIGHT,NONE};
        double x1 = -1, y1 = -1, x, y;
        HeadState head = HeadState.NONE;
        int nn = 0;
        void FrameGrabber(object sender, EventArgs e)
        {
         

            //Get the current frame form capture device
            currentFrame = grabber.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
            
            //Convert it to Grayscale
            gray = currentFrame.Convert<Gray, Byte>();

            //Face Detector
            MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(
          face,
          1.2,
          10,
          Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
          new Size(20, 20));

            //Action for each element detected
            ///////////////////// Laser recognizer ////////////
            Bitmap lastFrame=currentFrame.ToBitmap();
            detector.ProcessFrame(ref lastFrame);
            currentFrame = new Emgu.CV.Image<Bgr, Byte>(lastFrame);
            ///////////////////////////////////////////////////////
            foreach (MCvAvgComp f in facesDetected[0])
            {
            
                result = currentFrame.Copy(f.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
                currentFrame.Draw(f.rect, new Bgr(Color.Red), 2);
                x = f.rect.Left;
                y = f.rect.Top;

                if (x1 == -1)
                {
                    x1 = x;
                    y1 = y;
                }
                else
                {
                    double variation = 0;

/////////////////////////////// This code block comes from Face to Morse code tutorial///////////////////////

#region face gesture detection logic from morse code tutorial
                    try
                    {
                        variation = (float)Math.Sqrt((double)((x - x1) * (x - x1) + (y - y1) * (y - y1)));
                    }
                    catch
                    {
                    }
                   if (!head.Equals(HeadState.NONE))
                   {
                       head = HeadState.NONE;
                       x1 = x;
                       y1 = y;
                      
                   }
                   if (nn != 0)
                   {
                       if (nn > 0)
                           nn--;

                   }
                   if ((variation > 29) && (nn <= 0))
                   {
                       double xvar = x - x1;
                       double yvar = y - y1;
                       nn = 18;
                       if (Math.Abs(yvar) > Math.Abs(xvar))
                       {
                           if (Math.Abs(yvar) > 9)
                           {
                               if (yvar < 0)
                               {
                                   head = HeadState.UP;
                               }
                               if (yvar > 0)
                               {
                                   head = HeadState.DOWN;
                               }
                           }
                       }
                       else
                       {
                           if (Math.Abs(xvar) > Math.Abs(yvar))
                           {
                               if (Math.Abs(xvar) > 9)
                               {
                                   if (xvar < 0)
                                   {
                                       head = HeadState.LEFT;
                                   }
                                   if (xvar > 0)
                                   {
                                       head = HeadState.RIGHT;
                                   }
                               }
                           }
                       }
                   }

#endregion

////////////////////////////////// End of Face gesture code from Morse code tutorial////////////////////
                }

                //draw the face detected in the 0th (gray) channel with blue color
                
                 
                

            }

/////////////////////////// Robotic Control Based On Detected Gesture///////////////////
            switch (head)
            {
                case HeadState.UP:
                    currentFrame.Draw("UP", ref font, new Point(20, 20), new Bgr(0, 255, 0));
                    ShoulderUp();
                    nn = 18;
                    break;
                case HeadState.DOWN:
                    currentFrame.Draw("DOWN", ref font, new Point(20, 20), new Bgr(0, 255, 0));
                    nn = 18;
                    ShoulderDown();
                    break;
                case HeadState.RIGHT:
                    currentFrame.Draw("RIGHT", ref font, new Point(20, 20), new Bgr(0, 255, 0));
                    ClockWise();
                    nn = 18;
                    break;
                case HeadState.LEFT:
                    currentFrame.Draw("LEFT", ref font, new Point(20, 20), new Bgr(0, 255, 0));
                    AntiClockWise();
                    nn = 18;
                    break;
            }
     //////////////////////////////////// Face Gesture based Controlling Ends/////////////////////////           
            pictureBox1.Image = currentFrame.ToBitmap();
         
            try
            {
                ImageStreamingServer.camImage = currentFrame.ToBitmap(); 
            }
            catch
            {
            }
            
        }

Now you can control the arm's UP-DOWN and LEFT-RIGHT movements simply through your face movement. Figure 8.1 shows the LEFT movement of the robotic arm.

Image 23

Figure 8.1: Controlling Robot With Head Movement

 

8.2 Control Through Laser Gesture

Laser gesture is another very popular modality for wireless control of devices; it is widely used for slide control during PowerPoint presentations. Again, there is not much you need to build from scratch: CodeProject already has a wonderful tutorial on laser gesture recognition and Windows Media Player control.

The project has its own camera interface. But since we are using EmguCV to capture and process frames, we require only the MotionDetector1.cs and UnsafeBitmap.cs files in our work. Just initialize an object of the MotionDetector1 class and call its ProcessFrame() method after face detection is completed and before the face is drawn on the image.
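On the form side this needs nothing more than a field holding the detector. A minimal sketch; the parameterless constructor is an assumption, so match it to whatever the copy of MotionDetector1 you imported exposes:

C#
// Laser gesture detector taken from the referenced article; FrameGrabber()
// calls detector.ProcessFrame(ref lastFrame) on it once per captured frame.
MotionDetector1 detector = new MotionDetector1();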

The original project calls a ControlMediaPlayer() method with the recognized gesture as parameter. Here we are more concerned with robotic control, so we rename the method to ControlRobot(). For gestures like UP, DOWN, LEFT and RIGHT, we can call the public static methods of the Form1 class developed for controlling different parts of the robot, like Forward(), Reverse(), ClockWise() and so on.

In the MotionDetector1 class, we update the ProcessFrame() method so that it calls ControlRobot() once a gesture is detected.

C#
 if (gesture != "?")
ControlRobot(gesture);

The ControlRobot() method calls the appropriate robotic control method depending on the gesture:

C#
private void ControlRobot(string gesture)
        {
            try
            {

                switch (gesture)
                {
                    case "LEFT":
                        Form1.AntiClockWise();
                        break;

                    case "RIGHT":
                        Form1.ClockWise();
                        break;

                    case "UP":
                        Form1.ShoulderUp();
                        break;

                    case "DOWN":
                        Form1.ShoulderDown();

                        break;
                }
            }
            catch
            {
            }
        }

The detection threshold is set to 250 through a Form1 variable:

C#
double threshold=250;

You can easily integrate a NumericUpDown control to adjust its value at run time (see the sketch after Figure 8.2). Figure 8.2 shows detection of the UP command and a slight movement of the jaw through it.

Image 24

Figure 8.2: Controlling Robot Through Laser Gesture
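As a small illustration of the run-time threshold idea mentioned above, here is a sketch; the control name numThreshold is hypothetical:

C#
// Let the user tune the laser detection threshold while the app is running.
private void numThreshold_ValueChanged(object sender, EventArgs e)
{
    threshold = (double)numThreshold.Value;
}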

9. Integrating Speech Recognition And TTS 

Speech recognition is not too difficult either. First you need to add a reference to System.Speech from Project -> Add Reference -> .NET.

Once the reference is added, you can use an object of SpeechRecognitionEngine

C#
SpeechRecognitionEngine  _recognizer;

to load sets of grammars. A grammar is a word or sentence that you want the recognizer to recognize. In our case we will load the same phrases that we have used in the GUI components, like "Clockwise", "Anti ClockWise", "Forward", "Reverse" and so on.

Once the grammars are loaded, add an event handler to be triggered whenever speech is recognized.

C#
_recognizer.SpeechRecognized += _recognizeSpeechAndWriteToConsole_SpeechRecognized;

From the event handler we can call the appropriate static methods.

I have wrapped the entire speech recognition logic into a single method called RecognizeSpeechAndControlRobot(). Just call this method once your serial communication with Arduino is established.

C#
#region Recognize speech and write to console
        static SpeechRecognitionEngine _recognizer = null;
        static void RecognizeSpeechAndControlRobot()
        {
            Thread.CurrentThread.CurrentCulture = new CultureInfo("en-GB");
            Thread.CurrentThread.CurrentUICulture = new CultureInfo("en-GB");
            _recognizer = new SpeechRecognitionEngine();
           

            _recognizer.RequestRecognizerUpdate(); // request for recognizer update
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("shoulder up"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("shoulder down"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("wrist up"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("wrist down"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("elbow up"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("elbow down"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("clockwise"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("anticlockwise"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("forward"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("reverse"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("open"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update

            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("close"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("right"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("left"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("enter"))); // load a "test" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("exit"))); // load a "exit" grammar
            _recognizer.RequestRecognizerUpdate(); // request for recognizer update
            _recognizer.SpeechRecognized += _recognizeSpeechAndWriteToConsole_SpeechRecognized; // if speech is recognized, call the specified method
            _recognizer.SpeechRecognitionRejected += _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected; // if recognized speech is rejected, call the specified method
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronous

        }
        static System.Speech.Synthesis.SpeechSynthesizer speaker = new System.Speech.Synthesis.SpeechSynthesizer();
        static void _recognizeSpeechAndWriteToConsole_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            //Console.WriteLine(e.Result.Text);
            //  MessageBox.Show(e.Result.Text);
            string s = e.Result.Text;
            //// Speaking/////////////////
            speaker.Rate = -4;

            ///// Forward///////////
            if (s.Equals("forward"))
            {
                Forward();
            }
            ///////Reverse//////////////////////////
            if (s.Equals("reverse"))
            {
                Reverse();
            }
            ///////////////////shoulder up/////////////////////////////////////
            if (s.Equals("shoulder up"))
            {
                ShoulderUp();
            }
            ////////////////// Shoulder down/////////////////////////////
            if (s.Equals("shoulder down"))
            {
                ShoulderDown();
            }
            /////////////////// elbow up////////////////////
            if (s.Equals("elbow up"))
            {
                ElbowUP();
            }
            ////////////////// elbow down//////////////////////////////
            if (s.Equals("elbow down"))
            {
                ElbowDown();
            }
            ///////////////////wrist up//////////////////////////////
            if (s.Equals("wrist up"))
            {
                WristUp();
            }
            ////////////// Wrist Down/////////////////////////////////
            if (s.Equals("wrist down"))
            {
                WristDown();
            }
            ////////////////// close//////////////////////////////////

            if (s.Equals("close"))
            {
                CloseJaw();
            }
            //////////////////////open ///////
            if (s.Equals("open"))
            {
                OpenJaw();
            }
            /////////////// anticlockwise//////////////////////
            if (s.Equals("anticlockwise"))
            {
                AntiClockWise();
            }
            //////////////////////clockwise//////////////
            if (s.Equals("clockwise"))
            {
                ClockWise();
            }

        }
        static void _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {

        }
        #endregion
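Incidentally, the speaker object declared above is the TTS half of this section; its Speak call is not shown in the handler. A minimal sketch of audible feedback, which you could place right after the command string s is read in the SpeechRecognized handler:

C#
// Speak the recognized command back to the user; SpeakAsync keeps the
// recognition callback from blocking while the synthesizer talks.
speaker.SpeakAsync("Executing " + s);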

We will call this method after connecting to the serial port, from the button2 click event handler.

C#
private void button2_Click(object sender, EventArgs e)
        {
            try
            {
                serialPort1.PortName = comboBox1.SelectedItem.ToString();
                serialPort1.BaudRate = 115200;
                serialPort1.Open();

                RecognizeSpeechAndControlRobot();

                MessageBox.Show("Success");
                timPollCommands.Enabled = true;
                startTime = DateTime.Now;
            }
            catch (Exception ex)
            {
                MessageBox.Show("Failed: "+ex.Message);
            }
        }

The testing part is left to you, as I cannot show the result through a GIF file!

10. Wireless Robotic Control with IR

This IR section of my tutorial on basic Arduino programming with hardware explains how to connect an IR receiver and program with it. Unfortunately, the IR library has its own timer control, which does not permit ArdOS to function properly alongside it. Therefore, if you intend to integrate IR remote control, you need to do away with ArdOS.

IR commands can be interpreted in two ways: either you process the commands directly on the hardware, or you send the detected command over the serial port to the C# client, where you decide what to do with it. The advantage of the first technique is that it is faster and provides efficient, no-lag control of the hardware; however, you cannot do much with higher level services. In the second approach, the code has to be analyzed by the C# program, which may in turn generate a serial command (if needed) for the device. That round trip introduces a noticeable time lag for critical applications, but implementing the logic in C# has the advantage that the firmware of the connected device need not be changed; only the higher level software has to be updated. A minimal sketch of the second approach is given below.
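To make the second approach concrete, here is a sketch of the C# side. It assumes the Arduino forwards each raw IR code as a hex line over serial (the updated mySimpleRobo.ino below prints codes with Serial.println(results.value, HEX)); the hex values are placeholders for whatever your remote actually sends:

C#
// Hypothetical handler for the second approach: the PC interprets IR codes
// forwarded by Arduino over serial and maps them to robot actions.
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    string line = serialPort1.ReadLine().Trim();
    switch (line)
    {
        case "C090060A": ShoulderDown(); break; // placeholder remote codes
        case "D090060A": ShoulderUp(); break;
    }
}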

Nonetheless, I leave the design choice to you. I demonstrate the usage by deploying the logic on the Arduino board. My IR receiver's output is connected to pin 3.

Here is the updated mySimpleRobo.ino sketch that we had developed as an alternative to the ArdOS sketch.

MC++
#include <IRremote.h>
#define SIZE 6
int RECV_PIN = 3;

IRrecv irrecv(RECV_PIN);
decode_results results;

int pins[SIZE]={12,11,10,9,8,7};
int i=0;
int polarity=13;
//15->13 or polarity on
//16->13 OFF
//17->ALL OFF
int DIR=1;
void setup()
{
  for(i=0;i<SIZE;i++)
  {
    pinMode(pins[i],OUTPUT);
    digitalWrite(pins[i],LOW);
  }
    pinMode(polarity,OUTPUT);
    digitalWrite(polarity,LOW);

  Serial.begin(115200);
       irrecv.enableIRIn();
}

void loop()
{
   if (irrecv.decode(&results)) {
    Serial.println(results.value, HEX);
  
   switch(results.value)
    {
      case 0xC090060A:
      digitalWrite(polarity,HIGH);
      delay(10);
      digitalWrite(pins[3],HIGH);
      delay(150);
      digitalWrite(pins[3],LOW);
      digitalWrite(polarity,LOW);
      Serial.println("Arm Down");
      break;
   
    case 0xD090060A:

      digitalWrite(pins[3],HIGH);
      delay(150);
      digitalWrite(pins[3],LOW);
      Serial.println("Arm UP");
      break;
    }
    irrecv.resume(); // Receive the next value
  }
if(Serial.available()>0)
{
  int val=  Serial.read();
  val=val-48;
  Serial.println(val);
  if(val==7)
  {
    DIR=-1;
  digitalWrite(polarity,HIGH);
  }
  if(val==8)
  {
    DIR=1;
  digitalWrite(polarity,LOW);
  }
  if(val==17)
  {
  for(i=0;i<SIZE;i++)
  {
   
    digitalWrite(pins[i],HIGH);
  }
  }
  if(val<SIZE)
  {
    digitalWrite(pins[val],HIGH);
    delay(100);
   
    digitalWrite(pins[val],LOW);
  }
  
  
  
}
delay(20);  
}

I have implemented simple ShoulderUp and ShoulderDown logic on the Arduino with my AC remote's ON-OFF command pair. You can analyze the code and map different commands to different buttons.

Figure 10.1 Shows the result.

Image 25

Figure 10.1: Robot Control With IR Remote

Part D: Other Important IoT Services

11. Notification Services

11.1 Understanding Notification System

In our case, at least, there is no such thing as a notification yet, but many IoT applications demand a well integrated notification service. So what is a notification service? Suppose you are creating a centralized security app: you want the system to raise an alarm if an invader tries to intrude into premises like ATMs or museums at night. Monitoring is the process of continuously sensing data, as we have done so far. The responsibility of the system in such cases is just to acquire the data and make it available, either through custom services or through a more pertinent IoT service like ThingSpeak. I am not covering monitoring services here; if you are interested, refer to this monitoring IoT service tutorial with ThingSpeak. There you can see that the temperature acquired by an Arduino device is webcast through a ThingSpeak channel. But what if the sensor was meant to monitor the temperature in an area with a high probability of fire? What if the sensor was deployed near a small furnace, or, to give our imagination a little more wing, what if the sensor was part of your microwave oven? In those cases the system should not only make the data available to the end users but also ensure that the relevant party is notified in case of an alert. You cannot always be sure that someone is watching the webcast data. Therefore we need a system in place that can take this data, apply some statistical processing like filtering, regression or thresholding, and then decide whether the data corresponds to an event or alert. Leakage of gas, fire and earthquake are typical events for sensors like the MQ-35, PT100 and accelerometer respectively.

Therefore, even though our current setup does not have any sensors, we will drill down into the notification system. And to check it out in real time, we will use the same hardware setup of the LM35 temperature sensor with Arduino that we saw in the previous article on ThingSpeak.

So our objective is slightly modified here: we want an alert to propagate when there is a fire. We can simulate this situation by bringing a candle near the sensor (be careful with such adventures, and take care not to burn your sensor in the course of the experiment). Whenever such a situation is detected, an alert should be generated.

Now, from the experiments in the aforementioned tutorial, you know that once a high temperature value has been detected, it takes some time for the sensor to read a lower value even after you remove the fire source, because the sensor surface needs time to dissipate the heat. Hence the alarm system should not flood us with alarms; it must be configured to generate concise events.

In hardware and embedded terminology we use two terms which sometimes look the same but are largely different. An event can be responded to by an alert, which can be continuous (like a buzzer, alarm system or relay drive), or by a notification, which predominantly means propagating the alert remotely. In this section we will work on the notification side of the IoT framework.

11.2 IFTTT ( If this then that)

11.2.1 Introducing IFTTT

When our hardware generates an alert, we want the alert to reach us through mail; we may also want to send an SMS to a particular number, tweet about it, post a Facebook update and so on. A few years back it would have been quite a task to automate this, because you would have had to integrate the various services independently, which brings a very high maintenance cost (keeping in mind that cloud service providers keep changing their APIs). But with the launch of the IFTTT service, this has become much easier and quite efficient too.

In order to understand such a notification system beforehand, check out the Twitter profile of our own @Codeproject. A huge number of articles and tips are published every day. Imagine if Chris had to sit and tweet about every article; would there be anyone left working on the Site Bugs and Suggestions section? No, it needed smart automation that tweets every article as and when it gets posted. With the kind of developer support here, CodeProject could easily have implemented its own notification service. But instead, what do you see?

As an addicted Twitter user, I am quite fond of taking screenshots. So here is a screenshot of @Codeproject's timeline (TL).

Image 26

Figure 11.1: Codeproject's Twitter Time Line Demonstrating IFTTT

You can see that the guys here have played it smart, saving coding and maintenance effort by integrating notification through IFTTT. Whenever there is a new article, it posts the link to CP's timeline. And all these years I thought Chris had a great life because all he did was post links of articles on Twitter!

 

Nevertheless, coming back to the subject: IFTTT is a notification service that offers you recipes. A recipe is basically a way of generating a notification: if a tweet, then a Gmail message. And it offers integration with a range of other services.

So create an account on IFTTT and check out the Channels section. As you can see in Figure 11.2, IFTTT integrates with a range of services. Every channel has its corresponding trigger events, like "posting a new tweet" for Twitter or "receiving a new email" for Gmail. So just head to the Recipes section and create your recipe.

11.2.2 : Generating IFTTT based Notification System for UROBI

With the brilliant talent pool here on CodeProject, I assume that any further explanation of IFTTT would be a gross disrespect to the intelligence of the developers on board. You will surely start making recipes quicker than you learnt your first kitchen recipe.

But the question here is deeper. It is not how to use IFTTT; it is how to use IFTTT in the current context. How do we propagate an alert generated by our hardware through all the channels?

The answer is quite simple: by pushing the alert into one of the channels from our code, and then integrating the other channels through IFTTT based on events from that first channel.

We are using C#.NET as the Serial Client for Arduino, so we can send Gmail from our application, integrate Twitter services and so on.

I prefer Gmail here. Look, the objective is to propagate certain alerts, but I might also be interested in keeping track of others. For instance, I might want to keep track of both extremely low and very high temperatures, but generate a notification only for high temperature values.

Therefore, pushing the alert first into a private channel and then linking the notification to the other channels through a recipe is the approach I adopt.

Let's first update UROBI's serial communication client to generate a Gmail message.

11.3 Sending A Mail Through Gmail On Alert

C#
bool GmailSend(double tempValue, double threshold)
        {
            var fromAddress = new MailAddress("rupam.iics@gmail.com", "Rupam");// sender's mail address and name
            var toAddress = new MailAddress("rupam.iics@gmail.com", "Rupam");// To whom you want to send
            string fromPassword = "YOUR_FROM_EMAIL_ACCOUNT'S PASSWORD";
            string subject = "Arduino ALERT : Temperature Exceeds Threshold";
            string body = String.Format("TEMPERATURE = {0} 'C, Threshold={1}'C !\nALERT: Temperature Exceeds Value", tempValue, threshold);

            //Attachment att = new Attachment("form.jpg");
            // Use the above line if you want to send an image too.
            // You can save pictureBox1's image with pictureBox1.Image.Save("hello.jpg", System.Drawing.Imaging.ImageFormat.Jpeg)
            // and then attach hello.jpg via the att variable.

            var smtp = new SmtpClient
            {
                Host = "smtp.gmail.com",
                Port = 587,
                EnableSsl = true, // Very important. Forget this and your mail will not go out.
                DeliveryMethod = SmtpDeliveryMethod.Network,
                Credentials = new NetworkCredential(fromAddress.Address, fromPassword),
                Timeout = 20000
            };
            using (var message = new MailMessage(fromAddress, toAddress)
            {
                Subject = subject,
                Body = body,

            })
            {
                //message.Attachments.Add(att);
                // Uncomment the above line if you are sending an Image
                try
                {
                    smtp.Send(message);
                 
                 
                }
                catch
                {
                    labCommandStat.Text = "Could Not Send Mail";
                    return false;

                }

            }
            return true;
        }

The above code is a time tested way of sending email through the Gmail server programmatically. Do not forget the EnableSsl part, or the mail will not go out.

However, do not expect this app to work out of the box. As Google has changed its security policy, your first login attempt will be blocked and you will receive a mail about it. Go to your account's security settings and enable access for less secure apps. (I was not aware of this change until I started testing, so I will stick to this option until a better alternative is on the cards.)

Image 27

Figure 11.3: Security Setting for Sending Data to Gmail

Image 28

Figure 11.4: Screenshot of Successful Email Sending from our UROBI Interface

11.4: Broadcasting Gmail Data to Other Channels through IFTTT

Once the data is available to one of the channels supported by IFTTT, it can be sent to other channels using a well designed recipe. I want a tweet to be generated as soon as there is a Gmail notification, so that the notification is available to everyone who follows me!

So you first go for Create New Recipe and select Gmail in the "this" clause. Select:

If new email in inbox from search

In the search term, put "ALERT:", as we post the Gmail message with the "ALERT:" keyword. Now in the "that" clause, select posting a tweet.

And here is a look at our complete recipe:

Image 29

Figure 11.5: Twitter Notification Based On Gmail Alert Recipe

Here is a screenshot that shows how our IoT module is now able to propagate the notification to different channels through IFTTT.

Image 30

Figure 11.6: Propagating Notification from our UROBI System to the World through IFTTT

Do remember one thing though: a bad design is a bad design, and you are not getting away with that even with such sophisticated services. To show what kind of care must be taken while designing such notification services, I created another recipe that generates a mail whenever I tweet something. Figure 11.7 shows how these two recipes can create a loop and disturb the entire system.

Image 31

Figure 11.7: Bad IFTTT Design: Gmail and Twitter Services in a Loop

11.5 Calling GmailSend Method From Serial Data Handler

We receive serial data through the asynchronous DataReceived event handler, which is further handled by a delegate method that updates the display. We will push the temperature value from Arduino in the format "T:52". So, in the display method, we extract the temperature part, compare it with a threshold (which I have kept at 50), and then generate the Gmail message. In order not to block the UI thread, we use a BackgroundWorker. We also need to ensure that unnecessary messages are not transmitted, so a notification timer is enabled for 30 seconds immediately after an alert is sent; no further mail goes out until the timer is deactivated. In that way you prevent flooding of messages.

C#
if (s.Contains("T:"))
           {
               string s1 = s.Split(new string[] { "T:"}, StringSplitOptions.None )[1];
               try
               {
                   temp = double.Parse(s1);
                   if (temp > th)
                   {
                       if (!timerNotificationAlert.Enabled)
                       {
                           BackgroundWorker bw1 = new BackgroundWorker();
                           bw1.WorkerSupportsCancellation = true;
                           bw1.WorkerReportsProgress = true;
                           bw1.DoWork += new DoWorkEventHandler(bw1_DoWork);
                           bw1.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bw1_RunWorkerCompleted);

                           bw1.RunWorkerAsync();
                           timerNotificationAlert.Enabled = true;
                       }


                   }
               }
               catch
               {
               }
           }
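The worker handlers themselves are thin wrappers around the GmailSend method from section 11.3. A minimal sketch, assuming temp and th are the fields used above:

C#
void bw1_DoWork(object sender, DoWorkEventArgs e)
{
    // Send the alert mail off the UI thread; report success via e.Result.
    e.Result = GmailSend(temp, th);
}

void bw1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    // Back on the UI thread: reflect the outcome in the status label.
    if (e.Result != null && (bool)e.Result)
        labCommandStat.Text = "Alert Mail Sent";
}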

 

12. Security Service

12.1 Need of Security in IoT Context

Security is another essential, important and critical part of any enterprise or distributed service. If you use a domain with SSL, then all data to and from your browser is encrypted, so SSL provides a solid baseline of security. Services like Gmail, IFTTT and ThingSpeak that we have discussed so far are therefore already secure enough that we need not worry about additional protection for them.

However, custom services are the catch. Besides securing custom services, it is also essential to have a security framework in place with basic encryption-decryption services. While working on this project, I always wanted a simple but effective security layer embedded in the whole framework, because that extends the type of data you can share, the services you can share it with, and how well the services can handle such protected data.

There are basically three mechanisms commonly used for secured communication in internet/distributed systems: 1) symmetric cryptography, 2) public key cryptography and 3) role based encryption.

Public key cryptography is more commonly used and is automated. However, since in our case we want secured communication between services and clients that we control at both ends, we will opt for symmetric cryptography: a mechanism whereby both ends know the key, and data is encrypted and decrypted using that same key.

So the objective of this section is to use AES to secure the commands and responses exchanged between the Serial Client and the UROBI Client.

12.2 AES Encryption and Decryption

The fine-grained details of AES are beyond the scope of this tutorial, and I assume the reader has a basic understanding of the technique. If you want to know more, you can always refer to the Wikipedia article on AES.

Our role in this section is to demonstrate the integration of the encryption system at both ends.

AES is a block cipher algorithm. It takes a key from the user and combines it with an initialization vector to obtain a cipher, and the input data stream is encrypted block by block; I have used a 16 byte block. The catch is that if the stream length is not divisible by the block length, the algorithm pads extra characters at the end. This is fine for something like text data but fails for other kinds of data. I have therefore tweaked the setup so that there is no padding: the last block is encrypted to its own length rather than to a fixed 16 bytes. This tweak lets me use AES for any kind of data, and I show the mechanism for both a text and an image encryption service.

Here is my implementation of the AES encryption code; the basic string encryption and decryption was lifted from the internet and modified. Yes, you need to do a little work on existing code, as the AES result consists of UTF characters and they must be handled carefully.

C#
namespace EncryptStringSample
    {
        public static class StringCipher
        {
            // This constant string is used as a "salt" value for the PasswordDeriveBytes function calls.
            // This size of the IV (in bytes) must = (keysize / 8).  Default keysize is 256, so the IV must be
            // 32 bytes long.  Using a 16 character string here gives us 32 bytes when converted to a byte array.
            private const string initVector = "tu89geji340t89u2";

            // This constant is used to determine the keysize of the encryption algorithm.
            private const int keysize = 256;

            public static string Encrypt(string plainText, string passPhrase)
            {
                byte[] initVectorBytes = Encoding.UTF8.GetBytes(initVector);
                byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);
                PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, null);
                byte[] keyBytes = password.GetBytes(keysize / 8);
                RijndaelManaged symmetricKey = new RijndaelManaged();
                symmetricKey.Mode = CipherMode.CBC;
                ICryptoTransform encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);
                MemoryStream memoryStream = new MemoryStream();
                CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
                cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
                cryptoStream.FlushFinalBlock();
                byte[] cipherTextBytes = memoryStream.ToArray();
                memoryStream.Close();
                cryptoStream.Close();
                return Convert.ToBase64String(cipherTextBytes);
            }
            public static Bitmap EncryptImage(Bitmap bmp, string passPhrase)
            {
                byte[] initVectorBytes = Encoding.UTF8.GetBytes(initVector);

                MemoryStream ms = new MemoryStream();
                bmp.Save(ms, ImageFormat.Bmp);
                var header = ms.ToArray().Take(54).ToArray();
                //Take rest from stream
                var imageArray = ms.ToArray().Skip(54).ToArray();

                byte[] plainTextBytes = imageArray;
                PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, null);
                byte[] keyBytes = password.GetBytes(keysize / 8);
                RijndaelManaged symmetricKey = new RijndaelManaged();
                symmetricKey.Mode = CipherMode.CFB;
                symmetricKey.Padding = PaddingMode.None;
                ICryptoTransform encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);
                MemoryStream memoryStream = new MemoryStream();
                CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
                cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
                cryptoStream.FlushFinalBlock();
                byte[] cipherTextBytes = memoryStream.ToArray();
                memoryStream.Close();
                cryptoStream.Close();

               
                
                var image = Combine(header, cipherTextBytes);
                MemoryStream mstream = new MemoryStream(image);
                Bitmap b = (Bitmap)Bitmap.FromStream(mstream, true, false);
                //Bitmap b = new Bitmap(mstream);
                //return Convert.ToBase64String(cipherTextBytes);
                return b;
            }
            public static byte[] Combine(byte[] first, byte[] second)
            {
                byte[] ret = new byte[first.Length + second.Length];
                Buffer.BlockCopy(first, 0, ret, 0, first.Length);
                Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
                return ret;
            }

            public static Bitmap DecryptImage(Bitmap encBmp, string passPhrase)
            {
                byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);

                MemoryStream ms = new MemoryStream();
                encBmp.Save(ms, ImageFormat.Bmp);
                var header = ms.ToArray().Take(54).ToArray();
                //Take rest from stream
                var imageArray = ms.ToArray().Skip(54).ToArray();

                byte[] cipherTextBytes = imageArray;
                PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, null);
                byte[] keyBytes = password.GetBytes(keysize / 8);
                RijndaelManaged symmetricKey = new RijndaelManaged();
                symmetricKey.Padding = PaddingMode.None;
                symmetricKey.Mode = CipherMode.CFB;
                ICryptoTransform decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);
                MemoryStream memoryStream = new MemoryStream(cipherTextBytes);
                CryptoStream cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read);
                byte[] plainTextBytes = new byte[cipherTextBytes.Length];
                int decryptedByteCount = cryptoStream.Read(plainTextBytes, 0, plainTextBytes.Length);
                memoryStream.Close();
                cryptoStream.Close();

                var image = Combine(header, plainTextBytes);
                MemoryStream mstream = new MemoryStream(image);
                Bitmap b = (Bitmap)Bitmap.FromStream(mstream, true, false);
                //return Convert.ToBase64String(cipherTextBytes);
                return b;

                //return Encoding.UTF8.GetString(plainTextBytes, 0, decryptedByteCount);
            }
            public static string Decrypt(string cipherText, string passPhrase)
            {
                byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
                byte[] cipherTextBytes = Convert.FromBase64String(cipherText);
                PasswordDeriveBytes password = new PasswordDeriveBytes(passPhrase, null);
                byte[] keyBytes = password.GetBytes(keysize / 8);
                RijndaelManaged symmetricKey = new RijndaelManaged();
                symmetricKey.Mode = CipherMode.CBC;
                ICryptoTransform decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);
                MemoryStream memoryStream = new MemoryStream(cipherTextBytes);
                CryptoStream cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read);
                byte[] plainTextBytes = new byte[cipherTextBytes.Length];
                int decryptedByteCount = cryptoStream.Read(plainTextBytes, 0, plainTextBytes.Length);
                memoryStream.Close();
                cryptoStream.Close();
                return Encoding.UTF8.GetString(plainTextBytes, 0, decryptedByteCount);
            }
        }
    }

So, there are four important methods: Encrypt, EncryptImage, Decrypt and DecryptImage. Careful observation will show you that EncryptImage follows the concept of the Encrypt method, whereas DecryptImage follows that of Decrypt.

In image encryption, the image header is first separated from the image data. The data is encrypted and the header is then added back, so the image remains visible and decodable by any image viewing software; only the contents are encrypted. BMP image headers are 54 bytes long, so those 54 bytes are separated out before encryption and decryption.

The initVector is hardcoded; you could also pass the vector as an argument to the methods.

The methods are straightforward and very easy to use: just pass the data and a password and get the encrypted/decrypted result back, as in the usage sketch below.
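A quick usage sketch to verify the round trip before wiring the class into the clients (the passphrase is the same one used in the integration that follows):

C#
// Encrypt a command with a shared passphrase, then decrypt it back.
string secret = EncryptStringSample.StringCipher.Encrypt("FORWARD", "integrated ideas");
string plain = EncryptStringSample.StringCipher.Decrypt(secret, "integrated ideas");
// plain now holds "FORWARD" again.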

Now we will add this class to both the Serial Client and the UROBI Client. We want the commands to be encrypted at the UROBI client and decrypted at the Serial Client.

12.3 Integrating Encryption-Decryption in UROBI Framework

At the UROBI Client side we encrypt the command before sending it, so Encrypt is used in bw_DoWork:

C#
void bw_DoWork(object sender, DoWorkEventArgs e)
{
    try
    {
        //iot.InsertCommand(projId, ip, command, "UROBI Client", DateTime.Now, "EXECUTE");
        // The commented line above is the unsecured command transmission.
        // Below, the AES-encrypted command is sent instead.
        iot.InsertCommand(projId, ip, EncryptStringSample.StringCipher.Encrypt(command, "integrated ideas"), "UROBI Client", DateTime.Now, "EXECUTE");
    }
    catch
    {
        // Transmission errors are silently ignored here; add logging if you need it
    }
}

In the SerialClient, we decrypt the command once it is received and deserialized.

C#
string[] result = iotClient.CommandToExecute(projId, ipAddress).Split(new char[] { '#' });
if (result.Length < 3)
{
    timPollCommands.Enabled = true;
    return;
}
string command = result[0];
// For secured command transfer, the received command arrives encrypted, so decrypt it here
command = EncryptStringSample.StringCipher.Decrypt(command, "integrated ideas").ToUpper();

Note that the pass phrase is hard-coded as "integrated ideas", which you can obviously change! Figure 12.1 shows how the encrypted command is stored in the remote database, which makes it difficult to decode without the pass phrase. Upon receipt, the SerialClient decrypts the command.


Figure 12.1: AES Based Secured Command Exchange over IoT
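Since hard-coding the pass phrase in two executables makes it awkward to change later, one option is to read it from App.config instead. A minimal sketch, assuming a hypothetical UrobiPassPhrase key and a project reference to System.Configuration:

C#
// App.config (same key/value in both the UROBI client and the SerialClient):
// <appSettings>
//   <add key="UrobiPassPhrase" value="integrated ideas" />
// </appSettings>
string passPhrase = System.Configuration.ConfigurationManager.AppSettings["UrobiPassPhrase"];
command = EncryptStringSample.StringCipher.Decrypt(command, passPhrase).ToUpper();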

Point of Interest: Note that this image encryption and decryption service cannot be used for secured image communication in the current setup, because the streaming server changes the image format from Bmp to MJPEG, so it is not possible to decrypt the image at the receiver. The image encryption service works only with Bmp images. However, you can test the services locally and, with some creativity, work them into your own pipeline.

12.4 A Discussion about other Services

Though there are endless possibilities for essential services in an IoT context, I must admit that the tutorial would have been incomplete without talking about a couple of them specifically. First, storage: we have seen how data from devices can be stored with ThingSpeak, and we have also used our custom framework for storing the data. But what if you want to store files? For instance, you want to append data from your devices to a log file? Or you want to log the key frames from a security app? What about exchanging files between the UROBI client and the Serial module? A simple first step, sketched below, is to log locally into a folder that a cloud client syncs.
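Here is a minimal local-logging sketch; the file path and the command variable are hypothetical, and a folder synced by a DropBox or SkyDrive desktop client would push the log to the cloud without any extra API integration:

C#
// Append each executed command to a local log file with a timestamp.
// If C:\UROBI\ sits inside a DropBox/SkyDrive-synced folder, the log
// ends up in cloud storage automatically.
string logLine = string.Format("{0:u}  {1}", DateTime.Now, command);
System.IO.Directory.CreateDirectory(@"C:\UROBI");
System.IO.File.AppendAllText(@"C:\UROBI\device.log", logLine + Environment.NewLine);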

If you want such a workflow fully integrated into the framework, you might well be looking at wiring DropBox or SkyDrive services into the application. I would have loved to cover one of them in this tutorial, but its sheer size discouraged me from extending it with a storage service, which would also have been quite a lengthy subsection. Since SkyDrive integration is not entirely tough, you can try it out as per your workflow requirements.

Another important service is authentication. IoT supports a range of smart objects. You can provide conventional password-based authentication, opt for biometric authentication, or even use smart objects themselves. In a previous tutorial I have already covered one such authentication scheme with RFID objects. Face-recognition-based authentication can be developed by referring to the tutorial on Multi Face Detection and Recognition. GSM is another service which is extensively used for notification and remote command execution. I have personally used this GSM Modem code extensively, without any hassles, for years. The beauty of that code is that you can simply import the form into your project and get it running. GSM modems are available in the $20-30 range and mostly work with RS232. You can refer to the RFID tutorial to learn how to use the modem over USB.

13. Conclusion

This tutorial was an effort to develop a cool robotic control framework for the OWI robotic kit, and more so over IoT. People often feel that IoT is all about connecting devices over the internet and using services; in real-world work, however, there are several constraints and design challenges. In this work we have built a core IoT bridge and client through which a user can control an OWI robotic arm over the cloud (Internet + local LAN).

In order to show how other local services can be seamlessly added to this framework, we have added computer vision techniques for robotic control as well as speech recognition. As several control systems also work at the hardware level, we have used an IR remote control to drive part of the robot.

This project completes my set of tutorials for IoT with Arduino. Treat this work as a bundle of every service you are likely to need for building your own supercool DIY robots. IoT is an ocean of infinite possibilities, so instead of claiming completeness, I have tried to touch the most important aspects of the framework. In my earlier tutorials I have already covered sensor integration, so I leave it to you to attach sensors to the robotic board and push their data over IoT.

Sometimes it is easy to write an interface for, say, controlling a robot through hand movement, or controlling it over the internet, and so on. But when so many services and options are put together, the system is challenged on its performance and its ability to respond in real time to the various services, which may at times conflict.

I hope that this article will be helpful to those of you working in IoT or wanting to take a crack at it.

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)