
How-to: Benefit from Kinect.Toolbox and Coding4Fun on Kinect Programming

19 Nov 2012 · CPOL · 5 min read · 70.7K views
Using the Kinect.Toolbox and Coding4Fun APIs can save time in Kinect programming. In this article we will see how to use these APIs, and also how to modify them for our own needs.

Introduction

Since July 2011, when the beta version of the Kinect SDK was published, the number of programmers, students, and enthusiasts interested in this new technology has kept growing, and so has the number of tools and APIs that make Kinect programming much easier. The most widely used APIs in Kinect programming are Kinect.Toolbox and Coding4Fun, and using these two APIs is what we will see and study in this article.

 


Background

C#, Visual Studio 2010, a Kinect device, and the Kinect SDK downloaded and installed: Download Kinect SDK Beta.

Step 0 : Download and install the APIs

First of all, we should download the APIs with their source code (Kinect.Toolbox and Coding4Fun). To do this, go to these sites and download them: Coding4Fun API and Kinect.Toolbox.

P.S.: make sure you download the source code and the WinForms version of each API.

Step 1 : Getting started with Microsoft Kinect SDK

After downloading and installing the APIs, the DLL files that we will reference in our project are available in the downloaded zip files.

Note: the Kinect SDK DLL is available at: C:\Program Files (x86)\Microsoft Research KinectSDK

  1. Now, go to Visual Studio and create a new WinForms project.
  2. Add references to the APIs that we will use in our project: go to the Solution Explorer in Visual Studio, right-click References, then click Add Reference, and for each DLL browse to its location and select it.
  3. Open Form1.cs in Visual Studio and add these using directives:
  4. C#
    using Microsoft.Research.Kinect.Nui;
    using Kinect.Toolbox;
    using Coding4Fun.Kinect.WinForm;

Step 2 : Detect a hand moving event

The hand is the part of the body most used in Kinect programming, because it is the most interactive part of our body. In this part we will detect the right hand's movement and, using the API, determine the direction of that movement: right to left, left to right, etc.

Firstly, we declare some fields in the Form1 class that will be used later (the original article repeated the Form1_Load code here by mistake; these are the fields that the later code actually relies on, with example values for the scaling bounds):

C#
// The Kinect runtime from the Microsoft.Research.Kinect.Nui namespace
Runtime nui = new Runtime();
// The swipe gesture detector from Kinect.Toolbox
SwipeGestureDetector SwG = new SwipeGestureDetector();
// Bounds used when scaling the hand position to the screen
// (example values; tune them to your needs)
float SkeletonMaxX = 0.60f;
float SkeletonMaxY = 0.40f;

Step 3 : Initialize the Kinect and link events with methods

Now we go to the Load event of Form1, initialize the Kinect, and link the events with methods (the delegate principle in C#).

Add this code to the load event of Form1:

C#
private void Form1_Load(object sender, EventArgs e)
{
    try
    {
        nui.Initialize(RuntimeOptions.UseSkeletalTracking);
    }
    catch (InvalidOperationException)
    {
        MessageBox.Show("Runtime initialization failed. " + 
          "Please make sure Kinect device is plugged in.");
        return;
    }

    #region add events

    nui.SkeletonFrameReady += 
      new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);
    SwG.OnGestureDetected += On_GestureDetected;

    #endregion
}
In the try/catch block of this code we start the Kinect device; if the device is not OK (powered off, or the USB cable not plugged into the computer), the exception is thrown.

In the second block we subscribe to body detection: when a body is detected by the Kinect, the nui_SkeletonFrameReady method responds to the event. The event argument carries the detected body, called a SkeletonFrame in the Kinect SDK.

Finally, we link the gesture event with the method that will respond to the hand's movement.

Step 4 : Respond to the events

To respond to the two events we saw in the last step, we implement the methods linked to them. Add these two methods to the Form1 class:

Method 1: detect the right hand of the body (track one joint of the first skeleton):

C#
void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    SkeletonFrame allSkeletons = e.SkeletonFrame;

    //get the first tracked skeleton
    SkeletonData skeleton = (from s in allSkeletons.Skeletons
                             where s.TrackingState == SkeletonTrackingState.Tracked
                             select s).FirstOrDefault();
    // Test that a skeleton exists and is being tracked
    if (skeleton != null && skeleton.TrackingState == SkeletonTrackingState.Tracked)
    {
        SwG.Add(skeleton.Joints[JointID.HandRight].Position, nui.SkeletonEngine);

        // scale the joint to the primary screen width and height
        Joint scaledRight = skeleton.Joints[JointID.HandRight].ScaleTo(
          (int)SystemInformation.PrimaryMonitorSize.Width, 
          (int)SystemInformation.PrimaryMonitorSize.Height, SkeletonMaxX, SkeletonMaxY);
    }
}

In this code we get the first body detected by the Kinect device (it can track two bodies at once); then we feed the right-hand position to the gesture detector with SwG.Add(...), so that it can recognize the movement of the right hand.
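As an illustration of what the scaled joint can be used for, here is a minimal sketch (my own addition, not part of the original sample) that drives the Windows mouse cursor with the right hand. It assumes it runs inside nui_SkeletonFrameReady, right after scaledRight has been computed:

```csharp
// Hypothetical usage sketch: move the mouse cursor with the scaled
// right-hand joint. ScaleTo has already mapped the joint position to
// screen pixels, so the coordinates can be used directly.
Cursor.Position = new System.Drawing.Point(
    (int)scaledRight.Position.X,
    (int)scaledRight.Position.Y);
```

This is one common way to turn the tracked hand into a pointing device; any other UI reaction (highlighting a control under the hand, drawing a cursor sprite, etc.) would plug in at the same place.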

Method 2: detect the right-hand gesture.

To do this, add this method, which responds to the OnGestureDetected event, to the Form1 class:

C#
public void On_GestureDetected(string gest)
{
    // if the gesture is a swipe to the right, go to the next picture
    if (gest == "SwipeToRight")
        suivant();
    // if the gesture is a swipe to the left, go to the previous picture
    if (gest == "SwipeToLeft")
        precedente();
}

When the Kinect device detects that the tracked joint (the right hand) swipes to the left or to the right, the gesture event is raised and this method responds to it. "SwipeToRight" and "SwipeToLeft" are two strings declared in the Kinect.Toolbox API, so in our method we test whether the right hand moved from left to right or from right to left, and we do something in response to that hand movement.

Tricks

Trick 1 

If we test this code, we will find that the response to hand movement is too jumpy, and we do not get good interactivity between the application and the hand movement. To correct this, we tune the smoothing and jitter parameters of our SkeletonEngine like this:

C#
#region TransformSmooth
//Must set to true and set after call to Initialize
nui.SkeletonEngine.TransformSmooth = true;
//Use to transform and reduce jitter
var parameters = new TransformSmoothParameters
{
    Smoothing = 0.75f,
    Correction = 0.07f,
    Prediction = 0.08f,
    JitterRadius = 0.08f,
    MaxDeviationRadius = 0.07f
};
nui.SkeletonEngine.SmoothParameters = parameters;
#endregion
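To make the placement concrete, here is a sketch of how Form1_Load looks once this smoothing block is in place (the try/catch from Step 3 is omitted for brevity; nui and SwG are assumed to be fields on Form1, as in the rest of the article):

```csharp
// Sketch: TransformSmooth must be set after nui.Initialize,
// so the smoothing block sits between initialization and event wiring.
private void Form1_Load(object sender, EventArgs e)
{
    nui.Initialize(RuntimeOptions.UseSkeletalTracking);

    nui.SkeletonEngine.TransformSmooth = true;
    nui.SkeletonEngine.SmoothParameters = new TransformSmoothParameters
    {
        Smoothing = 0.75f,
        Correction = 0.07f,
        Prediction = 0.08f,
        JitterRadius = 0.08f,
        MaxDeviationRadius = 0.07f
    };

    nui.SkeletonFrameReady +=
        new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);
    SwG.OnGestureDetected += On_GestureDetected;
}
```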

Trick 2 

In Kinect.Toolbox we have seen that there are only two gesture types, SwipeToRight and SwipeToLeft. Other gesture types can be implemented in this API by modifying its source code. In this trick we will see how to add a BackToFront gesture and use it in our code, for example to select something on the screen.

First of all, go to the Kinect.Toolbox source code and, in the SwipeGestureDetector.cs class, modify the LookForGesture() method by adding these lines of code:

C#
void LookForGesture()
{
    // From left to right
    if (ScanPositions((p1, p2) => Math.Abs(p2.Y - p1.Y) < 0.20f,
       (p1, p2) => p2.X - p1.X > -0.01f, (p1, p2) =>
       Math.Abs(p2.X - p1.X) > 0.2f, 250, 2500))
    {
        RaiseGestureDetected("SwipeToRight");
        return;
    }

    // From right to left
    if (ScanPositions((p1, p2) => Math.Abs(p2.Y - p1.Y) < 0.20f,
       (p1, p2) => p2.X - p1.X < 0.01f, (p1, p2) =>
       Math.Abs(p2.X - p1.X) > 0.2f, 250, 2500))
    {
        RaiseGestureDetected("SwipeToLeft");
        return;
    }

    // From back to front
    if (ScanPositions((p1, p2) => Math.Abs(p2.Y - p1.Y) < 0.15f,
       (p1, p2) => p2.Z - p1.Z < 0.01f, (p1, p2) =>
       Math.Abs(p2.Z - p1.Z) > 0.2f, 250, 2500))
    {
        RaiseGestureDetected("BackToFront");
        return;
    }

    // From front to back
    if (ScanPositions((p1, p2) => Math.Abs(p2.Y - p1.Y) < 0.15f,
       (p1, p2) => p2.Z - p1.Z > -0.04f, (p1, p2) =>
       Math.Abs(p2.Z - p1.Z) > 0.4f, 250, 2500))
    {
        RaiseGestureDetected("FrontToBack");
        return;
    }
}

After modifying this method, compile the solution to get the new DLL. This modified DLL can be used instead of the original one, and now we can detect the back-to-front gesture in our application.

So the new On_GestureDetected() method will look like this:

C#
public void On_GestureDetected(string gest)
{
    // if the gesture is a swipe to the right, go to the next picture
    if (gest == "SwipeToRight")
        suivant(); // e.g.: go to the next item
    // if the gesture is a swipe to the left, go to the previous picture
    if (gest == "SwipeToLeft")
        precedente(); // e.g.: go to the previous item
    // if the gesture is a push from back to front
    if (gest == "BackToFront")
        ClickItem(); // e.g.: click the displayed item
}

 

I hope that most of you now have a first idea of how to develop a Kinect application using these two powerful APIs, and can write interactive applications that could help and improve the lives of disabled or illiterate people, etc. I'm waiting for your feedback and comments.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)