
Drawing using Kinect v2

15 Mar 2016 · CPOL · 4 min read

A few days ago, I received an interesting comment from one of the fans of this blog. The fellow developer was asking me about a good way to draw on top of a virtual canvas using his hands. So, today, I’ll show you how you can easily use a Kinect sensor to draw in the air! The demo video below shows what we are about to build.

Prerequisites

To follow along, you’ll need a Kinect for Xbox One (v2) sensor with its USB 3.0 adapter, the Kinect for Windows SDK 2.0, and Visual Studio 2013 or later.

Video & Source Code

As usual, I’m providing you with the complete source code, as well as a demo video.

The XAML User Interface

First things first. Launch Visual Studio and create a new WPF or Windows Store project. Select .NET Framework 4.5 or higher.

If using WPF, add a reference to Microsoft.Kinect.dll.

If using Windows Store (WinRT), add a reference to WindowsPreview.Kinect.dll.

Navigate to the XAML code of your main page. Nothing fancy – we just want to display the feed of the Color camera and the drawing brush. We’ll use an Image component for the RGB stream and a Canvas component for the drawing brush. To keep everything scaled properly, I’ve placed both the Image and the Canvas inside a Viewbox element. The Viewbox will automatically scale its contents to the size of your screen.

XML
<Viewbox>
    <Grid Width="1920" Height="1080">
        <Image Name="camera" />
        <Canvas Name="canvas">
            <Polyline Name="trail" Stroke="Red" StrokeThickness="15">
                <Polyline.Effect>
                    <BlurEffect Radius="20" />
                </Polyline.Effect>
            </Polyline>
        </Canvas>
    </Grid>
</Viewbox>

Inside the Canvas, I placed a Polyline control. A Polyline is a set of points in the 2D space, connected with a line. The Polyline will be updated with new points on each frame.

Hint: Some developers use multiple Ellipse or Line controls instead. Even though that approach would definitely work, the Polyline control is much more efficient when dealing with big volumes of data. We’ll be adding 15 to 30 points per second, so a single XAML control performs much better than hundreds of separate controls in the same Canvas.

The C# Code

Our user interface is ready to accept the drawing functionality. Let’s get started. Navigate to your .xaml.cs file and add the C# code below.

Step 1 – Declare the Required Kinect Members

First things first! We need to specify the objects we’ll work with. If you are following this blog, you already know what you need:

C#
// Kinect Sensor reference
private KinectSensor _sensor = null;

// Reads Color frame data
private ColorFrameReader _colorReader = null;

// Reads Body data
private BodyFrameReader _bodyReader = null;

// List of the detected bodies
private IList<Body> _bodies = null;

// Frame width (1920)
private int _width = 0;

// Frame height (1080)
private int _height = 0;

// Color pixel values (bytes)
private byte[] _pixels = null;

// Display bitmap
private WriteableBitmap _bitmap = null;

Step 2 – Initialize Kinect

After you have declared the required members, you need to initialize the Kinect sensor and subscribe to its frame events. You can do this in the constructor of your page or window.

C#
_sensor = KinectSensor.GetDefault();

if (_sensor != null)
{
    _sensor.Open();

    _width = _sensor.ColorFrameSource.FrameDescription.Width;
    _height = _sensor.ColorFrameSource.FrameDescription.Height;

    _colorReader = _sensor.ColorFrameSource.OpenReader();
    _colorReader.FrameArrived += ColorReader_FrameArrived;

    _bodyReader = _sensor.BodyFrameSource.OpenReader();
    _bodyReader.FrameArrived += BodyReader_FrameArrived;

    _pixels = new byte[_width * _height * 4];
    _bitmap = new WriteableBitmap(_width, _height, 96.0, 96.0, PixelFormats.Bgra32, null);

    _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

    camera.Source = _bitmap;
}
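
One detail the snippet above doesn’t cover is cleanup. As a sketch (the exact hook depends on whether you’re in WPF or WinRT – the window’s Closing event or the page’s OnNavigatedFrom, respectively), dispose the readers and close the sensor when you’re done, so the device is freed for other apps. The method name is my own, not part of the SDK:

C#
// Illustrative cleanup method; call it when the window/page closes.
private void CleanupKinect()
{
    if (_colorReader != null)
    {
        _colorReader.Dispose();
        _colorReader = null;
    }

    if (_bodyReader != null)
    {
        _bodyReader.Dispose();
        _bodyReader = null;
    }

    if (_sensor != null)
    {
        _sensor.Close();
        _sensor = null;
    }
}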

Step 3 – Display the RGB Color Stream

The ColorReader_FrameArrived event handler is called whenever Kinect has a new RGB frame available. All we need to do is grab the frame and copy it into a WriteableBitmap. The following source code acquires the raw pixel values in BGRA format, copies them into our byte array, and, finally, updates the displayable bitmap (the locking code below is the WPF version):

C#
private void ColorReader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.CopyConvertedFrameDataToArray(_pixels, ColorImageFormat.Bgra);

            _bitmap.Lock();
            Marshal.Copy(_pixels, 0, _bitmap.BackBuffer, _pixels.Length);
            _bitmap.AddDirtyRect(new Int32Rect(0, 0, _width, _height));
            _bitmap.Unlock();
        }
    }
}

Step 4 – Detect the Hand

Now, let’s move to our BodyReader_FrameArrived event handler. This is where we’ll detect the active body and its joints. We only need one joint: the right (or left) hand. We detect the desired hand by using the JointType enumeration.

Hint: For educational purposes, I am only showing you how to get the hand joint. In a real-world project, you could check a number of additional parameters, such as the distance between the hand and the body, or the distance between the body and the sensor. You could even have custom gestures to start and stop the painting process.

C#
private void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);

            Body body = _bodies.FirstOrDefault(b => b.IsTracked);

            if (body != null)
            {
                Joint handRight = body.Joints[JointType.HandRight];
            }
        }
    }
}
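
Building on the hint above, here is one possible way to start and stop the painting process with a gesture: the SDK reports a HandState for each hand, so you could only add points while the hand is closed. This is an illustrative sketch, not part of the final source code:

C#
// Illustrative: paint only while the right hand forms a fist.
Joint handRight = body.Joints[JointType.HandRight];

if (handRight.TrackingState != TrackingState.NotTracked)
{
    // HandState can be Open, Closed, Lasso, NotTracked, or Unknown.
    bool isDrawing = body.HandRightState == HandState.Closed;

    // Map the joint and add the point only while isDrawing is true
    // (see Steps 5 and 6).
}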

Step 5 – Coordinate Mapping

As you know, Kinect provides us with the position of the hand in the 3D space (X, Y, and Z values). To detect where the hand is pointing in the 2D space, we’ll need to convert the 3D coordinates to 2D coordinates. The 3D coordinates are measured in meters. The 2D coordinates are measured in, well, pixels. So, how could we convert meters to pixels? The SDK includes a powerful utility, called CoordinateMapper. Using CoordinateMapper, we can find the position of the hand in the 2D space!

You can read more about coordinate mapping in this blog post. Alternatively, you can use Vitruvius and have automatic coordinate mapping.

The CoordinateMapper will convert the position of the hand to a ColorSpacePoint with X and Y values. The X and Y values of the ColorSpacePoint are limited to our 1920×1080 canvas.

C#
CameraSpacePoint handRightPosition = handRight.Position;
ColorSpacePoint handRightPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(handRightPosition);

float x = handRightPoint.X;
float y = handRightPoint.Y;

Step 6 – Draw!

Finally, we’ll update the point collection of our Polyline component by adding the new color point.

C#
if (!float.IsInfinity(x) && !float.IsInfinity(y))
{
    // Draw!
    trail.Points.Add(new Point { X = x, Y = y });
}

Compile and run the code. You’ll notice that I have also applied a blur effect to make the trail smoother.
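
One thing to keep in mind: the trail grows without bound as you draw. If you plan to paint for more than a few minutes, it’s worth capping the Polyline’s point collection. This is a sketch of one way to do it; the threshold is an arbitrary value of my own choosing:

C#
// Sketch: keep the trail at a manageable size.
const int MaxTrailPoints = 500; // arbitrary threshold

if (trail.Points.Count > MaxTrailPoints)
{
    trail.Points.RemoveAt(0); // drop the oldest point
}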

Conclusion

So, this is it, folks! Hope you enjoyed this tutorial. Feel free to extend the source code, add your own effects, constraints, and functionality, and use it in your own projects.

‘Till the next time, keep Kinecting!

PS: Vitruvius

If you enjoyed this article, consider checking out Vitruvius. Vitruvius is a set of powerful Kinect extensions that will help you build stunning Kinect apps in minutes. Vitruvius includes avateering, HD Face, background removal, angle calculations, and more. Check it now.

The post Drawing using Kinect v2 appeared first on Vangos Pterneas.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)