Introduction
In this short tip, I am going to show how to create a Virtual Wall using the Kinect sensor and the Kinect Toolbox. The Kinect Toolbox is a framework for developing with the Kinect for Windows SDK (1.7).
The Virtual Wall is a simple but efficient algorithm that consists of defining a spatial reference (a known distance) in order to remove the image background. It is usually used to separate specific parts of the user’s body from the rest. In this example, we are going to separate the user’s hands from the rest of the body. To do that, we have to define a threshold attribute t and keep only the pixels in front of the wall:
DM'(d) = d if d < t, 0 otherwise, for every pixel d in DM
where DM is the depth map, d is the pixel’s value in the map and t is the chosen threshold that defines the position of the Virtual Wall relative to the Kinect depth camera.
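Just to make the rule concrete before we look at the toolbox code, here is a minimal, self-contained sketch of it. The method name ApplyVirtualWall and the plain int depth array are my own illustration, not part of the Kinect Toolbox; the real implementation is shown in ConvertDepthFrame below.
void ApplyVirtualWall(int[] depthMap, int threshold)
{
    // Keep pixels in front of the wall (d < t), zero out everything behind it.
    for (int i = 0; i < depthMap.Length; i++)
    {
        if (depthMap[i] >= threshold)
        {
            depthMap[i] = 0; // treated as background and removed
        }
    }
}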
Prerequisites
- Visual Studio 2012
- .NET 4.5
- Kinect.Toolbox 1.3
- Kinect for Windows SDK (1.7)
Code
In the Kinect.Toolbox, we have the class DepthStreamManager. This class processes each depth frame coming from the Kinect camera. Two of its methods interest us here. First, the method Update, which receives a DepthImageFrame as a parameter. Second, the method ConvertDepthFrame, which sets the value of every pixel of the output image. In the latter method, we have the variable realDepth, which holds the distance between the camera and the user for each pixel. What we are going to do is create a variable that will be our threshold: if the depth value of a pixel is less than the threshold, we draw it on the screen; otherwise, we do not.
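Before changing anything, it helps to see how the depth frames reach that class. The sketch below is only an assumed, typical wiring (the variable names are mine; the DepthFrameReady event and OpenDepthImageFrame call come from the Kinect for Windows SDK, and Update(DepthImageFrame) is the method described above):
// using Microsoft.Kinect;
// using Kinect.Toolbox;

KinectSensor kinectSensor = KinectSensor.KinectSensors[0];
DepthStreamManager depthManager = new DepthStreamManager();

kinectSensor.DepthStream.Enable();
kinectSensor.DepthFrameReady += (sender, e) =>
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame != null)
        {
            // Update processes the frame; this is where ConvertDepthFrame runs.
            depthManager.Update(frame);
        }
    }
};
kinectSensor.Start();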
Here is the original Kinect.Toolbox code:
void ConvertDepthFrame(short[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        // The lower 3 bits hold the player index, the remaining bits the depth in millimeters.
        int user = depthFrame16[i16] & 0x07;
        int realDepth = (depthFrame16[i16] >> 3);

        // Map the depth to a grayscale intensity (closer = brighter).
        byte intensity = (byte)(255 - (255 * realDepth / 0x1fff));

        // Start with a black, fully opaque pixel.
        depthFrame32[i32] = 0;
        depthFrame32[i32 + 1] = 0;
        depthFrame32[i32 + 2] = 0;
        depthFrame32[i32 + 3] = 255;

        // Color the pixel according to the player index.
        switch (user)
        {
            case 0:
                depthFrame32[i32] = (byte)(intensity / 8);
                depthFrame32[i32 + 1] = (byte)(intensity / 8);
                depthFrame32[i32 + 2] = (byte)(intensity / 8);
                break;
            case 1:
                depthFrame32[i32] = intensity;
                break;
            case 2:
                depthFrame32[i32 + 1] = intensity;
                break;
            case 3:
                depthFrame32[i32 + 2] = intensity;
                break;
            case 4:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 1] = intensity;
                break;
            case 5:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
            case 6:
                depthFrame32[i32 + 1] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
            case 7:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 1] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
        }
    }
}
And here we have the same method, but now with our threshold:
void ConvertDepthFrame(short[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        int user = depthFrame16[i16] & 0x07;
        int realDepth = (depthFrame16[i16] >> 3);
        byte intensity = (byte)(255 - (255 * realDepth / 0x1fff));

        // Virtual Wall: only pixels closer than the threshold are drawn.
        if (realDepth < this.GThreshold)
        {
            depthFrame32[i32] = 0;
            depthFrame32[i32 + 1] = 0;
            depthFrame32[i32 + 2] = 0;
            depthFrame32[i32 + 3] = 255;

            switch (user)
            {
                case 0:
                    depthFrame32[i32] = (byte)(intensity / 8);
                    depthFrame32[i32 + 1] = (byte)(intensity / 8);
                    depthFrame32[i32 + 2] = (byte)(intensity / 8);
                    break;
                case 1:
                    depthFrame32[i32] = intensity;
                    break;
                case 2:
                    depthFrame32[i32 + 1] = intensity;
                    break;
                case 3:
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 4:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 1] = intensity;
                    break;
                case 5:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 6:
                    depthFrame32[i32 + 1] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 7:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 1] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
            }
        }
        else
        {
            // Pixel is behind the Virtual Wall: paint it black (background removed).
            depthFrame32[i32] = 0;
            depthFrame32[i32 + 1] = 0;
            depthFrame32[i32 + 2] = 0;
            depthFrame32[i32 + 3] = 255;
        }
    }
}
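The code above relies on GThreshold, whose declaration is not shown in this tip. Below is a minimal sketch of how it might be declared and set; the property form and the 1500 value are my own assumptions, and since realDepth comes from the depth stream, the threshold must use the same units (millimeters for Kinect depth data).
// Assumed declaration inside DepthStreamManager:
// the Virtual Wall distance, in the same units as realDepth (millimeters).
public int GThreshold { get; set; }

// Example usage: place the Virtual Wall about 1.5 m in front of the camera,
// so only what crosses that distance (e.g., the user's hands) gets drawn.
depthManager.GThreshold = 1500;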
By doing so, we are able to keep only the body parts that we want. Here is the result:
Original Depth Image | Depth Image With Virtual Wall
Conclusion
In this tip, we used the Kinect.Toolbox in order to create a Virtual Wall with the Kinect depth camera. Personally, I have used this method to create a sign recognition application, but it has been used in many other areas. This is my first CodeProject post and I hope it helps someone.