
Using Windows 8 Interaction Context for Processing Touch Input in a .NET WinForms App

16 Jun 2013
Introduces a managed wrapper for the Interaction Context API and demonstrates a consistent way of processing touch input in a managed WinForms desktop app.

Introduction

Interaction Context is a new Windows 8 API designed for developers who are building UI frameworks that provide a consistent touch-optimized user experience across desktop applications. This article introduces a managed wrapper for this API and an example of using Interaction Context in a WinForms application. 

Background

If you’re planning to support touch input in a desktop application, you have a few options:

Processing the WM_GESTURE message is the simplest option. It is compatible with Windows 7 and supports useful gestures such as panning, zooming, and rotation. While panning, you can limit perpendicular movement to the primary direction (use the gutter) and apply inertia to slow the content smoothly when the pan gesture stops. The two-finger tap and press-and-tap gestures are also supported. There is no dedicated Tap gesture, however: you have to wait for the WM_LBUTTONUP message and distinguish between mouse and touch input using the GetMessageExtraInfo() function from the Win32 API. The main problem with the WM_GESTURE message is that it doesn't support multi-touch interactions. For example, you can't start moving a tile with one hand and scroll the list of tiles with the other, as on the Windows 8 Start Screen. You also can't perform a complex manipulation, such as panning, zooming, and rotating at the same time.
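
For illustration, the usual way to tell a touch-generated mouse message from a real mouse click is to test the value returned by GetMessageExtraInfo() against the MI_WP_SIGNATURE mask documented on MSDN. A minimal sketch (not part of the attached sample) might look like this:

using System;
using System.Runtime.InteropServices;

internal static class TouchMouseHelper
{
    [DllImport("user32.dll")]
    private static extern IntPtr GetMessageExtraInfo();

    // Messages synthesized from touch or pen carry this signature
    // in the upper 24 bits of the extra message info.
    private const uint MI_WP_SIGNATURE = 0xFF515700;
    private const uint SIGNATURE_MASK = 0xFFFFFF00;

    // Call while handling WM_LBUTTONDOWN/WM_LBUTTONUP to find out whether
    // the "mouse" message actually originated from a touch or pen device.
    internal static bool IsTouchOrPenMessage()
    {
        uint extra = unchecked((uint)GetMessageExtraInfo().ToInt64());
        return (extra & SIGNATURE_MASK) == MI_WP_SIGNATURE;
    }
}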

The WM_TOUCH message is a more advanced option. It is also compatible with Windows 7 and lets you work with multiple touch inputs at once. It doesn't detect any high-level gestures, however; you just obtain an array of (x, y) points and flags indicating whether each touch point was added, moved, or removed. That may be enough for some applications, such as a piano simulator. For other applications it isn't, and you want to detect touch gestures. If you're programming in C++, you can use the standard IManipulationProcessor and IInertiaProcessor COM interfaces. They are quite powerful but require you to implement event sinks. I'm not aware of anyone who has tried to use those interfaces from managed code; it is probably easier to use the managed counterparts, such as the classes in the System.Windows.Input.Manipulations namespace in .NET 4. In any case, processing WM_TOUCH messages is tricky and error-prone. They are also followed by fake mouse messages that you need to distinguish from real mouse messages.
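
As an aside, the .NET 4 manipulation classes mentioned above can consume raw touch points roughly as follows. This is only a sketch and is not part of the attached sample; the ProcessTouchPoints helper is hypothetical and would be fed the (id, x, y) points decoded from WM_TOUCH:

using System;
using System.Diagnostics;
using System.Windows.Input.Manipulations;   // .NET 4, System.Windows.Input.Manipulations.dll

internal sealed class TouchManipulationSketch
{
    private readonly ManipulationProcessor2D _processor =
        new ManipulationProcessor2D(Manipulations2D.Translate |
                                    Manipulations2D.Scale |
                                    Manipulations2D.Rotate);

    public TouchManipulationSketch()
    {
        _processor.Delta += (sender, e) =>
        {
            // e.Delta carries the incremental translation, scale, and rotation
            // computed from the raw touch points.
            Debug.WriteLine("dx={0}, dy={1}, rotation={2}",
                e.Delta.TranslationX, e.Delta.TranslationY, e.Delta.Rotation);
        };
    }

    // Hypothetical helper: pass the touch points extracted from a WM_TOUCH message.
    public void ProcessTouchPoints(Manipulator2D[] manipulators)
    {
        // The processor expects timestamps in 100-nanosecond ticks.
        _processor.ProcessManipulators(DateTime.UtcNow.Ticks, manipulators);
    }
}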

Windows 8 introduces the concept of Pointer Input Messages. These messages come from various input devices, such as touch, pen, or mouse (after calling the EnableMouseInPointer function). They can also be "injected" programmatically with the new InjectTouchInput function. The WM_POINTER* messages are somewhat similar to WM_TOUCH but give a lot more information about user input: you can easily distinguish mouse input from touch input, the special WM_TOUCHHITTESTING message helps determine the most probable touch target, and so on. Of course, multi-touch input is supported. But in general, as with WM_TOUCH, you only get the (x, y) coordinates of the touch point (or mouse click) and information about whether the pointer was added, moved, or removed. WM_POINTER* messages are low-level and don't tell us anything about touch gestures.
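
Routing mouse input through the same pointer pipeline takes a single call; a minimal P/Invoke declaration (the sample keeps its own declarations in Win32.cs, so this exact class is illustrative) could look like this:

using System.Runtime.InteropServices;

internal static class PointerInput
{
    // Windows 8 or later. After this call the mouse also generates WM_POINTER* messages.
    [DllImport("user32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool EnableMouseInPointer(bool fEnable);
}

// Typically called once at startup, e.g. in the Form's constructor:
// PointerInput.EnableMouseInPointer(true);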

While browsing MSDN, I discovered a section about a new API that enables Windows 8 applications to support multiple concurrent interactions by providing gesture detection and manipulation processing. This is the Interaction Context. Its documentation is still rather sparse and foggy; I was unable to use it until I found an excellent article and sample on Intel's web site.

Interaction Context is an API that works in conjunction with WM_POINTER* messages. It accepts low-level data from pointers and then executes a callback function, passing high-level information about the touch interactions as an argument. Using the Interaction Context functions is much easier than handling the IManipulationProcessor and IInertiaProcessor interfaces in the WM_TOUCH case. You don't need to bother with COM and event sinks anymore. It is even possible to create a managed wrapper for this API and use it from C# code.

Flexibility and simplicity are the main advantages of the new touch API. It separates touch targets from windows. For example, in the InteractionContextTest sample, you can see a few geometric figures. They are not windows, just objects that can draw themselves on the Form. Each of these figures, and the background, creates and configures its own interaction context and can therefore behave differently when you move figures with the mouse or a touch device. These contexts are independent of each other, so you can, for example, move two figures with two hands at the same time. You can even move/zoom/rotate one figure with the touch device while moving another figure with the mouse. That was almost impossible with the WM_TOUCH message.

Visual feedback is also more consistent with the Interaction Context. For example, if you enable the "press & hold" behavior for some context (it simulates a right mouse click), the first pointer added to this context will display a square on the screen after a short delay. Pointers that don't belong to this context will not display a square, even though all the pointers are passed to the same window. This means the interaction context can affect the associated pointers, not just receive data from them.

Creating Interaction Context

In the sample, I moved all interop code into the Win32.cs file. There you can find definitions of the many enumerations and structures that are passed to and from the API functions. The functions themselves are declared there as DllImports. See MSDN for more information about Pointer Input Messages and the Interaction Context.
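
The Interaction Context functions are exported by ninput.dll. The declarations in Win32.cs look roughly like the sketch below; the class is deliberately named Win32Sketch here because it is not a copy of the actual file, the HRESULT return values are shown as plain ints, and the structure definitions (POINTER_INFO, INTERACTION_CONTEXT_CONFIGURATION, INTERACTION_CONTEXT_OUTPUT) are omitted:

using System;
using System.Runtime.InteropServices;

internal static class Win32Sketch
{
    // Native callback: the interaction context calls it with the client data
    // passed at registration time and a pointer to the output structure.
    internal delegate void INTERACTION_CONTEXT_OUTPUT_CALLBACK(
        IntPtr clientData, ref INTERACTION_CONTEXT_OUTPUT output);

    [DllImport("ninput.dll")]
    internal static extern int CreateInteractionContext(out IntPtr interactionContext);

    [DllImport("ninput.dll")]
    internal static extern int DestroyInteractionContext(IntPtr interactionContext);

    [DllImport("ninput.dll")]
    internal static extern int RegisterOutputCallbackInteractionContext(
        IntPtr interactionContext,
        INTERACTION_CONTEXT_OUTPUT_CALLBACK outputCallback,
        IntPtr clientData);

    [DllImport("ninput.dll")]
    internal static extern int SetInteractionConfigurationInteractionContext(
        IntPtr interactionContext, int configurationCount,
        [In] INTERACTION_CONTEXT_CONFIGURATION[] configuration);

    [DllImport("ninput.dll")]
    internal static extern int AddPointerInteractionContext(
        IntPtr interactionContext, int pointerId);

    [DllImport("ninput.dll")]
    internal static extern int RemovePointerInteractionContext(
        IntPtr interactionContext, int pointerId);

    [DllImport("ninput.dll")]
    internal static extern int ProcessPointerFramesInteractionContext(
        IntPtr interactionContext, int entriesCount, int pointerCount,
        [In] POINTER_INFO[] pointerInfo);

    [DllImport("ninput.dll")]
    internal static extern int ProcessInertiaInteractionContext(IntPtr interactionContext);

    [DllImport("ninput.dll")]
    internal static extern int StopInteractionContext(IntPtr interactionContext);
}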

The object that needs to be associated with an interaction context must implement the IInteractionHandler interface. It is very simple:

internal interface IInteractionHandler
{
    void ProcessInteractionEvent(InteractionOutput output);
}

The InteractionOutput class contains data passed from the interaction context to a callback function: 

internal class InteractionOutput
{
    internal Win32.INTERACTION_CONTEXT_OUTPUT Data;

    internal IntPtr InteractionContext { get; }

    internal bool IsBegin();
    internal bool IsInertia();
    internal bool IsEnd();

    // and a few infrastructure members
}
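
These helpers are simply checks of the interaction flags carried in the output structure. Assuming the wrapper exposes the INTERACTION_FLAG_* values from interactioncontext.h and a Flags field on the structure (both names are guesses rather than code taken from the sample), they can be implemented along these lines:

// Values as defined in interactioncontext.h.
[Flags]
internal enum INTERACTION_FLAGS
{
    NONE    = 0x00000000,
    BEGIN   = 0x00000001,
    END     = 0x00000002,
    CANCEL  = 0x00000004,
    INERTIA = 0x00000008
}

// Inside InteractionOutput; the Data.Flags field name is assumed.
internal bool IsBegin()   { return (Data.Flags & INTERACTION_FLAGS.BEGIN) != 0; }
internal bool IsInertia() { return (Data.Flags & INTERACTION_FLAGS.INERTIA) != 0; }
internal bool IsEnd()     { return (Data.Flags & INTERACTION_FLAGS.END) != 0; }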

The implementation of the IInteractionHandler interface can be found in the sample in the BaseHandler class. Both the Figure and Background classes are derived from BaseHandler. The callback function is shared between all interaction contexts; it is part of the infrastructure in the Win32 static class. There you can see a few tricks that make it thread-safe and allow processing inertia for all interaction contexts with a single timer.

To create an interaction context, we call the Win32.CreateInteractionContext method. It accepts the IInteractionHandler interface and the current SynchronizationContext as parameters:

IntPtr _context;

public BaseHandler()
{
    _context = Win32.CreateInteractionContext(this, SynchronizationContext.Current);
}
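
Internally, Win32.CreateInteractionContext presumably creates the native context, registers the shared static callback, and remembers which handler and SynchronizationContext own that context. The fragment below only illustrates that idea and is not the sample's code: the HandlerEntry class, the dictionary, and the InteractionOutput constructor are assumptions, while the native calls are the ones sketched earlier:

// Illustrative fragment of the Win32 static class (requires System,
// System.Collections.Generic, and System.Threading).
private class HandlerEntry
{
    public IInteractionHandler Handler;
    public SynchronizationContext SyncContext;
}

private static readonly Dictionary<IntPtr, HandlerEntry> _handlers =
    new Dictionary<IntPtr, HandlerEntry>();

// Kept in a static field so the delegate is not garbage-collected
// while native code still holds a pointer to it.
private static readonly INTERACTION_CONTEXT_OUTPUT_CALLBACK _callback = OnInteractionOutput;

internal static IntPtr CreateInteractionContext(
    IInteractionHandler handler, SynchronizationContext syncContext)
{
    IntPtr context;
    CreateInteractionContext(out context);                                  // native call
    RegisterOutputCallbackInteractionContext(context, _callback, context);  // context handle as client data
    lock (_handlers)
    {
        _handlers[context] = new HandlerEntry { Handler = handler, SyncContext = syncContext };
    }
    return context;
}

private static void OnInteractionOutput(IntPtr clientData, ref INTERACTION_CONTEXT_OUTPUT data)
{
    HandlerEntry entry;
    lock (_handlers)
    {
        if (!_handlers.TryGetValue(clientData, out entry))
        {
            return;
        }
    }
    // The output structure is only valid during the callback, so it is copied
    // into a managed InteractionOutput object (constructor assumed) and the
    // call is marshaled to the UI thread of the owning handler.
    InteractionOutput output = new InteractionOutput(clientData, data);
    entry.SyncContext.Post(state => entry.Handler.ProcessInteractionEvent(output), null);
}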

In the object’s Dispose method, we must call the Win32.DisposeInteractionContext method:

public void Dispose()
{
    Win32.DisposeInteractionContext(_context);
    _context = IntPtr.Zero;
}

The interaction context has to be configured before use. For example, the constructor of the Background class sets up its configuration with the following code:

Win32.INTERACTION_CONTEXT_CONFIGURATION[] cfg = new Win32.INTERACTION_CONTEXT_CONFIGURATION[]
{
    new Win32.INTERACTION_CONTEXT_CONFIGURATION(Win32.INTERACTION.TAP,
        Win32.INTERACTION_CONFIGURATION_FLAGS.TAP |
        Win32.INTERACTION_CONFIGURATION_FLAGS.TAP_DOUBLE),

    new Win32.INTERACTION_CONTEXT_CONFIGURATION(Win32.INTERACTION.SECONDARY_TAP,
        Win32.INTERACTION_CONFIGURATION_FLAGS.SECONDARY_TAP),

    new Win32.INTERACTION_CONTEXT_CONFIGURATION(Win32.INTERACTION.HOLD,
        Win32.INTERACTION_CONFIGURATION_FLAGS.HOLD)
};
Win32.SetInteractionConfigurationInteractionContext(Context, cfg.Length, cfg);

The background doesn't support any interactions except tap, double tap, and secondary tap (right-click). We also enable the hold interaction to give better feedback for the secondary tap interaction.

In the constructor of the Figure class, we enable various manipulations depending on the parameters passed to the constructor:

Win32.INTERACTION_CONTEXT_CONFIGURATION[] cfg = new Win32.INTERACTION_CONTEXT_CONFIGURATION[]
{
    new Win32.INTERACTION_CONTEXT_CONFIGURATION(Win32.INTERACTION.MANIPULATION,
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_SCALING |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_ROTATION |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_TRANSLATION_INERTIA |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_ROTATION_INERTIA |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_SCALING_INERTIA),

    new Win32.INTERACTION_CONTEXT_CONFIGURATION(Win32.INTERACTION.TAP,
        Win32.INTERACTION_CONFIGURATION_FLAGS.TAP |
        Win32.INTERACTION_CONFIGURATION_FLAGS.TAP_DOUBLE)
};
if (!pivot)
{
    cfg[0].Enable |=
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_TRANSLATION_X |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_TRANSLATION_Y;
}
if (rails)
{
    cfg[0].Enable |=
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_RAILS_X |
        Win32.INTERACTION_CONFIGURATION_FLAGS.MANIPULATION_RAILS_Y;
}
Win32.SetInteractionConfigurationInteractionContext(Context, cfg.Length, cfg);

Configuration parameters can be assigned at any time, but the changes apply only to new interactions.

Processing Pointer Messages

In the Form’s WndProc method, we should handle these messages:

  • WM_POINTERDOWN
  • WM_POINTERUP
  • WM_POINTERUPDATE
  • WM_POINTERCAPTURECHANGED

To get the pointer ID and its POINTER_INFO structure from the window message parameters, we use the following code:

int pointerID = Win32.GET_POINTER_ID(m.WParam);
Win32.POINTER_INFO pi = new Win32.POINTER_INFO();
if (!Win32.GetPointerInfo(pointerID, ref pi))
{
    Win32.CheckLastError();
}

The WM_POINTERDOWN message occurs when the user touches the screen or presses a mouse button and, hence, a new pointer is added. When processing WM_POINTERDOWN, we need to perform hit-testing to determine which figure contains the new pointer, and then add the pointer to that figure's interaction context using the AddPointerInteractionContext function. All subsequent messages from this pointer can be passed to the same context without hit-testing, because the pointer ID can easily be found in the figure's ActivePointers hash-set.

The following is a simplified version of the code that processes the WM_POINTERDOWN message:

case Win32.WM_POINTERDOWN:
    {
        Point pt = PointToClient(pi.PtPixelLocation.ToPoint());
        foreach (Figure f in _figures)
        {
            if (f.HitTest(pt.X, pt.Y))
            {
                f.AddPointer(pointerID);
                f.ProcessPointerFrames(pointerID, pi.FrameID);
                break;
            }
        }
    }
    break;

The HitTest method in the Figure class determines whether the point is contained within the bounds of the given figure. The method is simple yet powerful: it works for any convex figure, not just rectangles. The AddPointer method associates the pointer ID with the interaction context and adds the ID to the ActivePointers hash-set of the given figure.
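
Purely as an illustration, a point-in-convex-polygon test can be written as a sign check of the cross products between each edge and the vector from the edge's start to the point. The sketch below is not copied from the sample:

// Illustration only: returns true if (x, y) lies inside the convex polygon
// whose vertices are listed in order (clockwise or counter-clockwise).
// Requires System and System.Drawing.
public static bool HitTestConvex(PointF[] vertices, float x, float y)
{
    int sign = 0;
    for (int i = 0; i < vertices.Length; i++)
    {
        PointF a = vertices[i];
        PointF b = vertices[(i + 1) % vertices.Length];
        // Cross product of the edge (a -> b) and the vector (a -> point).
        float cross = (b.X - a.X) * (y - a.Y) - (b.Y - a.Y) * (x - a.X);
        int s = Math.Sign(cross);
        if (s == 0)
        {
            continue;   // the point lies exactly on this edge's line
        }
        if (sign == 0)
        {
            sign = s;
        }
        else if (s != sign)
        {
            return false;   // signs differ: the point is outside
        }
    }
    return true;
}

The ProcessPointerFrames method's code is as follows: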

int _lastFrameID = -1;

public void ProcessPointerFrames(int pointerID, int frameID)
{
    if (_lastFrameID != frameID)
    {
        _lastFrameID = frameID;
        int entriesCount = 0;
        int pointerCount = 0;
        if (!Win32.GetPointerFrameInfoHistory(pointerID, ref entriesCount,
                ref pointerCount, IntPtr.Zero))
        {
            Win32.CheckLastError();
        }
        Win32.POINTER_INFO[] piArr = new Win32.POINTER_INFO[entriesCount * pointerCount];
        if (!Win32.GetPointerFrameInfoHistory(pointerID, ref entriesCount, ref pointerCount, piArr))
        {
            Win32.CheckLastError();
        }
        IntPtr hr = Win32.ProcessPointerFramesInteractionContext(_context, 
                    entriesCount, pointerCount, piArr);
        if (Win32.FAILED(hr))
        {
            Debug.WriteLine("ProcessPointerFrames failed: " + Win32.GetMessageForHR(hr));
        }
    }
}

This method obtains the entire frame of information for the specified pointer and passes this frame to the interaction context.

The WM_POINTERUP message occurs when a pointer is removed. We have to remove the pointer from the ActivePointers hash-set and from the interaction context:

case Win32.WM_POINTERUP:
    foreach (Figure f in _figures)
    {
        if (f.ActivePointers.Contains(pointerID))
        {
            f.ProcessPointerFrames(pointerID, pi.FrameID);
            f.RemovePointer(pointerID);
            break;
        }
    }
    break;

The WM_POINTERUPDATE message usually indicates that the pointer location has changed. Handling it is trivial:

case Win32.WM_POINTERUPDATE:
    foreach (Figure f in _figures)
    {
        if (f.ActivePointers.Contains(pointerID))
        {
            f.ProcessPointerFrames(pointerID, pi.FrameID);
            break;
        }
    }
    break;

The WM_POINTERCAPTURECHANGED message may occur when the window loses capture. If that happens, we remove all pointers and stop the interaction context.
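
A sketch of such a handler is shown below. The StopInteraction helper on the figure is hypothetical; internally it would clear the ActivePointers set, remove the pointers from the context (RemovePointerInteractionContext), and abort any interaction in progress (StopInteractionContext), both of which are real API functions:

case Win32.WM_POINTERCAPTURECHANGED:
    foreach (Figure f in _figures)
    {
        // Hypothetical helper: forget all active pointers of this figure
        // and stop its interaction context.
        f.StopInteraction();
    }
    break;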

Processing Interaction Events

Eventually, the interaction context's callback function executes the ProcessEvent method in the classes derived from BaseHandler. Use the IsBegin(), IsEnd(), and IsInertia() methods of the InteractionOutput object to determine the state of interaction processing. Take a look at the definition of the INTERACTION_CONTEXT_OUTPUT structure, which is passed to the ProcessEvent method as output.Data. The type of the current interaction can be read from the Interaction field (manipulation, tap, cross-slide, and so on), and the source of the interaction from the InputType field (mouse, pen, or touch). There are also special sub-structures for the manipulation, tap, and cross-slide interactions. For example, the tap sub-structure is used in the Background class's code:

internal override void ProcessEvent(InteractionOutput output)
{
    if (output.Data.Interaction == Win32.INTERACTION.TAP)
    {
        if (output.Data.Tap.Count == 2)
        {
            foreach (Figure f in _form.Figures)
            {
                f.ResetPoints();
            }
            _form.Invalidate();
        }
    }
}
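
A Figure reacts to MANIPULATION output in a similar way by applying the delta transform. The sketch below is illustrative only: the Manipulation.Delta member names and the Translate/Rotate/Scale helpers are assumptions about the wrapper and the Figure class, although the underlying native structure (INTERACTION_ARGUMENTS_MANIPULATION with a MANIPULATION_TRANSFORM delta) does carry exactly these values:

internal override void ProcessEvent(InteractionOutput output)
{
    if (output.Data.Interaction == Win32.INTERACTION.MANIPULATION)
    {
        // Member names are assumed; the native MANIPULATION_TRANSFORM carries
        // the translation in pixels, a scale factor, and the rotation in radians.
        Win32.MANIPULATION_TRANSFORM delta = output.Data.Manipulation.Delta;

        Translate(delta.TranslationX, delta.TranslationY);   // hypothetical Figure helpers
        Rotate(delta.Rotation);
        Scale(delta.Scale);

        // Finally, the form is invalidated so that the figure is repainted.
    }
}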

In the InteractionContextTest sample, you can double-click the background to reset the positions of all the figures. The press-and-hold gesture (or a mouse right-click) changes the background color. Figures can be moved (except the pinned one), scaled, rotated, and double-clicked. The pinned figure can be rotated with one finger. Hopefully, you'll find this sample useful.

Conclusion

Windows 8 introduces new APIs for unified processing of input from various pointing devices. Pointer Input Messages and the Interaction Context functions are especially helpful when dealing with touch input. You can use the InteractionContextTest sample as the basis for your own development. Adding touch support to Windows 8 desktop applications becomes easy if you don't have to maintain backward compatibility with Windows 7.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
