
Unifying touch and mouse: how Pointer Events will make cross-browsers touch support easy

19 Oct 2015 · CPOL · 19 min read
In this article I’d like to show you some browser experiments using Pointer Events – an emerging multi-touch technology – and the polyfills that make cross-browser support, well, less complex.

I often get questions from developers like, "with so many touch-enabled phones and tablets, where do I start?" and "what is the easiest way to build for touch input?" Short answer: "It’s complex." Surely there must be a more unified way to handle multi-touch input on the web – in modern, touch-enabled browsers or as a fallback for older ones. In this article I’d like to show you some browser experiments using Pointer Events – an emerging multi-touch technology – and the polyfills that make cross-browser support, well, less complex. The kind of code you can also experiment with and easily use on your own site.

First of all, many touch technologies are evolving on the web – for full browser support you need to look at the iOS touch event model and the W3C mouse event model in addition to MSPointers. Yet there is growing support (and willingness) to standardize. In September 2012, Microsoft submitted MSPointers to the W3C for standardization, and in February 2015 we reached W3C Recommendation: http://www.w3.org/TR/pointerevents. The MS Open Tech team also released an initial Pointer Events prototype for Webkit and, recently, Mozilla announced support for Pointer Events in Firefox Nightly!

The reason why I experiment with Pointer Events is not based on device share – it’s because Microsoft’s approach to basic input handling is quite different than what’s currently available on the web and it deserves a look for what it could become. The difference is that developers can write to a more abstract form of input, called a "Pointer." A Pointer can be any point of contact on the screen made by a mouse cursor, pen, finger, or multiple fingers. So you don’t waste time coding for every type of input separately.

If you’re running IE10, you’ll need to prefix the API or use our polyfill library Hand.js. You’ll find the original article with prefixes here: Unifying touch and mouse: how Pointer Events will make cross-browsers touch support easy

The concepts

We will begin by reviewing apps running inside Internet Explorer 11, Microsoft Edge or Firefox Nightly, which expose the Pointer Events API, and then look at solutions to support all browsers. After that, we will see how you can take advantage of the IE/MS Edge gesture services that help you handle touch in your JavaScript code in an easy way. As Windows 8.1/10 and Windows Phone 8.1/Mobile 10 share the same browser engine, the code & concepts are identical for both platforms. Moreover, everything you’ll learn about touch in this article will help you do the very same tasks in Windows Store apps built with HTML5/JS, as this is again the same engine being used.

The idea behind the Pointer is to let you address mouse, pen & touch devices via a single code base, using a pattern that matches the classic mouse events you already know. Indeed, mouse, pen & touch have some properties in common: you can move a pointer with them and you can click on an element with them, for instance. Let’s then address these scenarios via the very same piece of code! Pointers aggregate those common properties and expose them in a similar way to the mouse events.

The most obvious common events are then pointerdown, pointermove & pointerup, which map directly to their mouse event equivalents. You get the X & Y screen coordinates as an output.

You also have more specific events like pointerover, pointerout or pointercancel.
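To make these concrete, here is a minimal sketch of my own (the element id and CSS class name are hypothetical, not taken from the article's samples) that reacts to a pointer entering, leaving or being cancelled on an element, whatever the input device:

JavaScript
var box = document.getElementById("someElement"); // hypothetical element

box.addEventListener("pointerover", function (event) {
    // A mouse cursor, pen or finger has entered the element
    box.classList.add("active");
}, false);

box.addEventListener("pointerout", function (event) {
    // The pointer has left the element
    box.classList.remove("active");
}, false);

box.addEventListener("pointercancel", function (event) {
    // The browser has taken the pointer back (for instance to pan or zoom the page):
    // abandon any interaction in progress
    box.classList.remove("active");
}, false);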

Image 1

But of course, there can also be cases where you want to handle touch differently from the default mouse behavior to provide a different UX. Moreover, thanks to multi-touch screens, you can easily let the user rotate, zoom or pan some elements. Pens/styluses can even provide pressure information a mouse can’t. Pointer Events still aggregate those differences and let you build custom code for each device’s specifics.
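For instance, every Pointer Event exposes a pressure property between 0 and 1; a pen reports real pressure values while a mouse typically reports 0.5 while a button is pressed and 0 otherwise. Here is a small sketch of mine (not one of the article's samples) reusing the canvas and context variables from the samples further down:

JavaScript
// Sketch only: vary the square size with the reported pressure
canvas.addEventListener("pointermove", function (event) {
    var size = 2 + event.pressure * 20; // harder press => bigger square
    context.fillRect(event.clientX, event.clientY, size, size);
}, false);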

Note: the following embedded samples are of course best tested with a touch screen, on a Windows 8.1/10 device or a Windows Phone 8+. Still, you have some options:

  1. Get a first level of experience by using the Windows 8.1/10 Simulator that ships with the free Visual Studio 2013/2015 Express development tools. For more details on how this works, please read this article: Using the Windows 8 Simulator & VS 2012 to debug the IE10 touch events & your responsive design.
  2. Have a look at this video, also available in other formats at the end of the article. The video demonstrates all the samples below on a Windows 8 tablet supporting touch, pen & mouse.
  3. Use a virtual cross-browser testing service like BrowserStack to test interactively if you don’t have access to a Windows 8 device, or download one of our VMs.

Handling simple touch events

Step 1: do nothing in JS but add a line of CSS

Let’s start with the basics. You can easily take any of your existing JavaScript code that handles mouse events and it will just work as-is with a pen or touch device in Internet Explorer 10/11 & MS Edge. IE & MS Edge indeed fire mouse events as a last resort if you’re not handling Pointer Events in your code. That’s why you can "click" on a button or on any element of any webpage using your finger, even if the developer never imagined that one day someone would do it that way. So any code registering for mousedown and/or mouseup events will work with no modification at all. But what about mousemove?

Let’s review the default behavior to answer that question. For instance, let’s take this piece of code:

HTML
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 1</title>
</head>
<body>
    <canvas id="drawSurface" width="400px" height="400px" style="border: 1px dashed black;">
    </canvas>
    <script>
        var canvas = document.getElementById("drawSurface");
        var context = canvas.getContext("2d");
        context.fillStyle = "rgba(0, 0, 255, 0.5)";

        canvas.addEventListener("mousemove", paint, false);

        function paint(event) {
            context.fillRect(event.clientX, event.clientY, 10, 10);
        }
    </script>
</body>
</html>

It simply draws some blue 10px by 10px squares inside an HTML5 canvas element by tracking the movements of the mouse. To test it, move your mouse inside the box. If you have a touch screen, try to interact with the canvas to check by yourself the current behavior:

Image 2

 

 

Sample 0: default behavior if you do nothing

Result: only mousedown/up/click works with touch

Interactive example available here

You’ll see that when you move the mouse inside the canvas element, it draws a series of blue squares. Using touch instead, it only draws a single square at the exact position where you tap the canvas element. As soon as you try to move your finger in the canvas element, the browser tries to pan inside the page, as that’s the default behavior defined.

You then need to specify that you’d like to override the default behavior of the browser and tell it to redirect the touch events to your JavaScript code rather than trying to interpret them itself. For that, target the elements of your page that should no longer react to the default behavior and apply this CSS rule to them:

CSS
touch-action: auto | none | pan-x | pan-y;

You have various values available based on what you’d like to filter or not. You’ll find the values described in the W3C specification for IE11, MS Edge and Firefox Nightly (the touch-action CSS property) and, for IE10, in Guidelines for Building Touch-friendly Sites.

The classic use case is when you have a map control in your page. You want to let the user pan & zoom inside the map area but keep the default behavior for the rest of the page. In this case, you apply this CSS rule only on the HTML container exposing the map.
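As a small sketch (the #map id is purely hypothetical), the same rule can also be applied from script, which is handy when the container is created dynamically:

JavaScript
// Hypothetical map container: hand all touch input inside it over to our own
// pan/zoom code while the rest of the page keeps the default behavior
var map = document.getElementById("map");
if (map) {
    map.style.touchAction = "none";   // IE11, MS Edge, Firefox Nightly
    map.style.msTouchAction = "none"; // prefixed fallback for IE10
}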

In our case, add this block of CSS:

CSS
<style>
    #drawSurface
    {
        touch-action: none; /* Disable touch behaviors, like pan and zoom */
    }
</style>

Which now generates this result:

Image 3

 

 

Sample 1: just after adding touch-action: none

Result: default browser panning disabled and mousemove works but with 1 finger only

 

Interactive example available here

If testing for IE: use prefixed version here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers

Now, when you move your finger inside the canvas element, it behaves like a mouse pointer. That’s cool! But you will quickly ask yourself this question: why does this code only track 1 finger? Well, this is because we’re falling back to the last thing IE & MS Edge do to provide a very basic touch experience: mapping one of your fingers to simulate a mouse. And as far as I know, we only use 1 mouse at a time. So 1 mouse === 1 finger max using this approach. Then, how do we handle multi-touch events?

Step 2: use Pointer Events instead of mouse events

Take any of your existing code and replace your registration to "mousedown/up/move" by "pointerdown/up/move" (or "MSPointerDown/Up/Move" in IE10) and your code will directly support a multi-touch experience inside Pointer Events enabled browsers!

For instance, in the previous sample, change this line of code:

JavaScript
canvas.addEventListener("mousemove", paint, false);

to this one:

JavaScript
canvas.addEventListener("pointermove", paint, false);

And you will get this result:

Image 4

 

 

Sample 2: using pointermove instead of mousemove

Result: multi-touch works

 

Interactive example available here

If testing for IE: use prefixed version here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

You can now draw as many series of squares as your screen supports touch points! Even better, the same code works for touch, mouse & pen. This means, for instance, that you can use your mouse to draw some lines at the same time you are using your fingers to draw other lines.

If you’d like to change the behavior of your code based on the type of input, you can test that through the pointerType property value. For instance, let’s imagine that we want to draw some 10px by 10px red squares for fingers, 5px by 5px green squares for pen and 2px by 2px blue squares for mouse. You need to replace the previous handler (the paint function) with this one:

JavaScript
function paint(event) {
    var squaresize = 2;
    if (event.pointerType) {
        switch (event.pointerType) {
            case "touch":
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case "pen":
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case "mouse":
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }

        context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
    }
}

And you can test the result here:

Image 5

 

Sample 2b: testing pointerType to distinguish touch, pen and mouse

Result: you can change the behavior for mouse/pen/touch, but since Sample 2 the code now only works in browsers supporting Pointer Events

 

Interactive example available here

If testing for IE: test the prefixed version here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

If you’re lucky enough to have a device supporting the 3 types of inputs (like the Sony Duo 11, the Microsoft Surface Pro or the Samsung Tablet some of you had during BUILD2011), you will be able to see 3 kinds of drawing based on the input type. Great, isn’t it?

Still, there is a problem with this code. It now handles all types of input properly in browsers supporting Pointer Events, but it doesn’t work at all in other browsers like IE9, Chrome, Opera & Safari.

Step 3: do feature detection to provide a fallback code

As you’re probably already aware, the best approach to handling multi-browser support is feature detection. In our case, you need to test this:

window.PointerEvent

Beware that this only tells you whether the current browser supports Pointer Events. It doesn’t tell you whether touch is supported or not. To test for touch support, you need to check navigator.maxTouchPoints.
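Here is a small sketch of both checks (msMaxTouchPoints is the IE10-prefixed form; treat this as an illustration rather than part of the original samples):

JavaScript
// Does the browser implement the Pointer Events API?
var supportsPointerEvents = !!window.PointerEvent;

// Independently: does the device actually have a touch screen?
// maxTouchPoints reports how many simultaneous contacts are supported (0 = none)
var touchPoints = navigator.maxTouchPoints || navigator.msMaxTouchPoints || 0;
var hasTouchScreen = touchPoints > 0;

console.log("Pointer Events:", supportsPointerEvents, "- touch points:", touchPoints);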

In conclusion, to have code supporting Pointer Events and falling back properly to mouse events in other browsers, you need code like this:

var canvas = document.getElementById("drawSurface");
var context = canvas.getContext("2d");
context.fillStyle = "rgba(0, 0, 255, 0.5)";
if (window.PointerEvent) {
    // Pointer events are supported.
    canvas.addEventListener("pointermove", paint, false);
}
else {
    canvas.addEventListener("mousemove", paint, false);
}

function paint(event) {
    var squaresize = 2;
    if (event.pointerType) {
        switch (event.pointerType) {
            case "touch":
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case "pen":
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case "mouse":
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }
    }

    context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
}

And again you can test the result here:

Image 6

 

 

Sample 3: feature detecting PointerEvent to provide a fallback

Result: complete experience in IE11/MS Edge & Firefox Nightly and default mouse events in other browsers

 

Interactive example available here

If testing for IE: prefixed version here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

Step 4: support all touch implementations

If you’d like to go even further and support all browsers & all touch implementations, you have two choices:

  1. Write the code to address both event models in parallel, as described for instance in this article: Handling Multi-touch and Mouse Input in All Browsers
  2. Just add a reference to HandJS, the awesome JavaScript polyfill library written by my friend David Catuhe, as described in his article: HandJS a polyfill for supporting pointer events on every browser

David wrote a cool little library that lets you write code leveraging the Pointer Events specification. As I mentioned in the introduction of this article, Microsoft submitted the MSPointer Events specification to the W3C for standardization. The W3C created a new Working Group, which has already published a W3C Recommendation based on Microsoft’s proposal. The MS Open Tech team also released an initial Pointer Events prototype for Webkit that you might be interested in.

While Pointer Events is not yet implemented by all browsers, you can already write code that supports it by leveraging David’s polyfill, and be ready for when Pointer Events is implemented in all modern browsers. With David’s library, the events are propagated to MSPointer on IE10, to Touch Events on Webkit-based browsers and finally to mouse events as a last resort. In Internet Explorer 11, MS Edge & Firefox Nightly, it simply disables itself as they implement the latest version of the W3C spec. It’s damn cool! Check out his article to discover and understand how it works. Note that this polyfill is also very useful for supporting older browsers with elegant fallbacks to mouse events.

To get an idea of how to use this library, have a look at this article: Creating an universal virtual touch joystick working for all Touch models thanks to Hand.JS, which shows you how to write a virtual touch joystick using pointer events. Thanks to HandJS, it works on IE10 on Windows 8/RT, Windows Phone 8, iPad/iPhone & Android devices with the very same code base!

We are also using Hand.js in our open-source WebGL 3D engine: http://www.babylonjs.com/. Launch a scene in your WebGL-compatible browser and switch the camera to the "Touch camera" for mono-touch control or the "Virtual joysticks camera" to use your two thumbs in the control panel:

Image 7

And you’ll be able to move into the 3d world thanks to the power of your finger(s)!

Recognizing simple gestures

Now that we’ve seen how to handle multi-touch, let’s see how to recognize simple gestures like tapping or holding an element and then some more advanced gestures like translating or scaling an element.

Thankfully, IE10/11 & MS Edge provide an MSGesture object that’s going to help us. Note that this object is currently specific to IE & MS Edge and not part of the W3C specification. Combined with the MSCSSMatrix element in IE11 (MS Edge rather uses WebKitCSSMatrix), you’ll see that you can build very interesting multi-touch experiences in a very simple way. A CSSMatrix indeed represents a 4×4 homogeneous matrix that enables Document Object Model (DOM) scripting access to CSS 2-D and 3-D Transforms functionality. But before playing with that, let’s start with the basics.

The base concept is to first register for pointerdown. Then, inside the handler taking care of pointerdown, you choose which pointers you’d like to send to the MSGesture object to let it detect a specific gesture. It will then trigger one of these events: MSGestureTap, MSGestureHold, MSGestureStart, MSGestureChange, MSGestureEnd, MSInertiaStart. The MSGesture object takes all the pointers submitted as input, applies a gesture recognizer on top of them and provides some formatted data as output. The only thing you need to do is choose/filter which pointers should be part of the gesture (based on their ID, their coordinates on the screen, whatever…). The MSGesture object does all the magic for you after that.
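As a compact sketch of that pattern (the element id is hypothetical, and remember MSGesture is specific to IE & MS Edge), here is the minimal wiring needed to detect a simple tap:

JavaScript
var element = document.getElementById("someElement"); // hypothetical element
var gesture = new MSGesture();
gesture.target = element;

// 1) Feed the pointers you're interested in to the gesture recognizer
element.addEventListener("pointerdown", function (event) {
    gesture.addPointer(event.pointerId);
}, false);

// 2) React to the higher-level event it raises for you
element.addEventListener("MSGestureTap", function (event) {
    console.log("Tap detected at", event.clientX, event.clientY);
}, false);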

Sample 1: handling the hold gesture

We’re going to see how to hold an element (a simple DIV containing an image as a background). Once the element is held, we add some corners to indicate to the user that he has currently selected this element. The corners are generated by dynamically creating 4 DIVs added on top of each corner of the image. Finally, some CSS tricks use transforms and linear gradients in a smart way to obtain something like this:

Image 8

The sequence will be the following one:

  1. register for the pointerdown & MSGestureHold events on the HTML element you’re interested in
  2. create an MSGesture object that targets this very same HTML element
  3. inside the pointerdown handler, add to the MSGesture object the various pointerIds you’d like to monitor (all of them or a subset based on what you’d like to achieve)
  4. inside the MSGestureHold event handler, check in the details whether the user has just started the hold gesture (MSGESTURE_FLAG_BEGIN flag). If so, add the corners. If not, remove them.

This leads to the following code:

HTML
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 5: simple gesture handler</title>
    <link rel="stylesheet" type="text/css" href="toucharticle.css" />
    <script src="Corners.js"></script>
</head>
<body>
    <div id="myGreatPicture" class="container"></div>
    <script>
        var myGreatPic = document.getElementById("myGreatPicture");
        // Creating a new MSGesture that will monitor the myGreatPic DOM Element
        var myGreatPicAssociatedGesture = new MSGesture();
        myGreatPicAssociatedGesture.target = myGreatPic;

        // You need to first register to pointerdown to be able to
        // have access to more complex Gesture events
        myGreatPic.addEventListener("pointerdown", pointerdown, false);
        myGreatPic.addEventListener("MSGestureHold", holded, false);

        // Once pointer down raised, we're sending all pointers to the MSGesture object
        function pointerdown(event) {
            myGreatPicAssociatedGesture.addPointer(event.pointerId);
        }

        // This event will be triggered by the MSGesture object
        // based on the pointers provided during the pointerdown event
        function holded(event) {
            // The gesture begins, we're adding the corners
            if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
                Corners.append(myGreatPic);
            }
            else {
                // The user has released his finger, the gesture ends
                // We're removing the corners
                Corners.remove(myGreatPic);
            }
        }

        // To avoid having the equivalent of the contextual
        // "right click" menu being displayed on the pointerup event,
        // we're preventing the default behavior
        myGreatPic.addEventListener("contextmenu", function (e) {
            e.preventDefault();    // Disables system menu
        }, false);
    </script>
</body>
</html>

And here is the result:

Image 9

If testing for IE: prefixed version of Pointers here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

Try to just tap or mouse click the element: nothing occurs. Touch & hold only 1 finger on the image, or do a long mouse click on it: the corners appear. Release your finger: the corners disappear.

Touch & hold 2 or more fingers on the image: nothing happens, as the Hold gesture is only triggered if 1 unique finger is holding the element.

Note: the white border, the corners & the background image are set via CSS defined in toucharticle.css. Corners.js simply creates 4 DIVs (with the append function) and places them on top of the main element in each corner with the appropriate CSS classes.

Still, there is something I’m not happy with in the current result. Once you’re holding the picture, as soon as you slightly move your finger, the MSGESTURE_FLAG_CANCEL flag is raised and caught by the handler, which removes the corners. I would rather remove the corners only once the user releases his finger anywhere over the picture, or as soon as he moves his finger out of the box delimited by the picture. To do that, we’re going to remove the corners only on pointerup or pointerout. This gives this code instead:

JavaScript
var myGreatPic = document.getElementById("myGreatPicture");
// Creating a new MSGesture that will monitor the myGreatPic DOM Element
var myGreatPicAssociatedGesture = new MSGesture();
myGreatPicAssociatedGesture.target = myGreatPic;

// You need to first register to pointerdown to be able to
// have access to more complex Gesture events
myGreatPic.addEventListener("pointerdown", pointerdown, false);
myGreatPic.addEventListener("MSGestureHold", holded, false);
myGreatPic.addEventListener("pointerup", removecorners, false);
myGreatPic.addEventListener("pointerout", removecorners, false);

// Once touched, we're sending all pointers to the MSGesture object
function pointerdown(event) {
    myGreatPicAssociatedGesture.addPointer(event.pointerId);
}

// This event will be triggered by the MSGesture object
// based on the pointers provided during the pointerdown event
function holded(event) {
    // The gesture begins, we're adding the corners
    if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
        Corners.append(myGreatPic);
    }
}

// We're removing the corners on pointer Up or Out
function removecorners(event) {
    Corners.remove(myGreatPic);
}

// To avoid having the equivalent of the contextual  
// "right click" menu being displayed on the pointerup event, 
// we're preventing the default behavior
myGreatPic.addEventListener("contextmenu", function (e) {
    e.preventDefault();    // Disables system menu
}, false);

which now provides the behavior I was looking for:

Image 10

If testing for IE: test with the unprefixed version of Pointers here. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

Sample 2: handling scale, translation & rotation

Finally, if you want to scale, translate or rotate an element, you simply need to write a very few lines of code. You first need to register for the MSGestureChange event. This event sends you, via the various attributes described in the MSGestureEvent object documentation, the rotation, scale, translationX and translationY currently applied to your HTML element.

Even better, by default, the MSGesture object is providing an inertia algorithm for free. This means that you can take the HTML element and throw it on the screen using your fingers and the animation will be handled for you.

Lastly, to reflect these changes sent by the MSGesture object, you need to move the element accordingly. The easiest way to do that is to apply a CSS transform mapping the rotation, scale and translation details matching your finger gestures. For that, use the MSCSSMatrix element.

In conclusion, if you’d like to add all these cool gestures to the previous samples, register for the event like this:

JavaScript
myGreatPic.addEventListener("MSGestureChange", manipulateElement, false);

And use the following handler:

JavaScript
function manipulateElement(e) {
    // Uncomment the following code if you want to disable the built-in inertia provided by dynamic gesture recognition
    // if (e.detail == e.MSGESTURE_FLAG_INERTIA)
    // return;

    var m;
    if (window.WebKitCSSMatrix) {
        // Get the latest CSS transform on the element in MS Edge 
        m = new WebKitCSSMatrix(window.getComputedStyle(myGreatPic, null).transform); 
    }
    else if (window.MSCSSMatrix) {
        // Get the latest CSS transform on the element in IE11 
        m = new MSCSSMatrix(window.getComputedStyle(myGreatPic, null).transform);
    }
    if (m) {
        e.target.style.transform = m
        .translate(e.offsetX, e.offsetY) // Move the transform origin under the center of the gesture
        .rotate(e.rotation * 180 / Math.PI) // Apply Rotation
        .scale(e.scale) // Apply Scale
        .translate(e.translationX, e.translationY) // Apply Translation
        .translate(-e.offsetX, -e.offsetY); // Move the transform origin back
    }
}

which gives you this final sample:

Image 11

If testing for IE: test with the prefixed version of Pointers here: http://david.blob.core.windows.net/html5/touchsample7.html/. Microsoft Edge supports the latest and you can always look-up web platform status along with other modern browsers.

Try to move and throw the image inside the black area with 1 or more fingers. Also try to scale or rotate the element with 2 or more fingers. The result is awesome and the code is very simple, as all the complexity is handled natively by IE / MS Edge.

Video & direct link to all samples

If you don’t have a touch screen available for IE / MS Edge and you’re wondering how these samples work, have a look at this video where I demonstrate all the samples shared in this article on the Samsung BUILD 2011 tablet:

Image 12

And you can also have a look at all of them here:

Associated resources

Logically, with all the details shared in this article and the associated links to other resources, you’re now ready to implement the Pointer Events model in your websites & Windows Store applications. You then have the opportunity to easily enhance the experience of your users in Internet Explorer 10/11, Microsoft Edge and, soon, every Firefox user!

More hands-on with JavaScript

This article is part of the web development series from Microsoft tech evangelists on practical JavaScript learning, open source projects, and interoperability best practices including Microsoft Edge browser and the new EdgeHTML rendering engine.

We encourage you to test across browsers and devices including Microsoft Edge – the default browser for Windows 10 – with free tools on dev.modern.IE:

In-depth tech learning on Microsoft Edge and the Web Platform from our engineers and evangelists:

More free cross-platform tools & resources for the Web Platform:

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)