The WPF/Windows App Development experience – Part 2: WPF Touch
After describing the problems with Microsoft libraries and the choice of (MS) technology in part 1, this post is about using WPF for touch applications.
When you create a WPF application, you will obviously be able to click controls with touch, but touch panning to scroll and other touch gestures do not work by default.
Fortunately, enabling panning is very easy: you can set PanningMode on the ScrollViewer, which in turn also enables manipulation events (IsManipulationEnabled). This seems to work just fine at first sight. Unfortunately, once you put it on a touch device and test it for yourself, you will notice that touch clicks do not go through.
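For illustration, a minimal sketch of enabling it from code (the XAML equivalent is PanningMode="Both" on the ScrollViewer). PanningMode, PanningDeceleration and PanningRatio are actual ScrollViewer properties; the helper method is just scaffolding:

```csharp
using System.Windows.Controls;

static void EnableTouchPanning(ScrollViewer scrollViewer)
{
    // Equivalent to PanningMode="Both" in XAML. Setting any mode other than
    // None also enables manipulation events on the ScrollViewer.
    scrollViewer.PanningMode = PanningMode.Both;

    // Optional tuning knobs:
    scrollViewer.PanningDeceleration = 0.001; // inertia deceleration after the finger lifts
    scrollViewer.PanningRatio = 1.0;          // scroll offset per unit of pan distance
}
```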
My guess is that touch moves are interpreted as panning too early, i.e. after very little movement. A finger inevitably deforms slightly when pressed against a hard surface, so a tiny amount of movement is unavoidable, and the default PanningMode implementation misinterprets it as the start of a pan. Debugging the events and their positional values seems to confirm this.
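For anyone who wants to reproduce this, a rough sketch of the kind of event logging meant here; the helper name is made up, but PreviewTouchDown/PreviewTouchMove are standard UIElement events:

```csharp
using System.Diagnostics;
using System.Windows;
using System.Windows.Input;

static void AttachTouchLogging(UIElement element)
{
    Point? downPosition = null;

    element.PreviewTouchDown += (s, e) =>
    {
        downPosition = e.GetTouchPoint(element).Position;
        Debug.WriteLine($"TouchDown at {downPosition}");
    };

    element.PreviewTouchMove += (s, e) =>
    {
        var position = e.GetTouchPoint(element).Position;
        if (downPosition is Point down)
        {
            // Even a "still" finger reports small moves; watching this drift
            // grow shows how quickly WPF's internal pan threshold is crossed.
            Debug.WriteLine($"TouchMove at {position}, drift {(position - down).Length:F2}");
        }
    };
}
```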
Sadly, PanningMode is neither configurable nor well documented. And since WPF is a proprietary, closed-source project, its implementation cannot be analyzed either. This becomes even more painful once you try to work around the native panning implementation.
We analyzed the event behaviour and tried to work around it, but eventually just disabled PanningMode and instead increased the scrollbar size for now (which had been a wish anyway), as no end was in sight. The problem is the manipulation handling itself, so merely disabling the panning mode and handling the manipulation events yourself (see the sketch below) is not enough. If you take this approach (which also gives you native inertia), you have to notice “clicks” yourself, and therefore adjust the activation logic of every control you may embed within a ScrollViewer. Not a feasible, preferable approach.
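To make that concrete, a hedged sketch of what this rejected approach would look like: panning disabled, manipulation events handled manually, and “clicks” detected from the total translation. The helper name and the tapThreshold value are assumptions for illustration, not code from our project:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

static void UseManualManipulation(ScrollViewer viewer)
{
    viewer.PanningMode = PanningMode.None; // bypass the built-in panning logic
    viewer.IsManipulationEnabled = true;   // but still receive manipulation events

    viewer.ManipulationStarting += (s, e) =>
    {
        e.ManipulationContainer = viewer;  // report deltas relative to the viewer
    };

    viewer.ManipulationDelta += (s, e) =>
    {
        // Translate finger movement into scrolling ourselves.
        viewer.ScrollToVerticalOffset(viewer.VerticalOffset - e.DeltaManipulation.Translation.Y);
        viewer.ScrollToHorizontalOffset(viewer.HorizontalOffset - e.DeltaManipulation.Translation.X);
        e.Handled = true;
    };

    viewer.ManipulationCompleted += (s, e) =>
    {
        // The manipulation swallows the touch input, so “clicks” never reach
        // the embedded controls; they have to be detected and re-dispatched by hand.
        const double tapThreshold = 10.0; // hypothetical tolerance in DIPs
        if (e.TotalManipulation.Translation.Length < tapThreshold)
        {
            // ...hit-test the control under the touch point and activate it manually.
        }
    };
}
```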
An alternative approach is to extend the ScrollViewer, disable the manipulation mode, and implement your own scrolling mechanism from touch events (sketched below). This still leads to problems with nested scroll viewers. You want to capture touch events (so scrolling continues when the finger “moves out” of the control), but only in the inner-most scroll viewer, since competing captures steal the touch from each other and cause further problems. The inner-most scroll viewer therefore has to trigger its own event type once it reaches its edge and can scroll no further, and the next outer scroll viewer handles these events and scrolls instead. For that, you either introduce a flag on the move event specifying whether the outer scroll viewer should actually scroll or merely update its “last move position”, or you generate new, code-generated touch events from the inner scroll viewer that are only raised when the outer scroll viewer should scroll; in the latter case the positional information has to be adjusted to prevent jumps caused by the touch events the outer viewer missed while the inner one was still scrolling.
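A rough, simplified sketch of that approach (vertical scrolling only); the class name TouchScrollViewer and the ScrollPastEdge routed event are invented for illustration, and inertia as well as the flag/synthetic-event handling in the outer viewer are omitted:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

// A ScrollViewer that scrolls itself from raw touch events and notifies the
// next outer viewer (via a bubbling routed event) when it can scroll no further.
public class TouchScrollViewer : ScrollViewer
{
    public static readonly RoutedEvent ScrollPastEdgeEvent =
        EventManager.RegisterRoutedEvent("ScrollPastEdge", RoutingStrategy.Bubble,
            typeof(RoutedEventHandler), typeof(TouchScrollViewer));

    private Point lastTouchPosition;

    public TouchScrollViewer()
    {
        PanningMode = PanningMode.None; // no built-in panning
        IsManipulationEnabled = false;  // and no manipulation processing either
    }

    protected override void OnTouchDown(TouchEventArgs e)
    {
        lastTouchPosition = e.GetTouchPoint(this).Position;
        CaptureTouch(e.TouchDevice); // keep scrolling when the finger leaves the control
        e.Handled = true;
    }

    protected override void OnTouchMove(TouchEventArgs e)
    {
        Point position = e.GetTouchPoint(this).Position;
        double delta = lastTouchPosition.Y - position.Y;
        lastTouchPosition = position;

        bool atEdge = (delta > 0 && VerticalOffset >= ScrollableHeight)
                   || (delta < 0 && VerticalOffset <= 0);
        if (atEdge)
        {
            // At the edge: let the next outer scroll viewer take over.
            RaiseEvent(new RoutedEventArgs(ScrollPastEdgeEvent, this));
        }
        else
        {
            ScrollToVerticalOffset(VerticalOffset + delta);
        }
        e.Handled = true;
    }

    protected override void OnTouchUp(TouchEventArgs e)
    {
        ReleaseTouchCapture(e.TouchDevice);
        e.Handled = true;
    }
}
```

Because the inner viewer holds the touch capture, the outer viewer never sees the raw touch events itself, which is exactly why the edge case has to be communicated through an event of its own.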
Unfortunately, once you have come this far, you still have to re-implement inertia (a rough sketch follows below), and there was another unresolved problem I can’t remember right now (sorry 🙁).
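For completeness, one conceivable way to re-implement inertia, assuming the viewer above also tracks the release velocity; the friction and threshold values are arbitrary:

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Media;

// Minimal inertia sketch: after the finger lifts, keep scrolling with the last
// measured velocity and decay it on every rendered frame until it dies out.
public static class InertiaHelper
{
    public static void StartInertia(ScrollViewer viewer, double initialVelocity /* DIPs per frame */)
    {
        double velocity = initialVelocity;
        const double friction = 0.95;   // hypothetical decay factor per frame
        const double minVelocity = 0.5; // stop below this speed

        EventHandler onFrame = null;
        onFrame = (s, e) =>
        {
            viewer.ScrollToVerticalOffset(viewer.VerticalOffset + velocity);
            velocity *= friction;

            bool atEdge = viewer.VerticalOffset <= 0
                       || viewer.VerticalOffset >= viewer.ScrollableHeight;
            if (Math.Abs(velocity) < minVelocity || atEdge)
                CompositionTarget.Rendering -= onFrame;
        };
        CompositionTarget.Rendering += onFrame;
    }
}
```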