
Kinect and GUI Interaction: The Magic is in the Scales

With the Kinect sensor having been available on the open market (i.e. without a console) for some time now, several attempts have been made to use it for manipulating graphical user interfaces. The most generic approach is to simply control the operating system, thereby enabling gesture control of any program you have installed. Evoluce, for example, follows this idea. Similarly, Microsoft has demoed Kinect-based gesture control for its “Worldwide Telescope” software.

What’s striking in both of these approaches (and several others you can find on the web) is that the scaling between hand movement and the corresponding action on the screen is very inconsistent. In many cases, your hands have to travel a distance far greater than that of the resulting action on the screen. Sometimes the ratio is 5:1 – a 5 cm movement in the air results in a 1 cm change in the GUI. Compare this to the good old mouse and you will find that, for efficient human-computer interaction, the ratio is often around 1:3, and many people additionally use a dynamic multiplier that amplifies mouse movement as it gets faster. This mapping of small gestures to large effects is the recipe for the user’s satisfying sense of having power over the machine.
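To make the point concrete, here is a minimal sketch in Python of how a mouse-like 1:3 base gain plus a speed-dependent multiplier could be applied to tracked hand positions. All names and constants here are illustrative assumptions, not part of any real Kinect API:

    # A sketch of gesture-to-cursor scaling. Hand positions are assumed to
    # arrive as (x, y) coordinates in metres from some skeleton tracker;
    # the thresholds and gains are made up for illustration.

    BASE_GAIN = 3.0          # 1 cm of hand travel -> 3 cm of cursor travel (the 1:3 ratio)
    ACCEL_THRESHOLD = 0.25   # hand speed (m/s) above which extra gain kicks in
    ACCEL_FACTOR = 2.0       # additional multiplier for fast movements

    def cursor_delta(prev_hand, curr_hand, dt):
        """Map one hand-movement sample to a cursor movement.

        prev_hand, curr_hand: (x, y) hand positions in metres
        dt: time between the two samples in seconds
        Returns the on-screen cursor delta in metres.
        """
        dx = curr_hand[0] - prev_hand[0]
        dy = curr_hand[1] - prev_hand[1]
        speed = (dx ** 2 + dy ** 2) ** 0.5 / dt

        # Dynamic multiplier: like mouse acceleration, fast gestures get a
        # larger gain, so small movements can still have large effects.
        gain = BASE_GAIN
        if speed > ACCEL_THRESHOLD:
            gain *= ACCEL_FACTOR

        return (dx * gain, dy * gain)

    # Example: a 5 cm sweep in 0.1 s (0.5 m/s) crosses the threshold and
    # moves the cursor 30 cm; the same sweep done over 0.5 s moves it 15 cm.
    print(cursor_delta((0.0, 0.0), (0.05, 0.0), 0.1))   # (0.30..., 0.0)
    print(cursor_delta((0.0, 0.0), (0.05, 0.0), 0.5))   # (0.15..., 0.0)

A real implementation would also smooth the jittery skeleton data and clamp the gain, but even this crude version shows that the relationship between gesture and effect is a design parameter, and one worth choosing deliberately.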

A lot of inspiration for Kinect-based interfaces seems to come from the fictional UI shown in the movie Minority Report. Check out the relevant scenes on YouTube and you will find that there the scaling is actually quite consistent – and that it, too, has small gestures producing larger effects.

Gesture control of software is certainly still at a very early stage, but it would seem to help its development – and its acceptance – to pay close attention to the scaling of gestures in relation to their effects.

Video: Kinect-based control of Windows 7
Video: Kinect and the Worldwide Telescope
