Guest Post: Evolving the Human Machine Interface

How the World Is Finally Ready For Virtual and Augmented Reality

By Mike Nichols, VP, Content and Applications at SoftKinetic

The year is 1979 and Richard Bolt, a researcher in MIT’s Architecture Machine Group, demonstrates a program that enables the control of a graphical interface by combining speech and gesture recognition. As the video below demonstrates, Richard points at a projected screen image and issues verbal commands like “put that there” to control the placement of images within a graphical interface, in what he calls a “natural user modality”.

“Put-That-There”: Voice and Gesture at the Graphics Interface
Richard A. Bolt, Architecture Machine Group
Massachusetts Institute of Technology – under contract with the Cybernetics Technology Division of the Defense Advanced Research Projects Agency, 1979.

What Bolt demonstrated in 1979 was the first natural user interface. A simple pointing gesture combined with a verbal command, though innate to human communication, was and still is difficult for machines to interpret correctly. It would take another 30 years for a consumer product to appear that might just fulfill that vision.

A new direction

In the years following Richard’s research, technology would advance to offer another way to improve the Human Machine Interface (HMI). By the mid-1980s the mouse, a pointing device for 2D screen navigation, had evolved into an accurate, cost-effective, and convenient method for navigating a graphical interface. Popularized by Apple’s Lisa and Macintosh computers, and backed by Microsoft, the largest software developer, the mouse would become the primary input for computer navigation over the next 20 years.


“The Macintosh uses an experimental pointing device called a ‘mouse’. There is no evidence that people want to use these things.”

John C. Dvorak, San Francisco Examiner


In 2007, technology advancements helped Apple once again popularize an equally controversial device, the iPhone. With its touch-sensitive screen and gesture recognition, the iPhone propelled the touch interface, in all its forms, to become the dominant form of HMI.

The rebirth of natural gesture

Although seemingly dormant throughout the ’80s and ’90s, research continued to refine a variety of methods for depth and gesture recognition. In 2003 Sony released the EyeToy for use with the PlayStation 2. The EyeToy enabled Augmented Reality (AR) experiences and could track simple body motions. Then in 2006 Nintendo launched a new console, the Wii, which used infrared in combination with handheld controllers to detect hand motions for video games. The Wii controllers, with their improved precision over Sony’s EyeToy, proved wildly successful and set the stage for the next evolution in natural gesture.

In 2009 Microsoft announced the Kinect for Xbox 360, which could read human motion to control games and the media user interface (UI) without the aid of physical controllers.

What Richard Bolt had demonstrated some 30+ years prior was finally within grasp. Since the premiere of Kinect we’ve seen more progress in the development of computer vision and recognition technologies than in the previous 35 years combined. Products like the Asus Xtion, Creative Senz3D, and Leap Motion have inspired an energetic global community of developers to create countless experiences across a broad spectrum of use cases.

The future’s so bright

To this day, Richard’s research speaks to the core of what natural gesture technology aims to achieve: that “natural user modality”. While advances in HMI have continued to iterate and improve over time, the medium for our visual interaction has remained largely unchanged: the screen. Navigation of our modern UI has been forced to work within the limits of the 2D screen. With the emergence of AR and VR, our traditional forms of HMI no longer provide the same accessible input that the mouse and touch interfaces did in the past. Our HMI must evolve to let users interact with the scene, not the screen.

CES 2014, Road to VR: SoftKinetic premieres hand and finger tracking for VR.


Next, we’ll explore how sensors, not controllers, will provide the “natural user modality” that will propel AR and VR to become more pervasive than mobile is today. The answer, it seems, may be right in front of us… we just need to reach out and grab it.
