The Commercial Birth of Natural Computing

Punch card. Keyboard. Mouse. Touchscreen. Voice. Gesture.

This abbreviated history of human-computer interaction follows a clear trajectory of improvement, where each mode of communication with technology is demonstrably easier to use than the last. We are now entering an era of natural computing, where our interaction with technology becomes more like a conversation, effortless and ordinary, and less like a chore, clunky and difficult. Those of us working in the field are focused on teaching computers to understand and adapt to the most natural human actions, instead of forcing people to learn to understand and adapt to technology.

Three years ago, the industry’s only point of reference to explain this technology was science fiction, like the movie “Minority Report.” Then in November 2010, Microsoft’s Kinect for Xbox 360 sensor was released, and broad adoption of voice and gesture technology found its way into millions of living rooms. A year later, Microsoft launched Kinect for Windows, which gave researchers and businesses the ability to take Kinect’s natural computing technology to market in a variety of industries.

Since then, major investments in the field have been made by established companies like Intel and Samsung, maturing natural user interface (NUI) players like PrimeSense and SoftKinetic, and new entrants like Leap Motion and Tobii. Natural computing is moving from the realm of researchers to the minds of marketers, and a true commercial category is starting to emerge.

But even a year ago, there was no definition, no language and no data for the commercial category. Clearly a richer, more informed language was needed. To this end, my colleagues and I have developed a category framework: Kinect and other voice and gesture technologies are part of the Natural Computing category, defined as input devices that enable users to trigger computing events in the easiest, most efficient way possible. While the term Natural Computing carries a variety of meanings in academia, we found it a helpful way to describe the business side of human-computer interaction technologies.

In some respects, there is evidence of natural computing all around us, and there has been for many years. Think of automatic doorways, which open up for you with no effort required on your part beyond walking toward them. Think of automatic faucets, soap dispensers and hand driers — all you have to do is offer them your hand.

These systems are the most rudimentary forms of natural computing. They each recognize a single set of data (your hand placement), automatically interpret your intent (to wash or dry your hands) and immediately respond to it (by dispensing water or soap or air). Now imagine if more complicated forms of technology could understand your intent in all its complexity, and respond to it simply, immediately and perfectly. No learning required. This is how those of us working in this field see the future.
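
For the technically minded, that recognize-interpret-respond pattern can be captured in a few lines of code. The sketch below is illustrative only; the sensor reading, intent name and distance threshold are hypothetical stand-ins for the faucet example, not any particular product’s API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    hand_detected: bool
    distance_cm: float

def interpret(reading: Reading) -> Optional[str]:
    # Recognize a single set of data and map it to a user intent.
    if reading.hand_detected and reading.distance_cm < 15:
        return "dispense"
    return None

def respond(intent: Optional[str]) -> None:
    # Immediately trigger the computing event for the recognized intent.
    if intent == "dispense":
        print("dispensing water / soap / air")

def run(sensor_readings):
    # The whole loop: recognize, interpret, respond. No learning required of the user.
    for reading in sensor_readings:
        respond(interpret(reading))

run([Reading(True, 10.0), Reading(False, 80.0)])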

There is currently a limited set of ways that users can interact with computing devices, although there will certainly be more in the future. Today, these range from manipulating a mouse and keyboard to touching, speaking and gesturing. The illustration below breaks down these methods according to how close the user is to the screen (“far” vs. “near”) and how hard or easy the technology is to learn (“learned” vs. “natural”).

First, each input method is designed for a different distance. For example, you need to be right next to a screen to touch it, yet you can be several feet or more away when using gesture technologies. Second, consider how much time it takes someone to learn the technology. Older technologies tend to take longer to learn (think typing lessons or early command-line interfaces), while newer ones tend to take less time (think touchscreens). The combination of these two ideas — proximity and ease of use — makes up the Natural Computing Category Map, which lets us better envision where certain natural computing technologies play a role now and where they could grow in the future.

Figure 1. Natural Computing Category Map (Illustrative)
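
The same map can also be expressed as data. The quadrant placements in the sketch below are an illustrative reading of the figure and the discussion above, not measurements.

from dataclasses import dataclass

@dataclass
class InputMethod:
    name: str
    proximity: str   # "near" (at the screen) or "far" (several feet away)
    learning: str    # "learned" (takes training) or "natural" (little or none)

CATEGORY_MAP = [
    InputMethod("keyboard and mouse", "near", "learned"),
    InputMethod("touchscreen", "near", "natural"),
    InputMethod("voice", "far", "natural"),
    InputMethod("gesture", "far", "natural"),
]

for method in CATEGORY_MAP:
    # Each method lands in one quadrant of the map: proximity x ease of learning.
    print(f"{method.name}: {method.proximity} / {method.learning}")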

Within this new, rising category, the technology receives new information with every gesture, movement or sound, and can adapt to what it learns. After one year in market, my colleagues and I continue to see Kinect for Windows as a fundamentally human technology — one that sees and recognizes each user as a whole person, with thousands of human-centered applications beyond gaming in industries like healthcare, retail, training and automotive. Competitive activity has also accelerated, with new sensor and SDK releases, updates to more established open source offerings, and significant partnership and investment activity by major players and new entrants alike.

Other gesture-based technology companies have formed partnerships with major computer hardware manufacturers or are exploring ways to integrate the technology into smartphones. The category is growing and evolving rapidly. All of this activity accrues to businesses and consumers, who benefit from quickly evolving natural computing experiences.

The future of the natural computing category is to reach end users directly, fundamentally changing everyday interactions with technology. Imagine walking by a storefront window and having an avatar mirror your every move, talking to your next-gen TV with the same tone and sentence structure you would use with a friend, or improving your tennis swing with an immersive simulation tool. If you are wondering what natural computing holds in store for you, the future is here already, albeit unevenly distributed. And natural computing is quickly beginning to demonstrate what a computer can do if you give it eyes, ears and the capacity to use them.

Leslie Feinzaig is the Senior Product Manager for Kinect for Windows. Leslie plays an important role in Microsoft’s Kinect for Windows business and has developed deep insight into the industry and competitive landscape around natural computing.
