
When it comes to technological innovation, there are two basic approaches. You can start big, flashy, and expensive, and hope that your invention eventually comes down enough in price for the average user to afford it – think of GPS devices, which were the realm of high-budget military agencies long before ordinary civilians could dream of buying one. Or you can set out from the beginning to design something life-changing that everyone can access, rather than just an elite few.

The research team behind WorldKit, a new, experimental technology system, is trying to bridge the gulf between these two extremes. The goal is to transform all of your surroundings into touchscreens, equipping walls, tables, and couches with interactive, intuitive controls.

So how does the magic happen? With a simple projector – a projector paired with a depth sensor, to be precise. “It’s this interesting space of having projected interfaces on the environment, using your whole world as a sort of gigantic tablet,” said Chris Harrison, a soon-to-be professor in human-computer interaction at Carnegie Mellon University. Robert Xiao, a PhD candidate at Carnegie Mellon and lead researcher on the project, explained that WorldKit uses a depth camera to sense where flat surfaces are in your environment. “We allow a user to basically select a surface on which they can ‘paint’ an interactive object, like a button or sensor,” Xiao said.
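Sensing where the flat surfaces are is the first step Xiao describes, and the core idea can be sketched in a few lines: a patch of a depth image whose values barely vary is likely a flat surface facing the sensor. Real systems fit plane equations to 3D point clouds; the variance check below is a deliberately simplified stand-in for that idea, not WorldKit's actual algorithm, and the class and method names are invented for illustration.

```java
// Simplified illustration: flag a patch of depth samples as "flat" when
// its depth variance is below a tolerance. This is NOT WorldKit's real
// plane-detection code; it is a minimal sketch of the underlying idea.
public class FlatRegion {
    // Returns true if the depth samples (in meters) vary less than tol.
    static boolean isFlat(double[][] depth, double tol) {
        double sum = 0, n = 0;
        for (double[] row : depth) for (double d : row) { sum += d; n++; }
        double mean = sum / n;
        double var = 0;
        for (double[] row : depth) for (double d : row) var += (d - mean) * (d - mean);
        return var / n < tol;
    }

    public static void main(String[] args) {
        double[][] wall  = {{2.00, 2.01}, {2.00, 2.01}};  // near-constant depth
        double[][] chair = {{1.20, 1.90}, {0.80, 2.10}};  // depth jumps around
        System.out.println(isFlat(wall, 0.01));   // true
        System.out.println(isFlat(chair, 0.01));  // false
    }
}
```

A wall two meters away reads as an almost uniform field of depth values, while a cluttered chair produces large jumps – which is why a simple statistic like this already separates the two.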

DigitalTrends recently chatted with both Harrison and Xiao about their work on the WorldKit project, and learned just how far their imaginations run when it comes to the future of touch technology and ubiquitous computing.

Understanding WorldKit’s workings
We know: the concept of a touchscreen on any surface is a little far out there, so let’s break it down. WorldKit works by pairing a depth camera, such as the one the Kinect uses, with a projector. Programmers then write short scripts on a MacBook Pro in Java, similar to those they might write for an Arduino, telling the system how to react when someone makes certain gestures in front of the depth camera. The camera interprets the gestures and tells the projector to respond by displaying certain interfaces. For instance, if someone makes a circular gesture, the system can respond by projecting a dial where the gesture was made. Then, when someone “adjusts” the dial by gesturing in front of it, the system can adjust a volume control elsewhere.
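The article doesn’t publish WorldKit’s API, but the kind of short, Arduino-style script it describes – a painted dial whose value drives a volume control through a callback – can be sketched as follows. Every class and method name here is hypothetical, chosen only to illustrate the gesture-to-control flow.

```java
// Hypothetical sketch of the kind of short script described above:
// paint a dial, then map its value to a volume control elsewhere.
// None of these names come from the real WorldKit API.
import java.util.function.IntConsumer;

class Dial {
    private int value;                  // current dial position, 0-100
    private final IntConsumer onChange; // callback fired when the dial moves

    Dial(IntConsumer onChange) { this.onChange = onChange; }

    // Called when the depth camera sees a turning gesture over the dial.
    void gesture(int newValue) {
        value = Math.max(0, Math.min(100, newValue)); // clamp to 0-100
        onChange.accept(value);         // propagate to whatever it controls
    }

    int value() { return value; }
}

public class DialSketch {
    static int volume = 0;              // stands in for a speaker's volume

    public static void main(String[] args) {
        // "Paint" a dial whose value drives the volume elsewhere.
        Dial volumeDial = new Dial(v -> volume = v);
        volumeDial.gesture(40);         // user twists the projected dial
        System.out.println(volume);     // 40
        volumeDial.gesture(150);        // out-of-range gestures are clamped
        System.out.println(volume);     // 100
    }
}
```

The point of the callback design is that the projected widget knows nothing about what it controls – the same dial script could just as easily drive lighting or a thermostat.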


The brilliance – and the potential frustration – of this system lies in its nearly endless possibilities. Currently, whatever you want WorldKit to do, you must program it to do yourself. Xiao and Harrison expressed hope that one day, once WorldKit reaches the consumer realm, there might be an online forum where people can upload and download programming scripts (much like apps) in order to make their WorldKit system perform certain tasks. However, at the moment, WorldKit remains in an R&D phase in the academic realm, allowing its creators to dream big about what they would like to make it do eventually.

In any case, the easiest way to understand how WorldKit works is to watch a demo video of it in action. In the video, researchers touch various surfaces to “paint” them with light from the projector. Afterward, the WorldKit system uses the selected area to display a chosen interface, such as a menu bar or a sliding lighting-control dial, which can then be manipulated through touch gestures.

Currently, WorldKit’s depth sensor is nothing other than a Kinect – the same one that shipped with the Xbox 360 – that connects to a projector that’s mounted to a ceiling or tripod. While this combo is already sensitive enough to track individual fingers and multi-directional gestures down to the centimeter, it does have one major drawback: size. “Certainly the system as it is right now is kind of big, and we all admit that,” Xiao said.

Lights, user, action: Putting WorldKit to use
But the team has high hopes for the technology on the near horizon. “We’re already seeing cell phones on the market that have projectors built in,” Xiao said. “Maybe the back camera, one day, is a depth sensor … You could have WorldKit on your phone.” Harrison added that WorldKit could allow users to take full advantage of their phones for the first time. “A lot of smartphones you have nowadays are easily powerful enough to be a laptop, they just don’t have screens big enough to do it,” Harrison said. “So with WorldKit, you could have one of these phones be your laptop, and it would just project your desktop onto your actual desk.”

If Harrison and Xiao can imagine the mobile version of WorldKit on a smartphone in five years’ time, they have an even crazier vision for 10 or 15 years down the line. “We could actually put the entire WorldKit setup into something about the size of a lightbulb,” Xiao said. For these researchers, a lightbulb packed full of WorldKit potential has truly revolutionary implications. “We’re looking at that as almost as big as the lighting revolution of the early 1800s,” Xiao added.

The possibilities for WorldKit, as you might imagine, are limitless. The team’s already envisioning much more ambitious applications, such as experimental interior design. According to Harrison, you could make your own wallpaper, or change the look of your couch. “With projection, you can do some very clever things that basically alter the world in terms of aesthetics,” Harrison said. “Instead of mood lighting, you could have mood interaction.”

Xiao, meanwhile, fantasized about the system’s gaming potential. “You could augment the floor so that you didn’t want to step on it, and then play a lava game,” he said, describing a game where you have to cross from one end of the floor to the other, using only the tables and chairs. “You can imagine this being a very exciting gaming platform if you want to do something physical, instead of just using a controller.”

Google Glass and WorldKit: Seeing vs. touching
There is one realm in which Harrison seems certain that WorldKit’s unique blend of physical and digital properties is at an advantage, and that’s in contrast to Google Glass. While both approaches attempt to augment reality through embedded computing, Harrison believes that Google Glass’s reliance on virtual gestures falls a bit flat.

“The problem with clicking virtual buttons in the air is that’s not really something that humans do,” Harrison said. “We work from tables, we work on walls … that’s something we do on a daily basis … we don’t really claw at the air all that often.” To really understand what he means, just remember when Bluetooth headsets first came out. Not only did everyone look crazy talking to themselves on street corners, but it was also hard not to feel self-conscious starting a conversation into empty air without a physical phone as a prop.

Xiao agreed, emphasizing that WorldKit is able to promote instinctual, unforced interaction by relying on physical objects. “One of the advantages of WorldKit is that all the interactions are out in the world, so you are interacting with something very real and very tangible,” Xiao said. “People are much more willing, much more able, to interact with it in a fluid and natural way.” In this case, perhaps touching – rather than seeing – means believing.

For more on this story visit DigitalTrends.com.