As far as 99.9 percent of the world population is concerned, Microsoft is a stodgy, old-guard technology company. Its bottom line is fully leveraged against PC operating systems and business software—hardly the building blocks of a future-thinking portfolio, right?
But scratch that cold, conservative, pedestrian surface, and you’ll find a Microsoft that’s a veritable hotbed of cutting-edge innovation. Indeed, the company doesn’t just loosen its purse strings when it comes to research and development. No, it practically throws money at really big thinkers to build a more wondrous, fantastical future. In 2011 alone, Microsoft’s R&D budget reached a record high of $9.6 billion (yes, with a “B”). That’s a lot of Benjamins, and they’re being spent on some decidedly awesome projects.
Let’s take a look at some of the more interesting examples.
Blending projection and touch
Several Microsoft Research projects have revolved around transforming everyday objects into fully interactive computing surfaces. If these initiatives bear fruit, you may one day conduct your morning Facebook check on the back of a cereal box rather than on your phone.
First up is LightSpace, which uses an array of depth cameras and projectors to create interactive displays on everyday objects. The system needs to be calibrated to the room it’s installed in, but once it is, users can interact with projected menus and screens using their hands, or even move a projected display from one object to another. Don’t feel like crowding your team around a projection on a small desk? Drag it over to the wall instead. You can see a basic version of LightSpace in action in this intriguing demo video.
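To get a feel for what that room calibration buys you, here’s a minimal Python sketch of the core coordinate-mapping idea. The names and numbers are hypothetical, not Microsoft’s actual code: once calibration has estimated a mapping between camera and projector coordinates, a hand the camera spots can be converted into projector pixels so the projected display follows it.

```python
import numpy as np

# Hypothetical 3x3 homography, estimated once during room calibration,
# that maps camera pixel coordinates onto projector pixel coordinates.
H = np.array([
    [1.02, 0.01, -14.0],
    [0.00, 0.98,   6.5],
    [0.00, 0.00,   1.0],
])

def camera_to_projector(x, y):
    """Map a point seen by the camera into projector space."""
    p = H @ np.array([x, y, 1.0])     # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]   # perspective divide

# A hand detected at camera pixel (320, 240) tells the system where
# the projector should draw so the graphics land under the user's hand.
px, py = camera_to_projector(320, 240)
print(f"Draw the UI at projector pixel ({px:.0f}, {py:.0f})")
```

The real system goes further, registering multiple depth cameras and projectors in a shared 3D coordinate space so a display can hop from desk to wall to hand; the planar homography above is just the simplest version of that mapping.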
The OmniTouch project—a joint effort between Microsoft Research and the Human-Computer Interaction Institute at Carnegie Mellon University—mounts a rig containing a small pico projector and a Kinect-like 3D scanner on the user’s shoulder. The projector beams graphical images onto virtually any surface, while the 3D scanner’s depth-sensing capabilities transform the projection into an interactive, multi-touch-enabled input surface—and, thanks to some technical trickery, no special calibration or training is required. Check out the video below for a demonstration as well as a more technical explanation.
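That depth-sensing trickery is easier to picture with a toy example. The hedged Python sketch below (the thresholds and readings are invented for illustration, not OmniTouch’s published parameters) shows the basic principle: a fingertip counts as “touching” when the depth sensor sees it within a few millimeters of the surface behind it.

```python
TOUCH_MM = 8.0    # fingertip within ~8 mm of the surface counts as a tap
HOVER_MM = 40.0   # anything nearer than this to the surface is "hovering"

def classify_pixel(depth_mm, surface_mm):
    """Label one depth reading relative to the surface behind the finger."""
    gap = surface_mm - depth_mm   # how far above the surface this pixel sits
    if gap <= 0:
        return "surface"          # bare surface, nothing in front of it
    if gap <= TOUCH_MM:
        return "touch"            # fingertip pressed against the surface
    if gap <= HOVER_MM:
        return "hover"            # finger nearby, but not in contact
    return "hand"                 # the rest of the hand or arm

# Simulated readings as a finger presses down on a surface 600 mm away:
for depth in (600, 595, 570, 540):
    print(depth, "->", classify_pixel(depth, 600))
# 600 -> surface, 595 -> touch, 570 -> hover, 540 -> hand
```

Because the surface depth is measured live by the same sensor, a touch test like this adapts to whatever you project on (a wall, a notepad, or your own palm), which is why no per-surface calibration is needed.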