I thought that the following talk, given by MIT’s Pranav Mistry at TED India earlier this month, was worth posting over here. True, most of the use cases shown in this video were already presented in February. And true, Graz’s Daniel Wagner was absolutely right in calling Sixth Sense conceptual. Yet, even as a conceptual work, it’s beautiful, and the new “dragging real life to the computer screen” demo makes this video worth watching (or just jump to it at the 10-minute mark):
Here are some more augmented reality news stories from this week:
MIT’s Sixth Sense to be open-sourced. Well, I know most don’t see any future in projector-enabled augmented reality, yet there’s more to Sixth Sense than a projector (gesture recognition, image recognition), and it’s a lot more feasible than see-through HMD AR at this stage, so this is quite exciting news.
Augmented Planet has started doing a head-to-head comparison of the different AR browsers out there. Rumor has it that round two features some very surprising results.
A demo of Esquire’s augmented reality issue. The comments are even more interesting than the video itself – “Holy balls you guys are innovative! Props!”.
We are signing off this week with a new video out of TU Graz, showing annotations being added to panoramas online. I’ve written about it in more detail before, here, so we can just relax and enjoy the video this time:
I find the next piece of research so amazingly cool that I can’t understand how I’ve missed it for so long (a whole three days!). Submitted to next month’s SIGGRAPH, MIT Media Lab’s Bokode is a new way to visually encode information.
I’m not going to try to explain the technology behind it (that’s what the paper is for), but in a nutshell, it uses a small light source to create an image consisting of thousands of pixels. The pixels are only discernible when a camera is looking at the Bokode while its focus is set to infinity. I hope the next video explains it better:
As the video above shows, this has very nice implications for augmented reality. Aside from encoding the identity of the object, it can also encode how the object is positioned relative to your camera. Though, if I understood correctly, the demonstration above uses two cameras: one shooting the object in focus, while the other looks at the Bokode.
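For those curious about the optics, here is my back-of-the-envelope reading of the geometry described in the paper, sketched as a few lines of Python. This is not code from the MIT project; the function names and the focal lengths used in the example are purely illustrative assumptions. The idea is that the code pattern sits roughly at the focal plane of a tiny lenslet, so each point leaves as a collimated beam whose angle encodes its position, and a camera focused at infinity turns those angles back into sensor positions.

```python
import math

# Back-of-the-envelope sketch of the Bokode imaging geometry as I understand
# it from the paper. Nothing here comes from the MIT project's code; the
# function names and focal lengths are illustrative assumptions.

def observed_magnification(f_camera_mm, f_bokode_mm):
    """A point at distance x from the lenslet's axis leaves as a collimated
    beam at angle ~ x / f_bokode (pattern at the focal plane). A camera
    focused at infinity maps that angle back to a sensor position
    ~ f_camera * angle, so the millimeter-sized pattern shows up magnified
    by roughly f_camera / f_bokode."""
    return f_camera_mm / f_bokode_mm

def visible_pattern_offset_mm(view_angle_deg, f_bokode_mm):
    """Viewing the Bokode from an angle shifts which part of the pattern the
    camera sees -- which is how the relative orientation mentioned above
    could be read out."""
    return f_bokode_mm * math.tan(math.radians(view_angle_deg))

if __name__ == "__main__":
    # Example values only: a 50 mm camera lens and a lenslet with a 5 mm focal length.
    print(observed_magnification(50.0, 5.0))     # ~10x larger on the sensor
    print(visible_pattern_offset_mm(15.0, 5.0))  # ~1.3 mm shift across the code
```

In other words, the same trick that makes the tiny pattern readable (focusing at infinity) also makes the observed pattern depend on the viewing angle, which is where the pose information comes from.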
Another obstacle in the way of wide adoption is that the Bokode currently requires an energy source to operate. Nevertheless, it has already taken a step in the right direction, and currently has a short page on Wikipedia.
More information here and here. Via Augmented.org.