Concept Glasses to Photoshop Reality

Good magazine has apparently asked some interviewees to imagine something that would improve their daily life. The two ladies in the video below, Freya Estreller and Natasha Case, came up with Photoshop Glasses. A first step to improving the world may be seeing how much better it could be:

If you happen to think that’s a far-fetched idea, you are probably right. However, we have already seen head-tracking software that overlays virtual masks on faces. So maybe in your next job interview, if the interviewer looks just a tad like your ex-boyfriend, all you’ll need to do is wear some dorky glasses and see him as Optimus Prime.

via Red Tory

X-Ray Vision via Augmented Reality

The Wearable Computer Lab at the University of South Australia has recently uploaded to YouTube three demos showing some of its researchers’ work. Thomas covered one of those, AR Weather, but fortunately enough, he left me the more interesting work (imho).
The next clip shows part of Benjamin Avery’s PhD thesis, exploring the use of a head-mounted display to view the scenery behind buildings (as long as they are brick-walled buildings). If I understood correctly (and I couldn’t find the relevant paper online to verify this), the overlaid image is a three-dimensional rendition of the hidden scene, reconstructed from images taken by a previously positioned camera.

The interesting thing here is that a simple visual cue, such as the edges of the occluding items, can have such a dramatic effect on the perception of the augmented scene. It makes one wonder what else can be done to improve augmented reality beyond better image recognition and brute processing power. Is it possible that intentionally degrading the augmented image (for example, making it flicker or look tinted) would make for a better user experience? After all, users are used to seeing AR in movies, where it looks considerably low-tech (think Terminator vision) compared with what we are trying to build today.
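As a toy illustration of that edge cue (and emphatically not Avery’s actual rendering pipeline, which I haven’t seen), here is a small Python sketch. It blends a “hidden scene” image into the occluder’s footprint on a grayscale frame, but leaves the occluder’s silhouette pixels fully opaque, which is roughly the cue the demo relies on. The grid representation and all names are my own invention:

```python
def composite_xray(live, hidden, mask, alpha=0.6):
    """Blend the reconstructed hidden scene into the occluder's footprint,
    but keep the occluder's silhouette edge fully 'live' so the
    depth-ordering cue survives.

    live, hidden: 2D lists of grayscale values (same size)
    mask: 2D list of booleans, True where the occluder is
    """
    h, w = len(live), len(live[0])

    def is_edge(y, x):
        # A masked pixel on the silhouette: it has an unmasked 4-neighbour.
        if not mask[y][x]:
            return False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]:
                return True
        return False

    out = [row[:] for row in live]
    for y in range(h):
        for x in range(w):
            # Blend only interior occluder pixels; edges stay untouched.
            if mask[y][x] and not is_edge(y, x):
                out[y][x] = round((1 - alpha) * live[y][x] + alpha * hidden[y][x])
    return out
```

The point of the sketch is only that the silhouette is treated as sacred: everything inside it can be semi-transparent, but the edge itself stays solid, giving the eye something to anchor the depth ordering on.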

Anyway, here you can find Avery himself presenting his work and giving some more details about it (sorry, I couldn’t embed it here, even after several attempts).

Augmented Field Guides

The New York Times ran a story yesterday about a new breed of field guides, made not of paper but of data bytes and computer-vision algorithms.
The article mostly revolves around a new application coming to the iPhone that enables users to take photographs of leaves and, by doing so, identify the tree species they belong to.

The computer tree guide is good at narrowing down and finding the right species near the top of the list of possibilities, he said. “Instead of flipping through a field guide with 1,000 images, you are given 5 or 10 choices,” he said. The right choice may be second instead of first sometimes, “but that doesn’t really matter,” he said. “You can always use the English language — a description of the bark, for instance — to do the final identification.”
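The Columbia group’s actual algorithms are in their papers, which I haven’t reproduced here; but the ranking step the quote describes, narrowing 1,000 candidates down to a short list rather than picking a single winner, can be sketched with something as simple as a scale-normalized shape descriptor. Everything below (the centroid-distance signature, the toy species database) is an assumption for illustration only:

```python
import math

def shape_signature(contour, n_samples=32):
    """Centroid-distance signature: distances from the contour's centroid
    to a fixed number of evenly spaced contour points, normalized by the
    largest distance so the descriptor is scale-invariant."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    # Resample to a fixed length so signatures of different leaves compare.
    step = len(dists) / n_samples
    sig = [dists[int(i * step)] for i in range(n_samples)]
    peak = max(sig) or 1.0
    return [d / peak for d in sig]

def rank_species(query_contour, reference_db, top_k=5):
    """Return the top_k species whose stored leaf contour has the smallest
    Euclidean distance (in signature space) to the query leaf."""
    q = shape_signature(query_contour)
    scored = []
    for species, contour in reference_db.items():
        r = shape_signature(contour)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, r)))
        scored.append((d, species))
    scored.sort()
    return [species for _, species in scored[:top_k]]
```

This is exactly the “5 or 10 choices” behaviour from the quote: the descriptor doesn’t have to nail the species, it only has to put the right one near the top and leave the final call to the human and a bark description.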

The technology comes from this group at Columbia University; on their site you can find the academic papers describing the algorithms used in prior incarnations of the application. Now, I know some of you will say that this is not AR, since no image registration is involved. Well, it fits my definition of AR (it augments our reality), and looking at a previous prototype that involves a HUD and fiducial markers makes things even more obvious:

Anyway, I find this use of AR fascinating. It could really connect kids with nature, detaching them from the computer screen for a while and transforming any walk outside into an exploration. What do you think?