Augmented Teddy Bear You Can Touch

Researchers from the Tokyo Institute of Technology and the University of Electro-Communications may have created the cutest AR creature to date. At the latest SIGGRAPH conference they presented the following poster, describing a haptic ring that lets you pet virtual creatures (VCs) and lets virtual creatures touch you back.

We installed models of optical / touch sensation into our VCs. This enables VCs to react to users’ actions in various and appropriate ways. For example, they express attention by looking back when they are hit from behind. They express happiness when they are stroked gently, and they step away after a strong hit.
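
The touch-to-reaction logic described above reads like a small event-to-animation mapping. Here is a minimal sketch of that idea; the Touch fields, thresholds, and animation names are all my own invention for illustration, not taken from the poster:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    kind: str        # "hit" or "stroke" (hypothetical event types)
    strength: float  # 0.0-1.0, assumed scale from the ring's sensors
    location: str    # e.g. "front" or "back"

def reaction(touch: Touch) -> str:
    """Pick a creature animation for a sensed touch (illustrative only)."""
    if touch.kind == "hit" and touch.strength > 0.7:
        return "step_away"   # strong hit: retreat
    if touch.kind == "hit" and touch.location == "back":
        return "look_back"   # hit from behind: express attention
    if touch.kind == "stroke" and touch.strength < 0.3:
        return "happy"       # gentle stroke: express happiness
    return "idle"
```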

(if embedding does not work for you, the video is here)

More details can be found here. Via Development Memo for Ourselves.

2D Sketches Become 3D Reality

The guys at HIT Lab NZ and the Visual Media Lab at Ben-Gurion University, Israel, have uploaded a new video presenting the results of their ISMAR09 paper, “In-Place 3D Sketching for Authoring and Augmenting Mechanical Systems”. Since the paper is not online yet, I can’t really tell how much of it is truly automatic, or how robust it is, but the video is nothing less than magical:

I really envy those future high-school physics students…

ACME – Augmented Collaboration in Mixed Environments

I couldn’t decide whether I should dedicate a whole post or just a tweet to the next project. On the one hand, I don’t know much about it, and its homepage is in Finnish. On the other hand, the video is in English, and it shows a concept that could become a huge business – augmented telepresence:

In a nutshell, telepresence is a turbo-charged version of video conferencing that aspires to give you the feeling that you are really in the remote location. Some companies around the world are investing loads of money in developing better and better telepresence experiences, because they believe it’s going to be a billion-dollar market. Now, is there a better experience than seeing your remote pal in 3D across the table?

Obviously, ACME, the project featured in the above video, doesn’t come close to making this idea a reality. But it does let you see your companion’s avatar, which mimics his gestures, and share a virtual desktop with him.

Augmented Reality in Your Hands

Researchers from the University of California, Santa Barbara have a lofty goal in mind – “Anywhere Augmentation”, which means augmenting arbitrary environments with little prior preparation. Or as they put it:

The main goal of this work is to lower the barrier of broad acceptance for augmented reality by expanding beyond research prototypes that only work in prepared, controlled environments.

Now, if you have been following the world of augmented reality for the last year, you are probably familiar with the following situation. There’s some site offering an AR experience, but in order to access it, you have to print at least one black-and-white symbol. Unfortunately, the marker you printed just last week, for another site, doesn’t cut it. Each site requires its own marker, which becomes obsolete after two minutes. It’s a defining example of the prior preparation needed to experience AR, and the researchers at UCSB have a plan to eliminate it.
Enter HandyAR. Instead of using a marker, Taehee Lee and Tobias Höllerer want to track your outstretched hand.
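
I haven’t dug into Lee and Höllerer’s exact pipeline, but marker-free hand trackers of this kind typically start with skin-color segmentation followed by fingertip detection on the hand’s contour. A rough OpenCV sketch of that general family of approach (the thresholds are guesses, and this is not their implementation):

```python
import cv2
import numpy as np

def find_fingertips(frame_bgr):
    """Crude fingertip detector: skin segmentation + convexity defects."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range; a real system calibrates per user and lighting.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)  # assume the biggest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    tips = []
    if defects is not None:
        for start, _end, _far, depth in defects[:, 0]:
            if depth > 10000:                  # deep valleys separate fingers
                tips.append(tuple(hand[start][0]))
    return tips
```

An empty tip list would suggest a closed fist, which is the kind of signal you need for the grab gesture described next.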

You can even have some minimal interaction with virtual objects, dragging and dropping them, by closing and opening your hand, as the following video shows:

Ain’t it cool? You can find much more information over here, where you can also download a binary (Windows) and source code (Visual Studio 2005) to play with.
(via @totalimmersion)

How AR Browsers Should Be…

Frankly, I’ve grown tired of AR browsers. When Wikitude first launched I was excited. When Layar came out the whole blogosphere was thrilled. But now (only a couple of months after Layar went public), I’m feeling quite jaded. Everybody and his sister is making an AR browser application, and most of them are just the same.

Apparently, I’m not the only one harboring these feelings. The title of this post is taken from an email sent to me by Daniel Wagner of Graz University of Technology, one of the best-known names in the field of mobile AR. Wagner writes:

Rather than inventing the next (10th?) AR browser, we’ve been working on generally improving the usability of such applications. My team member Alessandro [Mulloni] has come up with some cool gestures and good ideas on how to avoid information overflow and how to let people navigate more easily in a typical AR browser scenario. The result is something like: “this is how an AR browser should actually be” – without restricting it to a specific application scenario.

While AR is generally experienced from a first-person perspective, Mulloni looked into extending it with panoramic and bird’s-eye perspectives, in order to enhance users’ understanding of their surroundings. This is how it looks:


In his paper, Mulloni finds that such smooth transitions into other perspectives can really help the user. So, what do you say? A new avenue for AR browsers, or is the real conclusion from this research that AR still needs to be complemented by a top-down map view in order to be usable?
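
For the curious, the core of such a transition is just an animated camera path between the live first-person pose and a virtual overhead pose. A minimal sketch (the heights and easing curve are mine, not from Mulloni’s paper):

```python
def blend_view(t: float, eye_height: float = 1.7, bird_height: float = 60.0):
    """Camera pose between first-person (t=0) and bird's-eye (t=1).

    Returns (height above ground in meters, pitch in degrees), where
    pitch 0 looks at the horizon and -90 straight down. Smoothstep
    easing avoids the jarring feel of a hard cut between perspectives.
    """
    ease = t * t * (3.0 - 2.0 * t)        # smoothstep easing
    height = eye_height + ease * (bird_height - eye_height)
    pitch = -90.0 * ease                  # tilt down toward a map-like view
    return height, pitch
```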

Bokode – Amazing New Type of Barcode

I find the next piece of research so amazingly cool that I can’t understand how I’ve missed it for so long (a whole three days!). Submitted to next month’s SIGGRAPH, Bokode, from MIT’s Media Lab, is a new way to visually encode information.
I’m not going to try to explain the technology behind it (that’s what the paper is for), but in a nutshell, it uses a small light source to create an image consisting of thousands of pixels. The pixels are only discernible when a camera is looking at the Bokode with its focus set to infinity. I hope the next video explains it better:

As the video above shows, there are very nice implications for augmented reality. Aside from encoding the identity of an object, a Bokode can also encode how the object is positioned relative to your camera. Though, if I understood correctly, the demonstration above uses two cameras: one shooting the object in focus, while the other looks at the Bokode.
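
If I read the idea right, the pose information comes almost for free from the optics: with the lens focused at infinity, every incoming ray direction maps to a unique sensor position, so the offset of the decoded pattern from the image center encodes the viewing angle. A back-of-the-envelope version (parameter names are mine; the paper’s actual pipeline is more involved):

```python
import math

def viewing_angle(offset_px: float, pixel_pitch_mm: float, focal_mm: float) -> float:
    """Estimate the angle between the camera's optical axis and the Bokode.

    At infinity focus, a ray arriving at angle theta lands a distance of
    f * tan(theta) from the image center, so theta = atan(offset / f).
    """
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(math.atan2(offset_mm, focal_mm))
```
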
Another obstacle in the way of wide adoption is that the Bokode currently requires a power source to operate. Nevertheless, it has already taken a step in the right direction, and it already has a short page on Wikipedia.
More information here and here. Via Augmented.org.

Augmented Pool Is Very Cool

Yep, it’s the silliest post title I’ve ever come up with. Nevertheless, this next video is really cool. It features both a robotic pool player and an augmented reality guidance system for human pool players (starting at 2:00).

It was developed by a team of researchers from Canada’s Queen’s University. Sadly, I couldn’t find much information about the augmented reality implementation. However, here’s an article about the robotic system, and I guess that once they had implemented the robot, advancing to AR only required identifying the cue stick.
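
I can only guess what the overlay computes, but the classic pool-guidance primitive is the “ghost ball”: the point the cue ball’s center must reach at impact for the object ball to head toward the pocket. A minimal sketch of that geometry (my own illustration, not the Queen’s team’s code):

```python
import numpy as np

BALL_RADIUS = 0.028575  # meters; a standard pool ball is 57.15 mm across

def ghost_ball(object_ball, pocket):
    """Cue-ball center at impact that sends the object ball to the pocket.

    The object ball leaves along the line of centers at contact, so the
    ghost ball sits one ball-diameter behind the object ball, directly
    opposite the pocket. An AR overlay would draw the cue's path to it.
    """
    object_ball = np.asarray(object_ball, dtype=float)
    direction = np.asarray(pocket, dtype=float) - object_ball
    direction /= np.linalg.norm(direction)
    return object_ball - 2 * BALL_RADIUS * direction

# e.g. object ball at (1.0, 0.5) m, pocket at (2.0, 1.0) m:
# ghost_ball((1.0, 0.5), (2.0, 1.0)) -> approximately [0.949, 0.474]
```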

Blair MacIntyre on UgoTrade

Tish Shute continues her enlightening series of interviews on UgoTrade. After her previous interviews with Ori Inbar and Robert Rice, Blair MacIntyre was a natural choice.
MacIntyre discusses his work at Georgia Tech (which I briefly wrote about here), and shares his perspective on future directions for mobile augmented reality.

A lot of folks think it will be tourist applications where there’s models of Times Square and models of Central Park and models of Notre Dame and the big square around that area in Paris and along the river and so on, or the models of Italian and Greek history sites – the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it’ll just sort of happen. And as long as there’s a way to have the equivalent of Mosaic for AR, the original web browser, that allows you to aggregate all these things. It’s not going to be a Wikitude. It’s not going to be this thing that lets you get a certain kind of data from a specific source, rather it’s the browser that allows you to link through into these data sources.

Read it all over here (and check some of the interesting links featured in the interview).
Curiously enough, a video of one of the games mentioned in the article, “Art of Defense”, was uploaded to YouTube today. It’s interesting research into how people interact when playing a collaborative AR game (see Bragfish for similar research with a competitive game):

X-Ray Vision via Augmented Reality

The Wearable Computer Lab at the University of South Australia has recently uploaded to YouTube three demos showing some of its researchers’ work. Thomas covered one of those, AR Weather, but fortunately enough, he left me the more interesting work (imho).
The next clip shows part of Benjamin Avery’s PhD thesis, exploring the use of a head-mounted display to view the scenery behind buildings (as long as they are brick-walled buildings). If I understood correctly (and I couldn’t find the relevant paper online to verify this), the overlaid image is a three-dimensional rendition of the hidden scene, reconstructed from images taken by a previously positioned camera.

The interesting thing here is that a simple visual cue, such as the edges of the occluding objects, can have such a dramatic effect on the perception of the augmented scene. It makes one wonder what else can be done to improve augmented reality beyond better image recognition and brute processing power. Is it possible that intentionally degrading the augmented image (for example, making it flicker or appear tainted) would make for a better user experience? After all, users are used to seeing AR in movies, where it looks considerably low-tech (think Terminator vision) compared with what we are trying to build today.
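
As a toy illustration of how cheap that edge cue is to produce: one could blend a rendered “hidden” view into the live frame and then re-stamp the occluder’s edges on top. This is only my guess at the effect, not Avery’s implementation:

```python
import cv2

def xray_composite(live_bgr, hidden_bgr, alpha=0.6):
    """Blend a rendered hidden scene into the live view, keeping the
    occluder's edges on top as a depth-ordering cue."""
    blend = cv2.addWeighted(hidden_bgr, alpha, live_bgr, 1.0 - alpha, 0)
    gray = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    blend[edges > 0] = (255, 255, 255)  # re-draw the occluder's edges
    return blend
```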

Anyway, here you can find Avery himself, presenting his work and giving some more details about it (sorry, I couldn’t embed it here, even after several attempts).