Help Understand Mobile AR Usage, Win 50 Euros

If you are a user of one of the mobile AR applications, such as Layar, Junaio, Wikitude, or Google Goggles (or any of the many others), and would like to help the (academic) research of augmented reality, boy do Markus Salo and Thomas Olsson have an offer for you.


The two researchers from Finland ask you to think about your most satisfying and most unsatisfying experiences with AR applications, share your view on the usefulness of such applications, and take the following survey. I must say that imho, mobile AR has not yet created a really satisfying moment, and the enjoyable moments it did create didn't last long (though some of the creative things people do with mobile AR are really mind-boggling).

Best of all, by participating in this survey, you enter a raffle to win one of ten Amazon vouchers, worth 50 euros each. Considering that AR is still a niche, your chances of winning are pretty good. Even better, Salo has promised to share the results, so we can all learn from this survey.

Take the survey.

The Great Disappearing Act

If you thought that augmented reality can only place virtual objects in real environments, think again. AR can also be used to 'delete' real objects, making them appear transparent.

Case in point, Francesco I. Cosco's work presented at ISMAR09 (which reminds me that ISMAR 2010 is less than two weeks away!). In this work, Cosco and fellow researchers from the University of Calabria in Italy and the Rey Juan Carlos University in Spain tried to add haptic interaction to an augmented environment. The problem is, haptic devices are visually obtrusive and aren't really part of the scene. The solution they came up with is quite 'magical':

More details on Cosco’s home page.

Cosco F. I., Garre C., Bruno F., Muzzupappa M., Otaduy M. A. “Augmented Touch without Visual Obtrusion”. Proceedings of the IEEE International Symposium on Mixed and Augmented Reality 2009.
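
The paper has the full pipeline, but the core 'diminished reality' trick can be sketched simply: capture the scene before the obtruding device enters it, then paint the device's pixels over with the stored background. Here is a minimal single-image illustration in Python with OpenCV (the file names and the device mask are hypothetical, and unlike this sketch, the real system re-renders the background to match the moving camera viewpoint rather than blending one still):

```python
import cv2
import numpy as np

# Hypothetical inputs: a live camera frame, a background shot captured
# before the haptic device entered the scene, and a binary mask marking
# the device's pixels in the current frame.
frame = cv2.imread("live_frame.png")
background = cv2.imread("clean_background.png")
mask = cv2.imread("device_mask.png", cv2.IMREAD_GRAYSCALE)

# Feather the mask edges so the 'deleted' region blends in smoothly.
alpha = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]  # broadcast over the three color channels

# Where the mask is on, show the stored background; elsewhere, the live frame.
diminished = (alpha * background + (1.0 - alpha) * frame).astype(np.uint8)
cv2.imwrite("diminished.png", diminished)
```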

Online Image Learning – The Next Big Leap in Mobile AR?

Mobile, image-recognition-based augmented reality is very cool, as evident from the Popcode demos we posted yesterday. However, creating the model the phone uses to recognize a new image still requires a desktop, hindering real-time creation and sharing of AR content.

Thanks to the work of researchers from Korea's Gwangju Institute of Science and Technology and Switzerland's EPFL, this needn't be the case anymore. In a paper titled "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones", accepted to ISMAR 2010, they present a method to scan surfaces and create recognition models right on your phone (no data is sent to a remote server).

You don't even need to take a perfect straight-on picture. As the video below shows, this means you can augment hard-to-reach surfaces. Best of all, you can share those models with your friends.

A little more detail over at Wonwoo Lee's blog.
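
For a feel of what building a 'recognition model' on the device involves, here is a minimal sketch using ORB keypoints from OpenCV as a stand-in for the authors' actual descriptor (which the post doesn't detail; file names and thresholds are made up):

```python
import cv2

# Build a tiny 'recognition model' (keypoints plus descriptors) from a
# single snapshot taken with the phone...
orb = cv2.ORB_create(nfeatures=500)
target = cv2.imread("poster_snapshot.png", cv2.IMREAD_GRAYSCALE)
kp_target, desc_target = orb.detectAndCompute(target, None)

# ...then try to recognize the same surface in a new camera frame.
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
kp_frame, desc_frame = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc_target, desc_frame)
good = [m for m in matches if m.distance < 50]

# A crude decision rule: enough good matches means 'surface recognized'.
print("recognized" if len(good) > 30 else "not recognized")
```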

Three Augmented Reality Demos at SIGGRAPH 2010

Earlier this week I reported on an interesting paper accepted to this year's SIGGRAPH conference, studying how augmented reality makes cookies taste better. However, it's far from the only appearance of AR at SIGGRAPH. Here are three more AR demos that took the stage during the conference:

Camera-less Smart Laser Projector
Using a laser beam both for scanning and for projecting images, researchers from the University of Tokyo (where the cookies paper also comes from) gained several advantages over the usual projector-plus-camera setup. Mainly, they eliminated the need to calibrate camera against projector, and gained the ability to scan 3D objects without stereoscopic cameras.

Camera-less Smart Laser Projector. Cassinelli, Zerroug, Ishikawa, Angesleva. SIGGRAPH 2010

QR-Code Calibration for Mobile Augmented Reality
This work by researchers from Kyushu University uses QR codes to fix the user's position in the world. It also shows you the problem with academic conferences: by the time your paper gets published, some company may have already deployed similar technology in the real world, as Metaio did with Junaio.

QR-code calibration for mobile augmented reality applications: linking a unique physical location to the digital world. Nikolaos, Kiyoshi. SIGGRAPH 2010.
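
The geometry behind 'fixing the user position' with a QR code is straightforward: the code's four corners are four known world points, so a single perspective-n-point solve recovers the camera pose. A rough sketch with OpenCV (the code's physical size and the camera intrinsics here are assumptions):

```python
import cv2
import numpy as np

frame = cv2.imread("frame_with_qr.png")

# Detect the QR code; `points` holds its four corners in pixel coordinates.
detector = cv2.QRCodeDetector()
payload, points, _ = detector.detectAndDecode(frame)

if points is not None:
    # Assume the printed code is 10 cm wide; its corners in a local world
    # frame anchored at the code (z = 0 on the wall it hangs on).
    s = 0.10
    object_pts = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]],
                          dtype=np.float32)

    # Hypothetical pinhole intrinsics; a real app would calibrate these.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_pts, points.reshape(4, 2), K, None)
    # rvec/tvec give the camera pose relative to the code; looking up the
    # code's known location (e.g., from its payload) yields a world position.
    print(payload, rvec.ravel(), tvec.ravel())
```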

Canon Brings Dinosaurs to Life
Not only researchers come to SIGGRAPH; companies do as well. Canon demoed mixed reality through a head-mounted display in their booth, featuring a cute velociraptor-like dino (it's lacking feathers; they should get some new biology books). Jump to the 25th second to see it in action:

Nothing Like the Smell of a Freshly Augmented Cookie

SIGGRAPH, the world's most important conference on computer graphics and a pixel-fetishist wonderland, is being held this week in Los Angeles, featuring several interesting papers on augmented reality. One of them explores the augmentation of cookies.

Yes, we have all seen cookies marked with augmented reality markers. Games Alfresco mentioned such cookies back in 2008. So how come such a paper got accepted into SIGGRAPH? Well, the simple answer is that you haven't seen (or rather, tasted) such cookies before.

Created by researchers at the University of Tokyo, Meta Cookie combines a head-up visual display with a head-up olfactory display to trick your senses. I'm not quite sure what the goal of that research was, but the researchers were successful in changing how the cookie tasted. Probably some smart marketer will find a way to sell it as a weight-loss device; you just need a way to print a marker on broccoli to make it taste (and look) like ice cream (with a crunchy texture).

Meta Cookie. Narumi, Kajinami, Tanikawa, Hirose. SIGGRAPH 2010.
image via BusinessWire

Augmented Reality Farmville

Addicted to Farmville? Have a green thumb but no garden? Envy real farmers but got allergies?
The guys from TU Munich have the perfect solution for you:

Augmented Farmville could be one heck of a layer for Layar/Junaio/Wikitude once better positioning is available. Think of the gold rush to get a virtual plot in major cities; imagine Times Square as a flower bed! Feeding real meteorological data into those virtual farms would add another interesting and educational twist. It may be the most stupid idea I have ever featured on this blog, but then again, nobody would have guessed that a farm simulator would be one of the most successful games of 2010.

AR – Not for the Faint-Hearted

Here's a fun video showing off the results of a 2008 paper by researchers from Imperial College London.
The abstract says:

Accurate estimation and tracking of dynamic tissue deformation is important to motion compensation, intra-operative surgical guidance and navigation in minimally invasive surgery. Current approaches to tissue deformation tracking are generally based on machine vision techniques for natural scenes which are not well suited to MIS because tissue deformation cannot be easily modeled by using ad hoc representations. Such techniques do not deal well with inter-reflection changes and may be susceptible to instrument occlusion. The purpose of this paper is to present an online learning based feature tracking method suitable for in vivo applications.

In other words, they are augmenting a live beating heart, muhahaha!

Soft Tissue Tracking for Minimally Invasive Surgery: Learning Local Deformation Online. Peter Mountney and Guang-Zhong Yang.
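
Mountney and Yang's learner is far more sophisticated than anything shown here, but the flavor of 'online' tracking, refreshing what you track as the tissue deforms, can be sketched with pyramidal Lucas-Kanade flow and periodic feature re-detection (the input video is hypothetical):

```python
import cv2

cap = cv2.VideoCapture("endoscope.mp4")  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the previous frame's features into the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)

    # The 'online' step: periodically re-learn features, so the tracker
    # adapts as the tissue deforms and specular highlights drift.
    frame_idx += 1
    if frame_idx % 30 == 0 or len(pts) < 50:
        pts = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7)

    prev_gray = gray
```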

Frown! You Are Augmenting Reality!

One of the hurdles in the future of augmented vision is avoiding sensory overload. In Tish Shute's latest interview, Will Wright notes (and he is far from the first to allude to this problem):

our senses are set up to know how to filter out 99% of what is coming into them. That is why they work, and that is what is beneficial. I think that is why AR needs to focus on… You look at what I can find out on Google or whatever, the amount of information is just astronomical. The hard part, the intelligent part, is how do you figure out that one tenth of 1% that I actually care about at this given second?

Researchers from Tokyo's Meiji University haven't quite figured out how to build that filter, but they do have a neat way to avoid overloading your senses. In their F.A.R.vision system, the level of augmentation is determined by your eyebrows: bend them inward (that is, frown) to make virtual objects more visible.
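
No implementation details are given, but the control side is essentially a mapping from some frown measure to overlay opacity. A toy sketch (the frown_intensity input, presumably derived from eyebrow landmarks, is entirely hypothetical):

```python
def augmentation_alpha(frown_intensity: float,
                       floor: float = 0.15, ceiling: float = 1.0) -> float:
    """Map a frown measure in [0, 1] to the opacity of virtual overlays.

    A relaxed face keeps augmentations faint (floor); a full frown
    makes them fully visible (ceiling).
    """
    f = min(max(frown_intensity, 0.0), 1.0)  # clamp away sensor noise
    return floor + f * (ceiling - floor)


# Relaxed face: barely-there overlay. Deep frown: fully visible.
print(augmentation_alpha(0.0), augmentation_alpha(1.0))
```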

You may look silly, but that explains why terminators always had an angry face when hunting down Sarah Connor. More information can be found here, in Japanese.

ARGO – Learn Go with Augmented Reality

Go. A game with such simple rules, yet so surprisingly hard to master. It's the last bastion of humanity against the rising power of game-playing artificial intelligence. And now, there's a cool projected-AR board that will help you hone your skills in the game.
Presented by a group of researchers from Japan and Finland, ARGO uses a projector to show game situations, concepts and problems on top of a regular Go board.

As shown in these modes, the advantage of our approach is to allow players to get information through the original interaction offered by the Go board and the stones. By superimposing information onto the board, players can concentrate on the match at hand or self-training without fragmenting their attention towards an instructional book and etc. This is important to make it possible for the players to allocate enough cognitive resources for recognizing the situations in the game. Using original game items as the basis preserves Ma and traditional look-and-feel, such as distance between players, touch of a wooden board and sound of stones.

I really like that they used the stones to control the menus. A nice touch, and a cool project as a whole.

More information here.

ARScope: Augmented Reality Through the Crystal Ball

ARScope, originally presented at SIGGRAPH 2008 by the University of Tokyo (yes, the guys behind ARForce), is obviously not a new concept. However, as far as I can see, it got little coverage at the time, and it certainly deserves our attention.

ARScope is an interesting combination of old-world metaphors, such as the magnifying glass and the crystal ball, with a head-mounted display and projected AR. The user holds a handheld device covered in reflective material, onto which an image is projected from a micro-projector worn on the user's head. Two cameras, one on the handheld device and one on the headset, together with a sophisticated algorithm, are used to calculate the user's point of view relative to the handheld device, so ARScope can project a suitable image. In the case of the crystal ball, it even lets two users see two different perspectives on the augmented world.
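
As described, the geometric core is a relative-pose problem: if each camera can estimate its own pose in some shared frame, the head-to-handheld transform is one matrix product away. A minimal sketch with 4x4 homogeneous transforms (the pose inputs are assumed to come from the tracking step):

```python
import numpy as np

def relative_pose(T_world_head: np.ndarray,
                  T_world_hand: np.ndarray) -> np.ndarray:
    """Pose of the handheld device expressed in the head's frame.

    Both inputs are 4x4 homogeneous transforms mapping local coordinates
    into a shared world frame (e.g., estimated by each camera).
    """
    return np.linalg.inv(T_world_head) @ T_world_hand


# Toy example: head at the origin, handheld 0.4 m in front of it; the
# micro-projector would render the scene from this relative viewpoint.
T_head = np.eye(4)
T_hand = np.eye(4)
T_hand[:3, 3] = [0.0, 0.0, 0.4]
print(relative_pose(T_head, T_hand))
```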

I can't understand why no one has commercialized this idea yet. It seems far more natural than an HMD that blocks your peripheral vision.

Another video can be found here, and an interview with one of ARScope’s creators is here. More details on the project’s website.