The Multi-Sensor Problem

Sensor systems like cameras, markers, RFID, and QR codes are usually used as single methods to align our augments.  One challenge for a ubiquitous computing environment will be meshing together the various available sensors so that computers have a seamless understanding of the world.

This video from the University of Munich shows us how a multi-sensor system works.  It appears to be from 2008 (or at least the linked paper is).  Here’s the description of the project:

TUM-FAR 2008: Dynamic fusion of several sensors (gyro, UWB Ubisense, flat marker, ART). A user walks down a hallway and enters a room, seeing different augmentations as he walks: a sign at the door and a sheep on a table.

In the process, he is tracked by different devices, some planted in the environment (UWB, ART, paper marker), and some carried along with a mobile camera pack (gyro, UWB marker, ART marker). Our Ubitrack system automatically switches between different fusion modes depending on which sensors are currently delivering valid data. In consequence, the stability of the augmentations varies a lot: when high-precision ART-based optical tracking is lost (outside ART tracking range, or ART marker covered by a bag), the sheep moves off the table. As soon as ART is back, the sheep is back in its original place on the table.

Note that the user does not have to reconfigure the fusion setup at any point in time. An independent Ubitrack client continuously watches the current position of the user and associates it with the known range of individual trackers, reconfiguring the fusion arrangements on the fly while the user moves about.
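The fallback behavior described above can be sketched as a simple priority rule: prefer the most precise tracker that is currently delivering valid data, and drop to coarser ones as they disappear. This is only an illustrative sketch; the tracker names, the `Reading` type, and `select_pose` are assumptions for this example, not the actual Ubitrack API.

```python
# Minimal sketch of priority-based sensor fallback, assuming a precision
# ordering of ART > paper marker > UWB > gyro. Not the Ubitrack API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    pose: tuple   # simplified (x, y, z) position
    valid: bool   # did the tracker deliver usable data this frame?

# Trackers ordered from highest to lowest precision.
PRIORITY = ["ART", "marker", "UWB", "gyro"]

def select_pose(readings: dict) -> Optional[tuple]:
    """Return the pose from the most precise tracker with valid data."""
    for name in PRIORITY:
        r = readings.get(name)
        if r is not None and r.valid:
            return r.pose
    return None  # no tracker valid: the augmentation cannot be anchored

# Example frame: ART is occluded (marker covered by a bag), so the
# system falls back to the paper marker rather than the noisier UWB.
frame = {
    "ART":    Reading(pose=(0.0, 0.0, 0.0), valid=False),
    "marker": Reading(pose=(1.02, 0.48, 0.0), valid=True),
    "UWB":    Reading(pose=(1.1, 0.5, 0.1), valid=True),
}
print(select_pose(frame))  # (1.02, 0.48, 0.0)
```

A real system would also blend estimates (e.g. gyro for orientation plus UWB for position) rather than picking a single winner, which is what the "fusion modes" in the video refer to.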

The project brings up an interesting question.  Is anyone working with multi-sensor systems?  We know we’ll need a mix of GPS, local image recognition, and markers to achieve our goals, but is anyone working on this complex problem for a real product?  We’ve seen good image recognition with Google Goggles or SREngine, and GPS/accelerometer-based AR is popular, but I’d like to see an app use both to achieve its aims.

If you’re working on a multi-sensor project, we’d love to hear about it at Games Alfresco.

Google Newsflash

So, no point in guessing whether Google is going to make a major AR move in 2010. It is going to do it in 2009:

Indeed, Google has awoken.

Cool Augmented Business Card from Toxin Labs

While the whole web is gushing over James Alliban’s augmented business card, I find this next implementation even more exciting. Don’t get me wrong, Alliban’s card is cool, but this one is a bit more useful:

It was created by Jonas Jäger, and more importantly, he doesn’t plan to keep the technology to himself. Jäger plans to release a front-end application that will let you create your own “presentation” that will be displayed when your business card is flashed in front of a web camera. It uses a QR code to identify your card from others, and an AR marker to give FLARToolKit something to get a fix on. All in all, it answers Thomas Carpenter’s call to create a service for this kind of augmented business card, and it really looks good.
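The split of duties here is worth spelling out: the QR code decides *what* to render (whose card, which presentation), while the AR marker decides *where* to render it (the pose for the overlay). A minimal sketch of that composition step, with entirely hypothetical function and field names (nothing here is the Toxin Labs or FLARToolKit API):

```python
# Hedged sketch: combine a decoded QR payload (identity) with a tracked
# marker pose (placement) into one render instruction. The card IDs,
# presentation table, and pose fields are illustrative assumptions.

PRESENTATIONS = {
    "card-0042": {"owner": "Jonas Jäger", "scene": "portfolio_reel"},
    "card-0099": {"owner": "Jane Doe",    "scene": "contact_panel"},
}

def compose_augmentation(qr_payload: str, marker_pose: dict) -> dict:
    """Look up the presentation for this card and attach the marker pose."""
    presentation = PRESENTATIONS.get(qr_payload)
    if presentation is None:
        raise KeyError(f"unknown card id: {qr_payload}")
    # The marker supplies position/rotation; the QR supplies content.
    return {**presentation, "pose": marker_pose}

# Example: the webcam decodes the QR and tracks the marker each frame.
result = compose_augmentation(
    "card-0042",
    {"x": 120, "y": 80, "rotation_deg": 15},
)
print(result["owner"])  # Jonas Jäger
```

The nice property of this design is that one generic marker layout works for every card: only the QR payload changes per person, so a single front-end application can serve everyone’s presentations.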

(Augmented Business Card at Toxin Labs)