The Multi-Sensor Problem

Sensor systems like cameras, markers, RFID, and QR codes are usually used as single methods to align our augments. One challenge for a ubiquitous computing environment will be meshing the various available sensors together so computers have a seamless understanding of the world.

This video from TU Munich shows us how a multi-sensor system works. It appears to be from 2008 (or at least the linked paper is). Here’s the description of the project:

TUM-FAR 2008: Dynamic fusion of several sensors (Gyro, UWB Ubisense, flat marker, ART). A user walks down a hallway and enters a room, seeing different augmentations as he walks on: a sign at the door and a sheep on a table.

In the process, he is tracked by different devices, some planted in the environment (UWB, ART, paper marker) and some carried along with a mobile camera pack (gyro, UWB marker, ART marker). Our Ubitrack system automatically switches between different fusion modes depending on which sensors are currently delivering valid data. As a consequence, the stability of the augmentations varies a lot: when high-precision ART-based optical tracking is not available (outside ART tracking range, or ART marker covered by a bag), the sheep moves off the table. As soon as ART is back, the sheep returns to its original place on the table.

Note that the user does not have to reconfigure the fusion setup at any point. An independent Ubitrack client continuously watches the user’s current position and associates it with the known range of individual trackers, reconfiguring the fusion arrangements on the fly as the user moves about.
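To make the idea concrete, here’s a minimal sketch (not Ubitrack’s actual API, just an illustration) of the switching logic described above: each tick, pick the most precise sensor that is currently delivering valid data, falling back down a priority list when the best tracker drops out. The sensor names and data structures are my own assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class SensorReading:
    position: Tuple[float, float, float]  # pose estimate from this sensor
    valid: bool                           # is the sensor currently delivering usable data?


# Assumed precision ranking, best first: optical (ART) beats markers,
# markers beat UWB, and gyro dead-reckoning is the last resort.
SENSOR_PRIORITY = ["art", "marker", "uwb", "gyro"]


def select_fusion_source(readings: Dict[str, SensorReading]) -> Optional[str]:
    """Return the name of the most precise sensor with valid data, or None."""
    for name in SENSOR_PRIORITY:
        reading = readings.get(name)
        if reading is not None and reading.valid:
            return name
    return None


# Example tick: ART is occluded (say, the marker is covered by a bag),
# so the system falls back to UWB and the augmentation gets less stable.
readings = {
    "art": SensorReading((0.0, 0.0, 0.0), valid=False),
    "uwb": SensorReading((0.1, 0.2, 0.0), valid=True),
    "gyro": SensorReading((0.3, 0.1, 0.1), valid=True),
}
print(select_fusion_source(readings))  # -> "uwb"
```

A real fusion system would blend sensors (e.g. with a Kalman filter) rather than hard-switch, but the priority fallback captures why the sheep jumps when ART coverage is lost.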

The project raises an interesting question: is anyone working with multi-sensor systems? We know we’ll need a mix of GPS, local image recognition, and markers to achieve our goals, but is anyone tackling this complex problem for a real product? We’ve seen good image recognition with Google Goggles or SREngine, and GPS/accelerometer-based AR is popular, but I’d like to see an app use both to achieve its aims.

If you’re working on a multi-sensor project, we’d love to hear about it at Games Alfresco.

4 Responses

  1. Interesting stuff.
    Seems to me at some point we might want these “positioning providing” techniques as plugins to other bits of software.

  2. I’m pretty sure metaio is working on this combination. :)

    The TU Munich, Gudrun Klinker and her team planned to release the Ubitrack framework as open source “one day”. But so far you can only access it if you’re enrolled there…

    http://campar.in.tum.de/UbiTrack/UbitrackInstallation

    cheers.

  3. Here’s another example, from 2004, of hybrid multi-sensor tracking for AR. In this case, the way that overlaid information is visualized changes, depending upon the kind of tracking available: http://www.cs.ucsb.edu/~holl/pubs/hallaway-2004-aai.pdf

  4. Hi Toby,

Sorry to correct you, but your statement is not completely correct.
    The fact is that it is not possible to download from the SVN repository as a guest at the moment.

But it is possible to download the source code of Ubitrack and a precompiled version for Windows using this link: http://campar.in.tum.de/UbiTrack/Downloads
    This version of Ubitrack is not the latest one, but as far as I know it includes everything necessary to rebuild the sensor fusion approach shown.

    Cheers, Christian
