Three Things We Can Learn From Disney

Last year at ISMAR09, the keynote speech from Mark Mine of the Disney Imagineering group really intrigued me.  I had been a hardcore Disney hater before that, but Mark’s behind-the-scenes look at the technology of Disney, specifically how they use augmented reality, softened my stance.

Fast forward almost exactly one year: in a strange twist of fate (and of overenthusiastic grandparents), I found myself at Disney for a week.  Since I was going to be there anyway, I decided to check out all the AR attractions that Mark Mine had talked about in his presentation.  I got to see every application I wanted to see except one (Magic Sand), and this is what I learned from the experience:

1) True location based gaming can be a blast

The Kim Possible Adventure game in Epcot was my kids’ favorite event on the Disney properties.  Each player receives a cellphone and then follows the clues around until they solve the mystery.  The game uses RFID tags to know when a player is in the right location.  It’s as much an alternate reality game as AR, but either could do the job marvelously.  There were about eight missions in all, scattered across the various countries of Epcot, and the kids did every one of them.  I did a few with them and then let them do the rest on their own.

Now that markerless AR is becoming more common with products like Junaio Glue and Google Goggles, I’d like to see someone make a few AR games based on the Kim Possible model.  It was truly a fun experience that the whole family enjoyed.
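For the curious, here’s a minimal sketch of the check-in mechanic such a game could use.  This is purely my guess at how it might work: the tag IDs, clues, and `read_nearby_tag` interface are all invented for illustration, not anything Disney has published.

```python
# Hypothetical sketch of a Kim Possible-style location game loop.
# The tag IDs, clues, and reader interface are all invented.

MISSION = [
    {"clue": "Find the fountain in the Mexico pavilion", "tag": "tag-mx-01"},
    {"clue": "Check the lantern by the stave church in Norway", "tag": "tag-no-07"},
    {"clue": "Look behind the tea shop in China", "tag": "tag-cn-03"},
]

def run_mission(read_nearby_tag):
    """Walk the player through the clues, advancing only when the
    handheld's RFID reader reports the tag planted at the right spot."""
    for step in MISSION:
        print(step["clue"])
        while read_nearby_tag() != step["tag"]:
            pass  # keep polling until the player reaches the location
        print("Step complete!")
    print("Mystery solved!")

if __name__ == "__main__":
    # Simulated reader that visits the right tags in order, for testing.
    tags = iter(["tag-mx-01", "tag-no-07", "tag-cn-03"])
    run_mission(lambda: next(tags))
```

The nice thing about this model is that the hard part (knowing where the player is) is solved by cheap infrastructure rather than computer vision, which is exactly why it ran so smoothly on simple handsets.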

2) AR needs to be a product, not a feature

In the Downtown Disney area, there’s a wonderful LEGO store with amazing statues made of LEGO bricks.  In the back of the store, there’s a LEGO AR kiosk.  Since Metaio’s LEGO kiosk was one of the first applications of AR a few years ago, I won’t go into the details of what it is.  What I will talk about is the hour I stood in the back of the store and watched people interact with it.

Quite a number of parents and kids picked up boxes and held them in front of the camera.  They seemed amused for a second and then quickly put them down and moved on.  I asked a few people what they thought of it and they mostly shrugged without saying much.

The problem I see is that most current uses of AR are add-on features that are cool in themselves but don’t actually add to the experience of the product.  For AR to be truly memorable, it needs to be both conspicuous and integral to the product.

3) Projection-based AR is the future of amusement parks

Projection-based AR at Disney was everywhere: Buzz Lightyear’s talking statue, projected skins across landscapes and objects, and full-fledged projected realities that came alive as the haptic chair you sat in moved with them.  This one isn’t going to do much for the average AR programmer, whose medium is the cell phone rather than an amusement ride, but the parks are going to rely on AR more and more for their advanced special effects.  My favorite example was the Forbidden Journey ride in the Harry Potter area of Universal.  I honestly cannot tell you exactly what was AR, what was animatronics, and what was just smoke and mirrors, but it was truly awesome.  It actually felt like being in a place that exists only in our collective minds, sprung from J.K. Rowling’s imagination.  That makes the far future of AR both scary and exciting, and I’m glad to be along for the ride.

The Multi-Sensor Problem

Sensor systems like cameras, markers, RFID, and QR codes are usually used one at a time to align our augments.  One challenge for a ubiquitous computing environment will be meshing the various available sensors together so computers have a seamless understanding of the world.

This video from the Technical University of Munich (TUM) shows how a multi-sensor system works.  It appears to be from 2008 (or at least the linked paper is).  Here’s the description of the project:

TUM-FAR 2008: Dynamic fusion of several sensors (Gyro, UWB Ubisens, flat marker, ART). A user walks down a hallway and enters a room, seeing different augmentations as he walks: a sign at the door and a sheep on a table.

In the process, he is tracked by different devices, some planted in the environment (UWB, ART, paper marker), and some carried along with a mobile camera pack (gyro, UWB marker, ART marker). Our Ubitrack system automatically switches between different fusion modes depending on which sensors are currently delivering valid data. In consequence, the stability of the augmentations varies a lot: when high-precision ART-based optical tracking is lost (outside ART tracking range, or the ART marker covered by a bag), the sheep moves off the table. As soon as ART is back, the sheep is back in its original place on the table.

Note that the user does not have to reconfigure the fusion setup at any point in time. An independent Ubitrack client continuously watches the current position of the user and associates it with the known range of individual trackers, reconfiguring the fusion arrangements on the fly while the user moves about.
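To make that behavior concrete, here’s a rough sketch of priority-based fallback between trackers.  This is not Ubitrack’s actual API (which I haven’t seen), just an illustration of the core idea: always use the most accurate sensor that is currently delivering valid data.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Pose = Tuple[float, float, float]  # x, y, z in meters; orientation omitted

@dataclass
class Tracker:
    name: str
    accuracy_m: float                    # rough expected error, in meters
    read: Callable[[], Optional[Pose]]   # returns None when data is invalid

def fuse(trackers: List[Tracker]) -> Tuple[str, Optional[Pose]]:
    """Return a pose from the most accurate tracker that currently has
    valid data, mimicking the automatic mode switching described above."""
    for t in sorted(trackers, key=lambda t: t.accuracy_m):
        pose = t.read()
        if pose is not None:
            return t.name, pose
    return "none", None

# Toy scenario: the optical tracker is occluded (marker covered by a
# bag), so fusion silently falls back to the coarser UWB estimate.
art = Tracker("ART optical", 0.002, lambda: None)
uwb = Tracker("UWB", 0.15, lambda: (3.2, 1.0, 1.5))
source, pose = fuse([art, uwb])
print(source, pose)  # -> UWB (3.2, 1.0, 1.5)
```

The real system has the much harder job of transforming each sensor’s reading into a common coordinate frame before any switching can happen, which is presumably where most of Ubitrack’s work lives.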

The project brings up an interesting question: is anyone working on multi-sensor systems for a real product?  We know we’ll need a mix of GPS, local image recognition, and markers to achieve our goals.  We’ve seen good image recognition from Google Goggles and SREngine, and GPS/accelerometer-based AR is popular, but I’d like to see an app use both to achieve its aims.
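For the GPS-plus-vision case specifically, the natural pipeline is coarse-to-fine: use the GPS fix to shortlist nearby candidates, then let image recognition confirm which one the camera is actually looking at.  A hedged sketch follows; none of these function names come from a real SDK, and `match_image` is a placeholder for whatever recognizer you plug in.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance using an equirectangular projection;
    plenty accurate for shortlisting landmarks within a city block."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(dx, dy)  # Earth radius ~6371 km

def locate(gps_fix, landmarks, match_image, radius_m=100, threshold=0.8):
    """Coarse-to-fine lookup: GPS shortlists nearby candidates, then a
    vision matcher confirms.  `match_image(landmark)` stands in for a
    real recognizer and should return a confidence score in [0, 1]."""
    lat, lon = gps_fix
    nearby = [lm for lm in landmarks
              if distance_m(lat, lon, lm["lat"], lm["lon"]) <= radius_m]
    if not nearby:
        return None
    scored = [(match_image(lm), lm) for lm in nearby]
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score >= threshold else None
```

The appeal of this arrangement is that each sensor covers the other’s weakness: GPS alone can’t tell which storefront you’re facing, and vision alone can’t afford to match against every image on Earth.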

If you’re working on a multi-sensor project, we’d love to hear about it at Games Alfresco.