Magic Vision Lab will Melt, Distort, X-Ray and then Augment your Reality

Chris Sandor, a veteran AR researcher who has worked with some of the most distinguished AR labs and researchers in the world – at Graz University, Columbia University, TU Munich, and Canon Research – is now the Director of the Magic Vision Lab at the University of South Australia.
Chris and his team have recently revamped the lab’s website, which is a good opportunity to revisit some of their work previously covered during ISMAR.
First a word on the lab’s name.
In his “Sermon on the flatlands“, augmented reality prophet Bruce Sterling cautioned the AR community to stay away from terms like “magic”: “Magic is cheezy and deceitful. Practicing a leisure domain is a problem,” he argued.
Chris brings to his defense no lesser a sci-fi luminary than Arthur C. Clarke, who famously coined the third law: “Any sufficiently advanced technology is indistinguishable from magic”.
***
What do you think about the usage of the term “Magic” in conjunction with AR?
***
Now to substance in the revamped site:
Humans perceive their environment primarily through vision. Our goal is to enhance human vision with computer-generated graphics in order to amplify human intelligence on a world-wide scale. Our vision is shared by a large and growing community that is investigating Augmented Reality.
The lab focuses on 2 key areas:

Mobile Augmented Reality Visualizations (melting, distorting, X-ray)

The following X-ray Vision video is one of the more inspiring videos in AR:
The next video depicts a technique called “melt vision”, which is useful when a user wants to see points of interest hidden from view (occluded) and could also double as a simulation of a building demolition :)
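Since both effects boil down to compositing hidden geometry into the live video while preserving some occluder cues, here is a minimal sketch of that core idea, assuming OpenCV and same-size input images; it is an illustration only, not the lab’s actual technique or parameters.

```python
# Sketch of an X-ray-style composite: blend a rendering of the hidden scene
# into the camera frame, but keep strong edges of the occluder visible so the
# depth ordering still reads correctly. Inputs are hypothetical BGR images.
import cv2
import numpy as np

def xray_composite(camera_frame, hidden_render, base_alpha=0.6):
    # Edge map of the occluding surface (e.g. the wall in front of the POI).
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # thicken edges a bit
    edge_mask = (edges.astype(np.float32) / 255.0)[..., None]

    # Blend the hidden render over the video, then restore occluder edges.
    blend = (1.0 - base_alpha) * camera_frame.astype(np.float32) \
            + base_alpha * hidden_render.astype(np.float32)
    out = edge_mask * camera_frame.astype(np.float32) + (1.0 - edge_mask) * blend
    return out.astype(np.uint8)
```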


***
The second area of the lab’s focus is dubbed:

Visuo-haptic augmented reality systems

In layman’s terms, it’s about combining vision and touch.
In this video, a user wearing a head-worn display can see a virtual car and feel it with his right hand through a haptic device called the Phantom.
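Conceptually, the visual and haptic sides share one virtual model: the graphics draw it, while a much faster loop turns stylus penetration into a restoring force. Below is a hedged sketch of that idea using simple penalty-based force rendering; read_stylus_position, send_force, and the surface function are hypothetical stand-ins, not the Phantom’s real API.

```python
# Minimal visuo-haptic loop sketch: one virtual surface drives both the
# graphics overlay and the force sent to a stylus-type haptic device.
import numpy as np

STIFFNESS = 800.0  # N/m, a plausible stiffness for a hard virtual surface

def surface_height(x, y):
    # Hypothetical implicit surface of the virtual object (a flat panel at z = 0.1 m).
    return 0.1

def haptic_step(read_stylus_position, send_force):
    p = np.asarray(read_stylus_position())           # stylus tip position, metres
    penetration = surface_height(p[0], p[1]) - p[2]  # depth below the surface
    if penetration > 0.0:
        # Penalty-based rendering: push back in proportion to penetration depth.
        force = np.array([0.0, 0.0, STIFFNESS * penetration])
    else:
        force = np.zeros(3)
    send_force(force)
    return p  # the visual renderer draws the stylus proxy at this position
```

In practice the force loop runs at roughly 1 kHz, far faster than the graphics update, which is why the two are usually decoupled.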


This technology has huge potential for training both newcomers and experienced professionals in fields such as healthcare, design, and manufacturing.

To wrap up this AR Lab review –

Here is a classic: AR Weather, which received our tongue-in-cheek award for the “most down-under demo” at ISMAR 2008. All I have to show is this image, which depicts an AR application overlaying snow on a sunny day in Australia.

If you would like to showcase your work in augmented reality labs – let us know!

GDC 2009 Roundup: a (tiny) spark of augmented reality

The dust over GDC 2009 settled a while ago, and I finally got to reflect on the AR experience at the show. Guess which headline would summarize it best:

a) augmented reality was the talk of the show

b) the expo floor was swarming with AR demos

c) AR games snatched lucrative game awards

d) none of the above

[Photo: a friend in San Francisco wearing retro AR goggles]

Unfortunately (d) is the right answer.

But – and it’s a big ‘but’ – the ignition spark was there. The seed was planted. The first shot of the revolution was fired.

(OK, maybe the last metaphor went too far.)

Here are 5 triggers I identified that ignited the spark:

1) Blair

The first GDC augmented reality talk – ever: Blair MacIntyre covered the latest and greatest about mobile augmented reality in front of a packed room of game developers. Awesome.

2) Demos

Was it the first time AR demos (and more demos) were presented at a major Game Developers Conference?

Not sure – but it certainly was my first…

3)  Mentions in talks

Was Blair’s AR talk an isolated case?

Perhaps as a main topic it was. However, for the first time, I heard significant mentions of AR in multiple other talks. Check them out:

A talk about pervasive gaming. I liked the title: “Beyond the Screen: Principles of Pervasive Games” by the folks from Pervasive Games. They played with the concept of the whole world as the playground. These games are founded in the belief that doing things for real is pleasurable. Games that harness reality as a source book have interesting dynamics: everything in reality matters to the game, gameplay emerges from coincidence, and real and artificial blur.

Jane McGonigal’s fantastic talk, “Learning to Make Your Own Reality: How to Develop Games that Re-invent Life As We Know It”, introduced a concept she calls programmable reality. Augmented reality is among the key technologies enabling that idea.

“Camera Based Gaming: The Next Generation” by Diarmid Campbell attracted the attention of a room packed with game developers. He talked about Sony’s upcoming camera games for the PlayStation 3, such as EyePet. Armed with the EyeToy camera, these games will have the power to extract amusing gestures from players. Not quite AR – but it sure smells like it.

Stretching beyond entertainment: AR made a surprise appearance in a high-profile panel discussion featuring some of the gods of the gaming industry: Ed Fries, Lorne Lanning, Bing Gordon, Will Wright, and Peter Molyneux.
The best quote award went to Ed Fries for saying: “We need to take game mechanics and apply them to the real world”.


4) Meet ups

Dinners, lunches, business meetups, brainstorming sessions – I haven’t had that many meetings with AR enthusiasts since ISMAR 2008…

5) The iPhone

When it comes to bringing AR to the masses, the iPhone is a category of its own. And it doesn’t even know it yet… (see: why the iPhone changed everything)

-*-*-

Will we ever get to see answers a, b, or c become a real headline?

Most likely in the next few years, if you ask me.

A (tiny) spark was ignited.

Live From WARM ’09: The World’s Best Winter Augmented Reality Event

Welcome to WARM 2009, where augmented reality eggheads from both sides of the Danube meet for 2 days to share ideas and collaborate.

It’s the 4th year WARM has taken place – always at Graz University, and always in February – to provide an excuse for a skiing event once the big ideas are taken in. Hence the cunning logo.

This year, 54 attendees from 16 different organizations in 5 countries are expected (Austria, Germany, Switzerland, England and the US). The agenda is jam-packed with XX sessions, lab demos, and a keynote by Oliver Bimber. I have the unenviable pleasure of speaking last.

It’s 10 am. Lights are off. Spotlight on Dieter Schmalstieg, the master host, taking the stage to welcome everybody.
He admits the event started as a Graz meeting and just kept happening because guests kept coming.

Daniel Wagner, the eternal master of ceremonies of WARM, introduces Simon Hay from Cambridge (Tom Drummond’s group), the first speaker in the Computer Vision session. Simon will talk about “Repeatability experiments for interest point location and orientation assignment” – an improvement in feature-based matching for the rest of us…

The basic idea: detect interest regions in canonical parameters.
Use the known parameters produced by approaches such as Ferns, PhonySIFT, SIFT, MOPS, and MSER,
and accelerate and improve the matching with better interest point location detection and orientation assignment.

After a very convincing set of graphs, Simon concludes by confirming that Harris and FAST give reasonable performance and that gradient-based orientation assignment works better than expected.
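As a rough illustration of the two steps being evaluated, here is a hedged OpenCV sketch: FAST corner detection followed by a dominant-gradient orientation per keypoint. The detector threshold and patch size are illustrative guesses, not values from Simon’s experiments.

```python
# Detect interest points with FAST, then assign each a canonical orientation
# from the local image gradient so descriptors can be matched rotation-invariantly.
import cv2
import numpy as np

def fast_keypoints_with_orientation(gray, patch=15):
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(gray, None)

    # Image gradients used for the orientation assignment.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

    half = patch // 2
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        x0, x1 = max(x - half, 0), min(x + half + 1, gray.shape[1])
        y0, y1 = max(y - half, 0), min(y + half + 1, gray.shape[0])
        # Dominant gradient direction over the patch becomes the keypoint angle.
        angle = np.degrees(np.arctan2(gy[y0:y1, x0:x1].sum(),
                                      gx[y0:y1, x0:x1].sum()))
        kp.angle = float(angle % 360.0)
    return keypoints
```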

The next talk is by Qi Pan (from the same Cambridge group) about “Real-time interactive 3D reconstruction.”

From the abstract:
“High quality 3D reconstruction algorithms currently require an input sequence of images or video which is then processed offline for a lengthy time. After the process is complete, the reconstruction is viewed by the user to confirm the algorithm has modelled the input sequence successfully. Often certain parts of the reconstructed model may be inaccurate or sections may be missing due to insufficient coverage or occlusion in the input sequence. In these cases, a new input sequence needs to be obtained and the whole process repeated.
The aim of the project is to produce a real-time modelling system using the  key frame approach which provides immediate feedback about the quality of the input sequence. This enables the system to guide the user to provide additional views for reconstruction, yielding a complete model without having to collect a new input sequence.”

Couldn’t resist pointing out the psychological-sounding algorithms (and my ignorance): Qi uses epipolar geometry and PROSAC, then reconstructs a Delaunay triangulation followed by probabilistic tetrahedral carving. You’ve got to love these terms.

The result is pretty good, though still noisy – so stay tuned for future results of Qi’s research.
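To make the pipeline a bit more concrete, here is a toy two-view sketch along the same lines: match features, estimate the relative pose robustly (plain RANSAC here, where the talk uses PROSAC), triangulate points, and build a Delaunay tetrahedralisation that a full system would then carve using visibility constraints. The grayscale frames and the intrinsics matrix K are assumed inputs; this is not Qi’s code.

```python
# Toy two-view reconstruction: ORB matching -> essential matrix (RANSAC) ->
# pose recovery -> triangulation -> Delaunay tetrahedralisation of the cloud.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def two_view_points(frame_a, frame_b, K):
    orb = cv2.ORB_create(2000)
    kps_a, des_a = orb.detectAndCompute(frame_a, None)
    kps_b, des_b = orb.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)

    pts_a = np.float32([kps_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kps_b[m.trainIdx].pt for m in matches])

    # Robust relative pose between the two views.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

    inl = mask.ravel() > 0
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    hom = cv2.triangulatePoints(P0, P1,
                                pts_a[inl].T.astype(np.float64),
                                pts_b[inl].T.astype(np.float64))
    points3d = (hom[:3] / hom[3]).T

    # Delaunay tetrahedralisation of the sparse cloud; a real system would now
    # carve away tetrahedra that violate free-space (visibility) constraints.
    return points3d, Delaunay(points3d)
```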

The third talk is by Vincent Lepetit from the Computer Vision Lab at EPFL in Switzerland.
Vincent starts with a recap of keypoint recognition: train the system to recognize the keypoints of an object.
Vincent then demonstrates work leveraging this technique: an award-winning piece by Camille Scherrer, “Le monde des montagnes”, a beautiful augmented book, and a demo by Total Immersion targeted at advertising.

Now, on to the new research, dubbed Generic Trees. The motivation is to speed up the training phase and to scale.
A comparison shows it’s 35% faster; as proof, he shows a video of a SLAM application.
The Generic Trees method is used by Willow Garage for autonomous robotics – the same outfit that develops OpenCV.

Next, he shows camera pose recognition with 6 degrees of freedom (DOF) based on a single feature point (selected by the user). Impressive.
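For contrast, the conventional route to a 6-DOF camera pose needs several 2D-3D correspondences, which is exactly why doing it from a single user-selected point is remarkable. The sketch below shows only the standard multi-point estimate via OpenCV’s solvePnP, with assumed inputs; it is not Vincent’s method.

```python
# Standard 6-DOF pose from >= 4 known 2D-3D correspondences.
# object_points: 3D positions on the tracked object; image_points: their pixel
# locations; K: camera intrinsics. All are assumed to be provided.
import cv2
import numpy as np

def estimate_pose(object_points, image_points, K):
    dist = np.zeros(5)  # assume an undistorted camera for simplicity
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation + 3x1 translation = full 6 DOF
    return R, tvec
```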

That’s a wrap of the brainy Computer Vision session. Next is Oliver Bimber’s keynote.

Mobile Augmented Reality Goes Way Beyond Markers

The dust from CES 2009 has barely settled over many shiny new devices, and new advances in Handheld Augmented Reality software are already emerging from Graz.

Daniel Wagner and his team at Graz University have come up with new and improved capabilities.

High Speed Natural Feature Tracking on a mobile phone

We saw an early implementation of Studierstube ES at ISMAR 08, so I asked Daniel what’s new about this capability, besides being faster and more robust.

Daniel: We can now track multiple images and switch arbitrarily. I believe it is now at a level that it can really be used in practice.

Games alfresco: Looks great. Based on the video it seems that it runs on Windows Mobile 6 (ASUS p552w, iPAQ 614c). What about other platforms?

Daniel: Not bad! It is written in C/C++, [but] since this is pure math code, it could be ported easily to any platform. Our AR framework is still Windows Mobile only, although we now also have Linux support (desktop only since we lack a Linux phone). MacOS and Symbian are in the making and should be available in a month or so.
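To give a flavour of what “track multiple images and switch arbitrarily” involves, here is a hedged desktop sketch of multi-target natural feature tracking using off-the-shelf OpenCV (ORB matching plus a per-target RANSAC homography). Studierstube ES uses its own PhonySIFT-style pipeline tuned for phones; this is only an approximation with assumed grayscale inputs.

```python
# Multi-target natural feature tracking sketch: precompute descriptors for
# several reference images, then per camera frame keep whichever target
# yields the strongest homography.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def build_targets(reference_images):
    # Each target is a (keypoints, descriptors) pair for one reference image.
    return [orb.detectAndCompute(img, None) for img in reference_images]

def track(frame, targets, min_inliers=15):
    kps_f, des_f = orb.detectAndCompute(frame, None)
    if des_f is None:
        return None
    best = None
    for idx, (kps_t, des_t) in enumerate(targets):
        raw = matcher.knnMatch(des_t, des_f, k=2)
        good = [p[0] for p in raw
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) < min_inliers:
            continue
        src = np.float32([kps_t[m.queryIdx].pt for m in good])
        dst = np.float32([kps_f[m.trainIdx].pt for m in good])
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is None:
            continue
        inliers = int(mask.sum())
        if inliers >= min_inliers and (best is None or inliers > best[2]):
            best = (idx, H, inliers)  # switching between targets is free
    return best  # (target index, homography, inlier count) or None
```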

Tracking of Business Cards on a mobile phone

Daniel: On January 20th we have the official opening of our “Christian Doppler Lab” (founded by the Christian Doppler agency). For that purpose I created a small demo for tracking business cards. In the future we’ll replace the 3D content with something more useful…

I can’t talk about Daniel without mentioning WARM ’09; he is the main organizer of this Winter Augmented Reality Event, taking place on 12th-13th February 2009 at Graz University, Austria. Registration is over, but if you really want to go and have something cool to present – you may be able to convince Daniel to let you in.

Should you get your hands on this powerful technology (assuming Imagination makes it available for licensing soon), what would YOU do with it?