Live from ISMAR ’08: Latest and Greatest in Augmented Reality Applications

It’s getting late in the second day of ISMAR ’08 and things are heating up…the current session is about my favorite topic: Augmented Reality applications.

Unfortunately, I missed the first talk (I had a brilliant interview with Mark Billinghurst) by Raphael Grasset about the Design of a Mixed-Reality Book: Is It Still a Real Book?

I will do my best to catch up.

Next, Tsutomu Miyashita and Peter Meier (Metaio) are on stage to present an exciting project that games alfresco covered in our Museum roundup: An Augmented Reality Museum Guide, the result of a partnership between the Louvre-DNP Museum Lab and Metaio.

Miyashita introduces the project and describes the two main principles of this application: works appreciation and guidance.

Peter describes the technology requirements:

  • guide the user through the exhibition and provide added value to the exhibitions
  • integrate with an audio guide service
  • no markers or large-area tracking – only optical and mobile trackers

The technology used was Metaio’s Unifeye SDK, with a special program developed for the museum guide. Additional standard tools (such as Maya) were used for the modeling. All the 3D models were loaded on the mobile device. The location recognition was based on the approach introduced by Reitmayr and Drummond: robust model-based tracking for outdoor augmented reality (ISMAR 2006).
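
For the curious, here is a minimal sketch of what such a guide’s frame loop boils down to – markerless pose estimation, a lookup of the preloaded 3D model, then an overlay plus an audio cue. The function names and data are my own placeholders, not Metaio’s actual Unifeye API:

    from dataclasses import dataclass

    @dataclass
    class Exhibit:
        name: str
        model: str  # path to a 3D model bundled on the device

    # Hypothetical exhibit catalog, preloaded on the mobile device
    EXHIBITS = {0: Exhibit("Winged Victory", "models/victory.obj")}

    def estimate_pose(frame):
        """Placeholder for optical, markerless model-based tracking."""
        return {"exhibit_id": 0, "pose": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]}

    def render_overlay(frame, exhibit, pose, audio_guide):
        """Placeholder: draw the preloaded model over the camera image and cue audio."""
        print(f"Overlaying {exhibit.model} on {exhibit.name}; audio guide on: {audio_guide}")

    def guide_loop(camera_frames):
        for frame in camera_frames:
            result = estimate_pose(frame)
            exhibit = EXHIBITS.get(result["exhibit_id"])
            if exhibit:
                render_overlay(frame, exhibit, result["pose"], audio_guide=True)

    guide_loop(camera_frames=[None])  # one dummy frame, just to exercise the sketch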

Some 600 people experienced the “works appreciation” application and 300 the guidance application.

The visitors responses ranged from “what’s going on?” to “this is amazing!”.

In web terms, the AR application created a higher level of “stickiness”. Users came back to see the artwork and many took pictures of the exhibits. The computer graphics definitely captured the attention of users, and especially appealed to young visitors.

The guidance application got high marks: “I knew where I had to go”. On the flip side, the device was too heavy…

In conclusion, in this broad exposure of augmented reality to a wide audience, the reaction was mostly positive: the new experience came as a “good” surprise. Because this technology is so new to visitors, there is a need to keep making it more and more intuitive.

~~~

Third and last for this session is John Quarles, discussing A Mixed Reality System for Enabling Collocated After Action Review (AAMVID).

Augmented reality is a great tool for training.

Case in point: anesthesia education – keeping the patient asleep with anesthetic substances.

How could we use AR to help educate students on this task?

After action review has been used in the military for ages: discussing, after performing a task, what happened? How did I do? What can I do better?

AR can provide two functions: reviewing a fault test and providing directed instruction repetition.

With playback controls on a magic lens, the student can review her own actions and see the expert’s actions in the same situation, while viewing extra information about how the machine works (e.g. the flow of liquids in tubes) – which is essentially a real-time abstract simulation of the machine.
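
To make the playback idea concrete, here is a hedged little sketch: step through a time-stamped student log next to the expert log for the same scenario, attaching a toy “abstract simulation” of the machine state. The log format, actions, and state fields are my own invention, not the paper’s:

    # Illustrative logs (time in seconds, action string)
    student_log = [(0.0, "open O2 valve"), (2.5, "set flow to 4 L/min")]
    expert_log = [(0.0, "open O2 valve"), (1.0, "check vaporizer"), (2.0, "set flow to 2 L/min")]

    def machine_state(t):
        """Toy abstract simulation: flow ramps with time (purely illustrative)."""
        return {"o2_flow_lpm": min(4.0, 0.5 * t)}

    def review(t_start, t_end):
        """Replay both logs over a time window, as playback controls on a magic lens might."""
        for who, log in (("student", student_log), ("expert", expert_log)):
            for t, action in log:
                if t_start <= t <= t_end:
                    print(f"{t:4.1f}s [{who}] {action}  state={machine_state(t)}")

    review(0.0, 3.0)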

The result of a study with testers showed that users prefer the Expert Tutorial Mode, which collocates the expert log with real-time interaction.

Educators, on the other hand, can identify trends in the class and modify the course accordingly.
Using “gaze mapping”, the educator can see where many students are pointing their magic lenses and unearth an issue that requires a different teaching method. In addition, educators can see statistics of student interactions.
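
Here is a rough sketch of what such gaze mapping could look like under the hood – bucketing where each student’s lens points into a simple heatmap. The grid size and sample coordinates are purely illustrative:

    from collections import Counter

    def region_of(gaze_xy, cell=0.25):
        """Map a normalized gaze point to a coarse grid cell."""
        x, y = gaze_xy
        return (int(x // cell), int(y // cell))

    # Hypothetical normalized screen coordinates from several students' lenses
    student_gazes = [(0.1, 0.1), (0.12, 0.08), (0.6, 0.4), (0.11, 0.09)]

    heatmap = Counter(region_of(g) for g in student_gazes)
    hot_region, hits = heatmap.most_common(1)[0]
    print(f"Most-viewed region {hot_region} drew {hits} of {len(student_gazes)} lenses")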

Did students prefer the “magic lens” or a desktop?

Desktop was good for personal review (afterward), while the magic lens was better for external review.

The conclusion is that an after action review using AR works. Plus it’s a novel assessment tool for educators.

And the punch line: John Quarles would have killed to have such an after action review to help him practice for this talk… :-)

=====================

From ISMAR ’08 Program:

Applications

  • Design of a Mixed-Reality Book: Is It Still a Real Book?
    Raphael Grasset, Andreas Duenser, Mark Billinghurst
  • An Augmented Reality Museum Guide
    Tsutomu Miyashita, Peter Georg Meier, Tomoya Tachikawa, Stephanie Orlic, Tobias Eble, Volker Scholz, Andreas Gapel, Oliver Gerl, Stanimir Arnaudov, Sebastian Lieberknecht
  • A Mixed Reality System for Enabling Collocated After Action Review
    John Quarles, Samsun Lampotang, Ira Fischler, Paul Fishwick, Benjamin Lok

Exclusive! HitLab NZ Releases an Augmented Reality Authoring Tool for Non-Programmers

I am excited. I have in my hands a flier I just received from Mark Billinghurst (one of the AR gods at ISMAR ’08).

This flier includes the URL for a totally new augmented reality authoring tool developed by HITLab New Zealand. What’s really new about this tool is that it targets non-programmers (as in you and me).

BuildAR is a software application that enables you to create simple augmented reality scenes on your desktop.

BuildAR provides a graphical user interface that simplifies the process of authoring AR scenes, allowing you to experience augmented reality first hand on your desktop computer. All you need is a computer, a webcam and some printed patterns.

Mark says I am the first one to receive the flier – hence the exclusive news.

Without further ado (I haven’t even tried it myself yet…), here is the URL: http://www.hitlabnz.org/wiki/BuildAR

I promised Mark that by tonight (as clocked in Honolulu) the entire world will have tried it.

Don’t prove me wrong…

Tell us: does it work? Do you like it? Want more of these?

Live from ISMAR ’08: Augmented Reality Layouts

Caffeine levels are set after the well-deserved coffee break, and we are back to discuss AR layouts.

Onstage, Steven Feiner introduces the speakers of this session.

First presenter is Nate Hagbi, who is touching on an unusual topic that is often seen as a given: In-Place Augmented Reality – a new way of storing and distributing augmented reality content.

In the past, AR was used mostly by “AR experts”. The main limitation to spreading it was mostly hardware related. We have come a long way since, and AR can nowadays be done on a cell phone.

Existing encoding methods such as ARTag, ARToolKit, Studierstube, and MXRToolkit are not human readable and require storing additional information in a back-end database.

Take the example of AR advertising for the Wellington Zoo, tried by Saatchi &amp; Saatchi (2007).

This is a pretty complex approach, which requires publishing printed material, creating a database for the additional AR info, and querying the database before presenting.

In-Place Augmented Reality is a vision-based method for extracting content that is entirely encapsulated in the image itself.

The process: their visual language is used to encode the content in the image; the visualization is then done as in a normal AR application.

The secret sauce of this method is the visual language used to encode the AR information.

There are multiple benefits to this approach: the content is human readable, it avoids the need for an AR database and for any user maintenance of the system, and it works with no network communication.
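
To illustrate the flow (this is not Hagbi et al.’s actual decoder), here is a tiny sketch: everything needed for the augmentation is decoded from the image itself, with no database query and no network round trip. The decoder is a stub and the content fields are made up:

    def decode_visual_language(image):
        """Stub: in the real system, shapes and symbols printed in the image encode the AR content."""
        return {"model": "lion", "animation": "walk", "anchor": (120, 80)}

    def augment(image):
        content = decode_visual_language(image)  # no network, no back-end database
        print(f"Render {content['model']} ({content['animation']}) at {content['anchor']}")

    augment(image=None)  # dummy image, just to exercise the sketch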

A disadvantage is that there is a limit on the amount of info that can be encoded in an image. Nate describes this as a trade-off.

I am also asking myself, as a distributor of AR applications, what if I want to change AR data on the fly? Nate suggests that in such a case a hybrid approach could be used: some of the info is extracted from the encoded image, while additional image coding could point to dynamic material from the network (e.g. updated weather or episodic content).

~~~

Second presenter is Kohei Tanaka, who unveils An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability.

The idea in short is to place virtual information on the AR screen in a way that always maintains a viewable contrast.

The amusing example demonstrates a case where this approach can help dramatically: you are having tea with a friend, wearing your favorite see-through AR HMD. An alert generated by the AR system tries to warn you about a train you need to catch, but because the bright alert sits on top of a bright background, you miss the alert and, as a consequence, miss the train…

Kohei’s approach makes sure that the alert is displayed in a part of the image where the contrast is good enough to make you aware of it. Next time, you will not miss the train…
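
A rough sketch of how such contrast-aware placement might work – among candidate screen regions, pick the one where a simple label/background contrast measure is highest. The luminance values and region names are made up for illustration, not taken from the paper:

    def contrast(label_lum, bg_lum):
        """Simple Michelson-style contrast between label and background luminance."""
        return abs(label_lum - bg_lum) / (label_lum + bg_lum + 1e-6)

    # Hypothetical mean background luminance per candidate region (0 = black, 1 = white)
    candidate_regions = {"top-left": 0.9, "top-right": 0.3, "bottom": 0.5}
    label_luminance = 0.95  # a bright alert

    best = max(candidate_regions, key=lambda r: contrast(label_luminance, candidate_regions[r]))
    print(f"Place the alert at {best}")  # the darkest background wins for a bright label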

Question: Isn’t it annoying for users that the images on screen constantly change position…?

Kohei responds that it requires further research…

~~~

Last in this session is Stephen Peterson from Linköping University with a talk about Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality.

The domain: air traffic control – a profession that requires maintaining multiple sources of information and cognitively combining them into a single context.

Can Augmented Reality help?

The main challenge is labeling: how do you avoid a clutter of labels that could quickly confuse the air traffic controller?

The conclusion: remapping the stereoscopic depth of overlapping labels in far-field AR improves performance. In other words, when you need to display numerous labels that might overlap with each other on screen, use the depth of the view and display the labels in different 3D layers.
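
Here is a minimal sketch of that idea – labels that collide in 2D screen space get pushed to successively deeper stereoscopic layers. The overlap test and the depth values are simplified stand-ins, not the paper’s algorithm:

    # Hypothetical aircraft labels with 2D screen positions (pixels)
    labels = [
        {"id": "BA123", "x": 100, "y": 200},
        {"id": "LH456", "x": 105, "y": 198},  # overlaps the first label
        {"id": "AF789", "x": 400, "y": 120},
    ]

    def overlaps(a, b, radius=20):
        """Crude 2D overlap test between two labels."""
        return abs(a["x"] - b["x"]) < radius and abs(a["y"] - b["y"]) < radius

    depth_layers = [5.0, 7.0, 9.0]  # meters; far-field depths chosen arbitrarily here
    placed = []
    for label in labels:
        # one layer deeper for every already-placed label this one overlaps
        clashes = sum(overlaps(label, other) for other in placed)
        label["depth_m"] = depth_layers[min(clashes, len(depth_layers) - 1)]
        placed.append(label)
        print(label)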

================

From ISMAR ’08 Program:

Layout

  • In-Place Augmented Reality
    Nate Hagbi, Oriel Bergig, Jihad El-Sana, Klara Kedem, Mark Billinghurst
  • An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability
    Kohei Tanaka, Yasue Kishino, Masakazu Miyamae, Tsutomu Terada, Shojiro Nishio
  • Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality
    Stephen Peterson, Magnus Axholt, Stephen Ellis

Live from ISMAR ’08 in Cambridge: Enjoy the Weather…

“Enjoy the weather,” uttered sarcastically a kindhearted British witch (aka air hostess) as we were leaving the aircraft; surprisingly, we did on the first day. We were then promised this was accidental and surely the last day of summer. Splendid.


Venice, Italy? nope, Cambridge, UK!

I have landed in Cambridge, UK (where people go to augment their reality) and all I ever heard about it is true: British meadow green, majestic 600-year-old buildings, cosmopolitan young folks, fish cakes… a combination that gives this university city its unique aura; a great setting for the event starting tomorrow – reality, only better, at ISMAR ’08.

St. Catherine College - can't ask for a nicer place to stay...

For those who couldn’t make it, stay tuned for live coverage of ISMAR ’08, the world’s best augmented reality event.

Featuring AR pioneers such as Tom Drummond, Paul McIlroy, Mark Billinghurst, Blair MacIntyre, Daniel Wagner, Wayne Piekarski, Uli Bockholt (Fraunhofer IGD), Peter Meier, Mark A. Livingston, Diarmid Campbell, David Murray, Rolf R. Hainich, Oliver Bimber, Hideo Saito and many more –

– covering topics such as: industrial augmented reality, hand-held augmented reality, displays, user studies, applications, layouts, demos, state-of-the-art AR, and don’t miss the highly anticipated tracking competition.

Welcome all speakers and attendees to the event, and don’t forget: look right first!

If you are at the event (or not) and want to chat, share thoughts, or ask questions – leave a comment here or send a message on Facebook.

A New (Media) Power in the Race for Augmented Reality Supremacy

Media Power announced today a donation of $5M to the GVU research center at Georgia Tech – for the advancement of Mobile Augmented Reality (http://www.cc.gatech.edu/news/media-power-donates-5m-to-gvu-center).
It’s intriguing that Media Power’s founder is none other than the controversial Carl Freer, the executive from Gizmondo – a mobile game device that went belly up “under a cloud” after demonstrating huge potential in 2005. Although it made it to the #1 position of “The 10 Worst-Selling Handhelds of All Time” on GamePro, it was pretty popular among mobile augmented reality researchers (demo).

So now Carl will not only resurrect Gizmondo, but will also establish a new division – Magitech – “centered around the very promising field of Augmented Reality”.
The objective of the joint initiative between Magitech and Georgia Tech is to “envision, prototype and evaluate the next generation of mobile AR games and entertainment applications and positions the company as a leader in AR.”

This initiative looks promising mostly thanks to its ability to attract worldwide top talent in the field of augmented reality (many of them regular contributors to this blog – games alfresco):
Dr. Leonard Kleinrock (Professor, University of California at Los Angeles), Blair MacIntyre (Professor, Georgia Tech), Mark Billinghurst (Professor, University of Canterbury), Daniel Wagner (Graz University of Technology), Dr. Michael Gervautz (CEO Imagination, Vienna)

Now, what would you do with $5M and that kind of caliber to advance augmented reality games?

***Update***

A couple of months later, Media Power made another major investment. This time the sum was $2.7M and the beneficiary – Mark Billinghurst’s HIT Lab in New Zealand.