Caffeine levels are restored after a well-deserved coffee break, and we are back to discuss AR layouts.
Onstage, Steven Feiner is introducing the speakers of this session.
The first presenter is Nate Hagbi, who is touching on an unusual topic, one that is often taken as a given: In-Place Augmented Reality, a new way of storing and distributing augmented reality content.
In the past, AR was used mostly by “AR experts”. The main limitation to spreading it was mostly hardware related. We have come a long way since, and AR can nowadays be done on a cell phone.
Existing encoding methods such as ARTag, ARToolKit, Studierstube, and MXRToolkit are not human readable and require storing additional information in a back-end database.
Take the example of AR advertising for the Wellington Zoo, tried by Saatchi & Saatchi (2007).
This is a pretty complex approach, which requires publishing printed material, creating a database for the additional AR info, and querying the database before presenting the content.
In-Place Augmented Reality is a vision-based method in which all the content is encapsulated in the image itself and extracted from it.
The process: their visual language is used to encode the content in the image, and the visualization is then done as in a normal AR application.
The secret sauce of this method is the visual language used to encode the AR information.
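To make this concrete, here is a toy sketch. It is my own illustration, not Nate's actual visual language (which is far richer, and human readable where my grid is not): a content payload is packed into a grid of black/white cells inside the image and read back out of it, with no database anywhere in the loop.

```python
import numpy as np

# Toy stand-in for an in-place encoding: content bytes become a grid
# of black/white cells printed inside the image, then are decoded
# straight from the pixels. Everything the AR app needs travels in
# the picture itself.

CELL = 8  # pixel size of one encoded cell (an assumption)

def encode(content: bytes, cols: int = 16) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(content, dtype=np.uint8))
    rows = -(-len(bits) // cols)  # ceiling division
    grid = np.zeros(rows * cols, dtype=np.uint8)
    grid[:len(bits)] = bits
    grid = grid.reshape(rows, cols)
    # Blow each bit up to a CELL x CELL block: bit 0 -> white, bit 1 -> black
    return 255 * (1 - np.kron(grid, np.ones((CELL, CELL), dtype=np.uint8)))

def decode(img: np.ndarray, n_bytes: int) -> bytes:
    # Sample the center of each cell and threshold back to bits
    centers = img[CELL // 2::CELL, CELL // 2::CELL]
    bits = (centers < 128).astype(np.uint8).ravel()[:n_bytes * 8]
    return np.packbits(bits).tobytes()

payload = b'{"model":"lion","scale":1.5}'  # the content, not a DB key
image = encode(payload)
assert decode(image, len(payload)) == payload
```

The point is the round trip: the printed image is the storage and distribution medium, so there is nothing to look up on a server.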
There are multiple benefits to this approach: the content is human readable, it avoids the need for an AR database and for any user maintenance of the system, and it works without any network communication.
A disadvantage is that there is a limit to the amount of information that can be encoded in an image. Nate describes this as a trade-off.
I am also asking myself, as a distributor of AR applications: what if I want to change the AR data on the fly? Nate suggests that in such a case a hybrid approach could be used: some of the info is extracted from the encoded image, while additional image coding could point to dynamic material from the network (e.g. updated weather or episodic content).
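Sketched as code, under my own assumption that the decoded payload is JSON and may carry a URL for the dynamic part (none of these field names come from the talk):

```python
import json
import urllib.request

# Hybrid dispatch: the static content comes from the image itself;
# if the decoded payload also carries a pointer, the dynamic part
# is fetched over the network.

def resolve_content(decoded_payload: bytes) -> dict:
    content = json.loads(decoded_payload)
    dynamic_url = content.get("dynamic_url")  # hypothetical field
    if dynamic_url:
        with urllib.request.urlopen(dynamic_url) as resp:
            content["dynamic"] = json.loads(resp.read())
    return content
```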
~~~
The second presenter is Kohei Tanaka, who unveils An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability.
The idea, in short, is to place virtual information on the AR screen in a way that always maintains a viewable contrast.
An amusing example demonstrates a case where this approach can help dramatically: I am having tea with a friend, wearing my favorite see-through AR HMD. An alert generated by the AR system tries to warn me about a train I need to catch, but because the bright alert sits on top of a bright background, I miss the alert, and as a consequence miss the train…
Kohei’s approach makes sure that the alert is displayed in a part of the image where the contrast is good enough to make me aware of it. Next time, I will not miss the train…
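My rough reading of the idea in code; the scanning and scoring below are assumptions, not Kohei's published algorithm. A candidate alert box slides over the luminance of the scene behind the display, and the alert goes wherever its contrast against the background is highest.

```python
import numpy as np

# Contrast-aware placement sketch: pick the screen position where a
# virtual alert differs most from the real-world background behind it.

def best_position(background: np.ndarray, alert_lum: float,
                  box_h: int, box_w: int, stride: int = 16):
    """background: HxW luminance image of what the user sees through the HMD."""
    best, best_score = None, -1.0
    h, w = background.shape
    for y in range(0, h - box_h + 1, stride):
        for x in range(0, w - box_w + 1, stride):
            patch = background[y:y + box_h, x:x + box_w]
            # Weber-like contrast of the alert against this patch
            score = abs(alert_lum - patch.mean()) / (patch.mean() + 1e-6)
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score

# A bright alert over a mostly bright scene should land on the darkest region:
scene = np.full((480, 640), 220.0)
scene[300:460, 400:600] = 40.0  # a dark table in the view
print(best_position(scene, alert_lum=230.0, box_h=80, box_w=160))
```

Run on a bright scene with one dark table region, the bright alert lands on the table, which is exactly the tea scenario above.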
Question: isn’t it annoying for users that the images on screen constantly change position…?
Kohei responds that it requires further research…
~~~
Last in this session is Stephen Peterson from Linköping University, with a talk about Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality.
The domain: air traffic control, a profession that requires maintaining multiple sources of information and cognitively combining them into a single context.
Can Augmented Reality help?
The main challenge is labeling: how do you avoid a clutter of labels that could quickly confuse the air traffic controller?
The conclusion: remapping the stereoscopic depth of overlapping labels in far-field AR improves performance. In other words, when you need to display numerous labels on a screen where they might overlap with each other, use the depth of the view and display the labels in different 3D layers.
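A toy version of that idea (the greedy layer assignment is my simplification, not necessarily the paper's method): labels that overlap in 2D screen space are pushed to different depth layers, which amounts to a greedy graph coloring.

```python
from dataclasses import dataclass

@dataclass
class Label:
    x: int
    y: int
    w: int
    h: int
    depth_layer: int = 0  # 0 = default far-field depth

def overlaps(a: Label, b: Label) -> bool:
    # Axis-aligned rectangle intersection test in screen space
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

def remap_depths(labels: list[Label]) -> None:
    # Give each label the smallest depth layer not already used by
    # any label it overlaps with (greedy graph coloring).
    for i, lab in enumerate(labels):
        taken = {other.depth_layer for other in labels[:i]
                 if overlaps(lab, other)}
        layer = 0
        while layer in taken:
            layer += 1
        lab.depth_layer = layer

labels = [Label(10, 10, 100, 20), Label(50, 15, 100, 20), Label(300, 10, 80, 20)]
remap_depths(labels)
print([lab.depth_layer for lab in labels])  # [0, 1, 0]: overlapping pair separated
```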
================
From ISMAR ’08 Program:
Layout
- In-Place Augmented Reality (Nate Hagbi, Oriel Bergig, Jihad El-Sana, Klara Kedem, Mark Billinghurst)
- An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability (Kohei Tanaka, Yasue Kishino, Masakazu Miyamae, Tsutomu Terada, Shojiro Nishio)
- Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality (Stephen Peterson, Magnus Axholt, Stephen Ellis)