Special Message From Mark Billinghurst: Augmented Reality for Non-Programmers Just Got Easier

The HIT Lab NZ has just released a professional version of its popular BuildAR AR scene building tool, which allows non-programmers to easily build AR scenes. The professional version includes a number of new features, such as support for audio and video, multiple objects on AR markers, 2D image and text loading, a VR viewing mode, and much more.
The software is available as a free beta now and will be commercially available at a low cost from July onwards. It can be downloaded from: http://www.buildar.co.nz/
Using BuildAR Pro, compelling AR scenes can be built in just a few minutes.

Pencil and Paper are not Dead: Augmented Reality Sketching Games at VR 2010

Tomorrow, I’ll be at the IEEE VR 2010 conference in Boston. Monday is dedicated to a series of augmented reality presentations.

One of the most interesting is:

In-Place Sketching for Content Authoring in Augmented Reality Games

By the all star team from Ben Gurion University (Israel) and HIT Lab (New Zealand):

  • Nate Hagbi
  • Raphaël Grasset
  • Oriel Bergig
  • Mark Billinghurst
  • Jihad El-Sana

When it comes to AR games, we are all still searching for "Pong": a simple game that will captivate millions of players and kick off this new genre.

One of the challenges in many AR games is the reliance on printouts of ugly markers.

Plus, many games use the markers as controllers, which is a bit awkward (especially for a bystander).

Sketching offers an alternative for a more natural user interface.

Sketching is more natural than drawing with a mouse on a PC, and even more intuitive than a touch screen. It's still one of the first things kids are taught in school.

It's not necessarily a better interface, but it's an alternative that offers very intuitive interaction and enriches the player's experience. I believe it could create a whole new genre of games.

In-place sketching has huge potential in AR gaming, but many questions arise:

  • What’s the design space for such a game?
  • What are the tools to be used?
  • How do you understand what the player meant in a sketch?
  • What’s the flow of interaction?
  • How do you track it?

What's "in-place AR"? It's when the augmented content is extracted from the real world (an illustration, an image, a sketch, or a real-life object).


Here are two game prototypes the team created, called AR Gardener and Sketch-Chaser. Both are played on a regular whiteboard.

AR Gardener

Draw symbols on the whiteboard, and 3D content is pulled from a database of objects to appear in an augmented reality (AR) scene.

The sketch determines what object to create, its location, scale, and rotation.
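As a rough illustration of that mapping, here is a minimal Python sketch of turning a recognized symbol and its sketched contour into an object with location, scale, and rotation. The symbol names, model paths, and heuristics are hypothetical, not the team's actual implementation.

```python
# Hypothetical sketch: mapping a recognized symbol to a virtual object and pose.
# Symbol names and the object database are illustrative only.
import math

OBJECT_DB = {
    "bench": "models/bench.obj",
    "cabin": "models/cabin.obj",
    "swing": "models/swing.obj",
}

def instantiate(symbol, contour):
    """Create a scene object from a recognized symbol and its sketched contour."""
    if symbol not in OBJECT_DB:
        return None
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)      # location = contour centroid
    scale = max(max(xs) - min(xs), max(ys) - min(ys))  # scale = bounding-box size
    # rotation = orientation of the first stroke segment
    dx, dy = contour[1][0] - contour[0][0], contour[1][1] - contour[0][1]
    rotation = math.degrees(math.atan2(dy, dx))
    return {"model": OBJECT_DB[symbol], "position": (cx, cy),
            "scale": scale, "rotation": rotation}
```

Any real system would of course derive pose from the tracked anchor rather than raw 2D strokes; this only shows the idea of one sketch driving all four properties.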

The outer line sketched here defines the game anchor and is used for tracking; in this game it becomes a brown ridge.

Simple symbols drawn generate a couple of benches, a cabin, and in the spirit of the playground theme – rockers, and swings.

Virtual elements can also be created based on a real-life object such as a leaf; here it is used to create a patch of grass using the color and shape of the leaf (and no, it can't recognize that it's a leaf, or any 3D object for that matter).

The color of the marker can define the type of virtual object created: for example, blue represents water, and other objects placed in it will sink.


In the second game you basically create an obstacle course for a car chase.

It's a capture-the-flag or tag game. The winner is whoever holds the flag for the most time.

First you draw, then play.

Once again, the continuous brown line represents a ridge and bounds the game.

A small circle with a dot in it represents the starting point for the cars.

A flag becomes the flag to capture. A simple square creates a building, etc.

The player adds more ridges to make it more challenging, and adds blue to generate a little pond (which also gives that area a different physical trait).

Then the graphics are generated, the players grab their beloved controllers, and the battle begins!

This research represents an opportunity for a whole new kind of game experience that could make kids play more in the real world.

Many questions still remain, such as: how do you recognize what the player really means in a sketch without requiring her to be an artist or an architect? And where does the sketch fit in the gameplay: before, after, or during?

Now, it’s up to game designers to figure out what sketching techniques work best, what’s fun, what’s interesting, and what’s just a doodle.

Who wants to design a sketch-based augmented reality game?

ISMAR 2009: Sketch and Shape Recognition Preview From Ben Gurion University

ISMAR 2009, the world's best augmented reality event, starts in 3 days!

If you are still contemplating whether to go, check out what you might be missing in our preview post.

The folks from the Visual Media Lab at Ben Gurion University in collaboration with HIT Lab NZ are preparing a real treat for ISMAR 2009 participants.

Sketch recognition (already covered in our previous post) is a major break from "ugly" markers or NFT (tracking of natural 2D images). It is the dawn of user-generated content for augmented reality, and an intuitive new approach for changing the CONTENT overlaid on a marker. Big wow.

In-Place 3D Sketching

But the team led by Nate Hagbi and Oriel Bergig (with support from Jihad El-Sana and Mark Billinghurst) is just warming up… In the next video, Nate shows how any sketch you draw on paper (or even on your hand!) can be tracked.

So are you telling me I won’t need to print stuff every time I want to play with augmented reality?
-That’s right! Hug a tree and save some ink!

Shape Recognition and Pose Estimation

But wait, there is more!

Nate says this demo already runs on an iPhone.

And to prove it, he is willing to share the code used to access the live video on iPhone 3.0.
(note: this code accesses a private API on the iPhone SDK)

Ready for the BIG NEWS?

For the first time ever, the core code necessary for real augmented reality ("real" here means precise alignment of graphics overlaid on real-life objects) on iPhone 3.0 is available to the public.

To get access to the source code – send us an email.

May a thousand augmented reality apps bloom!

ISMAR 2009: Sneak Peek from HIT Lab New Zealand

ISMAR, the world’s best Augmented Reality (AR) event is just 11 days away!

We have already provided a sneak preview of some of the demos.

Here are 2 research results, to be introduced at ISMAR, from one of the most prolific AR labs in the world, HIT Lab NZ, courtesy of Mark Billinghurst:

Embedded AR

We have been developing AR software for the Beagle Board OMAP3 development kit. This allows you to run a whole AR system on a $150 piece of embedded hardware and use Linux for development. The OMAP3 chip is the same one found in many new smartphones, so it is a great way to do some benchmarking and prototyping for mobile phone AR applications.

If EmbeddedAR sees adoption similar to the open source ARToolKit, then we'll soon see AR-enabled devices popping up like mushrooms after the rain. Potentially very cool.

Android AR

We have been developing our own mobile outdoor AR platform based on the Android operating system. We are using GPS and compass information to overlay 3D virtual models on the real world outdoors. Unlike some other systems we support full 3D model loading and also model manipulation, plus rendering effects such as shadows etc.

That's not as new. Rouli would categorize it as a YAARB™ (Yet Another AR Browser…).
Wikitude and Layar (as well as other browsers) have similar capabilities (or soon will), and are already open and accessible to many developers.

Want to learn more about it? Check out Android AR.


Just 2 more reasons to go to ISMAR 2009. It is going to be HUGE!

Don't wait any longer – register today!

Special Message from Mark Billinghurst: Introducing FLARManager – Can Building AR Apps in Flash Be Easier?

July 1st 2009  Press Release

ARToolworks Releases Commercial License for FLARManager

ARToolworks is very pleased to announce that it is able to offer commercial licenses for the popular FLARManager software. FLARManager is a software framework developed by Eric Socolofsky that makes building FLARToolKit Flash based Augmented Reality applications easier.

FLARManager decouples the marker-tracking functionality from Papervision3D, and provides a more robust event-based system for managing marker addition, update, and removal. It supports detection and management of multiple patterns, and multiple markers of a given pattern.
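To make that event-based design concrete, here is a conceptual Python sketch of a marker manager in the spirit of what the paragraph describes. FLARManager itself is ActionScript; the class, method names, and event names below are illustrative assumptions, not FLARManager's actual API.

```python
# Conceptual sketch of an event-based marker manager: listeners are notified
# when a marker is added, updated, or removed between frames. Names are
# illustrative, not FLARManager's real API.
class MarkerEvents:
    MARKER_ADDED, MARKER_UPDATED, MARKER_REMOVED = "added", "updated", "removed"

    def __init__(self):
        self._listeners = {}
        self._active = {}  # marker id -> last known pose

    def add_listener(self, event, fn):
        self._listeners.setdefault(event, []).append(fn)

    def _dispatch(self, event, marker_id, pose=None):
        for fn in self._listeners.get(event, []):
            fn(marker_id, pose)

    def process_frame(self, detections):
        """detections: dict of marker id -> pose seen in the current frame."""
        for mid, pose in detections.items():
            event = self.MARKER_UPDATED if mid in self._active else self.MARKER_ADDED
            self._active[mid] = pose
            self._dispatch(event, mid, pose)
        for mid in list(self._active):
            if mid not in detections:  # marker lost this frame
                del self._active[mid]
                self._dispatch(self.MARKER_REMOVED, mid)
```

The point of the decoupling is that application code subscribes to these events instead of polling the tracker every frame.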

Most importantly, FLARManager sits on top of FLARToolKit and makes it much faster and easier to develop flash based AR applications, typically half the time or less of developing a straight FLARToolKit application.

Philip Lamb, CTO of ARToolworks, says: "We are delighted to be able to provide commercial licenses for this outstanding tool. This will enable FLARToolKit developers to build Flash AR applications quicker than ever before, and it is the perfect complement to our existing product line."

FLARManager will continue to be freely available under a GPL license from http://transmote.com/flar/, but ARToolworks has the exclusive rights to sell commercial licenses to those companies that do not want to share the source code of their applications as required by the GPL.

The developer of FLARManager, Eric Socolofsky, says, “I’m excited to be able to offer FLARManager to both the commercial and experimental community.  FLARManager began as an effort to bring FLARToolkit to a wider audience, and this commercial license will help to expand the reach of augmented reality and new interfaces to the web.”

For a limited time, ARToolworks is selling FLARManager for a reduced price of only $295 USD for a single product license, and also selling a discounted bundle of FLARToolKit and FLARManager licenses together. FLARToolKit is required to use FLARManager.

Please contact sales@artoolworks.com for more details.

How to Get the Next Generation Hooked on Augmented Reality – Today

Our belief:

…in 10-15 years everyone will use Augmented Reality to experience the world in a more meaningful way.

Our collective mission:

…nurture a healthy industry that will drive the adoption sooner than later.

So where do we start?

…by educating the youngest “digital natives”.

That generation is ripe and eager to try new experiences that speak their language. And that same generation will carry the AR movement to its glory.

The challenge is how to give them something they like, and at the same time offer value to those who hold the buying power  – their parents, guardians, or teachers.

Tech-savvy parents and teachers tend to recognize the value of PCs and video games in educating their kids – but they hate the isolation that results from too many hours in front of the screen.

Eric Klopfer argues in his excellent book, Augmented Learning, that we should give them mobile learning games:

"These games use social dynamics and real world contexts to enhance game play…and can create compelling educational and engaging environments for learners…help develop 21st century skills…tackle complex problems…and acquire information in just-in-time fashion."

Eric doesn't stop at arguing; he actually practices what he preaches. Together with colleagues at MIT's Teacher Education Program and the Education Arcade, in collaboration with the University of Wisconsin-Madison and Harvard, they developed multiple mobile games (see below) – and experimented with and improved them – with kids.

And they're not alone. Researchers around the world have studied this huge opportunity and written about it extensively.

Future Lab in the UK is passionate about transforming the way people learn and developing new approaches to learning for the 21st century (see games below).

Mark Billinghurst, an AR god from New Zealand’s HIT Lab, published this guide about Augmented Reality in Education.

Mike Adams ranted in 2004 about the prospects and dangers of augmented reality games in his passionate  The Top Ten Technologies: #3 Augmented Reality

Cathy Cavanaugh wrote the essay "Augmented Reality Gaming in Education for Engaged Learning" as the fifth chapter of a massive handbook dubbed Effective Electronic Gaming in Education. (You can get it for $695.00 at Information Science Reference.)

Cavanaugh explores a (surprisingly large) number of educational games developed in the last 4 years:

Most were designed to teach concepts in scientific systems, and the remaining AR games focus on the difficult-to-master, ill- defined domains of communication, managing data collected in the field, problem solving, and understanding cultural and historic foundations of a region.

Based on that list, here is an (alphabetical) compilation of mobile educational games in recent history:

Big Fish Little Fish (MIT)

Concepts including predator-prey dynamics, overfishing, biodiversity, and evolution, for school-age children.

Groups of students use handheld devices while physically interacting with each other to simulate fish feeding behavior.

Charles River City (MIT)

Outdoor GPS-based Augmented Reality game for teenagers. Players team up as experts including scientists, public health experts, and environmental specialists to analyze and solve an outbreak of illness coinciding with a major event in the Boston Metro Area.

Create-a-Scape (Future Lab)

Mediascapes are a powerful way of engaging with the world around us. Using PDAs they offer new opportunities to explore and interact with the landscape in exciting and varied ways.

Eduventure Middle Rhine (Institute for Knowledge Media)

Learning the cultural history of the Middle Rhine Valley for adults. Learners alternate between problem solving using video of the castle setting and problem exploration using mobile devices in the real castle.

Environmental Detectives (MIT)

Collaborative understanding of scientific and social aspects of threats to the environment and public health for adults. Participants role-play as teams of scientists investigating contaminated water using networked handheld devices in a field setting.

Epidemic Menace (Fraunhofer Institute)

Collaborative problem solving and experiences with learning arts for adults. Teams assume the roles of medical experts to battle a threatening virus using gaming and communication devices in a room and outdoors.

HandLeR (U. of Birmingham)

Support for field-based learning of children ages 9-11. Groups of children respond to scenarios in the field using a portable data collection and communication device.

Live Long and Prosper (MIT)

Concepts including genetics and experimental design for school-age children. Groups of students use handheld devices while physically interacting with each other to simulate the genetic actions of reproduction.

Mobi Mission (Future Lab)

Communication and reflection activities for teenagers.

Groups of students write verbal missions and respond to the missions of others using cell phones.

Mystery @ the Museum (MIT)

Collaborative thinking skills for adults and youngsters. Teams consisting of a Biologist, a Technologist and a Detective must work together to solve a crime at the Museum of Science.

Newtoon (Future Lab)

Physics principles for adolescents. Students use mobile phones and Web sites to play, create, and share games that demonstrate physics principles.

Outbreak @ MIT (MIT)

Experience with the complexities of responding to an avian flu outbreak, for young adults.

Players are brought in to investigate a potential epidemic on campus with hand-held networked Pocket PCs.

Savannah (Future Lab)

The science of living things interacting within an ecosystem, for ages 11-12. Children, acting as lions, navigate the savannah using mobile handheld devices.

Sugar and Spice (MIT)

Concepts including population economics and mathematics for school-age children. Groups of students use handheld devices while physically interacting with each other to simulate interactions between populations and resources.

Virus (MIT)

Concepts including epidemics, scientific method, and population growth for school-age children. Groups of students use handheld devices while physically interacting with each other to simulate the spread of disease.

So what’s next?

These old games have built-in educational value, they strive to be more fun than traditional classroom lessons, and most importantly – they achieve it while detaching children from the screen.

However, none of these games has really made it to the mass market.

In order to break into the mainstream, games will have to be

  • more visual (see what you mean),
  • more intuitive (touchscreen and accelerometers – drop the Pocket PC look & feel),
  • more ubiquitous (play anywhere, anytime),
  • and they will have to run on devices that look more like an iPhone than a Newton.

Devices for education are, in fact, the main topic of the second part of this post.

Stay tuned. Or better yet – tell us what you think.

Live from ISMAR ’08: The Gods of Augmented Reality About the Next 10 Years

Welcome to the climax of ISMAR '08. On stage are the 9 "gods" of the augmented reality community, sitting in a panel to muse about the next 10 years of augmented reality.

Dieter Schmalstieg took on the unenviable job of moderating this crowd of bigwigs. We'll see if he can hold them to 3 minutes each.

Here is a blow-by-blow coverage of their thoughts.

Ron Azuma (HRL)

The only way for AR to succeed is to insert AR into our daily lives – it has to be available all the time (like Thad Starner from Georgia Tech, who always wears his computer).
Ron asks: What if we succeed? What are the social ramifications? Those who have thought about it are science fiction writers, such as Vernor Vinge (have you read Rainbows End and Synthetic Serendipity?).

Reinhold Behringer (Leeds)

AR is at the threshold of broad application.
Cameras, GPS, and bandwidth have improved immensely. The field has split into lo-fi AR (approximate registration, low-end hardware) and high-end AR (live see-through displays, etc.).
What's missing are APIs, common frameworks, and an ARML descriptor (standardization).

Mark Billinghurst (HitLab NZ)

Mobility (now) – it took 10 years to go from backpack to palm.
Ubiquity (5+ years) – how will AR devices work with other devices (TV, home theater, …)?
Sociability – it took us 10 years to go from 2 to 4 to 8 users. When will we have massive scale?
Next is AR 2.0, with massive user-generated content and a major shift from technology to user interaction.

Steve Feiner – Columbia

AR means “The world = your user interface”
What will it take to make this possible?
Backpacks are ridiculous; handheld devices will look ridiculous 5 years from now – so don’t write off eyewear.
A big one is dynamic global databases for identification/tracking of real-world objects. Tracking could be viewed as "just" search (granted, a new kind of search).
There is more to AR than registration; AR presentations need to be designed (AR layouts).

Gudrun Klinker – TU München

Integrating AR with ubiquitous computing. We interface with reality through our senses, and some interfaces are mental. We need those lenses to connect to our "senses" (not just visually – it could also be sound, etc.). Combining the virtual with the real: where is the information, and can we see it? How do we communicate with the stationary world? We need to connect with the room we are in and hear its "story". The devices at least need to talk to each other.
We also need to think about "augmented" buildings; they do not evolve as fast as cell phones. Another aspect is how we are going to survive "this thing". We need many more usability studies, connected with real-world applications. The ultimate test (I challenge you to show it in next year's competition) is a navigation system for runners. It's easy to do for cars – but may be harder for people.

Nassir Navab – TU München

Medical augmented reality – showing fascinating videos of medical overlays.

The simplest idea is getting into the operating room – combining X-ray and optics as part of the common operating workflow.

Next is the fusion of pre/intra-operative functional and anatomical imaging; patient motion tracking and deformable registration; adaptive, intuitive, and interactive visualization; and integration into the surgical workflow.
Finally we need to focus on changing the culture of surgeons (e.g. training with AR simulation).

Haruo Takemura – Osaka University

He shows a table comparing the pros and cons of hardware platforms: e.g., mobile devices have potential benefits vs. HMDs (but also drawbacks, such as processing power); desktops are cheap and powerful but not mobile (tethered).
Cell phones have another issue – they are tied to the carriers, which is problematic for developers.

Bruce Thomas – UniSA

We are extremely interdisciplinary – and should keep it up.
However, with so many disciplines involved, it's hard to develop and evaluate. And, by the way, innovation is difficult to articulate.
We are in a "Neat vs. Scruffy" situation – the bottom line is that smaller, self-contained pieces of research are easier to get in front of the community – and get results.

Questions floating around:

  • Is high-end or low-end AR the goal?
  • Is ubiquity in AR realistic or wishful thinking?
  • Are we innovative?
  • Does augmented reality need to make more money to survive?
  • Platforms: don't write off eyewear?
  • Social: what if we succeed with AR?
  • What is the position of ISMAR in the scientific community?

A controversial question from the audience to the panel: How many of you have a subject matter expert working in your office on a daily basis? (A few hands.) How many of you have artists working on a daily basis? (Even fewer hands.) How much of your research has reached the real world? (Once again, few hands.)

A question from the audience about the future of HMD. Mark takes the mic and asks the audience:

How many of you would wear a head mounted display? (5 hands)

How many of you would wear a head mounted display that looks like normal glasses? (75% of the audience raise hands)

Dieter asks the panel members to conclude with one sentence each (no semicolons…)

Ron: I want to refer to the comment that the cell phone is too seductive. We should make it indispensable so users won’t want to give it up – just like a cell phone.

Mark: We need to make sure that children, grandparents – everyone, in Africa and everywhere – can use AR.

Steve: You ain’t seen nothing yet; look at the progress we have made in the last 10 years! No one can predict what will happen.

Gudrun: We have to be visionary, but on the other hand we need to be realistic and make sure AR doesn't end up like AI… don't build hopes in areas where people shouldn't have them… don't let AR get burned…

Nassir: Next event we should include designers and experts from other disciplines; and create solutions that go beyond the fashion

Haruo: Maybe combining information services like Google with devices.

Bruce: I want you to have fun and be passionate about what you do! We can change the world!

Applause, and that’s a wrap.

Live from ISMAR ’08: Latest and Greatest in Augmented Reality Applications

It’s getting late in the second day of ISMAR ’08 and things are heating up…the current session is about my favorite topic: Augmented Reality applications.

Unfortunately, I missed the first talk (I had a brilliant interview with Mark Billinghurst) by Raphael Grasset about the Design of a Mixed-Reality Book: Is It Still a Real Book?

I will do my best to catch up.

Next, Tsutomu Miyashita and Peter Meier (Metaio) are on stage to present an exciting project that games alfresco covered in our museum roundup: An Augmented Reality Museum Guide, the result of a partnership between the Louvre-DNP Museum Lab and Metaio.

Miyashita introduces the project and describes the two main principles of this application: work appreciation and guidance.

Peter describes the technology requirements:

  • guide the user through the exhibition and provide added value to the exhibitions
  • integrate with an audio guide service
  • no markers or large-area tracking – only optical and mobile trackers

The technology used was Metaio's Unifeye SDK, with a special program developed for the museum guide. Additional standard tools (such as Maya) were used for the modeling. All the 3D models were loaded on the mobile device. The location recognition was based on the approach introduced by Reitmayr and Drummond: Robust model based outdoor augmented reality (ISMAR 2006).

600 people experienced the "work appreciation" application and 300 the guidance application.

The visitors' responses ranged from "what's going on?" to "this is amazing!".

In web terms, the AR application created a higher level of "stickiness". Users came back to see the artwork, and many took pictures of the exhibits. The computer graphics definitely captured the attention of users, and especially appealed to young visitors.

The guidance application got high marks: "I knew where I had to go." On the flip side, the device was too heavy…

In conclusion, in this broad exposure of augmented reality to a wide audience, the reaction was mostly positive. It was a "good" surprise from the new experience. Because this technology is so new to visitors, there is a need to keep making it more and more intuitive.


Third and last for this session is John Quarles discussing A Mixed Reality System for Enabling Collocated After Action Review (AAMVID)

Augmented reality is a great tool for training.

Case in point: anesthesia education – keeping the patient asleep with anesthetic substances.

How could we use AR to help educate students on this task?

After-action review has been used in the military for ages: after performing a task, you discuss what happened, how you did, and what you could do better.

AR can provide two functions: reviewing a failed test and providing directed instruction through repetition.

With playback controls on a magic lens, the student can review her own actions and see the expert's actions in the same situation, while viewing extra information about how the machine works (e.g., the flow of liquids in tubes) – which is essentially a real-time abstract simulation of the machine.

A study with testers showed that users prefer the Expert Tutorial Mode, which collocates the expert log with real-time interaction.

Educators, on the other hand, can identify trends in the class and modify the course accordingly.
Using "gaze mapping", the educator can see where many students are pointing their magic lenses and unearth an issue that requires a different teaching method. In addition, educators can see statistics of student interactions.
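The aggregation behind gaze mapping can be sketched in a few lines of Python. This is only an illustration of the idea (binning gaze positions into grid cells and ranking them by frequency); the grid size and data format are assumptions, not details from the talk.

```python
# Hypothetical sketch of "gaze mapping": aggregate where students point their
# magic lenses to surface the spots that attract (or confuse) the class.
from collections import Counter

def gaze_hotspots(gaze_points, cell=10):
    """gaze_points: list of (x, y) screen positions.
    Returns grid cells sorted by how often students looked there."""
    counts = Counter((x // cell, y // cell) for x, y in gaze_points)
    return counts.most_common()
```

An educator dashboard would then simply highlight the top cells on the scene.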

Did students prefer the “magic lens” or a desktop?

The desktop was good for personal review (afterward), while the magic lens was better for external review.

The conclusion is that after-action review using AR works. Plus, it's a novel assessment tool for educators.

And the punch line: John Quarles would have killed to have such an after-action review to help him practice for this talk… :-)


From ISMAR ’08 Program:


  • Design of a Mixed-Reality Book: Is It Still a Real Book?
    Raphael Grasset, Andreas Duenser, Mark Billinghurst
  • An Augmented Reality Museum Guide
    Tsutomu Miyashita, Peter Georg Meier, Tomoya Tachikawa, Stephanie Orlic, Tobias Eble, Volker Scholz, Andreas Gapel, Oliver Gerl, Stanimir Arnaudov, Sebastian Lieberknecht
  • A Mixed Reality System for Enabling Collocated After Action Review
    John Quarles, Samsun Lampotang, Ira Fischler, Paul Fishwick, Benjamin Lok

Exclusive! HitLab NZ Releases an Augmented Reality Authoring Tool for Non Programmers

I am excited. I have in my hands a flier I just received from Mark Billinghurst (one of the AR gods at ISMAR '08).

This flier includes the URL for a totally new augmented reality authoring tool developed by HIT Lab New Zealand. What's really new about this tool is that it targets non-programmers (as in you and me).

BuildAR is a software application that enables you to create simple augmented reality scenes on your desktop.

BuildAR provides a graphical user interface that simplifies the process of authoring AR scenes, allowing you to experience augmented reality first hand on your desktop computer. All you need is a computer, a webcam and some printed patterns.

Mark says I am the first one to receive the flier – hence the exclusive news.

Without further ado (I haven’t even tried it myself yet…), here is the URL: http://www.hitlabnz.org/wiki/BuildAR

I promised Mark that by tonight (as clocked in Honolulu), the entire world will have tried it.

Don't make a liar out of me…

Tell us: does it work? Do you like it? Want more of these?

Live from ISMAR ’08: Augmented Reality Layouts

Caffeine levels are set after the well deserved coffee break, and we are back to discuss AR layouts.

Onstage, Steve Feiner introduces the speakers of this session.

The first presenter is Nate Hagbi, who touches on an unusual topic that is often taken as a given: In-Place Augmented Reality – a new way of storing and distributing augmented reality content.

In the past, AR was used mostly by "AR experts". The main limitation to spreading it was mostly hardware related. We have come a long way since, and AR can nowadays be done on a cell phone.

Existing encoding methods, such as ARTag, ARToolKit, Studierstube, and MXRToolkit, are not human readable and require storing additional information in a back-end database.

Take the example of the AR advertising campaign for the Wellington Zoo by Saatchi & Saatchi (2007).

This is a pretty complex approach, which requires publishing printed material, creating a database for the additional AR info, and querying the database before presenting.

In-place augmented reality is a vision-based method for extracting content that is entirely encapsulated in the image itself.

The process: a visual language is used to encode the content in the image; the visualization is then done as in a normal AR application.

The secret sauce of this method is the visual language used to encode the AR information.

There are multiple benefits to this approach: the content is human readable, and it avoids the need for an AR database and for any user maintenance of the system. It also works with no network communication.

A disadvantage is that there is a limit on the amount of information that can be encoded in an image. Nate describes this as a trade-off.

I am also asking myself, as a distributor of AR applications: what if I want to change the AR data on the fly? Nate suggests that in such a case a hybrid approach could be used: some of the info is extracted from the encoded image, while additional image coding could point to dynamic material from the network (e.g., updated weather or episodic content).
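A toy Python sketch of that hybrid idea: static content resolves directly from symbols decoded out of the image (no database at all), while a special encoded symbol acts as a pointer to dynamic network material. The symbol vocabulary, the "net:" prefix, and the fetch callback are all hypothetical.

```python
# Illustrative sketch of the hybrid in-place AR idea: static content decoded
# from the sketch itself, plus an optional encoded pointer to dynamic content.
# Vocabulary and the "net:" convention are hypothetical, not the paper's scheme.
STATIC_VOCAB = {"sun": "3d/sun.obj", "cloud": "3d/cloud.obj"}

def resolve_content(decoded_symbols, fetch=None):
    """decoded_symbols: symbols extracted from the image by the visual language."""
    scene = []
    for sym in decoded_symbols:
        if sym in STATIC_VOCAB:                 # fully self-contained: no database
            scene.append(STATIC_VOCAB[sym])
        elif sym.startswith("net:") and fetch:  # pointer to dynamic material
            scene.append(fetch(sym[4:]))
    return scene
```

With no `fetch` callback, the purely static path still works offline, which is exactly the benefit Nate highlights.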


The second presenter is Kohei Tanaka, who unveils An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability.

The idea, in short, is to place virtual information on the AR screen in a way that always maintains viewable contrast.

An amusing example demonstrates a case where this approach can help dramatically: you are having tea with a friend, wearing your favorite see-through AR HMD. An alert generated by the AR system tries to warn you about a train you need to catch, but because the bright alert appears on top of a bright background, you miss the alert – and, as a consequence, the train…

Kohei's approach makes sure that the alert is displayed in a part of the image where the contrast is good enough to make you aware of it. Next time, you will not miss the train…
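The core of that idea can be sketched very simply: score candidate screen regions by the contrast the label would have against the background there, and pick the best one. The region grid and the Weber-style contrast measure below are simplifying assumptions, not the paper's actual method.

```python
# Minimal sketch: place the alert in the screen region whose background gives
# the highest luminance contrast. Contrast measure is an assumed simplification.
def best_region(background_luminance, label_luminance):
    """background_luminance: dict of region name -> mean luminance (0..1)."""
    def contrast(bg):
        return abs(label_luminance - bg) / max(bg, 1e-6)  # Weber-like contrast
    return max(background_luminance, key=lambda r: contrast(background_luminance[r]))
```

With a bright label, a bright-sky region scores poorly and the label lands over a darker patch of the scene instead.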

Question: Isn't it annoying for users that the images on screen constantly change position…?

Kohei responds that it requires further research…


Last in this session is Stephen Peterson from Linköping University, with a talk about Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality.

The domain: air traffic control – a profession that requires maintaining multiple sources of information and cognitively combining them into a single context.

Can Augmented Reality help?

The main challenge is labeling: how do you avoid a clutter of labels that could quickly confuse the air traffic controller?

The conclusion: remapping the stereoscopic depth of overlapping labels in far-field AR improves performance. In other words, when you need to display numerous labels that might overlap on screen, use the depth of the view and display the labels in different 3D layers.


From ISMAR ’08 Program:


  • In-Place Augmented Reality
    Nate Hagbi, Oriel Bergig, Jihad El-Sana, Klara Kedem, Mark Billinghurst
  • An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability
    Kohei Tanaka, Yasue Kishino, Masakazu Miyamae, Tsutomu Terada, Shojiro Nishio
  • Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality
    Stephen Peterson, Magnus Axholt, Stephen Ellis