Pencil and Paper are not Dead: Augmented Reality Sketching Games at VR 2010

Tomorrow, I’ll be at the IEEE VR 2010 conference in Boston. Monday is dedicated to a series of augmented reality presentations.

One of the most interesting ones is:

In-Place Sketching for Content Authoring in Augmented Reality Games

By the all-star team from Ben Gurion University (Israel) and the HIT Lab NZ (New Zealand):

  • Nate Hagbi
  • Raphaël Grasset
  • Oriel Bergig
  • Mark Billinghurst
  • Jihad El-Sana

When it comes to AR games, we are all still searching for our “Pong”: a simple game that will captivate millions of players and kick off this new genre.

One of the challenges in many AR games is the reliance on printouts of ugly markers.

Plus, many games use the markers as controllers, which is a bit awkward (especially to a bystander).

Sketching offers an alternative for a more natural user interface.

Sketching is more natural than drawing with a mouse on a PC, and even more intuitive than a touch screen. It’s still one of the first things kids are taught in school.

It’s not necessarily a better interface – but it’s an alternative that offers very intuitive interaction and enriches the player’s experience. I believe it could create a whole new genre of games.

In-place sketching has huge potential in gaming – but many questions arise:

  • What’s the design space for such a game?
  • What are the tools to be used?
  • How do you understand what the player meant in a sketch?
  • What’s the flow of interaction?
  • How do you track it?

What’s “In-place AR”? It’s when the augmented content is extracted from the real world (an illustration, an image, a sketch, or a real-life object).


The team created two game prototypes, AR Gardener and Sketch-Chaser, both played on a regular whiteboard.

AR Gardener

Draw symbols on the whiteboard, and 3D content is pulled from a database of objects to appear in the Augmented Reality (AR) scene.

The sketch determines what object to create, its location, scale, and rotation.
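To make that concrete, here is a minimal Python sketch of how a recognized symbol’s contour could drive placement. This is my illustration of the concept, not the team’s code; SYMBOL_LIBRARY and the contour format are assumptions.

```python
import math

# Hypothetical illustration: deriving an object's placement from a
# recognized sketch symbol. SYMBOL_LIBRARY and the contour format
# (a list of (x, y) points) are assumptions, not the paper's code.
SYMBOL_LIBRARY = {
    "bench_glyph": "bench.obj",
    "cabin_glyph": "cabin.obj",
    "swing_glyph": "swing.obj",
}

def place_object(symbol_id, contour):
    """Map a recognized symbol to a model and derive location, scale,
    and rotation from the symbol's 2D contour on the whiteboard."""
    model = SYMBOL_LIBRARY[symbol_id]

    # Location: centroid of the sketched contour.
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)

    # Scale: proportional to the contour's bounding-box diagonal.
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    scale = math.hypot(max(xs) - min(xs), max(ys) - min(ys))

    # Rotation: angle from the centroid to the farthest contour point,
    # a crude stand-in for the symbol's principal axis.
    fx, fy = max(contour, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    rotation = math.atan2(fy - cy, fx - cx)

    return {"model": model, "position": (cx, cy),
            "scale": scale, "rotation": rotation}
```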

The outer line sketched here defines the game anchor and is used for tracking; in this game it becomes a brown ridge.

Simple drawn symbols generate a couple of benches, a cabin, and – in the spirit of the playground theme – rockers and swings.

Virtual elements can also be created from a real-life object such as a leaf; here it is used to create a patch of grass based on the color and shape of the leaf (and no, the system doesn’t recognize that it’s a leaf, or any 3D object for that matter).

The color of the marker can define the type of virtual object created: for example, blue represents water, and other objects placed in it will sink.
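A toy version of such a color-to-trait mapping might look like the following; blue-means-water is from the demo, the rest of the table is my assumption.

```python
# Assumed color-to-trait table; only the blue/water rule is described
# in the demo, the other entries are illustrative.
COLOR_TRAITS = {
    "blue":  {"material": "water", "solid": False, "objects_sink": True},
    "brown": {"material": "ridge", "solid": True,  "objects_sink": False},
    "green": {"material": "grass", "solid": True,  "objects_sink": False},
}

def region_trait(stroke_color):
    """Return the physical behavior of a region drawn in a given color."""
    default = {"material": "ground", "solid": True, "objects_sink": False}
    return COLOR_TRAITS.get(stroke_color, default)
```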

Sketch-Chaser

In the second game you basically create an obstacle course for a car chase.

It’s a “capture the flag” or tag game. The winner is whoever holds the flag for the most time.

First you draw, then play.

Once again, the continuous brown line represents a ridge and bounds the game.

A small circle with a dot in it represents the starting point for the cars.

A flag becomes the flag to capture. A simple square creates a building, etc.

The player can add more ridges to make the course more challenging, or add blue to generate a little pond (which also gives that area a different physical trait).
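Putting the symbol grammar together, a level parser for a Sketch-Chaser-style game could look roughly like this; the symbol names and the level structure are hypothetical.

```python
# Hypothetical mapping of the symbols described above to game entities.
SYMBOL_TO_ENTITY = {
    "circle_with_dot": "start_point",
    "flag":            "capture_flag",
    "square":          "building",
    "brown_line":      "ridge",
    "blue_region":     "pond",
}

def parse_level(recognized_symbols):
    """Turn a list of (symbol, position) pairs into a playable layout."""
    level = {"start_points": [], "flag": None, "obstacles": []}
    for symbol, pos in recognized_symbols:
        entity = SYMBOL_TO_ENTITY.get(symbol)
        if entity == "start_point":
            level["start_points"].append(pos)
        elif entity == "capture_flag":
            level["flag"] = pos
        elif entity is not None:
            level["obstacles"].append((entity, pos))
    return level
```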

Then the graphics are generated, the players grab their beloved controllers, and the battle begins!

This research represents an opportunity for a whole new kind of game experience that could make kids play more in the real world.

Many questions still remain, such as: how do you recognize what the player really means in a sketch without requiring her to be an artist or an architect? And where does the sketch fit in the gameplay – before, after, or during?

Now, it’s up to game designers to figure out what sketching techniques work best, what’s fun, what’s interesting, and what’s just a doodle.

Who wants to design a sketch-based augmented reality game?

ISMAR 2009: Sketch and Shape Recognition Preview From Ben Gurion University

ISMAR 2009, the world’s best augmented reality event, starts in 3 days!

If you are still contemplating whether to go – check out what you might be missing on our preview post.

The folks from the Visual Media Lab at Ben Gurion University in collaboration with HIT Lab NZ are preparing a real treat for ISMAR 2009 participants.

Sketch recognition (already covered in our previous post) is a major break from “ugly” markers or NFT (natural feature tracking of 2D images). It is the dawn of user-generated content for Augmented Reality, and an intuitive new interaction approach for changing the CONTENT overlaid on a marker. Big wow.

In-Place 3D Sketching

But the team led by Nate Hagbi and Oriel Bergig (with support from Jihad El-Sana and Mark Billinghurst) is just warming up… In the next video, Nate shows how any sketch you draw on paper (or even on your hand!) can be tracked.

So are you telling me I won’t need to print stuff every time I want to play with augmented reality?
-That’s right! Hug a tree and save some ink!

Shape Recognition and Pose Estimation

But wait, there is more!

Nate says this demo already runs on an iPhone.

And to prove it, he is willing to share the code used to access the live video on iPhone 3.0.
(note: this code accesses a private API in the iPhone SDK)

Ready for the BIG NEWS?

For the first time ever, the core code necessary for real augmented reality (“real” here means precise alignment of graphics overlaid on real-life objects) on iPhone 3.0 is available to the public.

To get access to the source code – send us an email.

May a thousand augmented reality apps bloom!

Live from ISMAR ’08: Augmented Reality Layouts

Caffeine levels are restored after a well-deserved coffee break, and we are back to discuss AR layouts.

Onstage, Steven Feiner introduces the speakers of this session.

The first presenter is Nate Hagbi, who is touching on an unusual topic that is often seen as a given: In-Place Augmented Reality, a new way of storing and distributing augmented reality content.

In the past, AR was used mostly by “AR experts”. The main limitation to spreading it was mostly hardware-related. We have come a long way since, and AR can nowadays be done on a cell phone.

Existing encoding methods such as ARTag, ARToolKit, Studierstube, and MXRToolKit are not human-readable and require storing additional information in a back-end database.

Take the example of the AR advertising campaign for the Wellington Zoo by Saatchi & Saatchi (2007).

This is a pretty complex approach, which requires publishing printed material, creating a database for the additional AR info, and querying the database before presenting the content.

In-Place Augmented Reality is a vision-based method in which all the content is encapsulated in, and extracted from, the image itself.

The process: a visual language is used to encode the content in the image, and the visualization is then done as in a normal AR application.

The secret sauce of this method is the visual language used to encode the AR information.

There are multiple benefits to this approach: the content is human-readable, it avoids the need for an AR database and any user maintenance of the system, and it works with no network communication.
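As a conceptual sketch only – the actual visual language is the paper’s secret sauce – one frame of this pipeline might look like this, with each detected glyph assumed to carry its decoded content and pose:

```python
from dataclasses import dataclass

# Conceptual sketch of one In-Place AR frame. The real visual language is
# the paper's contribution; here a detected glyph is simply assumed to
# carry its decoded content and pose (hypothetical structures).

@dataclass
class Glyph:
    content_id: str   # what to draw, decoded from the printed image itself
    pose: tuple       # where to draw it, derived from the glyph's geometry

def render_frame(camera_frame, detect_glyphs, draw):
    """No back-end database, no network: everything comes from the image.

    detect_glyphs: vision routine returning Glyph objects found in the frame.
    draw: renderer that overlays content at a given pose.
    """
    for glyph in detect_glyphs(camera_frame):
        draw(glyph.content_id, glyph.pose, camera_frame)
```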

A disadvantage is that there is a limit to the amount of info that can be encoded in an image. Nate describes this as a trade-off.

I am also asking myself, as a distributor of AR applications: what if I want to change the AR data on the fly? Nate suggests that in such a case a hybrid approach could be used: some of the info is extracted from the encoded image, while additional image coding could point to dynamic material from the network (e.g. updated weather or episodic content).
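Such a hybrid could be as simple as the following sketch, where a hypothetical dynamic_url field on a decoded glyph points to network content:

```python
import urllib.request

def resolve_content(glyph):
    """Prefer content embedded in the image; extend it with network
    content when the glyph encodes a pointer (hypothetical dynamic_url)."""
    content = {"base": glyph.content_id}        # always available offline
    url = getattr(glyph, "dynamic_url", None)   # e.g. updated weather
    if url:
        with urllib.request.urlopen(url) as resp:
            content["dynamic"] = resp.read()
    return content
```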

~~~

The second presenter is Kohei Tanaka, who unveils An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability.

The idea in short is to place virtual information on the AR screen in a way that always maintains a viewable contrast.

An amusing example demonstrates a case where this approach can help dramatically: you are having tea with a friend, wearing your favorite see-through AR HMD. An alert generated by the AR system tries to warn you about a train you need to catch, but because the bright alert lands on top of a bright background, you miss the alert – and as a consequence miss the train…

Kohei’s approach makes sure that the alert is displayed in a part of the image where the contrast is good enough to make you aware of it. Next time, you will not miss the train…
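My reading of the core idea, reduced to a few lines (this is not Tanaka’s actual algorithm): score candidate screen regions by their luminance contrast with the label and place the alert where that contrast is highest.

```python
def luminance(rgb):
    """Approximate perceived brightness of an (r, g, b) color."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def best_alert_position(label_color, candidate_regions):
    """candidate_regions: list of (screen_position, mean_background_rgb)
    pairs sampled from the camera view. Returns the position where the
    label's luminance differs most from the background's."""
    label_lum = luminance(label_color)
    best = max(candidate_regions,
               key=lambda region: abs(luminance(region[1]) - label_lum))
    return best[0]
```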

Question: Isn’t it annoying for users that the images on screen constantly change position…?

Kohei responds that it requires further research…

~~~

Last in this session is Stephen Peterson from Linköping University, with a talk about Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality.

The domain: air traffic control – a profession that requires maintaining multiple sources of information and cognitively combining them into a single context.

Can Augmented Reality help?

The main challenge is labeling: how do you avoid a clutter of labels that could quickly confuse the air traffic controller?

The conclusion: remapping the stereoscopic depth of overlapping labels in far-field AR improves performance. In other words, when you need to display numerous labels that might overlap on screen, use the depth of the view and distribute the labels across different 3D layers.
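A toy version of that remapping (data structures are hypothetical): detect which label rectangles overlap in 2D and push them onto distinct depth layers.

```python
def overlaps(a, b):
    """Axis-aligned overlap test for label rectangles (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def assign_depth_layers(labels, layer_separation=0.5):
    """labels: list of (name, rect). Returns {name: depth_offset}, pushing
    each label to the nearest depth layer free of overlapping neighbors."""
    depths, placed = {}, []   # placed: (rect, layer) already assigned
    for name, rect in labels:
        layer = 0
        while any(overlaps(rect, r) and l == layer for r, l in placed):
            layer += 1
        placed.append((rect, layer))
        depths[name] = layer * layer_separation
    return depths
```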

================

From ISMAR ’08 Program:

Layout

  • In-Place Augmented Reality
    Nate Hagbi, Oriel Bergig, Jihad El-Sana, Klara Kedem, Mark Billinghurst
  • An Information Layout Method for an Optical See-through Head Mounted Display Focusing on the Viewability
    Kohei Tanaka, Yasue Kishino, Masakazu Miyamae, Tsutomu Terada, Shojiro Nishio
  • Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality
    Stephen Peterson, Magnus Axholt, Stephen Ellis