Pencil and Paper are not Dead: Augmented Reality Sketching Games at VR 2010

Tomorrow, I’ll be at the IEEE VR 2010 conference in Boston. Monday is dedicated to a series of augmented reality presentations.

One of the most interesting ones is:

In-Place Sketching for Content Authoring in Augmented Reality Games

By the all-star team from Ben-Gurion University (Israel) and the HIT Lab (New Zealand):

  • Nate Hagbi
  • Raphaël Grasset
  • Oriel Bergig
  • Mark Billinghurst
  • Jihad El-Sana

When it comes to AR games, we are all still searching for our “Pong”: a simple game that will captivate millions of players and kick off this new genre.

One of the challenges in many AR games is the reliance on printouts of ugly markers.

Plus, many games use the markers as controllers, which is a bit awkward (especially to a bystander).

Sketching offers an alternative: a more natural user interface.

Sketching is more natural than drawing with a mouse on a PC, and even more intuitive than a touch screen. It’s still one of the first things kids are taught in school.

It’s not necessarily a better interface – but it’s an alternative that offers very intuitive interaction and enriches the player’s experience. I believe it could create a whole new genre of games.

In-place sketching in AR games has huge potential – but many questions arise:

  • What’s the design space for such a game?
  • What are the tools to be used?
  • How do you understand what the player meant in a sketch?
  • What’s the flow of interaction?
  • How do you track it?

What’s “In-place AR”? It’s when the augmented content is extracted from the real world (an illustration, an image, a sketch, or a real-life object).


Here are two game prototypes the team created, AR Gardener and Sketch-Chaser. Both are played on a regular whiteboard.

AR Gardener

Draw symbols on the whiteboard, and 3D content is pulled from a database of objects to appear in the Augmented Reality (AR) scene.

The sketch determines what object to create, its location, scale, and rotation.

The outer line sketched here defines the game anchor and is used for tracking; in this game it becomes a brown ridge.

Simple drawn symbols generate a couple of benches, a cabin, and, in the spirit of the playground theme, rockers and swings.

Virtual elements can also be created based on a real-life object such as a leaf; here it is used to create a patch of grass from the color and shape of the leaf (and no, the system can’t recognize that it’s a leaf, or any 3D object whatsoever).

The color of the marker can also define the type of virtual object created: for example, blue represents water, and other objects placed in it will sink.
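
To make this concrete, here is a minimal sketch (in Python) of how such in-place authoring could be structured: a recognized symbol selects a model from an object database, the drawn stroke supplies the pose, and the pen color sets a physical trait. The symbol names, database, and fields below are my assumptions for illustration, not the authors’ actual implementation.

```python
from dataclasses import dataclass

# Hypothetical symbol -> 3D model database (not the authors' actual one).
OBJECT_DB = {
    "bench_symbol": "models/bench.obj",
    "cabin_symbol": "models/cabin.obj",
    "swing_symbol": "models/swing.obj",
}

# Hypothetical color -> physical trait mapping (blue = water, per the demo).
COLOR_TRAITS = {
    "blue": "water",   # objects placed here will sink
    "brown": "ridge",  # solid terrain, also usable as the tracking anchor
}

@dataclass
class RecognizedSketch:
    symbol: str     # classified sketch symbol
    color: str      # stroke color
    position: tuple # (x, y) where it was drawn on the board plane
    scale: float    # from the symbol's drawn size
    rotation: float # from the symbol's drawn orientation

def instantiate(sketch: RecognizedSketch) -> dict:
    """Turn one recognized sketch into a virtual scene object."""
    return {
        "model": OBJECT_DB.get(sketch.symbol),  # None if unknown symbol
        "trait": COLOR_TRAITS.get(sketch.color, "default"),
        "position": sketch.position,
        "scale": sketch.scale,
        "rotation": sketch.rotation,
    }
```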

Sketch-Chaser

In the second game you basically create an obstacle course for a car chase.

It’s a “capture the flag” or tag game: the winner is whoever holds the flag for the most time.

First you draw, then play.

Once again, the continuous brown line represents a ridge and bounds the game area.

A small circle with a dot in it represents the starting point for the cars.

A flag becomes the flag to capture. A simple square creates a building, etc.

The player adds more ridges to make the course more challenging, or adds blue to generate a little pond (which also gives that area a different physical trait).

Then the graphics are generated, the players grab their beloved controllers, and the battle begins!
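
As a toy illustration of the scoring rule (the winner is whoever holds the flag the longest), here is a hedged sketch of a flag-possession timer; the class and method names are hypothetical, not taken from the paper.

```python
import time

class FlagTimer:
    """Toy scoring for a tag game: the winner holds the flag the longest."""

    def __init__(self, players):
        self.held = {p: 0.0 for p in players}  # seconds of flag possession
        self.holder = None                     # current flag holder
        self.since = None                      # when they captured it

    def capture(self, player):
        now = time.monotonic()
        if self.holder is not None:
            self.held[self.holder] += now - self.since  # credit the old holder
        self.holder, self.since = player, now

    def winner(self):
        totals = dict(self.held)
        if self.holder is not None:  # credit the current holder up to now
            totals[self.holder] += time.monotonic() - self.since
        return max(totals, key=totals.get)
```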

This research represents an opportunity for a whole new kind of game experience that could make kids play more in the real world.

Many questions still remain, such as: how do you recognize what the player really means in a sketch, without requiring her to be an artist or an architect? And where does the sketch fit in the gameplay: before, after, or during play?

Now, it’s up to game designers to figure out what sketching techniques work best, what’s fun, what’s interesting, and what’s just a doodle.

Who wants to design a sketch-based augmented reality game?

Live from ISMAR ’08: Latest and Greatest in Augmented Reality Applications

It’s getting late in the second day of ISMAR ’08 and things are heating up…the current session is about my favorite topic: Augmented Reality applications.

Unfortunately, I missed the first talk, by Raphael Grasset, about the Design of a Mixed-Reality Book: Is It Still a Real Book? (I was busy with a brilliant interview with Mark Billinghurst.)

I will do my best to catch up.

Next, Tsutomu Miyashita and Peter Meier (Metaio) are on stage to present an exciting project that Games Alfresco covered in our museum roundup: An Augmented Reality Museum Guide, the result of a partnership between the Louvre-DNP Museum Lab and Metaio.

Miyashita introduces the project and describes the two main principles of this application: works appreciation and guidance.

Peter describes the technology requirements:

  • guide the user through the exhibition and provide added value to the exhibits
  • integrate with an audio guide service
  • no markers or large-area tracking – only optical and mobile trackers

The technology used was Metaio’s Unifeye SDK, with a special program developed for the museum guide. Additional standard tools (such as Maya) were used for the modeling. All the 3D models were loaded on the mobile device. Location recognition was based on the approach introduced by Reitmayr and Drummond in “Going Out: Robust Model-Based Tracking for Outdoor Augmented Reality” (ISMAR 2006).
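
Purely as an illustration of that architecture (this is not Unifeye’s actual API, and every name below is made up): all overlays live on the device, and the recognized location is simply a key into them.

```python
# Hypothetical content lookup for a museum guide: all 3D models are
# preloaded on the device, and the recognized exhibit location selects
# what to overlay. An illustration only, not Metaio's Unifeye API.

PRELOADED_MODELS = {
    "room_3_vitrine_1": "models/annotation_overlay_1.obj",
    "room_3_vitrine_2": "models/annotation_overlay_2.obj",
}

def overlay_for(recognized_location: str):
    """Pick the overlay for the location the tracker recognized."""
    return PRELOADED_MODELS.get(recognized_location)  # None if unrecognized
```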

600 people experienced the “works appreciation” application and 300 people the guidance application.

The visitors’ responses ranged from “what’s going on?” to “this is amazing!”.

In web terms, the AR application created a higher level of “stickiness”: users came back to see the artwork, and many took pictures of the exhibits. The computer graphics definitely captured users’ attention, and especially appealed to young visitors.

The guidance application got high marks (“I knew where I had to go”), but on the flip side, the device was too heavy…

In conclusion, in this broad exposure of augmented reality to a wide audience, the reaction was mostly positive – a “good” surprise from the new experience. Because this technology is so new to visitors, there is a need to keep making it more and more intuitive.

~~~

Third and last for this session is John Quarles, discussing A Mixed Reality System for Enabling Collocated After Action Review (AAMVID).

Augmented reality is a great tool for training.

Case in point: anesthesia education – keeping the patient asleep with anesthetic substances.

How could we use AR to help educate students on this task?

After action review has been used in the military for ages: after performing a task, you discuss what happened, how you did, and what you could do better.

AR can provide two functions here: reviewing a fault test, and providing directed instruction repetition.

With playback controls on a magic lens, the student can review her own actions and see the expert’s actions in the same situation, while viewing extra information about how the machine works (e.g. the flow of liquids in the tubes) – essentially a real-time abstract simulation of the machine.
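
Here is a rough sketch of what that side-by-side playback boils down to (my illustration, not the authors’ code): scrubbing a cursor through two timestamped action logs. The log contents below are invented.

```python
import bisect

# Hypothetical timestamped action logs: (seconds, action description).
student_log = [(0.0, "open O2 valve"), (3.2, "set flow to 2 L/min"), (9.5, "check gauge")]
expert_log = [(0.0, "open O2 valve"), (2.1, "check gauge"), (4.0, "set flow to 2 L/min")]

def actions_up_to(log, t):
    """Return all actions performed at or before playback time t."""
    times = [ts for ts, _ in log]
    i = bisect.bisect_right(times, t)
    return [action for _, action in log[:i]]

# Scrubbing the playback control to t = 5.0 s shows the student's actions
# next to the expert's actions in the same situation.
t = 5.0
print("student:", actions_up_to(student_log, t))
print("expert: ", actions_up_to(expert_log, t))
```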

The results of a user study showed that users prefer the Expert Tutorial Mode, which collocates the expert log with real-time interaction.

Educators, on the other hand, can identify trends in the class and modify the course accordingly.
Using “gaze mapping”, the educator can see where many students are pointing their magic lenses and unearth an issue that requires a different teaching method. In addition, educators can see statistics of student interactions.
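
As a hedged sketch of what gaze mapping could amount to computationally (the grid resolution and names are my assumptions, not the paper’s implementation): bin each lens’s gaze target into a 2D histogram and look for hot spots.

```python
from collections import Counter

CELL = 0.1  # hypothetical grid cell size (meters on the machine's surface)

def gaze_heatmap(gaze_points):
    """Bin (x, y) gaze targets from many students' magic lenses into cells."""
    counts = Counter()
    for x, y in gaze_points:
        counts[(int(x / CELL), int(y / CELL))] += 1
    return counts

# A cell many students stare at may flag a part of the machine that
# needs a different teaching approach.
points = [(0.42, 1.03), (0.44, 1.01), (0.45, 1.05), (2.10, 0.30)]
print("hottest cell:", gaze_heatmap(points).most_common(1))
```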

Did students prefer the “magic lens” or a desktop?

The desktop was good for personal review (afterward), while the magic lens was better for external review.

The conclusion is that an after action review using AR works. Plus it’s a novel assessment tool for educators.

And the punch line: John Quarles would have killed to have such an after action review tool to help him practice for this talk… :-)

=====================

From ISMAR ’08 Program:

Applications

  • Design of a Mixed-Reality Book: Is It Still a Real Book?
    Raphael Grasset, Andreas Duenser, Mark Billinghurst
  • An Augmented Reality Museum Guide
    Tsutomu Miyashita, Peter Georg Meier, Tomoya Tachikawa, Stephanie Orlic, Tobias Eble, Volker Scholz, Andreas Gapel, Oliver Gerl, Stanimir Arnaudov, Sebastian Lieberknecht
  • A Mixed Reality System for Enabling Collocated After Action Review
    John Quarles, Samsun Lampotang, Ira Fischler, Paul Fishwick, Benjamin Lok