Tomorrow, I’ll be at the IEEE VR 2010 conference in Boston. Monday is dedicated to a series of augmented reality presentations.
One of the most interesting ones is:
In-Place Sketching for Content Authoring in Augmented Reality Games
By the all-star team from Ben-Gurion University (Israel) and the HIT Lab (New Zealand):
- Nate Hagbi
- Raphaël Grasset
- Oriel Bergig
- Mark Billinghurst
- Jihad El-Sana
When it comes to AR games – we are all still searching for the “Pong” of AR: a simple game that will captivate millions of players and kick off this new genre.
One of the challenges in many AR games is their reliance on printouts of ugly markers.
Plus, many games use the markers as controllers, which is a bit awkward (especially to a bystander).
Sketching offers an alternative for a more natural user interface.
Sketching is more natural than drawing with a mouse on a PC, and even more intuitive than a touch screen. It’s still one of the first things kids are taught in school.
It’s not necessarily a better interface – but it’s an alternative that offers very intuitive interaction and enriches the player’s experience. I believe it could create a whole new genre of games.
In-place sketching in AR games has huge potential – but many questions arise:
- What’s the design space for such a game?
- What are the tools to be used?
- How do you understand what the player meant in a sketch?
- What’s the flow of interaction?
- How do you track it?
What’s “In-place AR”? It’s when the augmented content is extracted from the real world (an illustration, an image, a sketch, or a real-life object).
Here is the sequence of research efforts leading to this:
- Extraction and augmentation – a new way for storing and distributing augmented reality content (ISMAR 2008)
- Hand-sketched physical experiments – using isometric drawings and a physics engine (ISMAR 2009)
- 3D registration using natural shapes – best paper award at ISMAR 2009
Here are two game prototypes the team created, called AR Gardener and Sketch-Chaser. Both are played on a regular whiteboard.
Draw symbols on the whiteboard, and 3D content is pulled from a database of objects to appear in the augmented reality (AR) scene.
The sketch determines what object to create, as well as its location, scale, and rotation.
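The paper doesn’t spell out how the pose is derived, but a minimal sketch of the idea – assuming each drawn symbol arrives as a list of 2D stroke points – could use simple image moments: the centroid gives the location, the bounding box gives the scale, and the principal axis gives the rotation.

```python
import math

def pose_from_stroke(points):
    """Hypothetical pose extraction from a sketched symbol's 2D points.

    Returns (location, scale, rotation): centroid, half the larger
    bounding-box extent, and the principal-axis angle in radians.
    """
    n = len(points)
    # Location: centroid of the stroke points.
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Scale: half of the larger bounding-box dimension.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) / 2
    # Rotation: orientation of the principal axis, from second-order
    # central moments (a standard shape-analysis formula).
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), scale, angle
```

A horizontal row of points, for instance, yields its midpoint as the location and a rotation of zero. The real system would of course work on extracted image contours rather than raw point lists.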
The outer line sketched here defines the game anchor and is used for tracking; in this game it becomes a brown ridge.
Simple drawn symbols generate a couple of benches, a cabin, and – in the spirit of the playground theme – rockers and swings.
Virtual elements can also be created from a real-life object such as a leaf; here it is used to create a patch of grass using the color and shape of the leaf (and no, the system can’t recognize that it’s a leaf, or any 3D object for that matter).
The color of the marker can also define the type of virtual object created: for example, blue represents water, and other objects placed in it will sink.
In the second game you basically create an obstacle course for a car chase.
It’s a “capture the flag” or tag game. The winner is whoever holds the flag for the most time.
First you draw, then play.
Once again, the continuous brown line represents a ridge and bounds the game.
A small circle with a dot in it represents the starting point for the cars.
A flag becomes the flag to capture. A simple square creates a building, etc.
Players can add more ridges to make the course more challenging, or add blue to generate a little pond (which also gives that area a different physical trait).
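The authoring rules the two games imply – a recognized symbol plus its ink color selecting a virtual object and a physical trait for the physics engine – could be sketched as a simple lookup table. All the names and mappings below are assumptions for illustration, not the paper’s actual vocabulary:

```python
# Hypothetical rule table: (recognized symbol, ink color) -> (virtual
# object, physical trait). Entries mirror the examples in the games.
SYMBOL_TABLE = {
    ("closed_contour", "brown"):  ("ridge", "solid"),       # bounds the game
    ("circle_with_dot", "black"): ("car_start", "spawn"),   # cars start here
    ("flag", "black"):            ("flag", "capturable"),
    ("square", "black"):          ("building", "solid"),
    ("region", "blue"):           ("water", "sinkable"),    # objects in it sink
}

def interpret(symbol, color):
    """Look up which virtual object a sketched symbol creates."""
    return SYMBOL_TABLE.get((symbol, color), ("unknown", "none"))
```

The interesting (and hard) part the paper tackles is upstream of this table: classifying a hand-drawn shape as one of these symbols in the first place.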
Then – graphics are generated, the players grab their beloved controllers and the battle begins!
This research represents an opportunity for a whole new kind of game experience that could make kids play more in the real world.
Many questions still remain, such as: how do you recognize what the player really means in a sketch without requiring her to be an artist or an architect? And where does sketching fit in the gameplay – before, after, or during play?
Now, it’s up to game designers to figure out what sketching techniques work best, what’s fun, what’s interesting, and what’s just a doodle.