Augmented Reality Game Poised to Win Game Award: Vote for Kweekies

Stephan Cocquereaumont, president and lead developer of Int13, a French next-gen games studio for smartphones, has just shared with me the latest video of his mobile augmented reality game – Kweekies:

Kweekies is an augmented reality virtual pet game that lets gamers interact with their pet using the embedded camera of their smartphone. Three selling points: augmented reality that just works, cute virtual pets, and online competition.


Kweekies is one of six nominees for the International Mobile Gaming Awards (IMGA) in the newly established Real World Games category.

The competition is taking place this week at the Mobile World Congress in Barcelona.

Here are the nominees:
Ghostwire
FastFoot-Challenge
Kurai: The Dark Monolith
Kweekies
MoveYa!
Aikon Ghost Hunter

Is Kweekies the only true augmented reality game in the bunch?

We already had that debate before…

In any case, the winners will be announced soon – and you can make a difference.

Vote for the best!

Live From WARM ’09: Keynote – Projection Over Four Orders of Magnitude

Oliver Bimber (Bauhaus University Weimar), one of the world’s leaders in spatial augmented reality, kicks off with a barb: “Unlike the other sessions, this session is NOT about mobile augmented reality but rather spatial (projected) augmented reality.” Welcome to the wonderful world of Oliver Bimber.
He projects visuals on everyday surfaces using structured light and camera feedback.
Oliver amazes by demonstrating a projection on…a glass of wine – using inverted lighting.
Adaptive photometric compensation: have you ever seen Shrek projected on a stone wall? Oliver makes it look easy, with a smooth resulting image.
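For the curious, the core trick behind this kind of adaptive photometric compensation is easy to state: if camera feedback tells you how reflective each point of the surface is, divide the desired image by that reflectance before projecting. A minimal sketch of the idea (my own illustration in Python, not Oliver’s code; it ignores projector gamma, black level, and inter-reflections, which his real systems model):

```python
import numpy as np

def compensate(desired, albedo, eps=1e-3):
    """Naive per-pixel photometric compensation.

    desired -- target image, float array scaled to [0, 1]
    albedo  -- per-pixel surface reflectance estimated from camera
               feedback (structured light), in (0, 1]
    Returns the image to feed the projector so that the light reflected
    off the textured surface approximates `desired`.
    """
    # The camera sees roughly projector_output * albedo, so invert it:
    comp = desired / np.maximum(albedo, eps)
    # Clip to the projector's gamut; real systems spread the resulting
    # error over neighboring pixels or scale the target down instead.
    return np.clip(comp, 0.0, 1.0)
```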
Another demonstration shows how you can record footage that would usually require a green screen – in the scene itself, with no need to go to a dedicated studio.
Oliver keeps going with reverse radiosity and multi-focal projection. You have to see it (because I can’t put it into words…)

On to more applications: visualization of radiological images. X-ray film, diagnostic monitors, and high-quality paper prints aren’t optimal for diagnostics because of their low contrast.
Superimposing dynamic range (demonstrated at ISMAR ’08) offers six times higher contrast than x-ray film.
Radiologists have confirmed that this technique does better than existing techniques.
Another application is light microscopy. Contrast is a problem with shiny surfaces in surgical or manufacturing scenarios. Oliver shows a prototype of projected light microscopy – at a scale of 2 micrometers – that increases contrast by a factor of 5 and removes background noise thanks to a more uniform illumination. And this is just the beginning: this matters for image analysis applications.

Now why the mysterious title?
Simply because with Oliver’s techniques, contrast is improved by four orders of magnitude – a factor of 10,000…

Question from the audience: why not use laser projectors?
Oliver responds that the issue is not with the projector but mostly with the surface – so even with laser projectors you’d still need the compensation discussed.

After lunch – demos!

Live From WARM ’09: The World’s Best Winter Augmented Reality Event

Welcome to WARM 2009, where augmented reality eggheads from both sides of the Danube meet for 2 days to share ideas and collaborate.

It’s the 4th year WARM is taking place – always at Graz University of Technology, and always in February – to provide an excuse for a skiing event once the big ideas are taken in. Hence the cunning logo:

This year, 54 attendees from 16 different organizations in 5 countries are expected (Austria, Germany, Switzerland, England, and the US). The agenda is jam-packed with XX sessions, lab demos, and a keynote by Oliver Bimber. I have the unenviable pleasure of speaking last.

It’s 10 am. Lights are off. Spotlight on Dieter Schmalstieg, the master host, taking the stage to welcome everybody.
He admits the event started as an internal Graz meeting and kept happening because guests kept coming.

Daniel Wagner, the eternal master of ceremonies of WARM, introduces Simon Hay from Cambridge (Tom Drummond’s group), the first speaker in the Computer Vision session. Simon talks about “Repeatability experiments for interest point location and orientation assignment” – an improvement in feature-based matching for the rest of us…

The basic idea: detect interest regions and normalize them into canonical parameters; match them using known approaches such as Ferns, PhonySIFT, SIFT, MOPS, and MSER; and accelerate and improve the search with better interest point location detection and orientation assignment.

After a very convincing set of graphs, Simon concludes by confirming that Harris and FAST give reasonable performance and that gradient-based orientation assignment works better than expected.
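Simon’s experimental code isn’t public, but to give a rough flavor of the pipeline he evaluated, here is a sketch using OpenCV’s FAST detector plus a simple gradient-based orientation assignment (the threshold and patch size are my own guesses):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Detect interest points with FAST, one of the detectors compared in the talk.
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(img, None)

# Assign each keypoint the dominant gradient direction of a small patch
# around it -- the "orientation assignment" of the talk's title.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
r = 8  # patch radius, an arbitrary choice for this sketch
for kp in keypoints:
    x, y = map(int, kp.pt)
    if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
        angle = np.arctan2(gy[y - r:y + r, x - r:x + r].sum(),
                           gx[y - r:y + r, x - r:x + r].sum())
        kp.angle = float(np.degrees(angle))
```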

Next talk is by Qi Pan (from the same Cambridge group) about “Real time interactive 3D reconstruction.”

From the abstract:
“High quality 3D reconstruction algorithms currently require an input sequence of images or video which is then processed offline for a lengthy time. After the process is complete, the reconstruction is viewed by the user to confirm the algorithm has modelled the input sequence successfully. Often certain parts of the reconstructed model may be inaccurate or sections may be missing due to insufficient coverage or occlusion in the input sequence. In these cases, a new input sequence needs to be obtained and the whole process repeated.
The aim of the project is to produce a real-time modelling system using the  key frame approach which provides immediate feedback about the quality of the input sequence. This enables the system to guide the user to provide additional views for reconstruction, yielding a complete model without having to collect a new input sequence.”

Couldn’t resist pointing out the psychological-sounding algorithms (and my ignorance of them) Qi uses: epipolar geometry and PROSAC, followed by Delaunay triangulation and probabilistic tetrahedral carving. You’ve got to love these terms.
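For the equally ignorant: the epipolar geometry step boils down to robustly fitting a fundamental matrix between two views, and PROSAC is a faster cousin of RANSAC that tries the best-scored matches first. A toy two-view sketch in OpenCV (with ORB features and plain RANSAC standing in for whatever Qi’s real-time system actually uses):

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match keypoints between the two views.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Robust fit: inliers satisfy the epipolar constraint x2' F x1 = 0.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print(f"{int(inlier_mask.sum())} of {len(matches)} matches are inliers")
```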

The result is pretty good, though still noisy – so stay tuned for future results of Qi’s research.

The third talk is by Vincent Lepetit from the Computer Vision Lab at EPFL in Switzerland.
Vincent starts with a recap of keypoint recognition: train the system to recognize the keypoints of an object.
Vincent then demonstrates work leveraging this technique: an award-winning piece by Camille Scherrer, “Le monde des montagnes,” a beautiful augmented book, and a demo by Total Immersion aimed at advertising.

Now, on to the new research, dubbed Generic Trees. The motivation is to speed up the training phase and to scale.
A comparison shows it’s 35% faster. As proof, he shows a video of a SLAM application.
The Generic Trees method is used by Willow Garage for autonomous robotics and is being implemented in OpenCV.

Next, he shows how to recover the camera pose, with 6 degrees of freedom (DOF), from a single feature point (selected by the user). Impressive.
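Vincent’s single-point version leans on the appearance learned around that point; for reference, the standard multi-point way to recover a 6-DOF pose is a PnP solve. A sketch (the 3D/2D correspondences and intrinsics below are made up for illustration):

```python
import cv2
import numpy as np

# 3D model points (e.g., learned during training) and their detected 2D
# projections in the current frame -- placeholder values only.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                      dtype=np.float32)
image_pts = np.array([[320, 240], [400, 242], [398, 320], [322, 318]],
                     dtype=np.float32)

# Assumed pinhole camera intrinsics (focal length and principal point).
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)

# Recover the full 6-DOF pose: 3 rotation + 3 translation parameters.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
```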

That’s a wrap of the brainy Computer Vision session. Next is Oliver Bimber’s keynote.

Meet the “SixthSense” Device: Augmented Reality, MIT Style

This week at TED, on the very stage where Bill Gates unleashed mosquitoes into the audience to make a point about the need to cure malaria, an MIT researcher, Pattie Maes, unveiled the “SixthSense” device.

Maes demonstrated a portable device constructed out of commercial off-the-shelf products such as a web camera, a pocket projector, and a cell phone.

What kind of “sixth sense” feats can it achieve?

Yahoo Tech captured the new ways to interact with the world made possible with this device:

  • turn any surface into a touch-screen for computing, controlled by simple hand gestures
  • take photographs by framing a scene with your hands
  • project a watch face by creating a circle on your wrist with your finger
  • recognize items on store shelves and provide personalized recommendations
  • look at an airplane ticket and know whether the flight is on time
  • project information about a book while browsing at a store
  • recognize articles in newspapers, retrieve the latest related stories or video from the Internet and play them on pages

Augmented reality enthusiasts would immediately recognize these fantastic ideas. Whether you use cell phones, goggles, or projectors to view the added information – it’s a whole new way to interact with the world.

Now we have to wait patiently until TED uploads the video. ***update*** see videos below.

Last year, cellphones took center stage in spearheading augmented reality into the mainstream. Out of the blue comes this cobbled-together spatial augmented reality device to steal the spotlight.

Oliver Bimber is not surprised. He’s been leading that school of thought for a while and even wrote a book about it: Spatial Augmented Reality: Merging Real and Virtual Worlds

So what’s the total?

“SixthSense” device: $300

Interacting with the world in a totally new way: priceless…

ETA: 2019

***update***

Andy Baio just tipped me off that Wired posted up these videos – thanks Andy!

(Embedded videos no longer available.)


Hacker’s Cafe Celebrates Virtual Tokyo with Augmented Reality Game

Via Hacker’s Cafe

Hacker’s Cafe celebrated Virtual Tokyo day with an augmented reality game dubbed – in the Japanese tradition – Cyber Star Rally Challenge.

Hackers hacked some virtual stars into a city square and let loose a pack of other hackers riding hacked mountain bikes, armed with GPS units, Google’s 3D view, and (hacked) augmented reality software.

They were tasked with one mission only: collect as many stars as possible.

You have to see it to believe it:

How did it work?

Your guess is as good as mine.

(unless you can read the instructions in Japanese)

Stimulant XRAY Exposes What’s Under the [Microsoft] Surface

When Microsoft showed SecondLight at PDC 2008 last year, Stimulant was inspired to do something cool with Microsoft’s Surface.

Here is the prototype they built. It takes advantage of Surface’s object recognition capabilities to identify the positions of one or more iPhones, and turns those phones into “see-through displays” revealing a second layer of information:

(Embedded video no longer available.)
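Stimulant hasn’t published implementation details, but conceptually the trick is straightforward: track each phone’s position on the table, crop the matching region out of a hidden second layer, and ship that crop to the phone’s screen. A hypothetical sketch (names and sizes are mine):

```python
from PIL import Image

def view_through(hidden_layer: Image.Image, x: int, y: int,
                 phone_w: int = 320, phone_h: int = 480) -> Image.Image:
    """Return the slice of the hidden layer under a phone tracked at
    table coordinates (x, y). Sending this crop to the phone makes it
    appear to see 'through' the Surface; rotation handling is omitted."""
    left, top = x - phone_w // 2, y - phone_h // 2
    return hidden_layer.crop((left, top, left + phone_w, top + phone_h))
```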

Stimulant is excited by the potential of this as captured in their own words:

…adding a layer of personalized information on top of a public computing experience

SecondLight’s demo was quite inspiring, taking a different angle on this idea: by hovering tracing paper over an image (think looking glass), it can reveal additional information about a star constellation, or show street names on an aerial photo.

Now imagine this capability going beyond “public computing”; imagine having a “magic lens” that allows you to see through anything, or add personalized information to any real-life object you look at.

This is one of the stimulating promises of augmented reality. And in this demo, Stimulant took one of the first steps towards this vision.

Coca Cola Augments Our World at Super Bowl XLIII

Lenny Raymond just unveiled this sneak peek of an augmented reality commercial by Coca-Cola, about to air tomorrow during Super Bowl XLIII.

(Embedded video no longer available.)

For Lenny, this is proof that augmented reality has made it to the mainstream:

If a Coke commercial during the Super Bowl isn’t mainstream, I don’t know what is

For me, it’s mostly a source of inspiration: we are all getting lost in virtual worlds and we need something to help us rediscover reality.

In this case, a Coke does the job…

I immediately added it to my AR commercial hall of fame: TV commercials that will inspire your next augmented reality experience

Now, who’s willing to bet on how many augmented reality commercials will air during the Super Bowl?

Hint: there will be at least two counting this one…

(Embedded video no longer available.)

The results are in. How many augmented reality commercials appeared during Super Bowl XLIII?

A lot!

In addition to the two mentioned above, here are some more:

Bud Light Drinkability – Ski Lodge

…draw graffiti anywhere with your finger and change reality…

Bud Light Drinkability – Lime

…and why not a weather changing augmented reality app?

(Oh, that was already invented in Australia – see ISMAR demos and scroll to “the most down under demo”)

Coke – Heist

…sure smells augmented…

Pedigree – You should have a dog

…looks realistic, but when you think about it – why not augment your pet to look like a rhinoceros?

A New Platform for The Next Generation of Augmented Reality Games

Today we are celebrating!

Ohan Oda from Columbia University in New York just released his new version of Goblin XNA. Kudos!

Goblin XNA is a development framework for augmented reality apps with a focus on games. It is based on Microsoft’s popular XNA Game Studio 2.0, which helps game developers succeed on Microsoft gaming platforms (PC, Xbox, Windows Mobile).

Ohan built it as a one-man show (under the supervision of Steven Feiner and with help from his lab colleagues) as part of his PhD research project.

Has anyone tried it so far?

Ohan says he just released it to the public, so as of now, only he and his lab members have. However, the framework was used in a 3DUI and Augmented Reality course, so approximately 60 students have worked with it so far.

What can you do with it?

Check out these demos created with the framework:

AR Racing Game

AR Electronic Field Guide

Want more? Check out the AR Domino on MSDN.

Here is the announcement in Ohan’s own words:

We would like to inform you that Goblin XNA is finally released, and
it’s downloadable from http://www.codeplex.com/goblinxna .

We apologize for those of you who waited very long (some of you
probably waited for almost a year).

Source code, API documentation, user manual, installation guide,
tutorials, and a relatively large-sized project (AR domino game) is
included with the release.

For questions and bug reports, Please do NOT email me directly, but
instead, please post your questions and bug reports through Codeplex.
I won’t be able to guarantee quick response since I’m the only
developer of Goblin XNA, but I will try my best to answer your
questions and fix bugs.

Thanks
Ohan

Ohan will be working on the framework for his research at Columbia for another couple of years – so until then, you can count on him to continuously update it with both bug fixes and new features.

Try it and show us what kind of reality experiences you can build.

I Had A MID Night Dream

The US celebrated Martin Luther King Jr. Day last week, which above all reminds us to keep dreaming – sometimes dreams do come true.

I had a dream too…and in my dream, an amazing Mobile Internet Device (MID) was released for our augmented reality experiences.

(See a list of existing MIDs)

(Image: my AR device)

Here is a first take at defining the dream MID for augmented reality (2009-2010 time frame):

  • Manufacturer – a credible leader, with a friendly content distribution channel
  • Price – ideally sub-$200; initially no more than $400
  • CPU – dual-core 1.3 GHz, with a floating point unit and SIMD extensions
  • GPU – integrated, with performance similar to TI’s OMAP3 and NVIDIA’s Tegra (the competition!)
  • Screen – 4.5 inch, min. 800×480 resolution, multitouch, and very bright
  • Camera – a GOOD CAMERA with a quality lens; video recording at 320×240 or preferably 640×480 (VGA) at 30 fps with good quality (noise, contrast, colors, etc.) even under low lighting. Zoom and autofocus are a bonus; a front camera too.
  • Low latency for getting the camera image to the CPU/GPU and in turn to the display
  • Zero-latency video output from the device for a head-worn display (digital or analog)
  • Low-latency inputs for external sensors (such as a tracker on the head-worn display) and cameras (on the head-worn display)
  • GOOD graphics drivers, OpenGL 2.0 (unlike the current Intel OpenGL drivers on Atom, which are almost a show stopper for many projects…)
  • Device size – roughly 130×70×12 mm (so that there’s little margin around the screen)
  • Weight – less than 200 g
  • OS – the best mobile Linux out there, with a C/C++-based SDK and a good emulator. As an alternative: Windows Mobile support (better dev tools)
  • Buttons – very few; a QWERTY keyboard is a nice-to-have
  • Connectivity – 3G/GSM, Wi-Fi, Bluetooth
  • Sensors – A-GPS, accelerometer, 3-DOF gyro
  • 3-axis compass
  • Storage – 8 GB and expandable
  • Memory – 1 GB RAM
  • Battery – min. 3 hours while in full use of camera and network
  • Extensibility – video out for an HMD, plus a USB port
  • Openness – open source…

So what do you think?

This spec was actually a swift response to a challenge presented by Intel’s Ashley McCorkle.

Many thanks for the contributions by Daniel-Good camera!-Wagner, Steven-don’t forget latency!-Feiner, Bruce-a couple of extras-Thomas, and Charles-Very bright screen-Woodward.

At ISMAR 2009 in Orlando, we are planning to organize a round table discussion for this very purpose. Would you be interested in participating?

***update***

The experts and enthusiasts are weighing in and, as usual in reality (as opposed to dreams), remind us that we need to consider trade-offs.

Charles, for example, says he would trade off battery time for a lighter device. He also suggests that for professional use, a higher price (in the $1,000 range) for a higher-quality device would be reasonable.

Augmented Reality Helps Solve the Rubik’s Cube

The Cult of Mac just uncovered the new iPhone app: CubeCheater.

It helps you solve your Rubik’s Cube.

Take pictures of the cube’s faces with your iPhone, and it will guide you through the smallest number of moves required to solve it.
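CubeCheater’s internals aren’t published, but solvers like it typically rely on Kociemba’s two-phase algorithm, which finds near-optimal solutions in a fraction of a second. A minimal sketch using the open-source kociemba Python package (the scrambled state below is the example from the library’s documentation):

```python
import kociemba  # pip install kociemba -- a two-phase Rubik's cube solver

# The cube is described as a 54-character facelet string: the stickers of
# the Up, Right, Front, Down, Left, and Back faces, each read row by row.
scrambled = "DRLUUBFBRBLURRLRUBLRDDFDLFUFUFFDBRDUBRUFLLFDDBFLUBLRBD"

# Returns a move sequence such as "D2 R' D' F2 ..." in face-turn notation.
print(kociemba.solve(scrambled))
```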

Now, take it a step further and imagine the iPhone (or better yet – goggles) continuously watching your cube and cleverly guiding you on every move – for the fastest solution ever.

Wouldn’t it be cool?

Well, at least for cube-obsessed kids, it would.

In any case, I’d mark this app as an important milestone towards putting augmented reality to use for the good of mankind.

Today it teaches us how to solve the Rubik’s cube. Tomorrow it will teach us everything else.