Magic Vision Lab will Melt, Distort, X-Ray and then Augment your Reality

Chris Sandor, a veteran AR researcher who has worked with some of the most distinguished AR labs and researchers in the world – at Graz University, Columbia University, TU Munich, and Canon Research – is now the Director of the Magic Vision Lab at the University of South Australia.
Chris and his team have recently revamped the lab’s website, which is a good opportunity to revisit some of their work previously covered during ISMAR.
First a word on the lab’s name.
In his “Sermon on the flatlands“, augmented reality prophet Bruce Sterling cautioned the AR community to stay away from terms like “magic”: “Magic is cheezy and deceitful. Practicing a leisure domain is a problem” he argued.
Chris brings to his defense a no lesser sci-fi luminary, Arthur C. Clarke, who famously coined the third law: “Any sufficiently advanced technology is indistinguishable from magic”.
***
What do you think about the usage of the term “Magic” in conjunction with AR?
***
Now to substance in the revamped site:
Humans perceive their environment primarily through vision. Our goal is to enhance human vision with computer-generated graphics in order to amplify human intelligence on a world-wide scale. Our vision is shared by a large and growing community that is investigating Augmented Reality.
The lab focuses on 2 key areas:

Mobile Augmented Reality Visualizations (melting, distorting, xray)

The following X-Ray Vision demo is one of the more inspiring videos in AR.
The next video depicts a technique called “melt vision”, which is useful when a user wants to see points of interest hidden from view (occluded) – and could also double as a simulation of a building demolition :)


***
The second area of the lab’s focus is dubbed:

Visuo-haptic augmented reality systems

In layman’s terms, it’s about combining vision and touch.
In this video, a user wearing a head-worn display can see the virtual car and feel it with his right hand through a haptic device called the Phantom.


This technology has huge potential for training both newbies and experienced professionals in fields such as healthcare, design, and manufacturing.

To wrap up this AR Lab review –

Here is a classic: AR Weather, which received our tongue-in-cheek award for the “most down-under demo” during ISMAR 2008. All I have to show is this image, depicting an AR application that overlays snow on a sunny day in Australia.

If you would like to showcase your work in augmented reality labs – let us know!

ISMAR 2009: Sneak Peek from HIT Lab New Zealand

ISMAR, the world’s best Augmented Reality (AR) event, is just 11 days away!

We have already provided a sneak preview of some of the demos.

Here are 2 research results, to be introduced at ISMAR, from one of the most prolific AR labs in the world: HIT Lab NZ, courtesy of Mark Billinghurst:

Embedded AR

We have been developing AR software for the Beagle Board OMAP3 development kit. This allows you to run a whole AR system on a $150 piece of embedded hardware and use Linux for development. The OMAP 3 chip is the same that is in many new smart phones so it is a great way to do some benchmarking and prototyping for mobile phone AR applications.

If Embedded AR sees adoption similar to the open source ARToolKit’s, then we’ll soon see AR-enabled devices popping up like mushrooms after the rain. Potentially very cool.

Android AR

We have been developing our own mobile outdoor AR platform based on the Android operating system. We are using GPS and compass information to overlay 3D virtual models on the real world outdoors. Unlike some other systems we support full 3D model loading and also model manipulation, plus rendering effects such as shadows etc.

That’s not as new. Rouli would categorize it as a YAARB™ (Yet Another AR Browser…)
Wikitude and Layar (as well as other browsers) have similar capabilities (or will soon have), and are already open and accessible to many developers.

Want to learn more about it? Check out Android AR.
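As an aside, the core math behind these GPS-plus-compass overlays is surprisingly simple. Here is a minimal, hypothetical sketch (my own illustration, not HIT Lab NZ’s actual code): compute the bearing from the user to a point of interest, subtract the compass heading the camera is facing, and map the angular offset onto the screen through the camera’s horizontal field of view.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2),
    in degrees clockwise from true north (standard forward-azimuth formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(poi_bearing, heading, fov_deg=60.0, width_px=480):
    """Map a point of interest's bearing to a horizontal pixel position,
    given the compass heading the camera faces and its horizontal field
    of view. Returns None when the POI is outside the current view."""
    # Signed angular offset in [-180, 180): negative means "to the left".
    off = (poi_bearing - heading + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2:
        return None
    return int(width_px * (0.5 + off / fov_deg))

# Facing due north with a 60-degree FOV: a POI 15 degrees to the east
# lands three-quarters of the way across a 480-pixel-wide screen.
print(screen_x(15.0, 0.0))  # -> 360
```

A real browser like the one described would of course also use distance (for scale), pitch from the accelerometer, and full 3D model rendering, but this is the basic idea of placing a label where the compass says it should be.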

***

Just 2 more reasons to go to ISMAR 2009. It is going to be HUGE!

Don’t wait any longer – register Today!

Why Int13 Got in Bed with Total Immersion

Yesterday, Total Immersion (TI) and Int13 – both French augmented reality companies – announced a strategic partnership, in which Int13 would help TI cover a major gap in its product line: Mobile Augmented Reality.

A French kiss or a Russian bear hug?

Instead of guessing, we went out to speak with Stéphane Cocquereaumont, president and lead developer at Int13, the mobile games boutique studio behind an AR game legend: Kweekies (“when is it coming out?”).

Ori: Congratulations Stéphane! This must be a big shift for Int13.

Stéphane: Thanks. We started discussions with TI back in April, so we knew where this was going for some time now.

Ori: What was TI’s motivation to approach Int13?

Stéphane: TI approached us because their clients kept asking for “360” solutions. While TI is strong with large installations, live shows, and PC based experiences – their mobile AR line needed a boost.

Ori: …and what about Int13? You had it going as an independent studio with Kweekies, first on Nokia and then on the iPhone.

Stéphane: We have been flooded with requests from marketing agencies about mobile AR campaigns. At first we tried to address those requests, but we soon realized that managing their expectations in terms of deadlines, pricing, and capabilities of the tech posed a huge overhead, and was distracting us from our goals.

Ori: So, you decided to “outsource” the headache and focus on your product?

Stéphane: Exactly. TI will work with the marketing agencies and other players asking for quick mobile AR applications. We will offer them our tech embedded in a nice and clean SDK with a Lua API, easy to use but with limited flexibility.

Ori: You make TI happy, yet keep the full power to yourself. Makes sense. What can you reveal about the business terms?

Stéphane: For TI this partnership means they have a working mobile AR SDK, with worldwide exclusive rights for two years. They pay an annual license fee, plus royalties on all the projects they deliver with our tech.

This should be a win-win partnership.

Ori (note to self): […more of a French kiss than a Russian bear hug.]

Stéphane: We’ve also made a deal with another French company; the project will be shown during CES in January 2010. It’s a very exciting project, but I can’t say anything about it for now.

Ori: Thanks for teasing us. Can’t wait to see it.

Stéphane: We’re also in discussions with device makers – we were in Korea last month – and we’ll release our first AR game very soon (September or October) there. The game will first be available on the SKT network, on Samsung devices.

Ori: Wow. You’re unstoppable.

Stéphane: Wait, there’s more – we’ll soon publish a preview video of ARDefender; it’s a simple game to help players discover the capabilities of AR.

Ori: Fantastic. And when will we be able to finally play the legendary Kweekies?

Stéphane: Kweekies is delayed…too many other projects on the plate for our small team. We still intend to release it by Christmas – though we can’t promise…
Ori: Awww… that’s a downer… any good news to close on a positive note?

Stéphane: Yes, I got something for you – we’ll soon reveal a new demo of ARWiz 2, the next iteration of our AR tech, faster and much more robust.

Ori: That’s a great comeback. Thanks for sharing the story behind the scenes, Stéphane.

And all the best for Int13 and TI in your new (in bed) relationship.


GDC 2009 Roundup: a (tiny) spark of augmented reality

The dust from GDC 2009 settled a while ago, and I finally got to reflect on the AR experience at the show. Guess which headline summarizes it best:

a) augmented reality was the talk of the show

b) the expo floor was swarming with AR demos

c) AR games snatched lucrative game awards

d) none of the above


A friend in San Francisco wearing retro AR goggles

Unfortunately (d) is the right answer.

But – and it’s a big ‘but’ – the ignition spark was there. The seed was planted. The first shot of the revolution was fired.

(OK, maybe the last metaphor went too far.)

Here are 5 triggers I identified that ignited the spark:

1) Blair

The first GDC augmented reality talk – ever: Blair MacIntyre covered the latest and greatest about mobile augmented reality in front of a packed room of game developers. Awesome.

2) Demos

Was this the first time AR demos (and more demos) were presented at a major Game Developers Conference?

Not sure – but it certainly was my first…

3) Mentions in talks

Was Blair’s AR talk an isolated case?

Perhaps as a main topic it was. However, for the first time, I heard significant mentions of AR in multiple other talks. Check them out:

A talk about pervasive gaming. I liked the title: “Beyond the Screen: Principles of Pervasive Games” by the folks from Pervasive Games. They played with the concept in which the whole world is the playground. These games are founded on the belief that doing things for real is pleasurable. Games that harness reality as a source book have interesting dynamics: everything in reality matters to the game, the gameplay emerges from coincidence, and the real and the artificial blur.

Jane McGonigal’s fantastic talk, “Learning to Make Your Own Reality: How to Develop Games that Re-invent Life As We Know It”, introduced a concept she calls programmable reality. Augmented reality is among the key technologies enabling that idea.

“Camera Based Gaming: The Next Generation” by Diarmid Campbell attracted the attention of a room packed with game developers. He talked about Sony’s upcoming camera games for the PlayStation 3, such as EyePet. Armed with the EyeToy camera, these games will have the power to extract amusing gestures from players. Not quite AR – but it sure smells like it.

Stretching beyond entertainment: AR made a surprise appearance in a high-profile panel discussion featuring some of the gods of the gaming industry: (from right) Ed Fries, Lorne Lanning, Bing Gordon, Will Wright, and Peter Molyneux.
The best quote award went to Ed Fries for saying: “We need to take game mechanics and apply them to the real world”.


4) Meet ups

Dinners, lunches, business meetups, brainstorming sessions – haven’t had that many meetings with AR enthusiasts since ISMAR 2008…

5) The iPhone

When it comes to bringing AR to the masses, the iPhone is in a category of its own. And it doesn’t even know it yet… (see: why the iPhone changed everything)

-*-*-

Will we ever get to see answers a, b, or c become a real headline?

Most likely in the next few years, if you ask me.

A (tiny) spark was ignited.

ARToolKit Wins the Oscar of Virtual Reality

An important message from Mark Billinghurst about ARToolKit:

On Tuesday March 17th, Hirokazu Kato received the 2009 IEEE VGTC
Virtual Reality Technical Achievement Award for his development of the
ARToolKit tracking library. This was given at the IEEE VR 2009
conference, the premier academic conference for virtual reality. The
VR Technical Achievement Award is a great honor and has only been
given out five times before, to such people as the inventors of the
CAVE immersive VR system.

For Hirokazu Kato this is well deserved because, although it was
developed ten years ago, ARToolKit is still the dominant computer
vision based AR tracking library with over 170,000 downloads. Almost
all of the researchers in the AR field have used ARToolKit in one way
or other, and many of the AR applications in the field have roots that
can be traced back to ARToolKit. As a result of his work, AR
researchers and developers didn’t need to build tracking code, but
instead could focus on AR application development and interface
design. The community owes a huge debt of gratitude to Hiro and it’s
great to see that his achievement has been recognized.

The full announcement.

Microsoft to Demo Augmented Reality at TechFest

Technology Review reports the news:

Today, Microsoft researchers will demonstrate software that can, in real time, superimpose computer-generated information on top of a digitized view of the real world.

Michael Cohen, a principal researcher at Microsoft, and his colleagues will demo the augmented-reality technology at TechFest, an annual showcase of Microsoft’s research projects, in Redmond.

Well, we have seen augmented reality running on a Vaio before – but never from Microsoft. This could be interesting. Especially the tools.

Michael cites all the right examples of real-life AR applications; at TechFest, however, the software will be used to lead people on a treasure hunt for a hidden prize: a (virtual) pot of gold.

The bubble flow navigation is way cool. Coming up right after the poster demonstration.

Games will lead the way.

***Update***

via KZero: a collection of additional Microsoft-produced videos of the future – featuring largely augmented reality applications.


Augmented Reality in Flash Now Commercially Available

Mark Billinghurst just shared with me the good news: ARToolworks announces commercial availability of FLARToolKit. He explains:

Basically it means that companies can now get commercial licenses to FLARToolKit and build flash based Augmented reality websites without having to release their source code as required by the GPL license.

Saqoosha (Tomohiko Koyama) was the first to marry AR with Flash as open source – a major step in simplifying AR apps on the PC. Now ARToolworks CTO Philip Lamb is taking it to a commercial level.

The code is available at the Spark Project site.

To get your creative juices going, here’s a reminder of the kind of things this baby can deliver:


What are YOU going to create with it?

Live From WARM ’09: Keynote – Projection Over Four Orders of Magnitude

Oliver Bimber (Bauhaus-Universität Weimar), one of the world’s leaders in spatial augmented reality, kicks off with a barb: “Unlike other sessions, this session is NOT about mobile augmented reality but rather spatial (projected) augmented reality.” Welcome to the wonderful world of Oliver Bimber.
He projects visuals on everyday surfaces, using structured light and camera feedback.
Oliver amazes by demonstrating a projection on… a glass of wine – using inverted lighting.
Adaptive photometric compensation: Have you ever seen Shrek projected on a stone wall? Oliver makes it look easy, with a smooth resulting image.
Another demonstration shows how you can record footage that would usually require a green screen – in the scene itself, with no need to go to a dedicated studio.
Oliver keeps going with Reverse Radiosity and Multi Focal Projection. You have to see it (because I can’t put it in words…)

On to more applications: visualization of radiological images. X-ray film, diagnostic monitors, and high-quality paper prints aren’t optimal for diagnostics because of their low contrast.
Superimposing dynamic range (demonstrated at ISMAR ’08) offers 6 times higher contrast than X-ray film.
Radiologists have confirmed that this technique does better than existing ones.
Another application is light microscopy. Contrast is a problem with shiny surfaces in operation or manufacturing scenarios. Oliver shows a prototype of projected light microscopy – at a scale of 2 micrometers – that increases contrast by a factor of 5 and removes background noise through more uniform illumination. And this is just the beginning: it is important for applications of image analysis.

Now why the mysterious title?
Simply because with Oliver’s techniques, contrast is improved by 4 orders of magnitude…

Question from the audience – why not use laser projectors?
Oliver responds that the issue is not within the projector but mostly on the surface – so even with laser projectors you’ll need the compensation discussed.

After lunch – demos!

Live From WARM ’09: The World’s Best Winter Augmented Reality Event

Welcome to WARM 2009, where augmented reality eggheads from both sides of the Danube meet for 2 days to share ideas and collaborate.

It’s the 4th year WARM is taking place – always at Graz University, and always in February – to provide an excuse for a skiing event once the big ideas are taken in. Hence the cunning logo:

This year, 54 attendees from 16 different organizations in 5 countries are expected (Austria, Germany, Switzerland, England, and the US). The agenda is jam-packed with XX sessions, lab demos, and a keynote by Oliver Bimber. I have the unenviable pleasure of speaking last.

It’s 10 am. Lights are off. Spotlight on Dieter Schmalstieg, the master host, taking the stage to welcome everybody.
He admits the event started as a Graz meeting and just kept happening because guests kept coming.

Daniel Wagner, the eternal master of ceremonies at WARM, introduces Simon Hay from Cambridge (Tom Drummond’s group), the first speaker in the Computer Vision session. Simon will talk about “Repeatability experiments for interest point location and orientation assignment” – an improvement in feature-based matching for the rest of us…

The basic idea: detect interest regions in canonical parameters.
Use known parameters that come from Ferns, PhonySIFT, SIFT/MOPS, and MSER searches,
and accelerate and improve the search with location detectors and orientation assignments.

After a very convincing set of graphs, Simon concludes by confirming that Harris and FAST give reasonable performance and that gradient orientation assignment works better than expected.
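For readers (like me) who needed a refresher: orientation assignment means estimating a canonical rotation for each interest point, so that matching becomes rotation-invariant. Here is a toy sketch of the gradient-histogram flavor of the idea – my own illustration, not Simon’s actual method: build a magnitude-weighted histogram of gradient angles over a patch and take the fullest bin.

```python
import math

def dominant_orientation(patch, bins=36):
    """Estimate the dominant gradient orientation of a square grayscale
    patch (list of rows of floats): accumulate a magnitude-weighted
    histogram of gradient angles, then return the center angle (degrees)
    of the fullest bin."""
    hist = [0.0] * bins
    n = len(patch)
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            dy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(dx, dy)
            ang = math.degrees(math.atan2(dy, dx)) % 360.0
            hist[int(ang // (360.0 / bins)) % bins] += mag
    return (hist.index(max(hist)) + 0.5) * (360.0 / bins)

# A patch whose intensity increases left-to-right: every gradient points
# along +x, so the dominant orientation falls in the bin around 0 degrees.
ramp = [[float(x) for x in range(8)] for _ in range(8)]
print(dominant_orientation(ramp))  # -> 5.0 (center of the 0-10 degree bin)
```

Rotating the patch by the returned angle before describing it is what makes the descriptor comparable across rotated views; real systems (SIFT and friends) add smoothing and sub-bin interpolation on top of this.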

Next talk is by Qi Pan (from the same Cambridge group) about “Real time interactive 3D reconstruction.”

From the abstract:
“High quality 3D reconstruction algorithms currently require an input sequence of images or video which is then processed offline for a lengthy time. After the process is complete, the reconstruction is viewed by the user to confirm the algorithm has modelled the input sequence successfully. Often certain parts of the reconstructed model may be inaccurate or sections may be missing due to insufficient coverage or occlusion in the input sequence. In these cases, a new input sequence needs to be obtained and the whole process repeated.
The aim of the project is to produce a real-time modelling system using the key frame approach which provides immediate feedback about the quality of the input sequence. This enables the system to guide the user to provide additional views for reconstruction, yielding a complete model without having to collect a new input sequence.”

Couldn’t resist pointing out the psychological-sounding algorithms (and my ignorance): Qi uses epipolar geometry and PROSAC, reconstructing a Delaunay triangulation followed by probabilistic tetrahedral carving. You’ve got to love these terms.

The result is pretty good, though still noisy – so stay tuned for future results of Qi’s research.

The third talk is by Vincent Lepetit from the Computer Vision Lab at EPFL in Switzerland.
Vincent starts with a recap of keypoint recognition: train the system to recognize the keypoints of an object.
Vincent then demonstrates work leveraging this technique: an award-winning piece by Camille Scherrer, “Le monde des montagnes”, a beautiful augmented book, and a demo by Total Immersion targeted at advertising.

Now, on to the new research, dubbed Generic Trees. The motivation is to speed up the training phase and to scale.
A comparison shows it’s 35% faster. As proof, he shows a video of a SLAM application.
The Generic Trees method is used by Willow Garage for autonomous robotics – and is being implemented in OpenCV.

Next, he shows camera pose recovery with 6 degrees of freedom (DOF) based on a single feature point (selected by the user). Impressive.

That’s a wrap of the brainy Computer Vision session. Next is Oliver Bimber’s keynote.

A New Platform for The Next Generation of Augmented Reality Games

Today we are celebrating!

Ohan Oda from Columbia University in New York just released his new version of Goblin XNA. Kudos!

Goblin XNA is a development framework for Augmented Reality apps, with a focus on games. It is based on Microsoft’s popular XNA Game Studio 2.0, which enables game developers to be successful on Microsoft gaming platforms (i.e., PC, Xbox, Windows Mobile).

Ohan built it as a one man show (under the supervision of Steven Feiner and with help from his lab colleagues) as part of his PhD research project.

Has anyone tried it so far?

Ohan says he just released it to the public, so as of now, only he and his lab members have. However, the framework was used in a 3DUI and Augmented Reality course, so approximately 60 students have used it so far.

What can you do with it?

Check out these demos created with the framework:

AR Racing Game

AR Electronic Field Guide

Want more? Check out the AR Domino on MSDN.

Here is the announcement, in Ohan’s own words:

We would like to inform you that Goblin XNA is finally released, and
it’s downloadable from http://www.codeplex.com/goblinxna .

We apologize for those of you who waited very long (some of you
probably waited for almost a year).

Source code, API documentation, user manual, installation guide,
tutorials, and a relatively large-sized project (AR domino game) is
included with the release.

For questions and bug reports, Please do NOT email me directly, but
instead, please post your questions and bug reports through Codeplex.
I won’t be able to guarantee quick response since I’m the only
developer of Goblin XNA, but I will try my best to answer your
questions and fix bugs.

Thanks
Ohan

Ohan will be working on the framework for his research at Columbia for another couple of years – so until then you can count on him to continuously update the framework for both bug fixes and new feature additions.

Try it and show us what kind of reality experiences you can build.