Who Should Attend The Augmented Reality Event in Santa Clara, CA June 2nd & 3rd, 2010

Over the last two years we have seen growing interest in augmented reality at a variety of events – panels, dev camps, meetups, and more. In response to the growing demand for knowledge and expertise in augmented reality (AR), a group of AR industry insiders, backed by the AR Consortium, has put together the first commercial event dedicated to advancing the business of augmented reality.

How is are2010 different from ISMAR…

…previously touted here as the “World’s best Augmented Reality event”?

Well, ISMAR is still the best AR event for the scientific community. If you want to learn about (or present) the latest advancements in AR research – you should be in Seoul this October for ISMAR 2010. However, for the rest of us, who wish to take advantage of AR in practice, in the commercial world, and build a business around it – there was a gaping hole.

That is, until now.

Meet the Augmented Reality Event.

Who’s this event for?

For established and start-up AR companies –

For established and start-up AR companies (such as Total Immersion, Metaio, Acrossair, Ogmento, Circ.us, Mobilizy, Layar, Zugara, Neogence, whurleyvision, Chaotic Moon Studios, and many more) – are2010 is a stage to showcase their products and services; a venue to form partnerships, learn about the latest innovations, and most importantly speak with clients. Bruno Uzzan, CEO of Total Immersion, will wow the audience with a cutting-edge augmented reality show; Peter Meier, CTO of Metaio, will speak about his company’s latest products. Early-stage startups and individual developers will receive guidance from Cole Van Nice (Chart Venture Partners) on how to build a successful company in the AR space, including raising funding (from VCs that actually invest in AR), licensing technology and IP, legal aspects, forging partnerships, and more. Christine Perey will speak about the scope of the mobile AR industry today and its growth trajectory.

For Developers –

For developers, are2010 is a window into the latest AR algorithms, engines, and programming tools. Learn from case studies and post-mortems delivered by experienced developers from the leading companies in the space. Blair MacIntyre, director of the GVU Center’s Augmented Environments Lab at Georgia Tech, will speak about his experience with tools and technologies for developing augmented reality games. Daniel Wagner, one of the leading mobile AR researchers in the world, will bring developers into the wonderful world of mobile AR. Patrick O’Shaughnessey, who has led the development of more webcam-based AR campaigns than anyone else I know, will share his knowledge of what works and what doesn’t. Mike Liebhold, Distinguished Fellow at the Institute for the Future, will speak about the technology foundations of an open AR web. Gene Becker, co-founder of AR DevCamp, will dive into augmented reality and ubiquitous computing, and Sean White, a pioneer in green-tech AR, will suggest concrete examples of how AR can help save the planet.

For Mobile, Hardware, and Platform Companies

For Mobile, Hardware, and Platform companies (such as Vuzix, Nokia, Qualcomm, Intel, QderoPateo, Microsoft, Google, Apple, etc.), are2010 offers a captive audience for launching and showcasing their latest devices, processors, AR glasses, sensors, and more. The best collective minds of the AR commercial world will be onsite to articulate the market’s demand characteristics and help influence the design of future hardware.

For Clients and Agencies –

For clients and agencies in entertainment, media, publishing, education, healthcare, government, tourism, and many more – are2010 offers everything you need to know about AR: how to leverage augmented reality to advance your brand, attract and keep your customers, and how to build successful campaigns and products that will delight users, including postmortems of landmark augmented reality projects.

Jarrell Pair, CTO and a founder of LP33.tv, will speak about “Augmented Reality in Music Entertainment: Then and Now”. Brian Selzer, co-founder and President of Ogmento, will deliver a crash course for clients and agencies on how to leverage AR in marketing campaigns. Marshall Kirkpatrick, lead blogger for ReadWriteWeb, will share the results of his AR survey, which collected feedback from dozens of AR developers about their experience delivering AR campaigns and apps. Kent Demain, designer of the visual effects in Minority Report, will open our minds with the talk “Taking Hollywood visual effects spectacle out of the theatre and into your world”. And of course…

For any AR Enthusiast –

Are you an AR enthusiast? If so, you’re going to feel like a kid in a candy store at ARE, with a soon-to-be-unforgettable keynote by Bruce Sterling, a demo gallery, exhibitors from leading companies, art installations from AR artists such as Eric Gradman and Helen Papagiannis, and many more surprises.

If you are into Augmented Reality – are2010 is the one event you should attend this year.

Want to join the event? Early registration is now open!

ISMAR 2009: Tracking a City Model – Preview From Graz University

Only 1 week to go for ISMAR 2009, the world’s best Augmented Reality (AR) event.

Here is one more reason to go to the event.

This stunning “Jakomini” demo from Graz University – the masters of handheld augmented reality – shows a 3D city model being tracked on a “natural feature” surface (or in plain language, a regular bird’s-eye view image of a city).

Wow.

What handheld was used for this demo?

(My guess is it’s Nvidia’s Tegra)

What’s behind the mysterious Jakomini name?

(Jakomini is the 6th District of Graz and the most populous)

What’s hidden in Jakomini?

(I guess we’ll all find out during ISMAR…)

Need more reasons to come to ISMAR?

Check out these previews.


Red Bull Gives You Augmented Reality Wings and Saves Magazines with Print 2.0

They gave you wings, extreme sports races, and Flugtag – and now they want to save print magazines with the bold concept: Print 2.0.


Guess which one of these is a true fact?
1) In 2006, more than 3 billion cans of Red Bull were sold in over 130 countries

2) Red Bull publishes a printed magazine (2 million copies per issue)

3) Red Bull is an Austrian company

Answer: all of the above!
I can’t say which fact is the most shocking, but together they explain why Red Bull decided to partner with Imagination (an Austrian company) to create a webcam augmented reality experience where:

This magazine sings, dances, flies and even scores a touchdown…

The cover and multiple pages (any page with the bull’s-eye) can be activated by holding them up to a webcam, thanks to Imagination’s natural feature tracking software.

Try it yourself.

If you don’t have the printed magazine – don’t worry – you can download it and print it at home.

Or just watch it here…

The magazine editor dives into more autophilia:

PRINT GOES LIVE

It’s not often that a magazine can call itself revolutionary, but we’re delighted to say this one can.

This very issue of The Red Bulletin takes us from print to Print 2.0, thanks to the incorporation of some nifty software known as ‘augmented reality’… the fun stuff is this: simply by holding the mag up to a computer you can take it ‘beyond the page’ and into the world-wide web. So, for example… [the cover] will link through to a video package explaining exactly how augmented reality can enhance your reading experience in a way you almost certainly never imagined, with music, film, animations and more.

Then turn to page 5 and Red Bull Air Race ace Paul Bonhomme will give you an ‘as live’ introduction to the magazine and the world of augmented reality. Head to our Now and Next pages, find the story about Black Gold on page 20 and do the same again with the mag. Lo, you’ll find the band’s latest video on the website. Clever, eh?

Further in, you can read about Burcu Cetinkaya and Cicek Güney – the girls putting the glam into rallying – then link to exclusive interviews with them and videos of them driving flat-out… and crashing!

And we’re not done yet, no way. Our Reggie Bush cover story, on page 48, combines with an exclusive mini-movie of Reggie at home, as he talks to correspondent Jan Cremer, while page 62 will take you right into the pocket-rocket world of the Red Bull Rookie motorbike racers.

No other magazine has ever tried anything like this, and we have plenty more ideas for the future. But for now, just get your magazine and computer primed and prepare to be amazed…

…And Daniel Wagner certainly shows his skills in the video above…

Live From WARM ’09: The World’s Best Winter Augmented Reality Event

Welcome to WARM 2009, where augmented reality eggheads from both sides of the Danube meet for 2 days to share ideas and collaborate.

It’s the 4th year WARM is taking place – always at Graz University, and always in February – so that it can double as an excuse for a skiing trip once the big ideas are taken in. Hence the cunning logo:

This year, 54 attendees from 16 different organizations in 5 countries are expected (Austria, Germany, Switzerland, England, and the US). The agenda is jam-packed with XX sessions, lab demos, and a keynote by Oliver Bimber. I have the unenviable pleasure of speaking last.

It’s 10 am. Lights are off. Spotlight on Dieter Schmalstieg, the master host, taking the stage to welcome everybody.
He admits the event started as a Graz-only meeting and kept happening because guests kept coming.

Daniel Wagner, the eternal master of ceremonies of WARM, introduces Simon Hay from Cambridge (Tom Drummond’s group), the first speaker in the Computer Vision session. Simon will talk about “Repeatability experiments for interest point location and orientation assignment” – an improvement in feature-based matching for the rest of us…

The basic idea: detect interest regions in canonical parameters. Use known parameters that come from approaches such as Ferns, PhonySIFT, MOPS, and MSERs, and accelerate and improve the search with better interest point location detection and orientation assignment.

After a very convincing set of graphs, Simon concludes that Harris and FAST give reasonable performance and that gradient orientation assignment works better than expected.
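Harris and FAST both score interest points from local pixel comparisons; FAST is especially popular on mobile because it avoids floating point almost entirely. As a rough illustration of the idea (my own sketch, not code from Simon’s talk), a FAST-9 style corner test checks whether 9 contiguous pixels on a 16-pixel Bresenham ring are all significantly brighter or darker than the center:

```python
# Toy FAST-9 corner test (illustration only).
# A pixel is a corner if >= 9 contiguous ring pixels are all brighter
# than center+t or all darker than center-t.

# Offsets of the 16-pixel Bresenham circle of radius 3.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=9):
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dy, dx in CIRCLE]
    for sign in (1, -1):                 # brighter arc, then darker arc
        flags = [(v - p) * sign > t for v in ring]
        run = 0
        for f in flags + flags:          # doubled list handles wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

Real implementations add a high-speed rejection test, non-maximum suppression, and a learned decision tree; orientation assignment – the subject of Simon’s experiments – is then computed separately, for example from the local gradient.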

Next talk is by Qi Pan (from the same Cambridge group) about “Real time interactive 3D reconstruction.”

From the abstract:
“High quality 3D reconstruction algorithms currently require an input sequence of images or video which is then processed offline for a lengthy time. After the process is complete, the reconstruction is viewed by the user to confirm the algorithm has modelled the input sequence successfully. Often certain parts of the reconstructed model may be inaccurate or sections may be missing due to insufficient coverage or occlusion in the input sequence. In these cases, a new input sequence needs to be obtained and the whole process repeated.
The aim of the project is to produce a real-time modelling system using the  key frame approach which provides immediate feedback about the quality of the input sequence. This enables the system to guide the user to provide additional views for reconstruction, yielding a complete model without having to collect a new input sequence.”

I couldn’t resist pointing out the psychological-sounding algorithms (and my own ignorance): Qi uses techniques such as epipolar geometry and PROSAC, reconstructing a Delaunay triangulation followed by probabilistic tetrahedral carving. You’ve got to love these terms.
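Under all the exotic names, the geometric core is simple: once two views and their camera matrices are known, each matched 2D point pair is lifted to a 3D point by triangulation. A minimal linear (DLT) triangulation sketch, with made-up camera matrices for illustration (not Qi’s actual pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    """
    # Each view contributes two linear constraints on the homogeneous point X:
    #   u * (P[2] . X) - (P[0] . X) = 0
    #   v * (P[2] . X) - (P[1] . X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

With noisy real matches this linear estimate is usually refined by minimizing reprojection error; robust estimators like PROSAC are what keep bad matches out of the system in the first place.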

The result is pretty good, though still noisy – so stay tuned for future results of Qi’s research.

The third talk is by Vincent Lepetit, from the Swiss Computer Vision Lab at EPFL.
Vincent starts with a recap of keypoint recognition: train the system to recognize the keypoints of an object.
Vincent then demonstrates works leveraging this technique: an award-winning piece by Camille Scherrer, “Le monde des montagnes”, a beautiful augmented book, and a demo by Total Immersion targeted at advertising.
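The “keypoint recognition as classification” idea behind this line of work can be sketched with a toy Ferns-style classifier: each fern is a handful of random pixel-pair comparisons whose bits form an index into a per-class histogram, and classification sums log-probabilities across ferns. This is my own simplified illustration, not code from the talk:

```python
import math
import random

def fern_index(patch, tests):
    """Turn a patch into an integer by concatenating pixel-comparison bits."""
    idx = 0
    for (y1, x1), (y2, x2) in tests:
        idx = (idx << 1) | (patch[y1][x1] < patch[y2][x2])
    return idx

class FernClassifier:
    def __init__(self, n_ferns=10, n_tests=8, size=8, seed=1):
        rng = random.Random(seed)
        # Each fern: n_tests random pixel-pair comparisons inside a size x size patch.
        self.ferns = [[((rng.randrange(size), rng.randrange(size)),
                        (rng.randrange(size), rng.randrange(size)))
                       for _ in range(n_tests)]
                      for _ in range(n_ferns)]
        self.hists = {}   # class label -> one histogram per fern

    def train(self, label, patches):
        hists = [[1] * (2 ** len(f)) for f in self.ferns]  # +1 smoothing
        for p in patches:
            for h, f in zip(hists, self.ferns):
                h[fern_index(p, f)] += 1
        self.hists[label] = hists

    def classify(self, patch):
        def score(label):                 # naive-Bayes log-likelihood
            return sum(math.log(h[fern_index(patch, f)] / sum(h))
                       for h, f in zip(self.hists[label], self.ferns))
        return max(self.hists, key=score)
```

In the real method the training set for each keypoint is generated by randomly warping its patch, so the histograms absorb viewpoint changes; that training phase is what the new research aims to speed up.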

Now, on to the new research, dubbed Generic Trees. The motivation is to speed up the training phase and to scale.
Comparison results show it’s 35% faster. As proof, he shows a video of a SLAM application.
The Generic Trees method is used by Willow Garage for autonomous robotics, and is being implemented in OpenCV.

Next, he shows camera pose recovery with 6 degrees of freedom (DOF) from a single feature point (selected by the user). Impressive.

That’s a wrap of the brainy Computer Vision session. Next is Oliver Bimber’s keynote.

I Had A MID Night Dream

The US celebrated Martin Luther King Day last week, which above all reminds us to keep dreaming – sometimes dreams do come true.

I had a dream too…and in my dream, an amazing Mobile Internet Device (MID) was released for our augmented reality experiences.

(See a list of existing MIDs)


Here is a first take at defining the dream MID for augmented reality (2009-2010 time frame):

  • Manufacturer – a credible leader, with a friendly content distribution channel
  • Price – Ideally sub $200. Initially not more than $400.
  • CPU – Dual core, 1.3 GHz, with a Floating Point Unit and SIMD extensions
  • GPU – integrated with performance similar to TI’s OMAP3 and NVidia’s Tegra (the competition!)
  • Screen – 4.5 Inch, Min 800×480 resolution, Multitouch, and a very bright screen
  • Camera – A GOOD CAMERA with a quality lens; video recording at 320×240, or preferably 640×480 (VGA), at 30fps with good quality (noise, contrast, colors, etc.), even under low lighting. Zoom and autofocus are a bonus. A front camera – also a bonus.
  • Low latency for getting the camera image to the CPU/GPU and in turn to the display
  • Zero-latency video output from the device for a head-worn display (digital or analog)
  • Low-latency inputs for external sensors (such as a tracker on the head-worn display) and cameras (on the head-worn display).
  • GOOD graphics drivers, OpenGL 2.0 (unlike the current Intel OpenGL drivers on Atom, which are almost a show-stopper for many projects…)
  • Device size – roughly 130x70x12mm (so that there’s little margin around the screen)
  • Weight – less than 200g
  • OS – The best mobile Linux out there, with a C/C++-based SDK and a good emulator. As an alternative: Windows Mobile support (better dev tools)
  • Buttons – Very few. QWERTY keyboard is a nice to have.
  • Connectivity – 3G/GSM, WIFI, Bluetooth
  • Sensors – A-GPS, accelerometer, 3DOF Gyro sensors
  • 3-axis compass
  • Storage – 8G and expandable
  • Memory – 1G RAM
  • Battery – Min. 3 hours while in full use of camera and network
  • Extensibility – video out for an HMD, and a USB port
  • Openness – open source…

So what do you think?

This spec was actually a swift response to a challenge presented by Intel’s Ashley McCorkle.

Many thanks for the contributions by Daniel-“Good camera!”-Wagner, Steven-“don’t forget latency!”-Feiner, Bruce-“a couple of extras”-Thomas, and Charles-“Very bright screen”-Woodward.

At ISMAR 2009 in Orlando, we are planning to organize a round-table discussion for this very purpose. Would you be interested in participating?

***update***

The experts and enthusiasts are weighing in, and as usually happens in reality (as opposed to dreams), they remind us that we need to consider trade-offs.

Charles, for example, says he would trade battery life for a lighter device. He also suggests that for professional use, a higher price (in the $1,000 range) for a higher-quality device would be reasonable.

Mobile Augmented Reality Goes Way Beyond Markers

The dust from CES 2009 has barely settled over many shiny new devices, and new advancements in Handheld Augmented Reality software are already emerging from Vienna.

Daniel Wagner and his team at Graz University have come up with new and improved capabilities.

High Speed Natural Feature Tracking on a mobile phone

We saw an early implementation of Studierstube ES at ISMAR 08, so I asked Daniel what’s new about this capability, besides being faster and more robust.

Daniel: We can now track multiple images and switch arbitrarily. I believe it is now at a level that it can really be used in practice.

Games alfresco: Looks great. Based on the video it seems that it runs on Windows Mobile 6 (ASUS p552w, iPAQ 614c). What about other platforms?

Daniel: Not bad! It is written in C/C++, [but] since this is pure math code, it could be ported easily to any platform. Our AR framework is still Windows Mobile only, although we now also have Linux support (desktop only since we lack a Linux phone). MacOS and Symbian are in the making and should be available in a month or so.

Tracking of Business Cards on a mobile phone

Daniel: On January 20th we have the official opening of our “Christian Doppler Lab” (funded by the Christian Doppler agency). For that purpose I created a small demo for tracking business cards. In the future we’ll replace the 3D content with something more useful…

I can’t talk about Daniel without mentioning WARM ’09; he is the main organizer of this winter augmented reality event, taking place February 12th–13th, 2009, at Graz University, Austria. Registration is over, but if you really want to go and have something cool to present – you may be able to convince Daniel to let you in.

Should you get your hands on this powerful technology (assuming Imagination makes it available for licensing soon) what would YOU do with it?

Live from ISMAR ’08: Tracking – Latest and Greatest in Augmented Reality

After a quick liquid adjustment, and a coffee fix – we are back with the next session of ISMAR ’08, tackling a major topic in augmented reality: Tracking.

Youngmin Park is first on stage with Multiple 3D Object Tracking. His first demonstration is mind-blowing. He shows an application that tracks multiple 3D objects – which has never been done before, and is quite essential for an AR application.

The approach combines the benefits of multiple approaches while avoiding their drawbacks:

  • Match the input image against only a subset of keyframes
  • Track features lying on the visible objects over consecutive frames
  • Combine the two sets of matches to estimate the objects’ 3D poses by propagating errors

Conclusion: multiple objects are tracked at interactive frame rates, and performance is not affected by the number of objects.

Don’t miss the demo.

~~~

The next two talks are by Daniel Wagner from Graz University, about his favorite topic: Robust and Unobtrusive Marker Tracking on Mobile Phones.

Why AR on cell phones? There are more than a billion phones out there, and everyone knows how to use them (which is unusual for new hardware).

A key argument Daniel makes: marker tracking and natural feature tracking are complementary. But we need more robust tracking on phones, and less obtrusive markers.

The goal: less obtrusive markers. Here are 3 new marker designs:

The frame marker: the frame provides the marker, while the inner area is used to present human-readable information.

The split marker (somewhat inspired by Sony’s The Eye of Judgment): the barcode is split into two strips, with thinking similar to the frame marker.

The third is a dot marker. It covers only 1% of the overall area (assuming the area is uniquely textured – such as a map).

Incremental tracking using optical flow completes the picture.

These requirements are driven by industrial needs: “more beautiful” markers and, of course, more robust tracking.

~~~

Daniel continues with the next talk, about natural feature tracking on mobile phones.

Compared with marker tracking, natural feature tracking is less robust and needs more knowledge about the scene, more memory, better cameras, and more computational power…

To make things worse, mobile phones have less memory, less processing power (and no floating point unit), and low camera resolution…

The result is that a high-end cell phone runs 10x slower than a PC, and this is not going to improve soon, because battery power limits the advancement of these capabilities.

So what to do?

We looked at two approaches:

  • SIFT (one of the best object recognition engines – though slow) and –
  • Ferns (state of the art for fast pose tracking – but is very memory intensive)

So neither approach, as-is, will work on cell phones…

The solution: combine the best of both worlds into what they call PhonySIFT (modified SIFT for phones), and then complement it with PhonyFerns – detecting the dominant orientation and predicting where the feature will be in the next frame.

Conclusion: both approaches did eventually work on mobile phones in an acceptable fashion. Combining their strengths made it work, and now both Ferns and SIFT run at similar speed and memory usage.
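The “predict where the feature will be in the next frame” part is what makes this tractable on a phone: instead of re-detecting over the whole image, you search a small window around each feature’s predicted position. A toy sum-of-absolute-differences patch tracker in that spirit (my own illustration, not the PhonySIFT/PhonyFerns code):

```python
def sad(a, b, ay, ax, by, bx, size):
    """Sum of absolute differences between a size x size patch in image a
    (top-left ay, ax) and one in image b (top-left by, bx)."""
    return sum(abs(a[ay + i][ax + j] - b[by + i][bx + j])
               for i in range(size) for j in range(size))

def track_patch(prev, cur, y, x, pred_y, pred_x, size=3, radius=3):
    """Locate prev's patch at (y, x) inside cur, searching only a small
    window of +/- radius pixels around the predicted position."""
    best, best_pos = None, None
    for cy in range(pred_y - radius, pred_y + radius + 1):
        for cx in range(pred_x - radius, pred_x + radius + 1):
            if 0 <= cy <= len(cur) - size and 0 <= cx <= len(cur[0]) - size:
                cost = sad(prev, cur, y, x, cy, cx, size)
                if best is None or cost < best:
                    best, best_pos = cost, (cy, cx)
    return best_pos
```

Searching a few dozen candidate positions instead of the whole frame is the kind of saving that makes per-frame tracking feasible on a phone; the prediction itself can come from a simple constant-velocity model or from the previous frame’s pose.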

================

From ISMAR ’08 Program:

  • Multiple 3D Object Tracking for Augmented Reality
    Youngmin Park, Vincent Lepetit, Woontack Woo
  • Robust and Unobtrusive Marker Tracking on Mobile Phones
    Daniel Wagner, Tobias Langlotz, Dieter Schmalstieg
  • Pose Tracking from Natural Features on Mobile Phones
    Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, Tom Drummond, Dieter Schmalstieg