Where 2.0: The World is Mapped – Now Use It to Augment Our Reality

O’Reilly’s Where 2.0 event is a tightly run ship: one track for all attendees, and fast-paced 20-minute sessions on laser-focused topics.

Low tech location services at Where 2.0

I got my fair share (3 minutes!) to educate the audience about how AR could impact our lives, as part of the Mobile Reality panel, covered by Rouli.

A show-of-hands survey confirmed that only 5% of the audience was familiar with the concept of augmented reality before the event. Not too surprising, considering the percentage among the general population is less than 1%.

What came out strongly at the event is that an unbelievable amount of data is being captured about people, places, and things around the world. This data, combined with sophisticated models (such as Sense Networks’), results in super-intelligent information about the world that we still don’t really know how to use.

My point is not a shocker: all we need is to tap into this information and bring it, in context, into people’s field of view.


For some time now, researchers in the augmented reality community have attempted to leave markers behind and leap into the great world of outdoor AR (alfresco). These pioneers typically hit walls such as low GPS accuracy, the lack of 3D-modeled environments, and the usual device-specific limitations.
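To make the GPS wall concrete: even before accuracy becomes a problem, a marker-less outdoor AR app has to turn a user's position and compass heading into an on-screen position for each point of interest. Here is a minimal sketch of that placement step — not from the post, and with the 60° field of view, 480-pixel screen width, and function names being my own illustrative assumptions:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from the user to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def screen_x(poi_bearing, device_heading, fov_deg=60, screen_w=480):
    """Horizontal pixel position of a POI label, or None if it falls outside the camera's view."""
    # Signed angle between where the camera points and where the POI lies, in [-180, 180).
    offset = (poi_bearing - device_heading + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # POI is off-screen
    return screen_w / 2 + (offset / fov_deg) * screen_w
```

With a typical 10-meter GPS error, a POI 50 meters away can shift by more than 11 degrees of bearing — a fifth of the whole field of view in this sketch — which is exactly why street-level pixel inventories like the ones below are interesting for registration.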

Where 2.0 gave the stage to two new approaches to mapping the world that may help overcome these traditional challenges: Earthmine and Velodyne’s Lidar.

Earthmine uses its own camera-based device to index reality, at street level, one pixel at a time. They have just announced Wild Style City – an application that allows anyone to create virtual graffiti on top of designated public spaces. However, at this point, you can only experience it on a PC!

Why not take advantage of their 3D pixel inventory of the world to make these works of graffiti art available to anyone on the street? All that’s needed is some AR magic and a powerful mobile device.

The second novel approach is Velodyne’s Lidar. Remember Radiohead’s funky laser (as opposed to video) clip?

They did it with Lidar.

Now Velodyne is embarking on a broader mission to map the outdoors. Check out this experiment.

Can AR researchers harness these new approaches to index reality?

5 Responses

  1. This stuff will obviously be useful eventually, I agree; the problem is getting it into a form that is actually useful. Right now, those camera-based models suffer from two huge problems: their size and the impossibility of updating them without redoing them from scratch.

    It’s really cool that data is being collected, though, since it’s the first step!

  2. […] Via Games Alfresco […]

  3. Interesting.
    I think the first is more likely useful than the second.

    Although the second is more accurate, and easier to extract 3D data from, it’s probably going to be far too costly and restrictive to do large-scale mapping like that.

    My own bets are still on Photosynth-style technology linked with just masses of photos on, say, Flickr, used to construct point data for cities and landmarks (which could, in turn, be used to position outdoor AR content).

    This still wouldn’t cover under-populated/less-photographed areas, but it’s a good start.

  4. […] clearly demonstrated that we have an unprecedented amount of information from mapping our world, Ori Inbar noted in his conference roundup. Ori […]

  5. […] our world, see my post, “Location Becomes Oxygen at Where 2.0 and WhereCamp,” and  Ori Inbar’s  Where 2.0. conference roundup. But as Ori notes, to move augmented reality […]
