Weekly Linkfest

It’s time again for another weekly linkfest, but first, let’s take a moment to recognize a historic event. This is the 100th post here at Games Alfresco. Last week featured this blog’s 500th comment. Let’s hope that by the 200th post, AR will take a more substantial part in our lives.

Now, without any further ado, here is some other AR news from around the web:

  • Enter the mind of Ronald Chevalier, an experience that promotes this film.
  • Georgia Tech has a new infomercial for its mixed reality design class.
  • Study finds that when it comes to in-car navigation, augmented reality is better than 3D egocentric view aids (such as plain old GPS devices). Who could have guessed?
  • Geocaching using augmented reality is such a neat idea, I’m surprised no one before Jacob at Trimagination thought about it.
  • Looking for an AR primer? Rusty Henderson has one covering the basics (with many videos), and Tom Carpenter has some more details.

The quote of the week comes from Robert Rice’s twitter feed:

My team has figured out how to build most of Rainbows End. Just matter of time and funding now… /evil scientist cackle/

I guess that if they really achieved that feat, funding will not be a problem.

And finally, the weekly video comes from GeoVector (which I previously covered here). It’s a concept video from 1995 and contains some interesting ideas. It just shows that even if you think you have a novel idea, someone has thought about it before. Jump to 4:51 for a really cool augmented Frogger:

Have a nice week!

Weekly Linkfest and Site News

Hi all!

My name is Rouli (pronounced like wooly). For the last couple of months, I’ve been blogging on augmented reality over at Augmented Times, where I cover the coming “AR revolution”. When I first became an AR enthusiast, over a year ago, I found Games Alfresco an incredible source of goodness. That’s why I was honored when Ori approached me with an offer to combine forces, in hopes of creating a central AR hub, a place for AR fans and professionals alike.
This post is the first step in creating such a hub, and in the following weeks I’ll be publishing some of my posts both over here and on Augmented Times. Please leave a comment and tell us what you think about this collaboration!

Once a week I write a post detailing all the news that didn’t get its own post, a “Linkfest”. So, for the first time on Games Alfresco, here’s this week’s linkfest:

Finally, the following video was doing the rounds this last week –

It’s iVisit‘s SeeScan, an application under development for Windows Mobile that intends to help the visually impaired, but could have other uses for AR (a bit more information here).

The Curious Raven Says: Tonchidot Has Real Demo…But He’s Not Impressed

Robert Rice (Curious Raven) just unveiled this new demo by Tonchidot, and although this time it looks real, compared to the TechCrunch50 concept video of yesteryear – he’s not impressed.

Judge for yourselves.


What strikes me is the similarity with two other apps launched recently: Wikitude and Nru.

These apps don’t do image recognition. They rely solely on GPS (for location) and sensors (a compass for direction, attitude sensors for angle, accelerometers for pose) to give you more info about what you’re looking at.
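The sensor-only approach boils down to simple geometry: from the device’s GPS fix and compass heading, compute the bearing to each point of interest, and overlay a label whenever that bearing falls inside the camera’s horizontal field of view. Here’s a minimal sketch of that idea (the function names and the 60° field of view are my own illustration, not anything from these apps):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(device_heading_deg, poi_bearing_deg, fov_deg=60):
    """True if the POI's bearing lies within the camera's horizontal field of view."""
    # Signed angular difference, wrapped into (-180, 180]
    diff = (poi_bearing_deg - device_heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```

A POI due east of you sits at bearing 90°, so it shows up when you face east but not when you face north. The accelerometer/attitude data would then place the label vertically on screen, but the heading test above is the core of it.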

Wikitude’s founder, Philipp Breuss, presented his simple yet powerful design last week at WARM: an app that tells you what Wikipedia knows about what you’re looking at (mostly sites). He boasts 60,000 downloads so far. Not too shabby.

Nru by Lastminute.com does a similar job, with two distinct differences: it omits the live video stream in favor of a purple, radar-like UI, and it tells you what Qype and fonefood know about what you’re looking at (mostly restaurants and shows). We’re still waiting for the app’s release.

These apps make do with GPS and sensors and lack visual recognition capabilities, but they have one advantage: they are getting more popular by the minute.


The #9 disruptive technology of 2009 according to Gartner

ZDNet just published Gartner’s top 10 disruptive technologies for 2009:

  1. Multicore and hybrid systems
  2. Virtualization and fabric computing
  3. Social networking
  4. Cloud computing
  5. Web mashups
  6. User interface
  7. Ubiquitous computing
  8. Semantics
  9. Augmented reality
  10. Contextual computing

But keep in mind we are looking at a serious downturn, which means only a few of the top 10 will actually get funded in 2009. The remaining technologies on the list (including #9) are dubbed by Gartner:

“projects to ponder in the future…”

Well, after all they are talking to CIOs and IT execs that need to play it safe: avoid technical disasters, keep the business side satisfied, and stay within budget – not the ideal breeding ground for disruptors…

Still, a nice tip of the hat by Gartner to the field of augmented reality, and it did raise some eyebrows among industry pundits. Matt Asay was so dazzled that he titled his article:

“Gartner’s ‘augmented reality’ on IT spending”

They say any publicity is good publicity, and this case is no exception.

On second look at the list, it hits me: we will actually need every single one of these 10 technologies to design a great augmented reality experience.

Live from ISMAR ’08: Near-Eye Displays – a Look into the Christmas Ball

The third day of ISMAR ’08, the world’s best augmented reality event, is unfolding with what we expect to be an eye-popping keynote (pun intended) by Rolf R. Hainich, author of The End of Hardware.

He is introduced as an independent researcher who started working on AR in the early ’90s – so he could be considered a pioneer…

A question on everyone’s mind: Why a Christmas ball and not a crystal ball?

Rolf jumps on stage and starts with a quick answer: Christmas balls can help produce concave mirrors – useful for near eye displays.

The first near-eye display was created in 1968 by Ivan Sutherland; in 1993, an HMD for out-of-cockpit view was built into a Tornado simulator. In 2008, we see multiple products such as NVIS, Zeiss HOE glasses, Lumus, and Microvision, but Rolf doesn’t consider them true consumer products.

Rolf defined the requirements for a near-eye display back in 1994. They included: an eye tracker, camera-based position sensing, a dynamic image generator, registration, a mask display, and holographic optics. And don’t forget: no screws, handles, straps, etc…

He then presents several visions of the future of human-machine interaction, which he dubs a 3D operating system. Then he briefly touches on the importance of sound, economy, and ecology – and how near-eye displays could save so much hardware and power, and help protect the environment.

But it requires significant investment. This investment will come from home and office applications (because of economies of scale); other markets, such as military and medical, will remain niche markets.

The next argument relates to the technology: Rolf gives examples of products such as memory, displays, cell phones, and cameras, which have experienced dramatic improvements and miniaturization in recent years. And here is the plug for his famous joke: today, I could tape cell phones to my eyes and they would be lighter than the glasses I used to wear 10 years ago…

Now, he skims through different optional optical designs with mirrors, deflectors, scanners, eye-tracker chips, etc. (which you can review in his book The End of Hardware). These designs could support a potential killer app – an eye-operated cell phone…

Microvision’s website is promoting such a concept (not a product), mostly to get the attention of phone manufacturers, according to Rolf.

Rolf then tackles mask displays, a thorny issue for AR engineers, and suggests they can achieve greater results than you would expect.

Eye tracking is necessary to adjust the display based on where the eye is pointing. It’s one thing that AR didn’t inherit from VR. But help could come from a different discipline – computer mice, which have become pretty good at tracking motion.

Other considerations, such as aperture, focus adjustment (which should be mechanical), and the eye controller, are all solvable in Rolf’s book.

Squint and Touch – we usually look where we want to touch, so by following the eye we could simplify the user interface significantly.

Confused? Rolf is just getting started and dives effortlessly into lasers, describing what exists and what needs to be done. It should be pretty simple to use. And if it’s not enough, holographic displays could do the job. Rolf has the formulas. It’s just a matter of building it.

He now takes a step back and looks at the social impact of this new technology: when everybody “wears”, anybody can be observed. Big Brother rears its ugly head. Privacy is undermined, copyright issues get out of control. But… resistance is futile.

Rolf wraps up with a quick rewind and fast-forward describing the technology ages: the PC emerged in the ’80s, AR will emerge in the 2020s, and chip implants (Matrix style) will rule in the 2050s.

Question: It didn’t look like the end of hardware…

Rolf: It’s the end of conventional hardware – we will still have hardware, but it could be 1,000 times lighter.

Tom Drummond (from the audience): There is still quite a lot of work to get these displays done, and there is still some consumer resistance to putting on these head-up displays…

Rolf: People wear glasses even for the disco – it’s a matter of fashion and of making it light – with the right functionality.

==================

From the ISMAR ’08 Program:

Speaker: Rolf R. Hainich, Hainich&Partner, Berlin

We first have a look at the development of AR in the recent 15 years and its current state. Given recent advances in computing and micro system technologies, it is hardly conceivable why AR technology should not finally be entering into mass market applications, the only way to amortize the development of such a complex technology. Nevertheless, achieving a ‘critical mass’ of working detail solutions for a complete product will still be a paramount effort, especially concerning hardware. Addressing this central issue, the current status of hardware technologies is reviewed, including micro systems, micro mechanics and special optics, the requirements and components needed for a complete system, and possible solutions providing successful applications that could catalyze the evolution towards full fledged, imperceptible, private near eye display and sensorial interface systems, allowing for the everyday use of virtual objects and devices greatly exceeding the capabilities of any physical archetypes.