Hacker’s Cafe celebrated Virtual Tokyo day with an augmented reality game dubbed – in Japanese tradition – the Cyber Star Rally Challenge.
Hackers hacked some virtual stars into a city square and let loose a pack of other hackers riding hacked MTBs, armed with GPS units, Google 3D view, and (hacked) augmented reality software.
They were tasked with one mission only: collect as many stars as possible.
You have to see it to believe it:
How did it work?
Your guess is as good as mine.
(unless you can read the instructions in Japanese)
When Microsoft showed SecondLight at PDC 2008 last year, Stimulant was inspired to do something cool with Microsoft’s Surface.
Here is the prototype they built. It takes advantage of Surface’s object recognition capabilities to identify the position of one or more iPhones, and turns those phones into “see through displays” revealing a second layer of information:
Stimulant is excited by the potential of this as captured in their own words:
…adding a layer of personalized information on top of a public computing experience
SecondLight’s demo was quite inspiring, taking a different angle on this idea: by hovering tracing paper over an image (think looking glass), it can reveal additional information about a star constellation, or show street names on an aerial photo.
Now imagine this capability going beyond “public computing”; imagine having a “magic lens” that allows you to see through anything, or add personalized information to any real life object you look at.
This is one of the stimulating promises of augmented reality. And in this demo, Stimulant took one of the first steps toward this vision.
Ohan Oda from Columbia University in New York just released his new version of Goblin XNA. Kudos!
Goblin XNA is a development framework for augmented reality apps with a focus on games. It is based on Microsoft’s popular XNA Game Studio 2.0, which enables game developers to build for Microsoft gaming platforms (PC, Xbox, Windows Mobile).
Ohan built it as a one man show (under the supervision of Steven Feiner and with help from his lab colleagues) as part of his PhD research project.
Has anyone tried it so far?
Ohan says he just released it to the public, so as of now only he and his lab members have. However, the framework was used in a 3DUI and Augmented Reality course, so approximately 60 students have used it so far.
We apologize to those of you who waited very long (some of you probably waited for almost a year).
Source code, API documentation, user manual, installation guide, tutorials, and a relatively large-sized project (an AR domino game) are included with the release.
For questions and bug reports, please do NOT email me directly; instead, please post your questions and bug reports through CodePlex. I can’t guarantee a quick response since I’m the only developer of Goblin XNA, but I will try my best to answer your questions and fix bugs.
Thanks,
Ohan
Ohan will be working on the framework for his research at Columbia for another couple of years – so until then you can count on him to continuously update the framework for both bug fixes and new feature additions.
Try it and show us what kind of reality experiences you can build.
Here is a first take at defining the dream MID for augmented reality (2009-2010 time frame):
Manufacturer – a credible leader, with a friendly content distribution channel
Price – Ideally sub $200. Initially not more than $400.
CPU – Dual core, 1.3 GHz, with a Floating Point Unit and SIMD extensions
GPU – integrated with performance similar to TI’s OMAP3 and NVidia’s Tegra (the competition!)
Screen – 4.5 Inch, Min 800×480 resolution, Multitouch, and a very bright screen
Camera – A GOOD CAMERA with a quality lens, video recording at 320×240 or preferably 640×480 (VGA) at 30fps at a good quality (noise, contrast, colors, etc) even under low lighting. Zoom and auto focus a bonus. Front camera – bonus.
Low latency for getting the camera image to the CPU/GPU and in turn to the display
Zero-latency video output from the device for a head-worn display (digital or analog)
Low-latency inputs for external sensors (such as a tracker on the head-worn display) and cameras (on the head-worn display).
GOOD graphics drivers, OpenGL 2.0 (unlike the current Intel OpenGL drivers on Atom, which are almost a show stopper for many projects…)
Device size – roughly 130x70x12mm (so that there’s little margin around the screen)
Weight – less than 200g
OS – The best Mobile Linux out there, with C/C++ based SDK and a good emulator. Also as an alternative: Win Mobile support (better dev tools)
Buttons – Very few. QWERTY keyboard is a nice to have.
Connectivity – 3G/GSM, WIFI, Bluetooth
Sensors – A-GPS, accelerometer, 3-axis gyro, 3-axis compass
Storage – 8 GB, expandable
Memory – 1 GB RAM
Battery – Min. 3 hours while in full use of camera and network
Extensibility – video out for an HMD, USB port on it.
Openness – open source…
So what do you think?
This spec was actually a swift response to a challenge presented by Intel’s Ashley McCorkle.
Many thanks for the contributions from Daniel “Good camera!” Wagner, Steven “don’t forget latency!” Feiner, Bruce “a couple of extras” Thomas, and Charles “very bright screen” Woodward.
In ISMAR 2009 in Orlando, we are planning to organize a round table discussion for this very purpose. Would you be interested in participating?
***update***
The experts and enthusiasts are weighing in, and, as usually happens in reality (as opposed to dreams), they remind us that we need to consider trade-offs.
Charles for example says he would trade off battery time for a lighter device. He also suggests that for professional use – a higher price ($1000 range) for a higher quality device would be reasonable.
Take pictures of your Rubik’s Cube’s faces with your iPhone and the app will guide you through the fewest moves required to solve it.
Now, take it a step further and imagine the iPhone (or better yet – goggles) continuously watching your cube and cleverly guiding you on every move – for the fastest solution ever.
Wouldn’t it be cool?
Well, at least for Cube obsessed kids, it would.
In any case, I’d mark this app as an important milestone towards putting augmented reality to use for the good of mankind.
Today it teaches us how to solve the Rubik’s cube. Tomorrow it will teach us everything else.
Obama’s administration is promoting augmented reality.
Really?
How else would you explain the “Get Up and Play” campaign launched by the Department of Health & Human Services?
“Go online, just don’t stay too long” says the ad. “Be a player, get out there and play” it presses on.
Now, why would a 21st century child go out there and play, confront reality and deal with its harsh limitations when she can Wii in or stay inside the Xbox?
Here’s how the drama unfolds:
On the one hand, kids are not stimulated by reality as their forbears were. It’s not as fun.
Reality isn’t what it used to be.
On the other hand, kids need to spend less time in front of the screen. Parents always knew it in their gut – but now it’s scientifically proven, thanks to a mega-study of 173 other studies by the National Institutes of Health, which concluded unequivocally:
Too much TV, games, and internet harm kids’ health.
So, how do you resolve this conflict?
That’s where the augmented reality industry comes to the rescue.
We’ll make them go out, interact with their real surroundings – and they’ll totally think it’s all fun and games.
How are we going to do it?
Reality is actually pretty cool as it is, we just need to make it a bit more significant to them. Add a little spice. A sparkle. A challenge. A dream. A new dimension or a hidden depth. A reward for an unnoticed deed.
All these would make them do anything – even go out there and play.
The Health Department dreamed it up to the extreme with an animated donkey.
[The author: the “Get out and play” campaign was reportedly launched by the Health Department two years ago. I only noticed it now…on Hulu…while watching the Colbert Report…I trusted the Obama administration to have not only inherited but embraced this campaign, both in practice and in spirit.]
While we are on the topic of touristic applications, LastMinute.com labs, a travel site, has found a new way to tackle the problem of interfacing with an augmented reality world – for tourists. It is covered in an article on FastCompany.
The application Nru (pronounced “Near You”) uses the GPS, compass, and other sensors available on Google’s G1 phone. As you point it in different directions while strolling down a London street, Nru displays signals for the attractions around you; the usual suspects include restaurants, movies, and shows.
Now, here comes the interesting part: the user interface. Hold the phone parallel to the ground and it displays a radar-like view of your touristic targets. Hold it vertically and it transforms into a purple-black “heat” sensor highlighting worthy targets in front of you.
To get more info about a selected target use the “old” gesture: just touch it.
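That orientation-driven mode switch can be sketched as a simple tilt threshold on the accelerometer’s gravity vector. Here is a minimal Python sketch; the function name, axis convention (z pointing out of the screen), and threshold angles are illustrative assumptions, not Nru’s actual code:

```python
import math

def choose_view(ax, ay, az):
    """Pick a display mode from a 3-axis accelerometer reading (m/s^2).

    Assumes the readings are gravity-dominated (device held roughly still),
    with the z axis pointing out of the screen -- a hypothetical convention.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return "unknown"
    # Angle between the screen plane and the ground plane.
    tilt = math.degrees(math.acos(abs(az) / g))
    if tilt < 30:    # screen roughly face-up: show the radar/map view
        return "radar"
    elif tilt > 60:  # screen roughly upright: show the "heat" camera view
        return "heat"
    return "transition"  # in-between angles: keep the current view
```

For example, a phone lying flat reads roughly (0, 0, 9.8) and maps to the radar view, while one held upright reads roughly (0, 9.8, 0) and maps to the heat view; the dead zone between 30° and 60° avoids flickering between modes.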
They claim to pull information from a number of sources including Qype and fonefood. Not surprisingly – both are London focused information services.
Some will argue that since it doesn’t overlay (register) the signals on top of what’s in your field of view – it’s not a pure augmented reality implementation, but rather a location-based app.
But the British accent certainly masks that thought and adds a certain je ne sais quoi to the demo. Absolutely fabulous.
Nru is now available on the Android Market – but only for UK customers. Top bollocks.
In a land where what you see is what you get, those who can see more – even with one eye – are kings.
Kijin Shin from Yanko Design believes in it and makes the point with this interesting concept design. He calls it the “Third Eye,” a concept designed for travelers.
See something interesting? Just hold the Third Eye up to your eye like a monocle and the device pulls up all the relevant historical, travel, shopping, and tourist information.
Augmented reality lends itself well to touristic applications. When people explore new places – extra (augmented) information in context is highly sought after.
What’s interesting in this one – is the form factor and the user interface.
Sometimes you have to reduce features (one eye only) to achieve simplicity. That has the potential to drive massive adoption.
If you are into the pros and cons, check out the interesting discussion on the site featuring the usual supporters vs. skeptics. One commenter compared it to the Celestron SkyScout.
We’ll see if Kijin’s design raises interest among hardware manufacturers. Then we’ll find out whether the one-eyed man becomes king – or whether he’ll face a land populated with two-eyed specs.
LEGO, the Danish toy manufacturer will test launch its “DIGITAL BOX” in selected toyshops and LEGO® stores worldwide. This interactive terminal will utilize innovative technology supplied by Metaio in the form of a software program specially-developed for the LEGO Group by the Munich-based experts in augmented reality solutions. Together with a camera and display screen, the software lets LEGO packaging reveal its contents fully-assembled within live 3D animated scenes.
The press release continues:
The partnership between Metaio and one of the largest toy manufacturers in the world is a truly major milestone in the history of the company.
Indeed, Lego is associated with playful, innovative toys and will certainly expose the concept of augmented reality to many kids around the world.
Kudos to Metaio for a great splash at the onset of the AR year.
Now who’s going to post the first video of this experience?