Last July TAT (“The Astonishing Tribe”) posted a concept video of their augmented social face-card system (okay, I made that term up, what else should we call it?). The video tickled the imagination with over 400,000 views.
TAT has since teamed up with Polar Rose, a leading computer vision services company, to turn that concept into a reality. The TAT Cascades system, combined with Polar Rose’s FaceLib, gives us this prototype called Recognizr.
It’s nice to see the technology coming together, but I wonder how the social and ethical repercussions will play out. My guess is that the only way this will truly work is for it to be paired with one of the big social networks like Facebook or LinkedIn. That way your privacy controls are built right into your social information.
But then it becomes less useful for business or conference settings, which is where I see the biggest use (and which is what they demonstrated in the concept video). So if access to the information connected to your face is customizable and controlled by you, then it probably won’t cause too much heartache. Hopefully we’ll see a real product from them soon. Having the Recognizr system available at ARE2010 in June would be fantastic.
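To make that access-control idea concrete, here’s a minimal sketch (my own guess at a design, not Recognizr’s or Polar Rose’s actual system) of how information tied to a recognized face could be filtered by the relationship between the viewer and the person recognized:

```python
# Toy sketch: face-linked info is filtered per viewer, so you stay in
# control of what each context sees. All names and tiers are made up.

PROFILES = {
    "alice": {
        "public":     {"name": "Alice"},
        "conference": {"name": "Alice", "employer": "TAT", "role": "Engineer"},
        "friends":    {"name": "Alice", "email": "alice@example.com"},
    },
}

# Hypothetical relationship data, e.g. pulled from a social network.
RELATIONSHIPS = {("bob", "alice"): "conference"}

def face_card(viewer: str, person: str) -> dict:
    """Return only the fields `person` has chosen to expose to this viewer."""
    tier = RELATIONSHIPS.get((viewer, person), "public")
    return PROFILES[person][tier]

print(face_card("bob", "alice"))      # conference card: name, employer, role
print(face_card("mallory", "alice"))  # a stranger sees the public card only
```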
Metaio released their Unifeye Mobile Augmented Reality SDK at Mobile World Congress 2010.
The Unifeye® Mobile SDK is the world’s first and only software development kit for creating mobile augmented reality (AR) applications. The professional toolbox supports all major mobile platforms and features the latest image recognition technologies, 3D rendering for animations with real-time interaction, and components optimized for mobile hardware. With the Unifeye® Mobile SDK software it is possible to create fascinating marketing experiences, intuitive information design, mobile augmented reality games, or innovative retail solutions. Based on metaio’s proven Unifeye® AR platform, it is possible to easily develop and deploy solutions at the interface between the real and virtual worlds.
Having used their beta Unifeye software last year, I can attest to its ease of use. However, I have not used the mobile SDK, so there may be some differences.
Yahoo! and augmented reality leader Total Immersion have come up with some nifty ways to bring consumers into the action at the world’s largest winter sporting event. Yahoo!’s “Fancouver” exhibit enables passers-by to insert themselves into the festivities in a host of guises. Kicking off yesterday, Feb. 12, Fancouver features an entertaining and versatile digital out-of-home display, with dual windows that use augmented reality (AR) face tracking and brochure tracking, respectively, to give fans two distinctly different views of the proceedings.
Peter Meier, the CTO of Metaio, will also be giving a speech on Wednesday at 4pm within the session: “Mobile Innovation — A Vision of 2020.” This session will: “take a visionary look into the services and applications that mobile communication will provide in 10 years’ time and the impact they will have on the way we live and communicate in 2020. The latter half of this session will look at Augmented Reality.”
Thanks Jan from Augmented Blog for the update. He promises some exciting releases and a movie after MWC2010 has concluded.
And once again, we won’t be able to attend, so if you’re there, keep us updated about your experience.
Next week, February 15th-18th, is the Mobile World Congress in Barcelona. There will be a variety of AR-related events during the MWC.
AR Showcase
Christine Perey has organized an AR Showcase on Wednesday, February 17th, from 5:00 to 7:00, so AR companies can demonstrate their services and products to customers. Designers will also have a chance to compare and contrast their products against the competition. Several companies have already confirmed their attendance.
You can find the Showcase in the northeast corner of the courtyard. Announcements for the AR Showcase can be tweeted to #arshow (changed for length).
The Mobile AR Summit is an invitation-only event. If you’re interested in joining, please contact Christine Perey at cperey@perey.com. More information can be found here.
Mobile Premier Award in Innovation
Monday, February 15, 2010, 15:00 to 20:00
Petit Palau of Palau de la Musica
Mobilizy, with Wikitude, is one of the 20 finalists for the “Mobile Premier Award in Innovation”: http://www.mobilepremierawards.com/
Martin Lechner, CTO of Mobilizy, will present.
AR Summit
Wednesday, February 17, 13:00 to 19:00
Location: to be announced.
Mobilizy CTO Martin Lechner presents a position paper, “ARML, an Augmented Reality Standard.” ARML is currently being reviewed by the W3C (World Wide Web Consortium). At 17:00 there will be a Wikitude showcase presentation.
We won’t be able to attend, so if you’re there, keep us updated about your experience.
Sensor systems like cameras, markers, RFID, and QR codes are usually used one at a time to align our augments. One challenge for a ubiquitous computing environment will be meshing the various available sensors together so computers have a seamless understanding of the world.
This video from the Technical University of Munich shows us how a multi-sensor system works. It appears to be from 2008 (or at least the linked paper is). Here’s the project description:
TUM-FAR 2008: Dynamic fusion of several sensors (gyro, Ubisense UWB, flat marker, ART). A user walks down a hallway and enters a room, seeing different augmentations as he walks: a sign at the door and a sheep on a table.
In the process, he is tracked by different devices, some planted in the environment (UWB, ART, paper marker), and some carried along with a mobile camera pack (gyro, UWB marker, ART marker). Our Ubitrack system automatically switches between different fusion modes depending on which sensors are currently delivering valid data. In consequence, the stability of the augmentations varies a lot: when high-precision ART-based optical tracking is lost (outside ART tracking range, or the ART marker covered by a bag), the sheep moves off the table. As soon as ART is back, the sheep is back in its original place on the table.
Note that the user does not have to reconfigure the fusion setup at any point in time. An independent Ubitrack client continuously watches the current position of the user and associates it with the known range of individual trackers, reconfiguring the fusion arrangements on the fly while the user moves about.
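To make the switching idea concrete, here’s a minimal sketch (my own toy code, not Ubitrack’s actual implementation) of a fusion manager that always serves the most precise tracker currently delivering valid data:

```python
from typing import Optional, Tuple

Pose = Tuple[float, float, float]  # x, y, z in the world frame (toy example)

class Tracker:
    """One sensor source: optical marker, UWB, gyro, etc."""
    def __init__(self, name: str, precision: float, pose: Optional[Pose] = None):
        self.name = name
        self.precision = precision  # expected error in meters; lower is better
        self.pose = pose            # None means tracking is currently lost

    def read(self) -> Optional[Pose]:
        return self.pose

class FusionManager:
    """Serve the most precise tracker that currently has valid data,
    so the application never has to reconfigure anything by hand."""
    def __init__(self, trackers):
        # Best precision first, e.g. ART optical before UWB before gyro.
        self.trackers = sorted(trackers, key=lambda t: t.precision)

    def current_pose(self) -> Optional[Pose]:
        for tracker in self.trackers:
            pose = tracker.read()
            if pose is not None:
                return pose  # first valid source wins
        return None          # everything lost: the sheep leaves the table

art = Tracker("ART optical", precision=0.001, pose=(1.0, 0.0, 0.8))
uwb = Tracker("Ubisense UWB", precision=0.15, pose=(1.1, 0.1, 0.8))
fusion = FusionManager([uwb, art])

print(fusion.current_pose())  # ART pose while optical tracking holds
art.pose = None               # ART marker covered by a bag
print(fusion.current_pose())  # falls back to the coarser UWB pose
```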
The project brings up an interesting question: is anyone working with multi-sensor systems? We know we’ll need a mix of GPS, local image recognition, and markers to achieve our goals, but is anyone working on this complex problem for a real product? We’ve seen good image recognition with Google Goggles and SREngine, and GPS/accelerometer-based AR is popular, but I’d like to see an app use both to achieve its aims.
If you’re working on a multi-sensor project, we’d love to hear about it at Games Alfresco.
The biggest news about the movie Avatar has been the 3D experience and the way it’s blown the doors off previous records. The movie has garnered huge success because it pushed the boundaries of technology and told an interesting story.
I loved the movie and the way 3D helped give more perspective to the environment. My own Star Trek-loving mother didn’t even realize the Na’vi were CGI. She thought they were people in blue suits (really… I’m not joking). And though storytelling will become important to advanced augmented reality applications later on, it’s not what I wanted to point out.
James Cameron is part art-dude and part tech-geek. He waited for years for the technology to ripen enough to make the movie the way he wanted. One of the innovations he created for the movie was the Fusion camera for the live-action sequences. Normally, scenes are filmed in front of a green screen and the CGI is added afterwards. The actors play a game of make-believe, and the director has to guess at how the environment will unfold around them. CGI movies tend to feel flat because the emotions are added later by the special-effects team, not by the actors on the scene. Cameron has changed all that.
The Fusion camera system is an augmented reality viewport into the CGI world. When Cameron was filming the actors, he was able to direct them and see the results. When he looked through his camera, he could see them interacting with the world of Pandora as nine-foot Na’vi and help them tell the story. The camera itself wasn’t even a real camera in the sense that it filmed the action; it let Cameron see the action being recorded by multiple sensors and cameras. Once the action was recorded, he could go back and reshoot it from a different perspective, even with the actors gone.
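As a toy illustration of why that reshoot trick works (my own sketch, nothing to do with Cameron’s actual pipeline): once a performance is recorded as tracked 3D points, replaying it from any viewpoint is just re-projecting those points through a new virtual camera pose.

```python
import numpy as np

def project(points_world, cam_pos, cam_rot, focal=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of 3D world points into a virtual camera.
    cam_rot is a 3x3 world-to-camera rotation matrix."""
    pts_cam = (np.asarray(points_world) - cam_pos) @ cam_rot.T
    uv = focal * pts_cam[:, :2] / pts_cam[:, 2:3]  # perspective divide
    return uv + np.array([cx, cy])

# A recorded performance: one tracked 3D point per frame (say, an actor's wrist).
performance = [(0.0, 1.5, 5.0), (0.1, 1.5, 5.0), (0.2, 1.6, 5.1)]

# "Reshoot" the identical take from two different virtual camera positions.
R = np.eye(3)  # camera looking straight down the +z axis
take_1 = project(performance, cam_pos=np.array([0.0, 1.5, 0.0]), cam_rot=R)
take_2 = project(performance, cam_pos=np.array([0.5, 1.5, 0.0]), cam_rot=R)
print(take_1)  # the same motion, seen from two viewpoints
print(take_2)
```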
Facial expression was another hurdle they had to jump to make the movie work. So they hung little cameras off the actors’ heads to capture their range of facial expressions, and then tweaked algorithms to get the digital faces to react correctly. Even now, we can barely pull off this trick.
Together, these systems add up to an immersive augmented reality world. While we don’t yet have the HMDs, complete camera coverage, and processing power to pull off the world of Pandora, time and continued improvement will make lesser versions possible.
If you look at the Fusion camera system, the camera is essentially the HMD display, albeit a large and bulky one. Multiple cameras, RFIDs and tracking markers help the computer understand the world, and complex and powerful computers put all the pieces together. I can only imagine that this system could be turned into a mind-blowing game in an empty warehouse with the proper HMDs.
Essentially, the movie Avatar teaches us that augmented reality has sky-high practical possibilities. All the components of Cameron’s Fusion system can be ported to the commercial world (not now, but in three or four years) and used to make complex and believable environments overlaid on our own world.
In the future, you too can be a nine-foot tall blue Na’vi and you won’t even have to have your soul sucked through a fiber-optic tree.
Total Immersion leads the augmented reality industry in total projects (around 125 last year, and they’re expecting over 250 in 2010). They’ve successfully created worldwide campaigns like Coke Zero and the Avatar i-Tag game line. So when they talk about augmented reality, I want to make sure I’m taking notes. Iriny Kuznetsova from 2Nova interviewed Nicolas Bapst about the company and their current activities. The interview was short but had a few interesting insights.
Total Immersion has done work for the military in creating augmented reality solutions that put simulated objects on the battlefield. This is a much cheaper alternative to war-gaming with real equipment. Hopefully this encourages the military to fund more see-through AR HMDs.
Total Immersion expects AR mobile marketing to be the big trend in the coming year and shows off a brief demonstration. They’re converting their PC software to mobile to take advantage of the smartphone boom. I found interesting Nicolas’ observation about how augmented reality marketing applications give companies direct access to their customers. By moving people from static newspapers to the computer (and especially the smartphone), companies can find out exactly who is interested in their product and then leverage social media to spread the word. Nicolas explains they doubled time spent on websites by adding augmented reality content. I’m curious whether this increase will hold as the novelty of augmented reality wears off.
Nothing groundbreaking here, but it’s worth a few minutes if you’re not familiar with the company.
Robert Rice, the CEO of Neogence Enterprises and an augmented reality blogger at Curious Raven, spoke back in June at Mobile Monday. His speech targets the intermediate developer of augmented reality. If you’re new to the technology, most of this speech will go over your head.
The video is long, but if you’re serious about augmented reality and the future of mobile, the speech hits the major points about the industry. And at 40 minutes, I’d give it a good five-minute buffer if you’re going to watch the whole thing.
“Mobile is dead,” said Robert to begin his speech. He goes on to explain that it should be brought back to life in a different format. Reincarnated, if you will. The point-to-point communication we use right now will need to transform into an immersive, predictive meta-environment; it can’t just be another way to access the internet.
Robert briefly explains the history of communications and tells us that if we do augmented reality correctly, it’ll join the pantheon. If we can remove the excess hardware of keyboards and screens in our mobile devices and convert to sunglasses, then the computer can become a buckle or a watch: inconspicuous computing. We need to get away from the 2D mindset of flat screens and create 3D spaces where we can throw a YouTube video to another person through our AR environment, or send an SMS as a paper airplane.
Augmented reality needs more than graphics over video, Robert goes on to explain. It should move past being merely interactive and become more dynamic and meta. It should answer the who, what, where, when, why, and how. Computers have been vague points of demographic data because multiple people can use them, but mobile is an individual thing, which allows us to break away from aggregate statistics and start answering questions for individuals.
Robert goes on to talk about venture capital, which he believes doesn’t get AR yet, and smart cities, and he gives developers the suggestion to keep the tagging of the world in mind, so we don’t have to go back and retag later.
Overall, I have to say I enjoyed the speech, though I was hoping Robert would get into specifics about Neogence Enterprises and their recent Mirascape announcement. And having spoken to him at length at ISMAR09 about the details of augmented reality, I thought he might elaborate on his anecdotes about furries and microtransactions. But maybe those weren’t appropriate for MOMA, anyway.
(edit note: while this was originally filmed back in June and even covered on GA by Ori, it’s still very relevant. Enjoy.)
Unless you’ve been living in a box, you know that Apple finally unveiled its tablet, the iPad, today. The biggest surprise of the announcement was the lack of a camera on the lap-sized PC. No camera, really? If you don’t believe it, check the official spec page.
Besides the implications for augmented reality, which I’ll get to in a moment, the iPad not having a camera is a giant fail. I actually expected the iPad to have two cameras: one forward-looking so the iPad could function as a giant Polaroid, and the other user-facing so videos could be recorded. We could forgive eliminating one of them, probably the forward-looking one since it’s so big, but not having the user-facing camera is inexcusable.
The series of tubes we call the Internet has moved beyond simple text. People want to record and upload videos straight to YouTube without having to yank out their dust-covered handheld, or use Skype to call their friends while they’re watching the game.
The Apple iPad not having even one camera is like hooking up satellite without DVR. Sure you can do it, but why?
Of course, I’m being overly melodramatic here.
The real point of the iPad is competition for the Kindle, the eReader, and the Nook. Apple wants to revolutionize the way we read magazines, books, and newspapers. Functionality for augmented reality isn’t even an afterthought. How many people would be using their camera while lying in bed reading an interactive book?
And is this a major setback for augmented reality? Not really. A giant-sized magic lens would add a fun new canvas to play with, but really wouldn’t be a game changer. Additionally, Apple isn’t expecting the tablet market to come even close to the smartphone market in sales.
So in the end, the iPad is a fail for augmented reality, but it will probably give Jeff Bezos nightmares for months as he wonders how he’s going to compete against a Pentium when he’s selling a Commodore 64.
And maybe, just maybe, Steve Jobs is still working on a see-through AR-enabled HMD. Then I’d say: all is forgiven, Stevie, I’m coming home to Apple.