The last session of ISMAR ’08 is about to begin, and it focuses on Rendering and Scene Acquisition: making augmented reality even more realistic.
First on stage is Yusaku Nishina with a challenging talk: Photometric registration by adaptive high dynamic range image generation for augmented reality.
His goal: developing photorealistic augmented reality with High Dynamic Range (HDR) images.
Estimating the lighting environment around virtual objects is difficult with low-dynamic-range cameras. To overcome this problem, they propose a method that estimates the lighting environment from an HDR image and renders virtual objects using an HDR environment map. The HDR image is generated from multiple images captured with various exposure times, and virtual objects are overlaid in real time by tone-mapping the rendered image to match the exposure time of the camera.
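To make the idea concrete, here is a minimal sketch (my own, not the authors’ code) of the two steps described above: recovering scene radiance from bracketed exposures, assuming a linear camera response, and mapping the HDR value back to the camera’s current exposure so the rendered object blends in. All names and thresholds are illustrative.

```python
# Minimal sketch of HDR assembly from bracketed exposures, plus a
# tone-mapping step back to a target exposure. Assumes a linear camera
# response; names and thresholds are assumptions, not from the paper.

SATURATED = 250  # treat 8-bit values near 255 as clipped

def radiance_from_exposures(pixels, exposure_times):
    """Estimate scene radiance for one pixel from several exposures.

    pixels: 8-bit values of the same pixel at each exposure time.
    Saturated or black samples are discarded; the rest are averaged
    after dividing out the exposure time.
    """
    samples = [p / t for p, t in zip(pixels, exposure_times)
               if 0 < p < SATURATED]
    return sum(samples) / len(samples)

def tone_map(radiance, exposure_time):
    """Map HDR radiance back to an 8-bit value for a given exposure,
    mimicking the live camera so the rendered object matches the feed."""
    return min(255, round(radiance * exposure_time))

# Example: the same pixel captured at 1/100 s, 1/25 s and 1/5 s.
pixels = [20, 80, 255]          # the longest exposure is clipped
times = [0.01, 0.04, 0.2]
L = radiance_from_exposures(pixels, times)
print(tone_map(L, 0.04))        # re-render for the 1/25 s camera frame
```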
Now you are ready to watch the resulting effect. Incredible.
[youtube=http://www.youtube.com/v/M53Tqqdk9w0]
~~~
Next on stage is the soon-to-be-hero-of-the-show Georg Klein (more on that later…) with Compositing for Small Cameras.
Blending virtual items into real scenes, even with small cameras. Video from such cameras tends to be imperfect (blur, over-saturation, radial distortion, etc.), so when you impose a virtual item it tends to stick out in a bad way. Since we can’t improve the live video, we instead adapt the virtual item to match the video at hand. Simply put, Georg samples the background and degrades the rendered image to match the video (blur, radial distortion, rotation, color saturation, etc.), and he does it in 5 milliseconds on a desktop… For details check the PDF paper; take a look for yourself and tell me if it works on Kartman:
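For intuition, here is a toy one-row version of the degrade-to-match idea: blur the crisp virtual render with a kernel like the one seen in the video, then clip highlights the way an over-saturating sensor would. The kernel and clipping level are my assumptions, not Klein’s actual pipeline.

```python
# Toy degrade-to-match step: blur a crisp virtual scanline, then clip
# it to the camera's saturated white level. Illustrative only.

def box_blur(row, radius):
    """Blur one row of pixel intensities with a simple box kernel."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def match_camera(row, radius, white_level=240):
    """Degrade a crisp virtual row so it resembles the camera feed:
    blur it, then clip highlights like an over-saturating sensor."""
    return [min(white_level, v) for v in box_blur(row, radius)]

crisp_edge = [0, 0, 0, 255, 255, 255]   # hard virtual edge
print(match_camera(crisp_edge, 1))      # softened edge, clipped whites
```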
Done! Georg is already working on the next challenge.
~~~
Next up, Pished Bunnun introduces his work: OutlinAR: an assisted interactive model building system with reduced computational effort.
Building 3D models interactively and in place (in-situ), using a single camera and low computational effort, with a makeshift joystick (a button and wheels).
In this case the video does a better job at explaining the concept than any number of words would…
Pished demonstrates it’s fast and pretty robust. You judge for yourself.
If you absolutely need more words about this – start here.
The team’s next challenge: make curved lines…
~~~
In the very last talk of the event, Jason Wither courageously takes on another challenge in perfecting augmented reality with his talk: Fast Annotation and Automatic Model Construction with a Single-Point Laser Range Finder.
Jason is using a laser range finder of the kind typically used by hunters (though he will not be shooting anything or anybody), mounted on the head or handheld, in conjunction with a parallel camera. First he wants to create an annotation. That’s totally trivial. But you can then orient the annotation according to a building, for example.
Next, he is going to correct occlusion of virtual objects by real objects for improved augmented realism. Just click before and after the object and pronto:
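The idea boils down to a per-pixel depth test: once the laser has given you the depth of the real occluder, a virtual pixel is drawn only where it is closer to the camera than the real surface. A toy one-scanline sketch (my paraphrase of the talk, not Wither’s code):

```python
# Toy per-scanline occlusion: draw the virtual pixel only where the
# virtual object is closer than the laser-measured real surface.
# Illustrative names; not the actual system.

def composite(real_row, real_depths, virtual_row, virtual_depth):
    """real_row: camera pixels; real_depths: laser depth per pixel;
    virtual_row: rendered pixels (None where the object is absent);
    virtual_depth: distance of the flat virtual object from the camera."""
    out = []
    for real_px, d, virt_px in zip(real_row, real_depths, virtual_row):
        if virt_px is not None and virtual_depth < d:
            out.append(virt_px)   # virtual object is in front: draw it
        else:
            out.append(real_px)   # real surface is closer: it occludes
    return out

# A lamppost at 4 m occludes the middle of a virtual sign at 6 m.
print(composite(['a', 'b', 'c', 'd', 'e'],
                [10, 10, 4, 4, 10],
                [None, 'V', 'V', 'V', None], 6))
```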
Finally, he will create a 3D model of an urban environment semi-automatically, by creating a depth map courtesy of the laser. To achieve that, he’s using a fusion process. You’ve got to see that video; the laser’s red line advancing on buildings reminds me of the blob swallowing the city in that quirky Steve McQueen movie.
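The laser only measures depth at a few pixels, so the fusion step has to spread those sparse readings across the image. A crude stand-in for that idea: give each unknown pixel the depth of the nearest laser sample with a similar color. The thresholds and the nearest-similar-sample rule are my own simplification, not the paper’s actual fusion process.

```python
# Toy sparse-to-dense depth fill: each pixel without a laser reading
# borrows the depth of the nearest sample of similar color.
# A crude illustration of the fusion idea, not the paper's method.

def fill_depths(colors, sparse, color_tol=30):
    """colors: per-pixel grey values; sparse: dict pixel_index -> depth.
    Unknown pixels take the depth of the nearest laser sample whose
    color is within color_tol, else stay None."""
    out = []
    for i, c in enumerate(colors):
        if i in sparse:
            out.append(sparse[i])
            continue
        best, best_dist = None, None
        for j, d in sparse.items():
            if abs(colors[j] - c) <= color_tol:
                dist = abs(i - j)
                if best_dist is None or dist < best_dist:
                    best, best_dist = d, dist
        out.append(best)
    return out

colors = [200, 205, 60, 62, 210]   # bright wall, dark doorway, wall
sparse = {0: 8.0, 3: 12.5}         # two laser hits
print(fill_depths(colors, sparse)) # dense (if crude) depth row
```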
In conclusion, this is a really low-cost and fast approach for modeling and annotating urban environments and objects. That capability will become extremely handy once Augmented Reality 2.0 picks up and everyone wants to annotate the environment (aka draw graffiti without breaking the law).
Next is the event wrap up and the results of the Tracking Competition. Stay tuned.
====================
From the ISMAR ’08 program:
Rendering and Scene Acquisition
- Photometric registration by adaptive high dynamic range image generation for augmented reality
Yusaku Nishina, Bunyo Okumura, Masayuki Kanbara, Naokazu Yokoya
- Compositing for Small Cameras
Georg Klein, David Murray
- OutlinAR: an assisted interactive model building system with reduced computational effort
Pished Bunnun, Walterio Mayol-Cuevas
- Fast Annotation and Automatic Model Construction with a Single-Point Laser Range Finder
Jason Wither, Chris Coffin, Jonathan Ventura, Tobias Hollerer
Filed under: AR Engines, AR Events | Tagged: Bunyo Okumura, Chris Coffin, compositing, David Murray, Georg Klein, ISMAR 08, Jason Wither, Jonathan Ventura, Masayuki Kanbara, Naokazu Yokoya, OutlineAR, Pished Bunnun, Rendering, scene acquisition, Tobias Hollerer, Walterio Mayol-Cuevas, Yusaku Nishina