X-Ray Vision via Augmented Reality

The Wearable Computer Lab at the University of South Australia has recently uploaded three demos of its researchers’ work to YouTube. Thomas covered one of them, AR Weather, but fortunately he left me the more interesting work (imho).
The next clip shows part of Benjamin Avery’s PhD thesis, exploring the use of a head-mounted display to view the scenery hidden behind buildings (as long as they are brick-walled buildings). If I understood correctly (and I couldn’t find the relevant paper online to verify this), the overlaid image is a three-dimensional rendering of the hidden scene, reconstructed from images taken by a previously positioned camera.

The interesting thing here is that a simple visual cue, such as the edges of the occluding objects, can have such a dramatic effect on the perception of the augmented scene. It makes one wonder what else can be done to improve augmented reality beyond better image recognition and raw processing power. Is it possible that intentionally degrading the augmented image (for example, making it flicker or appear tainted) would make for a better user experience? After all, users are used to seeing AR in movies, where it looks considerably lower-tech (think Terminator vision) than what we are trying to build today.

Anyway, here you can find Avery himself, presenting his work and giving some more details about it (sorry, I couldn’t embed it here, even after several attempts).

2 Responses

  1. Very neat stuff.
    I have seen photos “turned 3D” that way before.
    Some websites even let you upload a photo and play with it that way, I think.

    It’s not really 3D, more like projecting onto the inside of a box with the normals facing in. It looks 3D enough as long as you don’t get too close, so it’s perfect for this sort of use.

    Still, I think it’s particularly smart to use the brick texture and a marker…nice idea.

  2. That’s an extension of the work Denis and I did two years ago at ISMAR 2007 (incidentally, it won the Best Student Paper Award). Watch it here:

    And, this is a follow up paper:


    That year had a strong emphasis on contextual information. The guys from TU Munich also did some good work with volume data; check out their video (warning: it has cadavers):
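
The box-projection trick described in the first comment can be sketched roughly like this. This is a minimal illustration of the general idea only, not code from Avery's system: it builds the six faces of a cube centered at the origin and reverses the vertex winding so that each face normal points inward, which is what lets a photo be textured on the inside of the box. The vertex layout and winding convention here are my own assumptions.

```python
def sub(u, v):
    """Component-wise difference of two 3D points."""
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def face_normal(face):
    """Normal of a quad face from its first three vertices (right-hand rule)."""
    a, b, c = face[0], face[1], face[2]
    return cross(sub(b, a), sub(c, a))

def cube_faces_inward(size=1.0):
    """Six quad faces of an axis-aligned cube centered at the origin,
    wound so that each face normal points toward the interior."""
    h = size / 2.0
    # Each face listed counter-clockwise as seen from OUTSIDE the cube
    # (the usual outward-facing winding for a skybox-style mesh).
    faces = [
        [(-h, -h,  h), ( h, -h,  h), ( h,  h,  h), (-h,  h,  h)],  # +z
        [( h, -h, -h), (-h, -h, -h), (-h,  h, -h), ( h,  h, -h)],  # -z
        [( h, -h,  h), ( h, -h, -h), ( h,  h, -h), ( h,  h,  h)],  # +x
        [(-h, -h, -h), (-h, -h,  h), (-h,  h,  h), (-h,  h, -h)],  # -x
        [(-h,  h,  h), ( h,  h,  h), ( h,  h, -h), (-h,  h, -h)],  # +y
        [(-h, -h, -h), ( h, -h, -h), ( h, -h,  h), (-h, -h,  h)],  # -y
    ]
    # Reversing the vertex order flips the winding, so every normal
    # faces the cube's interior -- the viewer standing inside the box
    # sees the photo projected on the walls around them.
    return [list(reversed(f)) for f in faces]
```

As the commenter notes, the illusion holds only while the viewpoint stays near the box's center; move too close to a wall and the flat projection gives itself away.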

