LearnAR – Augmented Reality for Schools

Augmatic, the British company founded by James Alliban (you may remember him from his augmented reality business card), has launched a new tool called LearnAR.

It is a pack of ten curriculum resources that teachers and students explore by combining the real world with virtual content through a webcam.

Some of those demos are brilliant. The Geiger counter and physiology ones seem to provide real value when teaching those subjects. On the other hand, I don't see any benefit to using the English application over an ordinary computer game. In a sense, LearnAR shows how gimmicky and how useful AR can be at the same time.

For another perspective on using AR for educational purposes, you should really check out Gail Carmichael’s blog. Since she’s doing her PhD on this subject, she has blogged about some very interesting concepts.

As for LearnAR, you can learn more about it on James Alliban’s blog.

2D Sketches Become 3D Reality

The teams at the HIT Lab New Zealand and the Visual Media Lab at Ben-Gurion University, Israel, have uploaded a new video presenting the results of their ISMAR09 paper, “In-Place 3D Sketching for Authoring and Augmenting Mechanical Systems”. Since the paper is not online yet, I can’t really tell how much of it is truly automatic, or how robust it is, but the video is nothing less than magical:

I really envy those future physics high-school students…

Augmented Field Guides

The New York Times ran a story yesterday about a new breed of field guides, made not of paper but of data bytes and computer vision algorithms.
The article mostly revolved around a new iPhone application that enables users to take photographs of leaves and, by doing so, identify the tree they belong to.

From the article:

The computer tree guide is good at narrowing down and finding the right species near the top of the list of possibilities, he said. “Instead of flipping through a field guide with 1,000 images, you are given 5 or 10 choices,” he said. The right choice may be second instead of first sometimes, “but that doesn’t really matter,” he said. “You can always use the English language — a description of the bark, for instance — to do the final identification.”

The technology comes from this group at Columbia University; on their site you can find the academic papers describing the algorithms used in earlier incarnations of the application. Now, I know some of you will say that this is not AR, since no image registration is involved. Well, it fits my definition of AR (it augments our reality), and looking at a previous prototype that involves a HUD and fiducial markers makes things even more obvious:
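To get a feel for the "5 or 10 choices instead of 1,000 images" idea, here is a minimal sketch of top-k retrieval: compare a descriptor extracted from the query photo against a database of known species and return the closest few. This is purely illustrative, not the Columbia group's actual algorithm; the species names and three-number descriptors below are made up, where a real system would extract rich shape features from the leaf contour.

```python
# Illustrative top-k nearest-neighbor lookup for leaf identification.
# Assumption: each species is represented by a small descriptor vector;
# real systems derive these from leaf-contour images.

import math

# Hypothetical database: species name -> shape-descriptor vector.
LEAF_DB = {
    "red maple":    [0.82, 0.31, 0.55],
    "white oak":    [0.40, 0.72, 0.18],
    "sweetgum":     [0.79, 0.35, 0.60],
    "tulip poplar": [0.55, 0.50, 0.45],
}

def top_k_matches(query, db, k=3):
    """Rank species by Euclidean distance to the query descriptor
    and return the k closest names, best match first."""
    scored = sorted(
        (math.dist(query, vec), name) for name, vec in db.items()
    )
    return [name for _, name in scored[:k]]

# A photo of an unknown leaf yields a descriptor close to the maples:
print(top_k_matches([0.80, 0.33, 0.57], LEAF_DB, k=3))
```

The point of the quote above is exactly this shape of result: even if the true species lands second or third in the shortlist, a human can finish the identification from a handful of candidates.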

Anyway, I find this use of AR fascinating. It could really connect kids with nature, detaching them from the computer screen for a while, and transforming any outside walk into an exploration. What do you think?