The object recognition portion of augmented reality is a little like that hand-held label printer that you got when you were a kid and then went crazy putting tags on everything in your room. Did you really need to put a tag on your table that said, “Table”? Nah. But it felt good doing it.
High-end object recognition (and I’m including facial recognition) is really a key component of ubiquitous AR. Well, that and those pesky glasses, but we won’t talk about them today.
So back to object recognition. For our computers to understand the world well enough to create seamless reality interfaces, they’re going to have to understand what a chair is, where it is when they see one, and what it’s used for. This understanding will be useful for us humans, but it will be even more useful for robotics in the future.
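To make that a little less abstract, here’s a minimal sketch of what “knowing what a chair is and where it is” looks like with today’s off-the-shelf tools. It assumes torchvision with a pretrained COCO detector is installed, and “living_room.jpg” is just a hypothetical photo; it’s illustrative, not any particular AR system.

```python
# Minimal object recognition sketch, assuming torchvision >= 0.13 is installed.
# "living_room.jpg" is a hypothetical input photo.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn_v2,
    FasterRCNN_ResNet50_FPN_V2_Weights,
)

weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn_v2(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]          # COCO class names, e.g. "chair"

image = read_image("living_room.jpg")            # uint8 tensor, shape (C, H, W)
with torch.no_grad():
    prediction = model([preprocess(image)])[0]   # dict of boxes, labels, scores

# Print every confident detection: what it is and where it sits in the image.
for label, box, score in zip(prediction["labels"], prediction["boxes"], prediction["scores"]):
    if score.item() > 0.7:
        print(f"{categories[int(label)]}: box={box.tolist()}, score={score.item():.2f}")
```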
With easy access to information labeled in a computer-friendly way, robots can learn to use our environment better than they do today. And I’m not even talking about high-end robotics, either. A couple of cameras on a Roomba could help it know when to vacuum the floor and when to stay put because a party is going on. We use unattended vehicles to transfer parts around our Toyota plants; letting these simple vehicles notice when a box has been left in the way and quietly move around it would make them work better.
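For illustration, here’s a toy sketch of how a list of recognized objects could drive that kind of behavior. The labels, the floor coordinates, and the next_action helper are all hypothetical; a real Roomba or plant vehicle would feed its detections into a proper planner.

```python
# Toy decision logic, not any real vehicle's software: detections are assumed
# to arrive as (label, (x, y)) pairs in floor coordinates.
from typing import List, Tuple

Detection = Tuple[str, Tuple[float, float]]

def next_action(detections: List[Detection],
                planned_path: List[Tuple[float, float]],
                clearance: float = 0.5) -> str:
    """Decide what a simple vehicle should do, given what its cameras recognized."""
    labels = {label for label, _ in detections}
    if "person" in labels:
        return "wait"  # a party is going on; stay put
    for label, (x, y) in detections:
        if label == "box" and any(abs(x - px) < clearance and abs(y - py) < clearance
                                  for px, py in planned_path):
            return "replan_around_obstacle"  # a box was left in the way
    return "continue"

# Example: a box sits on one of the path's waypoints, so the vehicle re-plans.
print(next_action([("box", (2.0, 1.0))],
                  planned_path=[(0.0, 0.0), (2.0, 1.0), (4.0, 2.0)]))
```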
And who knows, maybe in the far-flung future when Turing-level robots become possible, they’ll educate themselves on the wider world by taking long journeys and absorbing the trash-tags left by their human overlords.
And for fun, here’s a picture and video of a robot.