Imagine being at a tourist location, wanting to know what a certain landmark is, pointing your camera phone at it, snapping a picture, and having Google return results for the place or thing in question. That is exactly what you'll be able to do with a new feature on Android phones called "Visual Search," apparently codenamed Google Goggles.
The system was demoed on CNBC's show Inside the Mind of Google, where Google Product Manager Hartmut Neven took a picture of the Santa Monica Pier and sent the image to Google for crunching; the system correctly identified the landmark and brought back the right search results.
I tried to track down the video but was unsuccessful.
Great idea in theory, and with location-based information it really shouldn't be THAT hard to identify things. Even without the camera or picture, just using your location, the compass, and the accelerometer, it should be able to nail most major landmarks, buildings, and things of that nature (a rough sketch of that idea follows below). But what happens when you've got a lot of notable things crammed into one area? Will Visual Search give you a list of everything in your picture, or just pick something for you? There are a lot of ways this could go wrong depending on the scope of the project, but the idea is pretty awesome nonetheless.
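To make the sensor-only idea concrete, here's a minimal sketch of how a phone could shortlist landmarks from GPS and compass alone: compute the bearing from where you're standing to each known landmark and keep the ones roughly in front of the camera. This is not Google's actual approach, just an illustration; the landmark list, coordinates, and the max_km/tolerance thresholds are all assumptions made up for the example.

```python
import math

# Hypothetical landmark database: name -> (lat, lon) in degrees.
LANDMARKS = {
    "Santa Monica Pier": (34.0086, -118.4986),
    "Griffith Observatory": (34.1184, -118.3004),
}

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidates(lat, lon, heading, max_km=5.0, tolerance=15.0):
    """Landmarks roughly in front of the camera: within max_km of the user
    and within +/- tolerance degrees of the compass heading."""
    hits = []
    for name, (llat, llon) in LANDMARKS.items():
        if distance_km(lat, lon, llat, llon) > max_km:
            continue
        # Smallest signed angle between landmark bearing and compass heading.
        off = abs((bearing_deg(lat, lon, llat, llon) - heading + 180) % 360 - 180)
        if off <= tolerance:
            hits.append((off, name))
    return [name for off, name in sorted(hits)]

# Standing near the Santa Monica beach, facing roughly west toward the pier:
print(candidates(34.010, -118.490, heading=260))  # -> ['Santa Monica Pier']
```

A real implementation would rank by angular offset and distance the way this sketch does, which is exactly where the "lots of notable things in one frame" problem shows up: several landmarks can fall inside the same tolerance cone, and that's presumably where the actual image matching earns its keep.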
As time goes on, Google just continues to bring more of its own application innovation to Android handsets. I can't wait for this to hit Android Market so I can give it a spin. Maybe a new feature in Android 2.1? Pretty please?
I kind of wish Google Goggles were a piece of hardware that looked like this:
And when you wore them, they simulated real life before your eyes… kind of like a Liquid Galaxy. But if you want to be ghetto about it, I suppose you can already do that yourself.
[EWeek via ElectricPig]
Such great news, we posted it twice! (Rob’s birthday weekend left him less than sober, perhaps? :P)