Google’s been making a big deal about machine learning as of late. It’s the concept that powers the future of their search engine, and it allows artificial intelligence to make sense of distinctly human things: it can look at a picture and identify its subject, or listen to audio and detect what kind of sound it is.
As much work as Google has done to usher in this exciting future, they feel the next big step won’t come until devices of all shapes and sizes can make these distinctions on their own. The status quo is to ship those responsibilities off to the cloud so a remote server can handle the heavy lifting, but a partnership with Movidius could mean your phone will eventually be able to handle all of it on its own.
Movidius is a company that develops high-efficiency chipsets built specifically to process the instructions machine learning needs to do its thing. Under the partnership, Google will source chips from Movidius for testing, and will in turn contribute aspects of its neural network technology to Movidius to help improve the chips.
“What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world,” said Blaise Agüera y Arcas, head of Google’s machine intelligence group in Seattle. “By working with Movidius, we’re able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices.”
We’re not sure of all the benefits this could have for the common consumer in the early going. Our first guess is that on-device processing will let those without reliable data connections reap the benefits of machine intelligence even when they’re off the grid. It could also eventually lead to more powerful virtual assistant capabilities. We probably won’t see the fruits of their labor in 2016, but it’s exciting to think about nonetheless.