While we often hear that smartphone hardware evolution has slowed down over the past few years, it's safe to say that we're still seeing innovation in previously untapped areas of mobile technology.
Take artificial intelligence, for example. Once a seemingly distant scientific concept, AI features are now being integrated into more and more consumer-grade hardware and software. Most recently, we've seen this from Google with its Tensor G3 chipset and from Qualcomm with its latest Snapdragon 8 Gen 3.
While both companies aim for similar results, such as generative AI features, there are some key differences in how they achieve them.
If you watched the Made by Google product showcase back in October, there's a good chance you heard the word "AI" mentioned all throughout the event. Ever since the first-generation Tensor chipset, it has been clear that Google is set on integrating AI more and more deeply into its products and services.
READ: Google Pixel 8 Review: Deceptively Simple
This is most obvious with the Pixel 8 series smartphones, which are powered by the Tensor G3 mobile SoC. Photo-editing features such as Magic Editor and Best Take are made possible by the Tensor G3 chip, although it should be noted that the chip doesn't work alone. Magic Editor, for example, requires users to back up their photos online before it can run, indicating that much of Google's AI processing happens in the cloud.
That's not to say Tensor's AI functionality is a gimmick: the chip still does some of the lifting, but it relies on Google's cloud-based servers to complete the job.
By contrast, Qualcomm's Snapdragon 8 Gen 3 can process generative AI tasks entirely on the device, meaning the chip, and in turn the phone, does all of the heavy lifting. Qualcomm's push towards on-device AI has been a long time coming, with the company working on the technology well before the 8 Gen 3's launch.
READ: Snapdragon 8 Gen 3 Ushers in a New Standard for Performance and Mobile AI
On-device AI processing is also generally considered more cost-effective than approaches that require the hardware to work alongside a cloud-based server. It can also mean faster results, since the entire workload stays local and no time is spent sending data back and forth.
Given that the 8 Gen 3 is capable of this, buyers can expect their favourite flagship smartphones to come with fancy new generative AI features, although there's a catch.
There are some caveats to Qualcomm's approach of letting hardware manufacturers equip their devices with onboard AI capabilities. Simply put, while the technology is available on the 8 Gen 3, OEMs will still need to develop their own apps and software to fully take advantage of on-device AI, which adds a barrier of sorts to the feature.
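To give a rough idea of what that developer-side work looks like, here's a minimal Kotlin sketch of on-device inference using the public TensorFlow Lite API with NNAPI delegation, rather than any Qualcomm-specific SDK. The model file name and tensor shapes are placeholders purely for illustration.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

fun runOnDeviceInference() {
    // Hypothetical model file; a real app would bundle its own .tflite model.
    val modelFile = File("/data/local/tmp/genai_demo.tflite")

    // NNAPI delegation asks Android to schedule supported operations on the
    // phone's AI accelerator (e.g. the Hexagon block on Snapdragon SoCs)
    // instead of the CPU, keeping the whole workload on the device.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelFile, options)

    // Placeholder tensor shapes; a real generative model would consume
    // tokenized prompts or image data instead.
    val input = Array(1) { FloatArray(128) }
    val output = Array(1) { FloatArray(128) }
    interpreter.run(input, output)   // inference happens entirely on-device

    interpreter.close()
    nnApiDelegate.close()
}
```

The point of the sketch is simply that the chip exposes the capability, but the app, model, and glue code still have to come from the OEM or developer.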
Google's approach with its Pixels, on the other hand, ensures that the software features are already there out of the box, although there's a tradeoff in how quickly the phone can generate results, since part of the task is offloaded to the cloud. In any case, Qualcomm has shown that it is possible to equip smartphones with generative AI features without resorting to the cloud.
With all that said, it's certainly interesting to see how different companies are approaching mobile-ready AI features. A lot of current AI functionality is still in its early stages, but it's exciting to watch it develop alongside the hardware needed for it to function.