Whether generative AI and specialist chips will improve deteriorating user experience remains to be seen
Sundar Pichai, CEO of Google and Alphabet, gave the keynote [transcription here] at Google Cloud Next, the hyperscaler’s annual event.
He used the occasion to stress that Google has been taking an "AI-first" approach for the last seven years to make its products "radically more helpful". He continued, "We believe that making AI helpful for everyone is the most important way we'll deliver on our mission in the next decade.
"That's why we've invested in the very best tooling, foundation models and infrastructure, across both TPUs [tensor processing units, which are used to train models] and GPUs [graphics processing units – some of the hottest commodities on the planet]."
NVIDIA, the biggest producer of GPUs, has just reported a 101% increase in revenue for Q2, driven by rocketing demand for AI chipsets; the results exceeded market expectations.
Jury’s out on improvements
Pichai pointed to the huge improvements AI has brought to "everyone", using search as an example via the Search Generative Experience (SGE). The extent to which this will mitigate what Cory Doctorow calls the "enshittification" of key digital platforms remains to be seen, and indeed there are fears that generative AI could make things worse.
This is scary given the accurate observation by Elaine Moore (also in the Financial Times – subscription required) that Google search "was once one of the wonders of the online world" but is now "less encyclopedia, more Yellow Pages".
The theory is that the increasingly poor experience stems from putting users' needs a long way behind commercial priorities, in the knowledge that the big platforms arguably have a captive audience. It is hard to see how adding more technology will reverse that.
Pichai's undaunted vision of a brave new AI-driven world was reinforced by NVIDIA's founder and CEO, Jensen Huang, joining Thomas Kurian, Google Cloud's CEO, for a keynote presentation. NVIDIA's transformation from a specialist producer of gaming chips to a key technology partner for all the hyperscalers is remarkable.
The idea at the event was to underline the two companies' deepening relationship. In particular, they talked about how Google uses NVIDIA's H100 and A100 GPUs for internal research, including inference at DeepMind. Huang highlighted how deeper levels of collaboration have enabled NVIDIA GPUs to accelerate Google's PaxML framework for creating massive large language models (LLMs).