Hyperscalers Marking Territory for Their AI Business Models
Training a user base in its own AI tools sets a foundation; the game is on in the AI platform wars as the software base turns over to AI; cloud services are projected to be in great demand
By John P. Desmond, Editor, AI in Business
Google has announced a new course to teach a broad audience how to use AI tools, and the company is investing $75 million to make it free for one million Americans as part of an initiative to make AI training more accessible.
The Google AI Essentials course, developed and taught by the company's own experts, teaches workers basic skills for using generative AI tools, such as Google's Gemini or OpenAI's ChatGPT. The course is said to be product agnostic, Fox Business reported.
"AI offers significant opportunities to accelerate economic growth, particularly if people have access to the right resources and training," stated James Manyika, senior vice president for Research, Technology & Society at Google.
The hyperscalers that have made huge investments in AI, notably Google, Amazon, Meta and Microsoft, need to get a return on those investments. How they plan to achieve that is beginning to take shape, and training a workforce in how to use the new tools is part of it.
“The explosion of interest in generative AI has created something of an arms race in using it for search and developer enablement,” stated Jean Atelsek, lead author of a recent account of this trend from 451 Research. The urgency of getting direct benefit from investments comes at a time when Amazon Web Services, Azure and Google Cloud are “trimming their payrolls and rebalancing investments to accommodate slowing growth,” the authors stated.
Pursuing Converts to Each Platform
Platform wars have long been a part of the software industry: win the most users to your platform to gain the advantage, and target developers as a customer base, since their work wins more converts to the platform. The game is on for the hyperscalers with generative AI offerings.
The services business is likely to generate huge revenue streams, as will license or usage fees for generative AI output, helping to pay for all those server farms taking up more acres of land and consuming more electricity, a trend that is sending electric power demand forecasts dramatically higher.
The entire software base is turning over, as it has in the past when microcomputers–PCs–invaded the enterprises where mainframes had been providing the backbone of computer processing. The desktops turned over, and the mainframes are still here too, as IBM can attest.
“By the same token that serverless processing relies on servers, artificial intelligence relies on compute, storage and bandwidth — resources hyperscalers have plenty of,” stated Atelsek.
Microsoft announced ChatGPT integrations with its Bing search engine, a ChatGPT-based Azure OpenAI Service, and a tie to Microsoft 365 Copilot. Microsoft had previously announced an integration of OpenAI tech into its GitHub Copilot, which had been trained on open-source contributions to the GitHub code repository favored by developers. Microsoft CEO Satya Nadella cited plans in a 2023 earnings call to "lead in the new AI wave across our solution areas and expand our total available market." He added, "As a result, we are seeing conversations we never had," leading to new business.
Google has responded with its own tools, including the PaLM foundation models, MakerSuite developer tools, the Bard conversational AI service and enhancements across its portfolio. “The company continues to emphasize its pioneering work in AI and large language models,” Atelsek stated.
"Our North Star is getting it right for users," stated Alphabet CEO Sundar Pichai on a 2023 earnings call. He emphasized a goal to "ensure the safety of generative AI applications before they are released to the public." That goal has proved challenging for all the generative AI hyperscalers, given AI's tendency to hallucinate and its reliance on data that is difficult to "unbias."
Amazon, which has used AI extensively in its retail business and within its managed services on AWS, is in its second generation of custom-designed Trainium processors and its Inferentia inference engine. Amazon Bedrock was released in April 2023, providing access to foundation models from providers including AI21 Labs.
“If you look at the really significant leading large language models, they take many years to build and many billions of dollars to build. And there will be a small number of companies that want to invest that time and money, and we'll be one of them," stated Amazon CEO Andy Jassy on an earnings call. For developers, the company released CodeWhisperer, aimed at enabling programmers to generate code from natural language prompts.
Jassy projected that cloud services will be in great demand with the advent of large language models and generative AI. “So many customer experiences are going to be reinvented and invented that haven't existed before. And that's all going to be spent, in my opinion, on the cloud," he stated.
Generative AI Seen Transforming the Technology Stack
The synergy between hyperscalers and AI is seen by many industry observers.
“The Generative AI boom is expected to empower the technology stack of most of the companies across the globe. The need to automate the creation of content, images, videos, code, and other creative applications has been growing and is driving the market for Generative AI,” stated Tyson Hartman, CTO of InfoGain, a firm offering extendable platforms for digital businesses, in an account on the company’s blog. “This has introduced a major transformation in the technology landscape, and it is just the beginning,” he stated.
The hyperscalers have the following advantages to help get their generative AI products and services to market, in Hartman’s view:
Security & privacy: The hyperscalers have already invested in security and privacy controls for their cloud-based services;
The hyperscalers can process huge volumes of data by virtue of the enormous data centers they have constructed;
Businesses that have moved their data to the cloud are in a position to exploit the latest generative AI tech available;
AI toolkits are being made available by the hyperscalers via their cloud-based services, developer tools and APIs. "This enables easy availability of AI tools through cloud-based platforms without the need to invest in AI hardware and frameworks, thus giving the hyperscalers an edge," Hartman stated.
The hyperscalers have made serious investments in their generative AI offerings. For example, a single training run for a typical GPT-3 model could cost as much as $5 million, according to a working paper from the National Bureau of Economic Research summarized in a recent account in CIODive.
New Street Research has estimated the infrastructure price tag for adding ChatGPT features to Microsoft’s Bing search engine could reach $4 billion, as reported recently by CNBC. Bing currently has less than three percent of the global market for search, CNBC reported.
It’s getting to the point where in-house data centers find it difficult to compete for support of generative AI functions. “With most enterprises turning to cloud to provide the specialized platforms and raw power needed, it becomes a new opportunity for hyperscalers to increase market share,” stated Scott Young, a director at Info-Tech Research Group.
Read the source articles and information from Fox Business, 451 Research, the InfoGain blog, CIODive and CNBC.
(Ed. Note: I am experiencing an issue with the Substack free subscription button at the moment; it’s swapping out the free subscription email field for a Pledge Your Support button, which is asking for credit card information. I would not do that; my preference is to win support for the newsletter from advertising. I wrote to Substack support about it; have had no satisfactory response yet. Thank you for your patience. Meanwhile for new subscribers, write to me at jd@jpdcontentservices.com; I will add you to the list manually. It’s free!)