Much like previous technological paradigm shifts, the generative artificial intelligence (AI) revolution offers substantial investment opportunities across three key tech layers: compute, infrastructure, and interface. History offers a guide to tracking potential market leadership: in previous shifts, the companies that met demand in these categories emerged as winners. A similar trend is unfolding with generative AI, and we expect leadership to progress from AI hardware suppliers to cloud infrastructure and data management software players, and then to consumer interfaces and platforms.
This piece is part of a series that dives deeper into this year’s iteration of our flagship research piece, Charting Disruption.
Key Takeaways
- The compute layer is foundational for generative AI’s advancement, and leading AI chipmaker Nvidia has been the major beneficiary of capacity buildouts in this initial phase.
- Cloud infrastructure providers like Microsoft Azure, Amazon AWS, and Google Cloud are crucial for AI’s expansion and are quickly emerging as the second wave of winners.
- The greatest long-term value from the generative AI platform shift will likely accrue to consumer interfaces that integrate AI, with profits potentially topping those of the foundational chipmakers. Companies like OpenAI are already helping bring generative AI to the everyday consumer, and we expect additional platforms and services to emerge in the coming years.
AI Infrastructure Revamp Creates New Opportunities for Chipmakers
As AI applications expand in scope and adoption, the existing computational infrastructure in data centers and for AI training will require significant upgrades to meet demand. By the end of the decade, annual spending on AI chips is projected to grow at a compound annual growth rate (CAGR) of over 30% to nearly $165 billion.1 Graphics processing units (GPUs) are likely to account for most of this spending because they outperform central processing unit (CPU)-based chips on AI tasks.2
Nvidia currently commands more than 80% of the GPU market, and demand for its solutions is at an all-time high, as evidenced by its recent Q3 FY 2024 earnings report.3,4 Nvidia’s data center business, which sells these GPUs, generated revenue of $14.5 billion, a 279% year-over-year increase. The company projects Q4 FY 2024 revenue of $20 billion, 25% above consensus, driven by strong adoption of its HGX systems, which incorporate the recently released H100 GPU.5 Nvidia’s edge also stems from its software prowess, including its CUDA platform for accelerated computing and its Spectrum-X networking platform. This software suite helps AI developers efficiently manage and deploy AI workloads across GPU clusters.
The AI chip opportunity is set to expand well beyond data centers and AI training into domains such as smartphones, robotics, and automobiles. This expansion presents ample opportunities for smaller chipmakers to capture and serve parts of the market. We also expect tech giants to intensify their efforts to compete with Nvidia by developing their own in-house chips.
Cloud Infrastructure and Data Solution Suppliers to Benefit from AI Implementation
Spending on cloud computing is expected to grow by nearly 22% in 2023, with major cloud software and infrastructure providers poised to capitalize on this trend through enhanced AI-integrated solutions.6
Cloud providers are already playing a pivotal role in powering existing generative AI workloads and distributing AI to the masses. A prime example is Microsoft disseminating OpenAI’s models to its expansive base of end users. Most AI models are likely to be run through the cloud, with leaders like Amazon AWS, Microsoft Azure, and Google Cloud positioned to serve as the default distribution channel, which should increase their cloud revenues.
Simultaneously, surging enterprise adoption of custom AI models is likely to drive a significant increase in data generation, fueling a substantial rise in spending on cloud computing resources and data management tools.
Consumer Interface Layer Likely to Capture the Most Value from the AI Shift
Just as Microsoft accrued significant value as the interface layer for PCs and Apple did for smartphones, applications that bring generative AI into consumer products can do the same. An early example is ChatGPT, whose rapid user adoption helped OpenAI cross $1.3 billion in annualized revenue in 2023, up over 4,500% from 2022.7
Historically, the service providers and platforms created by a new tech cycle ultimately reap greater profits than the chipmakers that power the cycle. For example, in 2010, at the start of the cloud cycle that ran through 2022, chipmakers Nvidia, Intel, AMD, Qualcomm, and Broadcom collectively earned $21 billion in operating profits, while a group of software & services companies including Meta, Google, Amazon, Salesforce, and Netflix generated only $13 billion. By 2015, the software & services cohort had surpassed the chipmakers, and by 2022 its operating profits had climbed to $122 billion, well ahead of the chipmakers’ $44 billion.8
OpenAI currently owns the most widely used generative AI interface, with ChatGPT boasting 100 million weekly active users and an estimated 200 million-plus monthly users.9,10 However, in the years ahead, new platforms focused on niche categories like healthcare or logistics are likely to emerge.
Conclusion: Generative AI’s Value Chain Takes Shape
The development of generative AI’s value chain thus far mirrors previous technological paradigm shifts, and we expect the compute, infrastructure, and interface progression to continue. As part of that progression, we expect the value chain, including AI hardware, cloud and data infrastructure, and consumer platforms, to present numerous investment opportunities that help bring this transformative technology to life.