AMD’s Forecast Shows AI Demand Is Spreading Beyond GPUs
Caroline Hyde
Ian King
Balaji Krishnamurthy
Mark Gurman
Carol Massar
Geetha Ranganathan
Cathie Wood
Helena Wang
Josh D'Amaro
Brody Ford
Ryan Vlastelica
Joe Mathieu
Srini Pajjuri
Bloomberg Technology · Thursday, May 7, 2026 · 19 min read

Bloomberg Technology framed AMD’s sharp rally as evidence that the AI infrastructure trade is widening beyond GPUs. Caroline Hyde, Ian King, and RBC’s Srini Pajjuri said AMD’s forecast pointed to renewed demand for CPUs as AI workloads shift toward inference and agentic systems, even as Nvidia remains dominant in accelerators. The program extended that argument across Nvidia’s Corning deal, Microsoft’s power constraints, and Apple’s outside-model plans: the AI boom is becoming a contest over compute, connectivity, energy, and platform control.

AI demand is no longer only a GPU story
The day’s most important market move was not simply that Advanced Micro Devices rallied. It was why. AMD’s forecast pushed the stock sharply higher because investors heard a broader AI infrastructure claim: the central processor, long treated as secondary to the GPU in the AI trade, is becoming newly important as AI workloads move from training toward inference and agentic systems.
Ian King said AMD’s reported earnings were “pretty good,” but the decisive moment came on the company’s conference call, when Lisa Su gave what King described as a very bullish projection for CPUs. According to King, Su said that part of AMD’s business would be up 70% in the current quarter. He framed the statement as an affirmation of what other companies have been saying: demand is not confined to accelerators.
The company’s guidance was roughly $11.2 billion in second-quarter revenue. That remains far smaller than the roughly $70 billion expected from Nvidia, as Caroline Hyde noted, but the market reaction reflected a broader reassessment of AMD than a simple second-place accelerator story. Bloomberg Tech’s market graphics put AMD up about 15% early in the program and later more than 17% intraday, with the stock described as trading at a record high and having risen more than 300% over one year.
| Company or index | Move shown | Context in the source |
|---|---|---|
| Advanced Micro Devices | +17.22% intraday | Record-high move after a forecast tied to renewed CPU demand |
| Nvidia | +5.35% intraday | Recovered after recent weakness as AI infrastructure and Corning deal drew attention |
| Philadelphia Semiconductor Index | +3.67% intraday | Shown as chip stocks led a record-high Nasdaq 100 |
| Nasdaq 100 | +1.57% intraday | Risk-on market with technology stocks leading |
The immediate question was whether AMD could supply into that demand. Hyde raised memory constraints, especially their effect on the PC side of the business. King said Su faced repeated questions on the issue during the call. Her answer, as King summarized it, was that AMD had seen the constraints coming and had been working to manage them. The logic was commercial as much as operational: memory suppliers want to sell expensive memory into data centers and to companies such as AMD. King said Su’s essential assurance to investors was that AMD could meet its forecasts.
Srini Pajjuri of RBC Capital gave the deeper semiconductor explanation. Demand, he said, is strong for several reasons at once. Agentic AI is the primary driver, but broader AI capital expenditure is rising, and enterprise server demand is also strong. Supply is tight enough that visibility and backlog are extending through the year and even into next year. Pajjuri pointed to Intel’s comments about not having enough supply to meet current demand as part of the backdrop.
The structural change, in Pajjuri’s view, is the CPU-to-GPU ratio inside AI systems. In training workloads, he said, the historical pattern has often been one CPU for every four GPUs, and in some cases one for every eight. In inference — particularly agentic AI — that ratio changes because CPUs take on more of the orchestration work. The models may still run on powerful GPUs, but agents need to be managed. They need access to the right data. They need to talk to other APIs. CPUs, Pajjuri said, play a critical role in that coordination layer.
He cautioned that it is early and difficult to draw too many conclusions, but said the direction is clear enough: the ratio is improving in favor of CPUs. Based on architectures from Nvidia, AMD, Google, and others, RBC sees visibility into 2027. Pajjuri said the ratio could move toward one CPU for every two GPUs by the end of next year. Whether it moves to one-to-one or beyond remains, in his words, to be seen.
| Workload or forecast | CPU-to-GPU ratio described | Speaker’s point |
|---|---|---|
| Historical AI training | 1 CPU to 4 GPUs, sometimes 1 to 8 | Training has been heavily GPU-weighted |
| Inference and agentic AI trend | Moving in favor of CPUs | Agents require orchestration, data access, and API coordination |
| RBC view by end of next year | Potentially 1 CPU to 2 GPUs | Pajjuri said visibility into 2027 supports a shift |
| Possible longer-term outcome | 1 to 1 or higher | Pajjuri said that remains to be seen |
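As a rough illustration of what those ratios imply, the sketch below computes the CPU demand implied by a fixed GPU fleet under each scenario. The fleet size is a hypothetical placeholder, not a figure from the program; the arithmetic is only the simple attach-ratio multiplication Pajjuri described.
```python
# Illustrative sketch, not RBC's model: how the CPU-to-GPU attach ratio
# changes implied CPU demand for a fixed GPU fleet.

GPU_FLEET = 1_000_000  # hypothetical GPUs deployed; placeholder, not a program figure

# Ratios discussed on the program, expressed as CPUs per GPU.
scenarios = {
    "historical training (1:8)": 1 / 8,
    "historical training (1:4)": 1 / 4,
    "RBC view by end of next year (1:2)": 1 / 2,
    "possible longer-term outcome (1:1)": 1.0,
}

for label, cpus_per_gpu in scenarios.items():
    implied_cpus = GPU_FLEET * cpus_per_gpu
    print(f"{label:<36} -> {implied_cpus:>9,.0f} CPUs")
```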
That is why AMD’s CPU story mattered even as the AI market remains dominated by GPUs. Pajjuri said the CPU serviceable available market is now about $120 billion, up from roughly $60 billion not long ago, according to Lisa Su’s framing. But he also stressed that the GPU opportunity is still much larger: including custom chips, he described it as a trillion-dollar type serviceable available market. For AMD, CPU strength helps in the short term, but longer-term upside still depends on showing it can gain share in GPUs against Nvidia.
Valuation is the tension. Hyde pointed to AMD’s roughly $600 billion market capitalization, still far below Nvidia’s roughly $4 trillion, but newly large after an extraordinary run. Pajjuri said AMD’s valuation is rich. On calendar 2027 numbers, he said Nvidia trades in the high teens while AMD trades at close to twice that multiple. That premium can be partly explained by AMD’s smaller base and potentially faster growth, but RBC is “sitting on the sidelines” until there is clearer evidence that AMD can convert new customers into durable GPU share. He cited OpenAI and Meta as customers expected to ramp later this year and into next year.
The market was also repricing competition around Nvidia. Ryan Vlastelica said AMD had already entered the earnings report after its biggest one-month gain since the dot-com era, which meant expectations were high. Even so, the forecast “blew everyone away,” he said, and drew a wave of analyst upgrades. Hyde noted that some price-target moves were “eye-watering,” including Bernstein doubling its target.
Vlastelica described a different problem for Nvidia. The Philadelphia Semiconductor Index had risen sharply since late March, but Nvidia had not participated to the same degree. Investors, he said, increasingly recognize that Nvidia’s former near-monopoly in AI chips is being challenged not only by AMD, Intel, Qualcomm, and other traditional chipmakers, but also by Nvidia’s own largest customers. Alphabet has talked about selling its TPU chips to other cloud providers. Amazon has its own chip. Meta and Microsoft are developing their own. Those chips are designed for AI applications, and some investors view them as superior to Nvidia products in particular use cases.
The question, as Vlastelica framed it, is what happens to Nvidia’s growth, margins, pricing power, and market share if its biggest customers reduce their reliance on Nvidia. That does not imply Nvidia is suddenly weak. Its stock was also up strongly on the day. But the source of investor anxiety had shifted: AI demand remains large, while the distribution of economics across the chip supply chain is becoming more contested.
Nvidia is buying against bottlenecks before they become binding
Nvidia’s deal with Corning was another sign that AI infrastructure demand is forcing the supply chain to change before obvious shortages become binding constraints. Bloomberg Tech graphics described a $500 million Nvidia deal for stock in Corning and a warrant for up to 15 million Corning shares. Corning’s stock was shown up more than 13% intraday, and later more than 14%.
Ian King said Jensen Huang would explain the deal as part of a broader strategy: Nvidia is using its money to remove, or at least reduce, any possible bottleneck in AI infrastructure. Fiber is one of those bottlenecks. Copper is reaching its limits, he said, and the speed of light is as fast as information can be transmitted. If future Nvidia chips are to keep the AI data-center system moving, they need fiber optic connections at scale.
Srini Pajjuri made the same point from the analyst side. Processing data is only one part of the AI infrastructure problem; the data also has to move at very high speed. The strength of demand is exposing connectivity bottlenecks. Optical links matter not only between servers, but also between data centers. Pajjuri said supply constraints are emerging “across the board,” and Nvidia is trying to make sure those constraints do not limit its own growth.
| Item shown or discussed | Number | Why it mattered |
|---|---|---|
| Nvidia-Corning stock deal | Up to $500 million | Hyde framed it as a move to secure AI infrastructure supply |
| Corning warrant | Up to 15 million shares | Bloomberg Tech graphic tied the warrant to Nvidia’s Corning arrangement |
| Corning intraday move | +14.32% | The stock rallied as optical fiber became part of the AI-infrastructure trade |
| Nvidia intraday move | +4% to +5% range | Nvidia rebounded as investors focused on infrastructure positioning |
The Corning deal therefore sits in the same infrastructure logic as AMD’s CPU demand and memory constraints: the AI buildout is expanding the list of scarce components. It is no longer enough to ask who sells the best accelerator. Power, memory, networking, optical fiber, and CPUs all entered the discussion as parts of the same investment problem.
Pajjuri argued that Nvidia’s balance sheet is itself a structural advantage in this environment. He said Nvidia has not only the best products in the industry but also the financial strength to secure supply in areas that could otherwise become chokepoints. In an environment where “everything is so tight,” he said, that matters.
A brief discussion of Infineon reinforced the breadth of the supply-chain issue. Hyde said the German chipmaker forecast roughly $4.8 billion in fiscal third-quarter revenue, ahead of expectations, and was benefiting from the AI infrastructure spending boom because its data-center power-supply solutions were in very high demand. Yet the stock was under pressure in European trading, even as the company said 2026 remained on track for about 10% growth. AI demand can lift multiple semiconductor categories while still leaving investors to sort through valuation, expectations, and margins company by company.
A Gartner-sourced Bloomberg Primer chart put AI chips at more than one quarter of all chips sold in 2025 and projected them to reach more than half by 2029. In the same preview, a speaker contrasted the old semiconductor cycle — driven by a new Intel chip, iPhone, or PC upgrade — with a more sustained growth period tied to the need to build data centers for AI deployment and applications. Spending on AI infrastructure such as data centers was described as projected to cross the $1 trillion mark.
That sequence connected the chip discussion without making it a single-company story. AMD’s rally was about CPUs and accelerators. Nvidia’s Corning deal was about optical connectivity. Infineon was about power chips. SpaceX’s proposed Terafab chip factory was presented as a $55 billion project that could rise to $119 billion if additional phases are completed. The infrastructure trade was described through a chain of constraints, not one component.
The AI power bill is forcing companies to revisit climate promises
The demand for AI infrastructure is not only showing up in semiconductor supply chains. It is also pressuring the energy commitments that large technology companies made before the AI boom.
Brody Ford reported that Microsoft is considering delaying or abandoning ambitious clean-energy targets, according to sources. He said Microsoft had committed to match 100% of the energy used in its offices and data centers by putting clean energy back onto the grid, through sources such as solar, batteries, and wind. Those targets were set before the AI boom. Now, Ford said, companies are racing to secure as much power as they can, including by starting gas power plants.
Inside Microsoft, according to Ford, there is a growing sense that the company may have to revise or take back some of the climate targets that had been important to its marketing. Caroline Hyde noted that talks were ongoing and no final decision had been made. She also said Microsoft’s spokesperson continued to point to efforts to maintain the company’s annual matching goal, while not commenting on a tougher hourly commitment.
Ford explained why that distinction matters. Annual matching is easier because there is abundant clean energy during the day, especially from solar. Hourly matching is harder because data centers run 24 hours a day and clean power is less available at night. A company can maintain annual matching and still sound as if it is meeting a strong goal, but stepping back from hourly matching would still be a retreat from the more ambitious climate commitment.
A Bloomberg Tech chart described data centers as “guzzling up gas in the US,” with projected data-center-driven gas consumption rising through 2035 based on BNEF estimates. The chart grouped regions including East South Central, Midwest, Mountain, and Pacific, with consumption climbing across the projection period in each.
| Energy commitment or pressure | How it was described | Constraint |
|---|---|---|
| Annual clean-energy matching | Microsoft spokesperson continued to point to efforts to maintain the annual matching goal | Easier because daytime solar can offset annual use |
| Hourly clean-energy matching | Hyde noted Microsoft did not comment on the tougher hourly commitment | Harder because data centers run 24/7 and nighttime clean power is limited |
| Gas power for AI demand | Ford said companies are racing for power and starting gas power plants | AI data-center growth is increasing power demand faster than earlier targets assumed |
| Projected gas consumption | Bloomberg Tech chart showed data centers “guzzling up gas in the US” through 2035 | Future projections were based on BNEF estimates |
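Ford’s annual-versus-hourly distinction reduces to a simple check, sketched below with a toy 24-hour profile. The load and supply numbers are invented for illustration, not Microsoft data; the point is only that a daytime surplus can satisfy an aggregate target while every nighttime hour still fails an hourly one.
```python
# Toy illustration of annual vs. hourly clean-energy matching.
# All profile numbers are hypothetical, not Microsoft data.

load = [100] * 24                        # flat data-center load, MWh per hour
clean = [0] * 6 + [250] * 12 + [0] * 6   # clean supply concentrated in daytime

# Annual-style matching: totals can offset each other across the period.
annual_met = sum(clean) >= sum(load)     # 3,000 >= 2,400 -> True

# Hourly matching: every single hour must be covered as it happens.
hourly_met = all(c >= l for c, l in zip(clean, load))  # fails overnight -> False

print(f"annual-style matching met: {annual_met}")
print(f"hourly matching met:       {hourly_met}")
```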
Hyde also cited OpenAI’s expected compute spending. According to testimony by OpenAI president Greg Brockman in the company’s courtroom battle against Elon Musk, OpenAI expects to spend $50 billion on computing power this year. Brockman also said he and other co-founders were concerned that Musk lacked the patience and AI understanding to run the ChatGPT maker. The trial was continuing, but for the infrastructure discussion the most material number was the scale of compute spend.
The energy and compute discussion widened the infrastructure frame. Climate goals, fiber supply, CPU demand, memory demand, and gas consumption were all discussed against the same pressure: the need to build and operate enough compute capacity.
Apple’s AI plan gives users model choice while preserving Apple’s platform economics
Apple’s reported plan for iOS 27 was framed as a practical concession to its current AI position. Users will be able to choose from multiple outside AI models, including Google’s Gemini and Anthropic’s Claude, while Apple tries to make its own AI capabilities competent enough to remain the default.
Mark Gurman said Apple is trying to be agnostic “in some way,” but more fundamentally it is recognizing where it stands in artificial intelligence. Siri and Apple Intelligence are not at the level of Android, competing devices, ChatGPT, Claude, and other services, he said. Apple’s near-term goal is therefore to make default offerings such as Siri and Apple Intelligence good enough, similar to the built-in iPhone apps. Users may prefer third-party apps, but they still use the iPhone, and Apple still earns services and hardware revenue.
Gurman said Apple wants to apply that model to AI. Customers could put their preferred AI models on top of features or use them to power certain functions. They would still be using an iPhone, and Apple could potentially make more money if users subscribe to services such as ChatGPT or Gemini through the App Store, where Apple takes a cut.
The distinction between Siri’s underlying model work and external AI services matters. Gurman said Apple Intelligence launched in 2024 with Siri based on Apple models and an extension allowing users to tap into ChatGPT. Apple is now rebuilding Siri using Google Gemini models, but that does not mean Siri will have Google Gemini functionality. Gurman’s point was that Apple is using Google’s model work to improve Siri because Google is doing a better job building models; he did not describe that as a consumer-facing Gemini integration inside Siri.
Separately, users will be able to tap into outside AI services such as Gemini and Claude, in addition to ChatGPT. Gurman said it remains to be seen whether Apple allows Meta, Alexa, and others. The features and revamped Siri are expected to be introduced at WWDC on June 8 and rolled out in September. Gurman said if the rollout arrives later than that, Apple would have “another disaster” on its hands, but he believes it will happen this time.
The short-term and long-term judgments differ. For customers, Gurman said, optionality is hard to criticize: as Gemini, Claude, and ChatGPT improve, iPhone users can access the best services on Apple hardware. For Apple, however, dependence on third parties cannot be the end state. Gurman said Apple still needs improved models and needs to be on the frontier of AI because it has future hardware plans. A hardware company does not want to rely on outside firms to power the underlying AI in its products. In the short term, model choice may be good enough. In the long term, Apple has to become more competitive.
Uber and Disney showed different ways earnings were being tested by demand
Away from chips and AI infrastructure, Uber and Disney each reported stronger-than-expected numbers, but the underlying questions were different. Uber’s issue was whether it could keep bookings growing in a choppy macro environment. Disney’s was whether parks, streaming, and sports costs would support a more predictable earnings trajectory under new CEO Josh D’Amaro.
Uber’s shares were up more than 8% intraday after a second-quarter bookings outlook that beat expectations. CFO Balaji Krishnamurthy said the company delivered 21% growth in the quarter, the third consecutive quarter at that kind of level, while also reporting 44% year-over-year EPS growth. He attributed the performance partly to product velocity after Uber’s Go-Get event and said Uber’s nascent autonomous vehicles business scaled 10 times year over year. The company’s operating claim was that cost discipline and increased use of AI for efficiency helped deliver earnings growth alongside top-line growth.
The Expedia partnership was presented not just as a hotel-booking add-on, but as a way to expand Uber’s definition of cross-platform behavior. Krishnamurthy said only about 20% of monthly active consumers currently use both mobility and delivery, but those who engage across the portfolio drive significantly higher gross bookings and profits. Uber One reached about 50 million members during the quarter after adding roughly 20 million members over the previous year, and those members now drive 50% of Uber’s gross bookings.
Hotels fit that strategy because travel is already embedded in Uber’s existing demand. Krishnamurthy said 15% of mobility gross bookings come from airport trips, 40% of U.S. riders take trips outside their home cities, and last year Uber handled 1.5 billion trips globally outside users’ home cities. Through Expedia, Uber can offer hotel prices from Expedia’s platform and add 10% cash back, which customers can spend back on Uber.
| Uber metric | Figure | Krishnamurthy’s use of the figure |
|---|---|---|
| Quarterly growth | 21% | Third successive quarter at that kind of level |
| EPS growth | 44% year over year | Linked to cost discipline and AI-enabled efficiency |
| Uber One members | About 50 million | Membership underpins cross-platform engagement |
| Uber One gross bookings contribution | 50% | Members now drive half of company gross bookings |
| B2B gross bookings | $5 billion | Business is growing faster than the company overall |
| B2B growth | 45% | Compared with 21% total growth |
The B2B business followed the same platform logic. Krishnamurthy said Uber’s business-to-business offering now has $5 billion in gross bookings and is growing 45%, compared with the company’s 21% overall growth. About 300,000 organizations globally use it today, and Uber believes it can serve as many as one million organizations globally over time, which Krishnamurthy said would allow the business to move from $5 billion to more than $10 billion in gross bookings.
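The arithmetic behind that trajectory is straightforward, as the sketch below shows. Holding per-organization bookings flat is my simplifying assumption, not something Krishnamurthy specified; it simply checks that the figures he cited are consistent with a $10 billion-plus outcome.
```python
# Back-of-envelope check on the B2B scaling claim, using segment figures.

current_bookings = 5_000_000_000  # $5B in B2B gross bookings today
current_orgs = 300_000            # organizations using the offering today
target_orgs = 1_000_000           # long-term reach Krishnamurthy cited

per_org = current_bookings / current_orgs  # ~$16,700 per organization
implied = per_org * target_orgs            # assumes flat per-org spend (my assumption)

print(f"avg bookings per org: ${per_org:,.0f}")
print(f"implied at 1M orgs:   ${implied / 1e9:.1f}B")  # comfortably above $10B
```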
Disney’s shares were having their best day in a year after results beat expectations. Helena Wang of Phillip Securities called the numbers a solid start for D’Amaro. She said both revenue and EPS beat estimates, and Disney guided for 12% EPS growth for 2026. A Bloomberg Intelligence graphic framed the guidance as 16% EPS growth in fiscal 2026 versus 11% consensus, with double-digit growth in fiscal 2027. The same graphic said parks operating income rose 5% in fiscal second quarter versus consensus for 2%, pointing to limited headwinds from rising gas prices and geopolitical tensions.
The consumer picture was mixed but better than feared. Wang said Disney’s experience division saw a slight decrease in U.S. park attendance, likely reflecting the stronger dollar and fewer foreign tourists traveling to the United States. But guests who did visit spent more. Per-person spending at U.S. parks grew 5% year over year, across tickets, food, and merchandise. Wang also cited strong bookings at Disney World and higher cruise volume after the introduction of Disney Destiny and Disney Adventure in March 2026, which she said increased cruise booking capacity by 40%.
Geetha Ranganathan of Bloomberg Intelligence said “everything seems to be working right now.” Parks represent about 60% of total company profit, she said, and analysts had expected only a modest increase in fiscal second-quarter profit. Disney instead delivered a 5% increase, helped by strong per-capita spending despite some pressure on visitation. Ranganathan said upbeat commentary for the fiscal third quarter, including a pickup in demand and fading attendance headwinds, supported Disney’s optimistic outlook for fiscal 2026 and fiscal 2027.
D’Amaro’s own comments focused on organizational integration. In an audio excerpt from Disney’s earnings call, he said the company centralized television programming within Disney Entertainment direct-to-consumer, programming for Disney+ and Hulu while using windowing to linear television to expand reach and maximize monetization. He also said Disney integrated its games business into Disney Entertainment, creating new opportunities to cross-promote franchises, extend storytelling, and develop new IP.
We centralized television programming within Disney Entertainment DTC. So we're programming for Disney+ and Hulu while being smart about windowing content to linear so that we can expand reach and maximize monetization.
Ranganathan said that strategy positions Disney+ as a hub for a broader set of products: streaming, movies, parks, merchandise, and video games. Games open access to a younger audience, in her reading, and help reinvigorate the content engine. She said D’Amaro did a good job convincing investors that he had a plan.
The main pressure point was sports. Hyde pointed to ESPN weakness and the rising cost of sports rights. Wang said ESPN revenue increased because of higher subscription and affiliate fees tied to its NFL transaction, but operating income decreased because of higher sports rights and production costs. She said the entire streaming industry is trying to increase live sports because it is a large market with a loyal fan base. Sports rights are expensive, especially at scale, but Disney already has scale, which Wang described as an advantage despite the cost pressure.
Cathie Wood sees vertical integration as the answer to an unfinished supply chain
At the Milken conference, Cathie Wood tied several of the day’s technology themes together through a single view: major technology companies are converging around vertical integration because the next generation of infrastructure does not yet have a complete supply chain.
Wood said Ark Invest was founded on the belief that technology seeds planted during the 20 years ending in the dot-com bubble had been germinating for another 20 to 25 years and are now flourishing. She described 15 technologies evolving and converging. Elon Musk, she said, told her about six months earlier that his companies were converging more than even he had understood. In Wood’s account, the rumors swirling around SpaceX, xAI, and Tesla all fit into Musk’s belief that “in the new world,” a company has to be vertically integrated.
That was her explanation for the reported SpaceX and Tesla chip-factory plan, described earlier as at least $55 billion of investment and potentially up to $119 billion with additional phases. Wood said Musk is moving toward “incredible” vertical integration as he moves data centers into space. When a company is inventing something, she said, the supply chain often does not exist, or not all parts of it exist. Musk’s aggressive timelines, in her view, are a way of focusing employees and suppliers because “when he moves he moves fast.”
In the new world, a company has to be vertically integrated.
Wood applied the same logic to robo-taxis. She said Tesla is vertically integrated and has created a platform on which others will build. Ark’s analysis, according to Wood, suggests the cost of transportation will collapse as robo-taxis scale. Using Uber’s current cost umbrella of more than $3 per mile as a reference, Wood said robo-taxi costs could fall to 25 cents per mile. She estimated that Waymo’s 2030 cost structure would be 50% higher than Tesla’s because Waymo depends on other automakers and supply-chain partners that Tesla does not.
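Taken at face value, Wood’s numbers imply the comparison sketched below. The figures are hers as stated on the program; the derived Waymo cost only applies her “50% higher” claim to the Tesla figure, and none of this is an independent estimate.
```python
# The per-mile cost claims as Wood stated them; derived values only apply
# her stated percentages.

uber_cost_per_mile = 3.00     # "more than $3 per mile" cost umbrella
tesla_robotaxi = 0.25         # Wood's projected robo-taxi cost per mile
waymo_premium = 0.50          # Waymo's 2030 structure: 50% higher than Tesla's

waymo_2030 = tesla_robotaxi * (1 + waymo_premium)

print(f"Uber reference:       ${uber_cost_per_mile:.2f}/mile")
print(f"Robo-taxi projection: ${tesla_robotaxi:.2f}/mile "
      f"(~{uber_cost_per_mile / tesla_robotaxi:.0f}x cheaper)")
print(f"Implied Waymo 2030:   ${waymo_2030:.3f}/mile")
```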
On SpaceX, Wood said demand for a public offering would be intense. She said that, as far as is known, the deal size is “only” $75 billion relative to the demand she sees. Ark’s venture fund, ARKVX, holds SpaceX as its largest position, she said, and investors found the fund because they were looking for SpaceX exposure. Wood expected an initial supply-demand imbalance and a pop, though with volatility.
She also said Ark already has a SpaceX model on its website, but has not yet added orbital data centers. Wood said Ark’s preliminary work suggests that orbital data centers could take the business well beyond Ark’s existing model, using “10, 20 times higher” language in relation to revenue generation. She did not present that as a settled market fact; it was Ark’s preliminary view of what the additional business could mean.
Wood rejected the idea that the AI infrastructure buildout resembles 1999. She said “unequivocally” that it is not hype in that sense. The seeds planted then are now ready, in her framing, and she argued that the technology revolution will dwarf the industrial revolution “by far.” Orbital data centers, she said, would not make Earth-based data centers a bubble because demand will require all of the capacity. Musk’s experience in Memphis and Mississippi informed the move, she argued: local opposition to power use, electricity prices, and land impact becomes less relevant if data centers can move into space.
On chips, Wood echoed the day’s CPU theme. Asked about AMD’s surge and whether markets should pay more attention to CPUs after the GPU boom, she said both are needed. She credited OpenAI’s Sarah Friar with warning that people chasing GPUs would be surprised by how agentic AI activates CPUs and how inference generally activates CPUs. Wood said Lisa Su had provided a stat she had not heard before: currently, for every CPU there are four to five GPUs enabling AI, and Su thinks that could go one-to-one in the future. Wood called CPUs “the sleeper” and pointed to Intel’s resurgence as part of a “back to the future” pattern in older technology names.
When Joe Mathieu noted data storage shortages and rallies, Wood’s answer was concise: “All hands on deck.” That phrase captured her view that the AI buildout pulls old and new categories together rather than replacing one with the other.
Wood also addressed government access to AI models after a question about Google and Microsoft giving a U.S. agency early access. She said that, knowing the administration and David Sacks, whom she called the AI czar, she did not think the issue was heavy regulation. She saw it more as national security. She referenced Anthropic’s Mythos and the possibility that AI can find vulnerabilities in software that has been in place for decades and never penetrated. The goal, in her reading, is likely to tighten protections and make industries safer. When asked whether White House vetting carries political implications for what platforms people can use, Wood downplayed it, comparing some of the concern to early OpenAI messaging around whether ChatGPT was too powerful to release: true in part, she said, but also “great marketing.”