AI Competition Shifts From Models to Chips, Power, and Supply Chains
Ed Ludlow
Caroline Hyde
Tyler Kendall
Shari Liss
Rebecca Torrence
Dylan Field
Brett Adcock
Madlin Mekelburg
Sarah Friar
Matt Day
Dana Wollman
Eric Sheridan
Bloomberg Technology
Saturday, May 16, 2026
13 min read

Bloomberg Technology framed the latest AI race less as a contest over individual products than as a fight over infrastructure constraints, from Nvidia chip export politics and U.S. semiconductor labor to cloud spending, energy, memory and data-center capacity. Ed Ludlow, Caroline Hyde and Bloomberg reporters treated Donald Trump’s discussion of Nvidia’s H200 chips with Xi Jinping as emblematic of that shift: significant for markets, but short of any clear export deal. The program’s interviews with Goldman Sachs’ Eric Sheridan, OpenAI CFO Sarah Friar and Figma CEO Dylan Field similarly argued that compute, distribution and ownership of the stack are becoming the decisive limits on AI growth.

AI infrastructure is now a constraint story
The strongest through-line was not a single company’s AI product. It was the pressure building around the physical, political, financial, and distribution systems that make AI possible: advanced chips, export permissions, cloud capital expenditure, energy, memory, semiconductor labor, data-center supply chains, and manufacturing capacity.
Eric Sheridan of Goldman Sachs described the market as still being in an “infrastructure-led cycle.” Capital expenditures at the hyperscalers continue to have an upward bias, he said, though the upside was more muted this quarter than last. Investors received that moderation positively because the revenue case is becoming more visible.
Sheridan said combined future revenue backlogs across Alphabet’s and Amazon’s cloud computing divisions now exceed $900 billion. In his view, that gives investors more confidence that revenue can follow one, two, or three years after the capex is spent. He also said cloud margins surprised to the upside because non-AI workloads and data structuring are both accelerating, supporting the long-term argument that larger cloud revenue bases can earn returns on the infrastructure being built now.
Custom silicon was central to Sheridan’s argument. Caroline Hyde raised Alphabet’s TPUs and Amazon’s use of both in-house chips and external hardware, including Cerebras. Sheridan called custom silicon at Alphabet and Amazon one of the market’s most underappreciated narratives. It pulls workloads into their cloud ecosystems, he said, and allows the hyperscalers to keep more margin because they design the chips themselves. TPUs may not match GPUs on absolute performance, in his view, but on price-to-performance they are competitive.
The same logic shaped his view of AI partnerships. Asked about the tangled relationships between Amazon, Alphabet, Anthropic, cloud distribution, and model competition, Sheridan avoided dissecting any one company relationship. His broader point was that AI is becoming interdependent: foundational model companies need compute, hyperscalers can provide it, and hyperscalers can also help model companies reach enterprise customers. Only a handful of companies, he said, tend to earn excess returns in major computing shifts at the infrastructure and platform layers, and because this cycle requires so much capital, few can build at the required scale.
That framing also made Cerebras’s IPO less a standalone listing story than another sign of infrastructure demand. Rebecca Torrence said three of Cerebras’s four largest outside backers came in during 2016, investing in the company’s earliest round on the order of $25 million. Benchmark, she said, now holds around an 8% stake, and the investment was its first hardware deal in more than a decade at the time. Bloomberg reported that Benchmark, Eclipse, and Foundation Capital are poised to make billions from the IPO.
Torrence said Bloomberg learned that Arm and its majority backer SoftBank attempted to acquire Cerebras in the weeks before the listing, but those offers were rebuffed. For her, that underscored how intense competition has become in AI chips and infrastructure across both private and public markets.
The Trump-Xi chip discussion produced ambiguity, not a deal
The Trump-Xi meeting added a geopolitical constraint to the infrastructure story. Ed Ludlow framed the selloff in chip stocks as investors digesting the “net outcome” of the Beijing meeting for Nvidia and the broader chip complex. The Philadelphia Semiconductor Index was shown down 3.82% intraday, while Nvidia was down 4.08%. Ludlow noted that chip stocks had rallied roughly 70% year to date before the prior close, but the immediate focus was what the meeting did, and did not, resolve for advanced chips.
Donald Trump said aboard Air Force One that Nvidia’s H200 chips “came up” in his discussion with Xi Jinping. He praised Nvidia CEO Jensen Huang, said Nvidia has “a much higher level than the H200,” and added that “the H200 is good” and “China needs it.” The comments did not amount to an export agreement or a purchase commitment.
Bloomberg’s Tyler Kendall reported from Beijing that Chinese state media described Taiwan as the “number one issue” in U.S.-China relations. U.S. officials maintained that U.S. policy on Taiwan had not changed, while Trump separately told Fox News that he wanted to see tensions “cool down” between China and Taiwan. Kendall said Trump was noncommittal when pressed on future U.S. weapons sales, while Congress awaits approval of a $14 billion package.
A Bloomberg summit graphic listed the key takeaways:
| Summit issue | Takeaway |
|---|---|
| Diplomacy | Trump invites Xi to Washington Sept. 24 |
| Taiwan | Xi: “Extreme caution” with Taiwan issue |
| Iran | Trump: Xi agreed to withhold arms from Iran |
| Aviation | Trump: China to buy 200 Boeing jets |
| Business access | Xi to US CEOs: China will open up more |
Kendall’s larger point was that the summit was cordial and stability-focused, but thin on “tangibles and deliverables.” Boeing officials remained in Beijing meeting with Chinese officials, which he described as a positive sign for that potential deal. But on Nvidia’s H200 chips, despite Huang’s last-minute addition to the trip, Kendall said Trump indicated China wants to develop its own chips, which is why purchases had not gone through. Trump also said the two sides discussed the future of artificial intelligence and possible collaboration on guardrails.
U.S. chip capacity still depends on workforce capacity
The geopolitical discussion quickly became a manufacturing question: if U.S.-China tensions rise and policymakers focus more heavily on bringing semiconductor design and manufacturing capacity into the United States, can the country build and operate more of the supply chain domestically?
Shari Liss, vice president of global workforce development initiatives at SEMI, said the answer is yes, but workforce remains a bottleneck. CHIPS-related investments are driving “incredible growth” in U.S. semiconductor manufacturing, she said, and the country will need roughly another 150,000 people in the semiconductor workforce.
Liss emphasized that the shortage is not limited to one job category. The industry needs technicians and operators on fab floors, electrical, mechanical, and chemical engineers, researchers, PhDs, marketing talent, and finance talent. Her diagnosis was that the U.S. has an “image and awareness” problem with students. People use phones, iPads, computers, cars, and appliances every day without understanding how deeply chips underlie those products.
The practical question is whether the U.S. has the talent base to make fabs “hum,” not merely build them on paper. Liss said U.S. programs had atrophied because the country had not been manufacturing at the same scale, but new programs are now launching around the country. She pointed to a $200 million workforce investment through the National Science Foundation, in concert with the Department of Commerce, created as part of the CHIPS Act to build regional workforce nodes and link them into a national infrastructure.
The U.S. industrial strategy still depends on a tension between domestic training and global expertise. Asked how much the U.S. had relied on foreign talent, including TSMC talent brought to Arizona, Liss said it has been a mix. Semiconductors are a global, intricate industry, and the sector has relied on talent from abroad. But she said the aim now is to build a U.S. workforce of U.S. citizens able to take U.S. jobs and fill the roles being created.
Ludlow added that one topic apparently not prominent in the Trump-Xi meeting was visas, even though Chinese talent has long been central to universities, Silicon Valley, and AI. Hyde noted the point was particularly relevant in AI.
OpenAI’s limits are compute, distribution, and courtroom credibility
OpenAI’s immediate constraints were presented in three forms: compute demand, platform distribution, and legal risk.
Sarah Friar, OpenAI’s CFO, said compute has clearly been the company’s bottleneck, with demand exceeding even the expectations of leaders who anticipated the need early. Friar said even Sam Altman and Greg Brockman, whom she described as prescient about compute needs, could not have foreseen the scale of demand in 2026. She named energy and memory out of Southeast Asia as supply-chain choke points, and argued that OpenAI needs to get ahead of those constraints by striking partnerships earlier, before others see the full scale.
Caroline Hyde characterized Friar as clear that OpenAI still needs money for compute and has “plenty of optionality.” Hyde also said Friar spoke about her relationship with Altman as close but not frictionless: Friar rejected the idea that a CEO and CFO should always agree on everything, while describing a strong working relationship.
Friar defended that relationship during a week when Altman and Brockman were under scrutiny in Elon Musk’s lawsuit. She said Musk was “very much out to just distract,” while OpenAI is focused on building technology, putting it into customers’ hands, and “creating AGI for the benefit of humanity.” She described her partnership with Altman as a “yin and yang”: Altman pushes and is curious; her job is to create optionality, secure compute and funding, and build the business.
Distribution is another pressure point. Bloomberg reported that OpenAI’s relationship with Apple is fraying and that OpenAI is weighing possible legal action against the iPhone maker. The reported complaint is that OpenAI has not received the expected benefits from the partnership, while Apple’s use of OpenAI technology remains limited and difficult for users to find.
Dana Wollman said OpenAI initially saw the Apple partnership as a way to put its product in front of Apple’s huge user base. For less technically sophisticated users, Apple integration might have been their first exposure to OpenAI technology. OpenAI had hoped the partnership could result in billions of dollars annually in new subscribers.
Wollman said Apple is also opening its platforms to other developers, which likely does not help the dynamic with OpenAI. More broadly, she said Apple is benefiting from the AI investment other developers have already made, integrating a menu of increasingly advanced tools while continuing to build its own delayed AI products.
The legal risk is separate but significant. Madlin Mekelburg said closing arguments in Musk’s trial against OpenAI left jurors with a case about credibility. Musk’s attorneys tried to portray Altman as deceptive and untrustworthy. OpenAI’s lawyers argued that Musk’s story does not match the documents and witness testimony. Mekelburg said the advisory jury verdict will be non-binding, with the judge ultimately deciding whether Musk has proved his claims.
The remedies Musk is seeking are sweeping: $134 billion in damages that he says he would donate to the OpenAI Foundation, removal of Altman and Brockman from their jobs, and a transition of the OpenAI company back to a nonprofit. Mekelburg described the stakes as “huge and existential essentially for OpenAI.”
Figma argues AI makes design more valuable, not less
Figma’s quarter challenged the view that AI coding tools will displace the design stack. Dylan Field said Figma’s revenue accelerated 46% year over year, net dollar retention for customers above $10,000 in annual recurring revenue reached 139%, non-GAAP operating margin was 16%, and free cash flow margin was 27%. The company also raised guidance.
| Metric | Field’s figure |
|---|---|
| Revenue growth | 46% year over year |
| Net dollar retention, customers over $10K ARR | 139% |
| Non-GAAP operating margin | 16% |
| Free cash flow margin | 27% |
Field’s broader claim was that AI makes design more important because it commoditizes the layer below it.
“As AI commoditizes code and makes it so that code is easier than ever to write, you know, the layer above code, as that gets commoditized, is design.”
That argument shaped Figma’s AI monetization. In March, Figma began charging customers after they exceeded certain AI usage limits. Field said Figma gave paid seats a number of free credits so customers could try AI features, then allowed them to buy additional credits. He acknowledged that AI inference costs “real money” and that keeping everything free indefinitely would not be the right business move.
Field described Figma Weave as a node-based workflow tool for connecting model outputs such as images, videos, and 3D models. The goal, he said, is to let users “mold those model outputs like clay.” He cited architecture firm NBBJ, which previously conducted extensive site shoots to understand lighting at different times and superimpose 3D building models. With Figma Weave, he said, the firm can control parameters inside a workflow, save time, and get better client results.
Asked about investor fears that AI will disrupt software providers, Field said Figma can only control its inputs: delivering for customers and making long-term decisions. He pushed back against a narrow definition of design as aesthetics. Design, he said, is UX, form, function, and the thinking process that produces the result. In a world of LLMs, he argued, design is becoming broader inside companies and more central to breaking through a competitive information landscape.
On margins, Field said inference and token costs could create short-term pressure if Figma sees an opportunity to “go really big” and pursue growth. His view was that accepting some margin pressure can be the right decision if the long-term market is large enough. He framed Figma’s opportunity as extending beyond traditional design into sculpting, advertising, marketing, and bringing more people into the design process.
Figure’s robotics claim depends on autonomy, data, and manufacturing
Figure’s humanoid robot demonstration was meant to answer a harder question than whether a robot can perform a task once. The company said its F03 robots completed more than 24 hours of continuous autonomous package sorting on a livestream across YouTube and X. Three robots worked in shifts scanning, flipping, and sorting packages at roughly human speed, powered by Figure’s in-house AI software running onboard.
Brett Adcock was pressed on whether there had been teleoperation, after viewers noticed gestures toward the robots’ heads. Adcock denied it directly.
“There’s absolutely no teleoperation involved in this.”
Adcock said the robots were operating fully autonomously using an onboard neural network called Helix 2. He explained that when a robot turns left to grab packages, it moves its left hand upward and out of the way, creating the gesture viewers noticed. He said the robots had been running autonomously for close to 50 hours, with almost no belt downtime, and had pushed close to 60,000 packages.
The logistics of keeping the system running were central to the claim. Adcock said each robot has roughly four hours of battery life. When its battery runs low, it messages another robot to take over. The first robot leaves the conveyor system and charges wirelessly on a stand while the replacement continues working. If there is a hardware or software issue, a robot can walk off into maintenance and call another robot to replace it. The goal, he said, is 24/7 operation without failures in the use case.
On speed and reliability, Adcock said the robot is operating at roughly human speed — about three seconds per package — which is the logistics-line requirement. He said the goal is a 90% success rate on package flips for barcode scanning, and that Figure is within that requirement. Four years ago, he said, humanoid robots were falling and unreliable; now Figure is trying to show reliability publicly.
His larger argument was that humanoid robots require full-stack control. Asked about speculation that OpenAI could return to robotics, Adcock argued that building humanoid robots at scale requires vertical integration across hardware, manufacturing, testing, AI data collection, and neural-net training. Figure designs motors, battery systems, actuators, sensors, kinematics, and structures in-house, and manufactures the robots itself.
Adcock named two main bottlenecks: data for pre-training Helix, and manufacturing. He said Figure’s BotQ facility would manufacture between 60 and 70 humanoid robots that week, and that the company is at a run rate of several thousand units annually. He also said Figure has well over $1 billion of cash on the balance sheet, putting the financial side in a good position.
Amazon’s AI strategy runs through data centers and supply chains
Amazon’s AI push, as described by Bloomberg’s Matt Day, is increasingly a data-center and supply-chain story. Day said much of what Amazon is spending money on is not visible to a consumer who thinks of the company mainly through packages arriving at the doorstep. The spending reaches data centers, chips, hardware engineering, software, and large language models.
Day said Amazon’s sprawl is now so broad that the main organizational principle is Andy Jassy’s decision-making at the top: where money is shifted, which areas receive investment, and which businesses give way. A Bloomberg Big Take graphic quoted Jassy saying, “You can choose to howl at the wind, but AI is not going away.”
Day’s contrast between Jassy and Jeff Bezos was specific. It was not that Bezos lacked attention to detail, but that in his final years leading Amazon he had checked out of some parts of the business. Jassy, by contrast, has “not let go of anything” and is deeply involved in the details. In retail, Day said, Amazon has left some opportunities on the table. In AI, it wants to be everywhere and “sprinkle AI through all of their product lines.”
A company-report graphic put Amazon’s projected 2025 revenue at $718 billion across online stores, third-party seller services, AWS, advertising services, subscription services, physical stores, and other segments. The relevance was Amazon’s breadth: AI spending can be routed through a business with consumer, enterprise, logistics, advertising, and cloud surfaces.
| Amazon segment | Projected 2025 revenue shown |
|---|---|
| Online Stores | $269B |
| Third-Party Seller Services | $172B |
| AWS | $129B |
| Advertising Services | $69B |
| Subscription Services | $50B |
| Physical Stores | $23B |
| Other | $6B |
