AI’s Value Is Moving From SaaS Margins to Hardware Capacity
PwC technology, media and telecommunications leader Dallas Dolen argues that the AI boom is a real infrastructure and business-model shift, but one constrained by chips, construction labor, telecom capacity, copper, power and enterprise economics. In a PwC-sponsored interview, he says value is moving from SaaS toward hardware, software margins are compressing, and most companies are less limited by compute access than by token costs, security rules and measurable return on investment. Dolen’s view of enterprise AI is practical and bounded: agents are working in defined back-office, sales and legal tasks, while broader automation will depend on cost, governance and human oversight.

AI demand is real, but the buildout is bounded by chips, labor, copper, and power
Dallas Dolen treats the AI infrastructure boom as both a genuine capital shift and an investment environment where some participants will lose money. Asked whether the current buildout of “AI factories” and data centers is wise, Dolen put it in the lineage of previous infrastructure cycles: fiber, the original internet, telco networks. The scale he cited is extreme: one forecast of roughly $7 trillion spent on data centers over a 10-year period, plus nearly another trillion dollars that still has to be spent on telecom.
The size of those numbers does not mean every announced project becomes a productive asset. Dolen drew a line between commitments to spend and the ability to deploy capital usefully. Actual execution is limited by chips, workers to build the facilities, telecom infrastructure, and even basic inputs such as copper. Telco is not a side issue in his account; it is “a major part of the buildout.”
His conclusion was not that the buildout is irrational. It was that the bottlenecks are concrete enough to make the investment cycle uneven. “Someone’s absolutely going to lose out from an investor point of view,” he said, because not every bet will hit. The result is a market where demand for AI capacity can be sincere, and the capital commitment enormous, while project completion and returns remain constrained by supply chains and construction capacity.
The interview, a PwC-sponsored conversation, was presented from the floor of Google Cloud Next.
Value has shifted back toward hardware, and SaaS economics are compressing
Dolen’s core economic claim is that AI has changed where value accrues in the technology stack. The last major enterprise-software era favored software margins: SaaS businesses could sell high-margin, repeatable products, often protected by moats around enterprise workflows. AI, in Dolen’s telling, has moved the value proposition “from software back to hardware,” especially chips.
That does not make hardware permanently superior. Dolen called chip and hardware businesses cyclical, as they have always been. But they are on the high side of the cycle because AI demand has made hardware capacity the scarce, value-capturing layer. Alex Kantrowitz framed the reversal directly: hardware had been seen as a commodity while software differentiated, and now that relationship has "flipped." Dolen agreed: "Totally flipped."
The other side of the shift is pressure on software. AI is shrinking some moats around SaaS because companies can now do certain functions on their own: CRM, some components of ERP, customer acquisition, and marketing functionality. The point was not that every large software suite disappears immediately, but that AI changes the labor, speed, and economics of producing software-like capabilities.
AI is kind of the new SaaS, or said differently: SaaS is dead, long live SaaS.
This applies even to the largest enterprise software providers. Their own engineers are changing how they build, and they too can produce software more quickly with AI assistance. For Dolen, the result is a two-part margin shift: money rotates toward hardware, while software’s old profit pools face what he called a natural compression.
On model companies, Dolen was more conditional. He referenced a question Kantrowitz had put to a room earlier: how many people would pay more for the AI services they already receive? Dolen said roughly 30% to 40% of the room indicated they would pay up to five times as much. A $19.99 monthly service could become a $100 monthly service for those users. That is “great news” for frontier model builders such as OpenAI, Anthropic, Google, and Chinese AI firms, he said.
But the durability of that pricing depends on return on investment. Consumers who use AI heavily may justify the higher spend personally. Enterprises will ask a different question: whether the use case produces measurable value.
Most enterprises are not compute-constrained; they are cost-constrained
The practical compute bottleneck looks different depending on the buyer. Dolen said he is not aware of many enterprise environments where companies cannot run the AI systems they have built because they have hit a hard compute or token wall. What he has seen is different: teams spending too much money on tokens, forcing organizations to turn usage off or push people back to older workflows because the economics do not justify the activity.
There is still a compute constraint in the broader ecosystem, but Dolen separated several versions of it. First, hardware companies face trade and supply-chain bottlenecks. Tariffs, import costs, uncertainty, and the inability to get products into a country can limit revenue even when demand is strong. Hardware producers would like to import and sell more, but in many cases supply is insufficient. That creates a painful combination: higher costs compress margins, while limited supply prevents companies from fully meeting demand.
Second, AI-native companies may be in a different position from traditional enterprises. Dolen has heard conversations in which AI-native companies simply need more capacity and sit too low on a provider’s priority list. In those cases, executives may have to appeal directly to CEOs of providers for help.
For the broader enterprise market, the limit today is often governance and ROI rather than a lack of available model access. Dolen described internal guardrails at PwC around AI use, with security as the most important. The firm has about 300,000 people globally, and it uses gamification to encourage AI training and adoption. There are competitive leaderboards and “a little bit” of token maximization, he said, but within controlled environments.
That led to a joking distinction. Kantrowitz asked whether PwC was “pure tokenmaxxers.” Dolen answered first that there was “no tokenmaxxing going on,” then immediately qualified it: some is happening, but within PwC’s own boundaries and guardrails. The more serious point is that AI adoption at enterprise scale creates a management problem: encouraging experimentation while controlling spend, security, and use-case quality.
PwC is not betting on a single AI stack
Dolen’s view of the model and infrastructure ecosystem is deliberately plural. PwC works with hyperscalers, frontier model companies, and multiple tool providers because different use cases call for different systems. He did not describe a near-term world in which one architecture, one cloud, or one lab wins all enterprise demand.
The question came through the tension between Nvidia’s ecosystem and alternative accelerator ecosystems, including cloud-provider stacks. Dolen answered from the standpoint of a large professional-services and systems-integration firm. PwC has use cases for “all the hyperscalers” and “all the providers of frontier models.” Some tools are better in particular environments. Security-related work with Google is one major example he gave. Engineers also choose what works best for them; if they want to use Claude Code, he said, they will.
Client demand, as Dolen described it, is not concentrated in one place. Speaking from a North American perspective and from his client base in technology, media, and telecom — while saying he sees a similar pattern across industries — he said the use cases and demand are coming in at about “a third, a third, a third” as it relates to the major options under discussion. He did not turn that into a precise market-share map. Some companies may be stronger in particular industries. Claude, he said, has recently had a moment in financial services, where companies with thousands of engineers want access to it. At the same time, he said engineering capabilities from Cursor and Gemini are “right there too.”
The long-term selection process may favor models and platforms with deeper financial or structural underpinnings. Dolen pointed to the importance of infrastructure ownership: who has data centers, who owns pipes, who can deliver capacity effectively. In a world with structural limits — power, data centers, supply chains — model quality alone may not determine the winner.
He was uncertain about when the broader system hits those constraints; it could be 2027 or 2030, he said. But supply-chain limits are already likely to slow some data-center production, and once those limits bind, conditions could tighten faster than expected.
Automation changes the shape of organizations, but the cost equation still matters
Dolen does not claim to know the final shape of the AI-enabled workforce. He framed the current operating model as a pyramid whose base is likely to shrink. The open question is what replaces it: a spindle, a monolith, a narrower pyramid, or something like a boat sail. In each metaphor, the distribution of people across engineering, sales, front office, and back office changes.
The transition he expects is near-term: probably over the next two to five years. But he resisted a simple replacement story in which every automatable function disappears. One reason is human preference and quality control. Using agentic commerce as an example, he said agents may do some shopping, but he doubts people will want agents to do all of it. Enjoyment, quality, and human-in-the-loop judgment still matter.
The other reason is cost. If compute demand rises and model prices rise with it, some tasks that are technically automatable may remain cheaper to do with people. Dolen and Kantrowitz agreed that AI costs have been low, but Kantrowitz noted that prices for some of the latest models have begun going up. Dolen agreed. If an agentic system costs $12,000 a month to run, Dolen said, a company may be better off paying a person.
That is an important qualification to the automation story. The decision is not simply capability versus incapability. It is capability, reliability, security, change management, and unit economics.
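Dolen's cost qualification reduces to a simple break-even check. A minimal sketch of that arithmetic follows; the $12,000-per-month agent figure is from the interview, while the token price, task volume, and loaded labor cost are hypothetical numbers chosen for illustration:

```python
# Hedged sketch of the unit-economics check Dolen describes. All inputs
# except the $12,000/month example are hypothetical assumptions.

def monthly_agent_cost(tokens_per_task: int, tasks_per_month: int,
                       price_per_1k_tokens: float) -> float:
    """Rough monthly inference spend for one agentic workflow."""
    return tokens_per_task * tasks_per_month / 1_000 * price_per_1k_tokens

def cheaper_to_automate(agent_cost: float, loaded_human_cost: float) -> bool:
    """Capability aside, automation only wins if the unit economics do."""
    return agent_cost < loaded_human_cost

# A heavy workflow at assumed prices lands at $12,000/month...
agent = monthly_agent_cost(tokens_per_task=50_000, tasks_per_month=2_000,
                           price_per_1k_tokens=0.12)
print(agent)  # 12000.0
# ...which loses to a person at an assumed $8,000/month loaded cost.
print(cheaper_to_automate(agent, loaded_human_cost=8_000))  # False
```

The same check flips if model prices fall or task volume shrinks, which is why Dolen treats the automation decision as an ongoing economic one rather than a one-time capability question.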
The useful agent is a bounded delegate, not an omniscient worker
Dolen defined an AI agent by analogy to the entertainment business: an actor’s agent has authority to act on the actor’s behalf in a specific domain. Translated to technology, an AI agent is a system granted authority to perform defined tasks within a bounded environment, under rules, with skills, access to information, and security controls.
His definition is practical rather than mystical. An agent might operate on a desktop, inside a platform ecosystem, or across applications. The user grants it limited agency, not open-ended permission to do anything. The boundary matters because agentic systems can fail in odd ways. Dolen mentioned stories of email-reading agents deleting emails after reading them because the task seemed complete, as well as stories of agents behaving unpredictably. For him, those anecdotes reinforce the need for rules and security.
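Dolen's definition, authority to act only within a bounded domain, maps naturally onto an allowlist that fails closed. The sketch below is illustrative only, not any vendor's agent framework, and the action names are hypothetical:

```python
# Illustrative sketch of a "bounded delegate": the agent may only perform
# explicitly granted actions, and anything outside the grant is refused.
# Action names are hypothetical.

class BoundedAgent:
    """An agent with limited agency, not open-ended permission."""

    def __init__(self, granted_actions: set[str]):
        self.granted_actions = granted_actions

    def perform(self, action: str) -> str:
        if action not in self.granted_actions:
            # Fail closed: this is the guardrail against failures like the
            # email-reading agent that deleted mail once the task "seemed done."
            raise PermissionError(f"action not granted: {action}")
        return f"performed {action}"

reader = BoundedAgent({"read_email", "summarize_email"})
print(reader.perform("summarize_email"))  # performed summarize_email
try:
    reader.perform("delete_email")
except PermissionError as err:
    print(err)  # action not granted: delete_email
```

The design choice is that permissions are enumerated up front rather than inferred from the task, which is the practical meaning of "bounded environment" in Dolen's account.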
Dolen said he runs about eight agents daily for “random but important tasks.” His most detailed example is a morning headline workflow. One agent reviews headlines globally across regulatory, tax, markets, and the broad economy, includes citations, and sends him a list at 5:12 a.m. That agent lives on Grok because he wants it to scour X, where news providers post their stories. He restricts it to avoid treating random opinion posts as news.
He then runs the curated result through Gemini, asking whether the headline set is missing anything and using it to clean up links and formatting. Finally, he pushes it into Teams for distribution to roughly a thousand PwC partners. The final Teams step remains manual because he does not want an agent to have authority to write Teams messages to anyone. The orchestration sits inside the Microsoft environment and is set up on his iPad.
The workflow illustrates his broader point about enterprise AI ecosystems: different tools perform different functions. Grok is used for X-based discovery, Gemini for checking and cleanup, Microsoft for workflow and distribution. The value is not in a single model doing everything; it is in a controlled chain of useful delegation.
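The chain Dolen describes can be sketched as three stages with narrowing authority. Every function below is a hypothetical stand-in, not a real Grok, Gemini, or Teams API; the structure, bounded discovery, a second-model check, and a deliberately manual send, is the point:

```python
# Hedged sketch of the morning-headline pipeline. All provider calls are
# stubbed placeholders, not real SDK functions.

def discover_headlines(topics: list[str]) -> list[dict]:
    """Stage 1 (Grok, in Dolen's setup): scan news-provider posts on X for
    headlines with citations, excluding opinion posts. Stubbed here."""
    return [{"headline": f"Update on {t}", "citation": f"https://example.com/{t}"}
            for t in topics]

def review_and_clean(items: list[dict]) -> list[dict]:
    """Stage 2 (Gemini, in Dolen's setup): check the set for gaps and tidy
    links and formatting. Stubbed as a sort-and-deduplicate pass."""
    seen, cleaned = set(), []
    for item in sorted(items, key=lambda i: i["headline"]):
        if item["headline"] not in seen:
            seen.add(item["headline"])
            cleaned.append(item)
    return cleaned

def draft_teams_message(items: list[dict]) -> str:
    """Stage 3: format a digest for Teams. Deliberately returns a draft only;
    Dolen keeps the actual send manual, so no agent posts on his behalf."""
    lines = [f"- {i['headline']} ({i['citation']})" for i in items]
    return "Morning headlines:\n" + "\n".join(lines)

digest = draft_teams_message(
    review_and_clean(discover_headlines(["regulatory", "tax", "markets"])))
print(digest)
```

The narrowing grant of authority is deliberate: the discovery stage can read broadly, the review stage can only transform what it is given, and the distribution stage produces text a human still sends.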
Enterprise agents are working first in finance, sales support, and legal review
Dolen identified three enterprise areas where agentic AI is already proving useful.
The first is the back office, especially finance: source-to-pay, procure-to-pay, payroll, actuals-to-accruals checks, updates, and dashboarding. In the examples he has seen, agents can interoperate with ERP systems and custom tools and run 90% to 100% of specific functions that people historically handled.
The second is the front office, particularly B2B marketing and sales campaigns. Instead of large teams building customized campaigns and sales support materials, agents can assemble pitches using product functionality, company knowledge, historical pitch materials, and pricing inputs. Dolen’s example was a seller preparing to pitch an automotive company on software: the agent can understand the product, the target company, prior pitches, and pricing logic.
The third is legal work. Some of it is already familiar from research and drafting tools, but Dolen sees broader disruption in research, intake, summarization, contract review, and procurement-like discrepancy analysis. He emphasized redlining as a particularly clear use case: rather than manually scrolling through documents in Word or Google Docs, an agent can identify discrepancies, summarize the top issues, and present them in natural language. Because the output is natural language, he said, it tends to be “pretty darn accurate.”
The common pattern is repeatable knowledge work with defined inputs, organizational context, and a human reviewer who still needs to make judgments.
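The mechanical core of the redlining use case, finding discrepancies between two versions of a clause, can be sketched with Python's standard-library difflib. A real agent would layer natural-language summarization on top; the clause text here is invented for illustration:

```python
# Sketch of the discrepancy-identification step in contract redlining,
# using only stdlib difflib. The clauses are hypothetical examples.
import difflib

original = [
    "Payment is due within 30 days of invoice.",
    "Either party may terminate with 60 days notice.",
]
revised = [
    "Payment is due within 45 days of invoice.",
    "Either party may terminate with 60 days notice.",
]

def find_discrepancies(a: list[str], b: list[str]) -> list[str]:
    """Return only the changed lines between two document versions,
    marked '-' (removed) and '+' (added)."""
    return [line for line in difflib.unified_diff(a, b, lineterm="")
            if line.startswith(("-", "+"))
            and not line.startswith(("---", "+++"))]

for change in find_discrepancies(original, revised):
    print(change)
# -Payment is due within 30 days of invoice.
# +Payment is due within 45 days of invoice.
```

The natural-language layer Dolen praises sits on top of output like this: instead of a raw diff, the agent summarizes the top issues ("the payment window moved from 30 to 45 days") for human review.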
Workforce disruption may be real without being sudden
Dolen’s treatment of AI and work was cautious but not alarmist. He described the question as personal: he has three children and thinks about what kind of jobs they will do. He also placed the current disruption in a longer history of work changing through agriculture, industrialization, manufacturing, and technology. The kind of technology work many people do today did not exist 40 years ago, he noted; labor markets have already seen large spikes of human capital devoted to new activities and later redirected elsewhere.
He expects AI-driven disruption to follow a similar trend, though not in a way that feels smooth to people living through it. He does not expect a single dramatic moment when "all the jobs went away." Referring to payroll and unemployment data released that day, Dolen said the numbers were in line with the expected trend of job losses; he interpreted that as a sign that the overall economy was not entering a draconian labor shock.
At the same time, he did not dismiss the disruption. For every task or role “destroyed by disruption,” he expects retraining or movement toward new jobs. He cited a report he had seen saying construction companies are roughly 500,000 workers short of what is needed to build the required data centers and infrastructure. Those roles, he said, may pay more than some technology or tech-sales jobs.
The discussion of adversity sharpened the distinction between disruption and catastrophe. Kantrowitz referenced Sam Altman’s reported idea that humans have a “right to adversity,” and said he was unsure how to feel about it. Dolen took a different angle: challenge can be useful because it gives people the opportunity to learn, adapt, and exceed expectations. He did not argue for terrible hardship. He argued that the AI transition is an evolution and that difficult decision-making is part of the work.
Change management, in his view, remains one of the hardest parts. His example was his father, nearly 80, trying to solve a bathroom problem while fixing up a house. Dolen asked whether he had Googled it or checked YouTube. His father replied that he had first asked Enterprise GPT, then looked for examples and videos, and still ultimately needed a plumber because the problem required specialized knowledge. Dolen drew two lessons from the story: people are more adaptive than expected, and specialist human work can remain valuable even when AI is the first stop.
Shadow AI is risky, but the explorers may define the next operating model
The employees pushing AI beyond prescribed company channels may become unusually important, provided they stay inside data and security boundaries. Kantrowitz said he tends to celebrate people using tools outside formal company prescriptions — he gave a personal Claude subscription as an example — if they are not putting customer data into personal instances, because they can become disproportionately impactful inside organizations.
Dolen largely agreed with the importance of those explorers, while keeping the enterprise constraints in view. Security, token costs, regulation, and disruption all explain why organizations restrict AI use. But he said the people building useful things with AI “see the way,” while the rest of the organization may be holding them back.
One PwC partner, Khalil, built functionality in Claude over five weeks of nights and weekends, according to Dolen. Dolen framed the example as work done within the rules of the organization. Khalil wrote the input Python code himself, used Claude to help automate a process his team currently performs, and tried to "agentify" skill sets across the team. The goal was to create an environment where tasks normally assigned to the team could be loaded into the system, producing the output he would otherwise review from team members.
The result, Dolen said, was about 99% right, though Khalil would still review it. Dolen called that the kind of personality that changes professional services, software companies, and other industries. The point was not that everyone becomes a solo founder inside a large company. It was that one or two people willing to explore can show an old industry a new operating model.
Dolen acknowledged that this creates a version of AI-driven shadow IT. But he argued it can happen inside organizational rules if companies become more comfortable with the nomenclature, practices, and encouragement structures. Not every AI push needs to be gamified. Some progress comes from people who lean into the tools, accept the challenge, and push the limits within boundaries.
I encourage people to push the limits, right, because that's actually where progress is made.



