
Thoma Bravo Keeps AI Strategy Model Agnostic as Cyber Risks Accelerate

Dani Burger · Seth Boro · Bloomberg Technology · Thursday, May 7, 2026 · 5 min read

Thoma Bravo managing partner Seth Boro told Bloomberg’s Dani Burger that enterprise AI is creating parallel problems for companies: faster cyber threats and uncertain deployment economics. Boro said the firm is “model agnostic,” maintaining relationships with OpenAI, Anthropic and Google while using its cybersecurity portfolio to monitor emerging threats. He argued that enterprises will need layered defenses, tighter governance of AI agents and more specific, efficient models rather than assuming general-purpose systems fit every workflow.


Cybersecurity is being forced into a new operating speed

Boro said Thoma Bravo’s cybersecurity portfolio has been preparing for AI-driven threats for several years, but the latest model releases have accelerated the risk environment. No single model is the whole issue. The cadence of increasingly capable models is changing the speed at which enterprise security has to operate.

Mythos is the current example, the model now at the center of the cyber-risk discussion. Dani Burger pressed the concern further: what happens if models can find zero-day vulnerabilities in minutes and surface weaknesses humans have missed for years?

“Mythos and every other model that's going to come next, and there's going to be a lot of them.”
— Seth Boro

Boro’s answer centered on layered defense and network effects. Thoma Bravo’s portfolio companies, he said, have deep expertise “across every facet” of cybersecurity and together produce about $8 billion in revenue. The scale matters in his account because threat visibility improves when security vendors are observing large volumes of malicious behavior across many customers.

Proofpoint was his concrete example. Boro said the company has a network of 14,000 customers and sees the malicious emails coming into those enterprises every day, along with how employees interact with them. That data, he said, gives Proofpoint and its customers a way to detect zero-day threats and respond quickly.

14,000
Proofpoint customers in the network Boro cited for malicious-email visibility

Burger challenged whether that could be fast enough against a model like Mythos. Boro did not claim perfect parity. He said Mythos had not yet hit the market, but argued enterprises should behave as if such capability exists and may already be in use.

Model-risk warnings are a prompt for agent governance

Dani Burger raised the possibility that some anxiety around powerful AI models may be marketing ahead of IPOs: a way to generate excitement and fear at the same time. Boro said he had heard that view, but chose what he called the optimistic interpretation. He framed the disclosures and public attention as a way to protect future users and give enterprises and consumers notice of what is coming.

Seth Boro did not make Anthropic, Mythos, or any single company the center of the planning problem. The specific model in the headlines is only a preview. If it is Anthropic today, he said, it will be someone else later.

The more durable issue, Boro said, is agent governance. Today, agentic deployment remains minimal, though it is beginning to pick up. As AI agents are deployed more widely, the security question becomes less about a standalone model and more about what agents are allowed to do inside organizations.

Boro listed the governance questions directly: What are the agents doing? What information do they have? Where is the data coming from? What actions do they take after obtaining information? Once agents are “operating in the world,” he argued, companies need monitoring systems that can identify malicious activity and act quickly.

He named SailPoint, Ping, Proofpoint, and Darktrace as portfolio companies monitoring that environment. His point was not that traditional cybersecurity categories disappear, but that enterprises will need systems capable of watching what happens as agents gain access to information and begin taking actions.

Thoma Bravo wants relationships across the major model providers

Seth Boro described Thoma Bravo’s partnership with Google as both an enterprise-deployment effort and a cyber-intelligence effort. Google approached Thoma Bravo, he said, to work with the firm and its portfolio companies on deploying Google’s “full stack technologies.” Thoma Bravo moved quickly to bring the portfolio into that relationship.

On the cybersecurity side, Boro said the companies would work with Google to identify threats early through Thoma Bravo’s portfolio. He did not provide technical detail, but framed the partnership as a way to deploy AI technology while also getting ahead of model-enabled threats.

Burger placed that deal alongside other announcements, including Anthropic working with Goldman Sachs and an earlier Bain-OpenAI joint venture. Boro declined to explain the motivations behind other firms’ arrangements, saying each likely had its own reason. Thoma Bravo’s position, he said, is intentionally model-agnostic.

“We are model agnostic, so we have great relationships with OpenAI, with Anthropic; we know those companies very well.”
— Seth Boro

Boro said Thoma Bravo has ongoing discussions with major model companies and that its portfolio companies are significant consumers of their products. The point of those relationships, as he described it, is to understand where AI deployment is going, work with experts on implementation, and, especially for cybersecurity, get ahead of the next models before they reach the market.

Enterprise AI economics remain unsettled

Dani Burger turned the cost question toward inference: large AI providers may be absorbing high costs today rather than fully passing them through, while some are also trying to offer cheaper options. The concern is whether AI products later become more expensive and pressure margins.

Seth Boro said many organizations still do not understand the cost of rolling out AI solutions. He cited research, without naming it, suggesting that for many higher-functioning roles, it is currently more expensive to run the work through model tokens and agents than to have a human perform it. He added two qualifications: this is not true for every task, and it will not necessarily remain true.

The uncertainty is practical as much as technical. Enterprises have to budget for AI costs, redesign processes, and integrate systems before they can fully absorb what daily inference will cost at scale. Boro argued that this process takes longer than many expect.

His answer pointed to a shift away from default use of general-purpose models. Within Thoma Bravo’s portfolio companies and among their customers, he said, companies are deploying models for specific use cases because they are much more efficient. He expects efficiency and power consumption to become central areas of innovation.

General-purpose models may be where the market is now, but Boro argued that enterprises will not need them for every function. The cost question will push organizations toward narrower models, more efficient deployments, and a clearer accounting of when AI is cheaper than human labor and when it is not.
