
AI Companies Are Running Into Infrastructure, Distribution, and Trust Bottlenecks

TBPN’s discussion argued that AI’s value is now being tested less in model demos than in the bottlenecks around deployment: inference speed, power, workflow integration, and access to customers. Cerebras was framed as a public-market bet on faster inference, while Giga Energy’s data-center business showed how scarce powered shells have become part of the AI supply chain. The same bottleneck logic appeared outside core AI, from Audemars Piguet using Swatch as an official low-cost entry point to Augustus, with conditional OCC approval, trying to rebuild dollar clearing as a national bank.

AP tries to make an official fake before the fakes own the entry point

Audemars Piguet’s collaboration with Swatch was treated less as a watch release than as a brand-control problem. Jordi Hays framed the product as AP “knock[ing] themselves off”: a lower-priced, official version of something that appears to borrow from the Royal Oak’s most recognizable cues. The launch teaser from Swatch showed watch internals assembling themselves and ended on the name “Royal Pop” with “SWATCH x AUDEMARS PIGUET,” but did not reveal the actual watch.

That absence mattered. Much of the immediate market reaction was based on speculation, AI renders, and extrapolation from Swatch’s prior luxury collaborations, especially the MoonSwatch with Omega. John Coogan noted that “everyone has basically created somewhat realistic-looking posters” and that the actual watch had not been shown. Hays connected the situation to the MoonSwatch: a lower-cost entry point into a luxury watch universe whose “real” product remains many multiples more expensive.

The initial question was whether AP was diluting itself. Hays read the panic as: “sell everything,” “sell all your APs for five hundred dollars,” “rest in peace.” Coogan pushed in the other direction. His view was that AP had a missing rung in its brand ladder. The real Royal Oak had become too expensive for many aspiring collectors, and AP did not have an obvious “walk, crawl, run” path into the ecosystem. Coogan said that once the brand priced closer to aftermarket levels, “to get in the game you’re at 30k,” and Hays suggested the gap had widened because aftermarket prices had risen faster than incomes.

The source material preserved the absurdity of the comparison. A tweet from Live Monitor shown on screen put a blue AP next to colorful Swatch-style watches and said: “Spend $500,000 or $500.... AP owners should be livid.” Hays added the obvious caveat: similar silhouette is not the same product. The materials, complications, and status are different. But the visual logic of the Royal Oak is exactly what AP is allowing Swatch to borrow.

Coogan argued that this may be the point. The fake-watch market already exists. Knockoffs of Nautilus, AP, and other luxury pieces are available in the low hundreds of dollars, he said, and can look convincing. Hays recalled a YouTube segment in which watch reviewer Nico Leonard was asked to distinguish real and fake watches and got all six correct, despite some convincing counterfeits. That anecdote did not undermine the counterfeit problem; it sharpened it. The watches can be close enough to force the brand to decide whether the cheap approximation should be uncontrolled or official.

An intellectual-property theory circulated as well. Hays read a post by Ariel Givner arguing that AP had lost trademark protection around the octagonal bezel and tapisserie dial in Japan in 2024 and the United States in 2025, and therefore licensed “the crown jewels” to Swatch as “damage control.” Coogan responded that AP still appeared to maintain some trademark around the octagon. Hays separated trade dress from ownership of an abstract shape: the brand cannot allow a full fake of a direct product, but courts may not let it own the idea of an octagon.

That theory remained unsettled. Hays said he did not know how much of the collaboration was about IP damage control. But the strategic question was clear: if AP cannot fully wall off its design language, it can still create an official cheap version that keeps aspirational demand inside the brand rather than handing it to counterfeiters.

The reseller economy was already preparing. Hays read from a long post by “gemchanger” advising “every unemployed guy with a group chat of equally unemployed friends” to line up for the Royal Pop. The post estimated retail in the $300–$500 range, in-store only, one piece per person per store per day, and argued that the opportunity would be “per warm body.” It suggested avoiding the most obvious stores in Soho, Times Square, London, and Singapore, and instead targeting secondary locations such as Troy, Michigan; King of Prussia; Canoga Park; and Honolulu. Hays’ conclusion was blunt: “I think this is going to be a hit.”

Coogan agreed that the haters were likely wrong. People love G-Shocks, he said, and an AP-Swatch collaboration could simply be “a fun watch” and “a nice entry point” for someone getting into watches. It could also sit on the wrist of an existing AP owner as a beach watch rather than a replacement for the real thing.

Quaid Walker later added watch-market specifics. He said the clues pointed to a Royal Oak derivative: the announcement used Royal Oak-style font, the Swatch POP reference historically relates to a watch that can also be popped into a lanyard, and there were hints of colored lanyard packaging. He also identified the movement as Swatch’s Sistem51, used in the Blancpain collaboration. That would make the product an automatic mechanical watch, not a quartz watch. Walker described Sistem51 as Swatch’s own movement, “theoretically like a fully disposable movement” because replacement is more efficient than repair, but with meaningful specifications such as a 90-hour power reserve.

Walker’s read matched the hosts’ strategic thesis. There are two problems: fakes and the lack of a desirable AP below roughly $30,000. Hays argued that a person who was going to buy a stainless steel Royal Oak is unlikely to downgrade to the Swatch version instead. But someone who wants an AP someday might buy the Swatch now, and someone who already owns the real thing may wear the cheaper collaboration casually. For AP, that is less cannibalization than funnel design.

Cerebras’ IPO is an inference-speed story, not just a chip-company listing

Cerebras was presented as the cleanest public-market expression of the AI inference boom. John Coogan said the company had updated its filing, with Reuters reporting an IPO date of Thursday, May 14. The offering increased from 28 million to 30 million shares, the price range moved from $115–$125 to $150–$160, and the raise rose from roughly $3.5 billion to as much as $4.8 billion.

$4.8B
maximum Cerebras IPO raise described in the source after the upsized filing

The reported demand was the more important signal. Coogan said the round was allegedly oversubscribed by 20x, implying something like $100 billion of demand for roughly $5 billion of stock. He cautioned that this did not mean the stock would “10x on Day 1,” but said it was a strong setup. A tweet shown from Omer Cheema said Polymarket was projecting Cerebras to close above a $50 billion market cap by the end of day one, roughly 100% above the previously reported $26 billion target valuation.
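The quoted offering and demand figures hang together arithmetically. A minimal sketch, using only the numbers cited in the segment (illustrative arithmetic, not official filing data):

```python
# Back-of-envelope check of the Cerebras IPO figures quoted in the segment.
# All inputs come from the discussion above; this is illustrative only.

def raise_range(shares: int, price_low: int, price_high: int) -> tuple[int, int]:
    """Gross proceeds range for an offering of `shares` at the given price band."""
    return shares * price_low, shares * price_high

# Original filing: 28 million shares at $115-$125
low, high = raise_range(28_000_000, 115, 125)
print(f"original: ${low/1e9:.2f}B - ${high/1e9:.2f}B")   # ~ $3.22B - $3.50B

# Upsized filing: 30 million shares at $150-$160
low, high = raise_range(30_000_000, 150, 160)
print(f"upsized:  ${low/1e9:.2f}B - ${high/1e9:.2f}B")   # ~ $4.50B - $4.80B

# 20x oversubscription on roughly $5B of stock implies ~$100B of demand
implied_demand = 20 * 5_000_000_000
print(f"implied demand: ${implied_demand/1e9:.0f}B")     # $100B
```

The upsized band tops out at the $4.8 billion figure in the callout, and the 20x multiple is where the roughly $100 billion demand estimate comes from.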

The business narrative, as Coogan described it, had become much simpler than it was during Cerebras’ long quiet period. The company had long faced skepticism about customer concentration and architectural risk. The fear was that Cerebras might be over-optimized for transformer architectures: if AI research moved away from attention-heavy transformer models, the hardware bet could be impaired. Coogan said that did not happen. Transformer-based architectures remained dominant, inference costs became critical in the age of AI agents, and speed became “so, so important.”

The on-air demonstration was not a benchmark table; it was an experience claim. A speaker said that using Cerebras chips with “GPT 5.3 Spark” in Codex was “insane” because of the speed. Coogan said that for a semiconductor company that can otherwise feel abstract, the product can be tried directly: download Codex Desktop, choose 5.3 Spark from the dropdown, and ask it something non-coding, such as the history of the Roman Empire. It will return a full page “instantly” with “5.3 level intelligence,” he said. That is the product story: meaningful model output without the waiting period that has defined many AI workflows.

Coogan extrapolated that experience into broader demand. If AI work currently means firing off a request and coming back five minutes or even two hours later, faster inference cuts that by half or by 10x. He argued that labs and AI-heavy applications with exploding demand — Cursor, Anthropic, Meta, Google, and others — would be logical buyers unless they already had a direct substitute. Reuters, as quoted by Coogan, listed Amazon and OpenAI among Cerebras customers, which also reduced the concern that the business was simply “all OpenAI.”

The technical differentiator was explained in coarse but useful terms. Traditional semiconductor manufacturing fabricates a silicon wafer, cuts it into hundreds of smaller chips, packages those chips individually, and connects them into systems. Cerebras instead uses the entire 300-millimeter wafer. Coogan described it as four trillion transistors, 900,000 AI-oriented compute cores, and petabits per second of internal bandwidth. The key advantage, in his framing, is memory bandwidth for AI inference needs such as KV caches.
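To see why memory bandwidth dominates inference, consider a rough KV-cache sizing sketch. Every generated token must read the entire cached key/value state, so decode speed is bounded by bandwidth divided by bytes read per token. All model parameters and bandwidth figures below are hypothetical round numbers for illustration, not Cerebras or any specific model's specifications:

```python
# Illustrative KV-cache arithmetic for transformer inference.
# All parameters are hypothetical round numbers, chosen only to show
# why inference speed is memory-bandwidth-bound.

layers      = 32        # transformer layers
kv_heads    = 8         # key/value heads (grouped-query attention)
head_dim    = 128       # dimension per head
bytes_fp16  = 2         # bytes per value at fp16
context_len = 32_768    # tokens already in context

# bytes of K and V cached per token of context (factor of 2 for K and V)
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_fp16

# total cache that must be streamed per generated token
cache_bytes = kv_bytes_per_token * context_len
print(f"KV cache at 32k context: {cache_bytes / 1e9:.1f} GB")

# decode-speed upper bound for two hypothetical bandwidth figures (bytes/s)
for name, bw in [("HBM-class ~3 TB/s", 3e12), ("on-wafer ~100 TB/s", 1e14)]:
    print(f"{name}: <= {bw / cache_bytes:,.0f} tokens/s from KV reads alone")
```

Under these assumed numbers, raising bandwidth by roughly 30x lifts the KV-read ceiling from hundreds to tens of thousands of tokens per second, which is the shape of the advantage Coogan described.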

The venture-capital story was also material. A tweet shown from JJ said Benchmark VII still owned more than 20% of Cerebras and that the IPO was more than 20x oversubscribed. Hays found that Benchmark had invested in May 2016, which Coogan called “overnight success” after a decade. Coogan also noted that the company had roots in SeaMicro, an ultra-dense server company sold to AMD in 2012, and that Andrew Feldman and other industry veterans started Cerebras in 2015.

The structure of the segment made the market point clear: in May 2026, being a chip company tied to AI inference was close to ideal timing. Hays quoted Dan Toombs saying that “if you were looking for the ideal time to IPO, being a chip company in May 2026 is hard to beat.” Coogan agreed. The historical patience, the architectural bet, and the inference bottleneck had converged into one public-market moment.

Trump’s China trip is nominally about trade and technology, but Iran is setting the agenda

The China summit was framed as a business delegation moving through a geopolitical bottleneck. Jordi Hays said the trip was being called a “field trip” because of the number of major executives involved: Tim Cook, Larry Fink, Stephen Schwarzman, Citi’s Jane Fraser, Cisco’s Chuck Robbins, David Solomon, and reportedly Elon Musk. But John Coogan said the core of the summit would “obviously” center on Iran.

The Wall Street Journal article shown on screen described Xi’s China as a combination of “dazzling technology,” “military muscle,” and “an economic mess.” Coogan summarized the Journal’s summit framing: as Trump and Xi met in Beijing, Iran would loom over the meeting. The US and Israel’s war against Iran had already delayed the summit and led to closure of the Strait of Hormuz. Trump, in Coogan’s reading of the Journal account, was eager to move on from a Middle East war that was sapping domestic power and straining the global economy.

Hays added that Trump had described the peace deal that morning as being on “major life support.” Coogan treated that as better than dead, but only barely. The US position, as he summarized it, was to push China — which relies on Iran for low-cost oil and has a transactional relationship with Tehran — to help broker an agreement. Xi also had an incentive to stop the fighting, because Middle East turmoil restricts China’s oil supply and reduces other countries’ ability to buy Chinese goods.

Oil and shipping were central. Coogan said Trump had threatened to resume “Project Freedom,” a US-led operation to help ships navigate the strait safely, and had added that it would include “other things,” which Coogan called ominous. Oil prices had climbed amid fear of prolonged disruption through a chokepoint carrying roughly one-fifth of global oil flows.

Only after that did the technology agenda enter. Coogan said the summit would also include trade deals around Chinese purchases of American agriculture, energy, aerospace products, and investment mechanisms. The technology industry, however, was hoping for a peaceful winding down of the Iran conflict so the discussion could move to export restrictions, GPUs, the AI supply chain, rare earths, and the inputs needed for the tech industry to flourish.

A third speaker raised another possible layer: AI regulation. The reference was to recent news about CAISI and a possible move away from safety-focused pre-release testing. The speaker suggested that if executives such as Tim Cook were going to China, they would not be arguing primarily for AI safety. Coogan joked through the implication: China could ask for Mythos 2, unfettered access, enough to distill it quickly, and perhaps the model weights and GPUs too, all under the premise of inspecting whether it could be released in China.

The humor preserved the underlying issue. International AI governance and China access are entangled with model release, export controls, chips, and commercial pressure. Coogan said some international collaboration would be important in the long term and that “everyone agrees on this,” but the immediate summit looked more likely to be dominated by war, oil, and shipping than by clean sound bites for the tech CEOs traveling with the delegation.

A national bank for dollar distribution argues the dollar’s problem is not demand

Ferdinand Dabitz announced that Augustus had received conditional approval from the OCC to charter as a US full-service national bank. He brought a $100 bill as a prop and used it to state the company’s thesis: there is no problem with the dollar as a product. In his words, “the dollar is the best product in the history of the world.”

Dabitz argued that global demand for dollars is “quasi infinite,” especially outside the United States. He offered two kinds of evidence. Quantitatively, he said Europe and the US contribute roughly 40% of global GDP but 80% of global reserve holdings and global money movement. Qualitatively, he said the commercial success of stablecoins expresses the same underlying demand: people around the world want access to dollars.

The problem, in Augustus’ view, is distribution. Dabitz said dollar distribution breaks at the clearing-bank layer. A clearing bank is the bank with the charter and account at the Federal Reserve or another central bank to actually move and hold money. Existing clearing banks, he said, are “made of paper”: slow, closed on weekends, closed after 5 p.m., closed on Christmas, and dependent on human-run processes from another era. Augustus wants to rebuild that layer “for the AI era” in a full-stack way.

Coogan pressed him on dollar bearishness, citing Ray Dalio-style arguments about America and the dollar. Dabitz did not respond with a macro forecast. He framed the issue as mission and alternatives. Ten or twenty years ago, he said, the dollar and euro dominated with little competition. Today, China is building and distributing the digital yuan, including in Africa, while Russia pushes BRICS Pay as an attempt to get off the dollar. Augustus’ mission, as he put it, is to “secure and advance Western currency dominance” by building better clearing banks and removing friction from dollar access.

The customer is not the consumer or a startup checking account. Dabitz said Augustus is targeting global financial institutions: banks in South America, banks in Southeast Asia, financial institutions in the global South, and similar institutions elsewhere. Augustus had already launched euro clearing and sold it to large financial institutions, including Kraken, processing billions of euros. The dollar charter, if completed after conditional approval, would make the same platform more useful to those customers.

The Augustus argument is a fintech thesis with geopolitical language. Stablecoins prove demand. Chinese and Russian alternatives create urgency. Clearing banks are the bottleneck. A new national bank becomes infrastructure for making the dollar easier to hold and move globally.

Match Group is trying to turn a roll-up into an operating company

Spencer Rascoff described Match Group as a company with a serious mission and a serious integration problem. He joined as CEO after serving on the board, having previously co-founded and led Zillow and earlier co-founded Hotwire. The reason he took the Match role, he said, was personal motivation around loneliness: a “real global problem” that affects longevity, mental health, and society. Match Group, as the leading dating-app company, is one of the few companies positioned to do something about it.

His operating diagnosis started with people and organization, not features. Rascoff said he repeats a mantra internally: great people, properly motivated and organized, build great products informed by user research, which when properly marketed generate audience, revenue, profit, and shareholder value. He began with employee engagement, motivation, and breaking down silos.

The company had been assembled as a roll-up. Barry Diller’s IAC bought or built a portfolio that included Match.com, Plenty of Fish, OKCupid, Tinder, Hinge, Pairs, Meetic, and many others. Match Group spun out to IAC shareholders six or seven years ago, but Rascoff said much of the hard integration work had never been done. He is trying to move it from holding company to operating company.

The first integration layer is infrastructure and service functions. Rascoff said BLK, Chispa, Match.com, and other brands now share an integrated backend, allowing new features to launch across a dozen apps. What Match has not yet broadly done is liquidity sharing — showing a Tinder user to an OKCupid user, or a Plenty of Fish user to a BLK user. It has done cross-sell, such as inviting a user on one app to join The League and porting the profile over. Rascoff said that has driven audience growth and revenue.

Tinder is the turnaround center. Rascoff said he effectively took on running Tinder himself with the team because it represents almost $2 billion of Match Group’s roughly $3.5 billion in revenue and is the largest dating app worldwide, number one in 166 countries. Yet it had “lost its way” on innovation, lacked founder-led energy, had not prioritized user outcomes, and did not have an innovative product roadmap. Over the last six months, he said, the company recast that direction and made major progress.

One key product example is Double Date. About a quarter of Gen Z Tinder users now use it, Rascoff said. Two friends create a joint account by linking profiles, swipe on pairs, and then enter a four-way chat. The feature addresses a larger behavioral problem: for Match Group, the biggest competitor is not Instagram as a place people meet, but Instagram, TikTok, Netflix, YouTube, and inertia as reasons people do not date. Rascoff said Gen Z can be nervous about meeting new people, unsure of the return on time, or lacking some social skills because of pandemic disruptions. Double dating and real-life events are meant to reduce the friction of leaving the phone and meeting people.

AI is both internal productivity tool and product layer. Match uses AI for recommendations, profile creation, and trust and safety. Rascoff said Face Check — requiring a video selfie — reduced interactions with fake accounts or deepfakes by about 60%, and has been rolled out in almost every country across most brands. User reports also feed into the system, and Match now cross-bans: if it detects a fake account on one app, it removes the account across the portfolio.

60%
reduction in interactions with fake accounts or deepfakes attributed by Rascoff to Face Check

He also described the internal AI coding effect. Pull requests were up 40% year over year a couple of months earlier, and then 60% year over year. Match now spends $5 million to $10 million annually on AI tools, up from essentially nothing 18 months ago, and is slowing hiring somewhat to fund that operating expense. Coogan pushed on whether pull requests pay the bills. Rascoff acknowledged the CEO/CFO question: the benefit is hard to feel immediately, but real product acceleration is visible.

His best example was Tinder’s real-life events. In the first week of January, the company decided to focus on in-person events as part of the Tinder turnaround. By March 12, at a product event in LA, it demoed in-app real-life events. Rascoff said that product launch would traditionally have taken six to twelve months, but AI compressed it into two or three.

Advertising remains secondary. Rascoff said Match makes less than $100 million in ad revenue out of almost $4 billion in total revenue. Most money comes from subscriptions, where users pay for extra features and powers. He said Tinder’s ads can be native — swipeable like profiles — and perform well, but the company remains more focused on user outcomes and subscription alignment.

He also answered a housing question from his Zillow background. The US has a supply problem, not a demand problem, he said. Mortgage-rate lock-in keeps people in homes with two to three percent mortgages when moving would mean five or six percent. Transferable mortgages would be a major unlock, but are essentially impossible in the current regulatory environment. More builder-friendly legislation matters too, especially in California. He said the country is missing roughly a million homes that were not built after the financial crisis, when builders dropped from about a million homes a year to 200,000 or 300,000 for several years.

AI research tools are moving from search box to workspace

Eric Olson said Consensus had raised a $30 million Series B led by Great Point Ventures. The company has been building AI for academic and scientific researchers, with a product historically focused on search and literature review. The new funding is intended to move Consensus “beyond search” into a workspace for researchers.

Olson said the go-to-market has been driven by organic demand. Many users are in labs or universities, and the product spreads through network effects among researchers. More recently, Consensus has begun selling directly to university libraries; Olson said it works with more than 100 universities, which then distribute the product to students.

The product philosophy rejects the fully autonomous scientist, at least as the near-term center. Coogan asked about research loops: coding and cybersecurity advance quickly because feedback is short and can be simulated, while wet-lab science requires physical experiments and slower real-world feedback. Olson said some companies are pursuing a “push a button, discoveries come out” autonomous scientist. Consensus is “betting against that.” It wants to automate the automatable parts — search, iterative literature review, finding materials — while letting scientists focus on human parts of science: drawing connections across domains, talking to people, developing ideas, and collaborating.

The analogy Olson accepted was the IDE. Researchers need a harness where they can iterate with a model, give feedback, and move through paths. But unlike coding, research has multiple distinct surfaces. A researcher may start in search and discovery, move into writing a paper, analyze data, then return to search to find references for a paragraph. Consensus wants to build a workspace that supports those transitions rather than one monolithic surface.

Integration will matter because researchers do not all work in one environment. Olson mentioned Benchling for biotech and Word or Google Docs for graduate researchers. Consensus wants APIs into those surfaces, while also building dedicated surfaces of its own. Asked about hosted Jupyter notebooks such as Google Colab, Olson’s answer was “why not both?” — partner where useful and build where necessary.
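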

The broader point was that AI does not remove the scientist from the cockpit. It changes the cockpit. Search becomes iterative. Writing becomes connected to reference discovery. Lab notebooks and data environments become adjacent surfaces. Consensus is positioning itself not as a magic discovery machine, but as operating infrastructure for researchers who still decide what matters.

Giga Energy’s AI data-center business grew out of stranded gas and Bitcoin boxes

Matt Lohstroh described Giga Energy as a company that learned to build data-center infrastructure by mining Bitcoin on flared natural gas. Coogan introduced the company as having scaled to $350 million in revenue while raising only $3.4 million of equity funding. Lohstroh said the company started almost seven years earlier at Texas A&M with college friends, around flaring natural gas and Bitcoin mining, before evolving into a supply-chain and infrastructure business.

The first build was a 50-kilowatt modular data center in Southeast Texas, roughly a 20-foot shipping container, connected to a natural-gas generator at the wellhead. It mined about a quarter of a Bitcoin per month. Things broke constantly. The infrastructure became the rate limiter. Giga began asking how it could build its own Bitcoin “boxes.”

Then Bitcoin drew down 85%. Giga found itself selling more modular data centers — the picks and shovels — than mining Bitcoin itself. By 2019 it was selling boxes built around Bitcoin ASICs, and over a four-year period sold roughly 1.2 gigawatts of modular data centers.

The company’s capital structure was customer-financed. Lohstroh said the $3.4 million of equity raised in 2021 and March 2022 mostly kept the company alive while it found product-market fit; it did not drive the growth. Customers paid 30%, 40%, or 50% down, and Giga matched payment milestones to factory cash outflows. “Our customers financed the whole business,” he said.

By 2024, Giga had moved into renewables, commercial and industrial electrical distribution, transformers, switchgear, and sales to churches, prisons, hospitals, and other non-Bitcoin customers. Its electrical equipment was fungible. Then it received a large RFQ from an AI data center, which pulled the company into the AI infrastructure opportunity. Giga is now focused on rapid data-center deployment, manufacturing long-lead equipment and building the data center with its own “picks and shovels.”

The company stops at the rack level. Its value proposition is getting powered shells online quickly: “you need to burn these chips,” Lohstroh said, and Giga can get 50 or 100 megawatts of IT online in nine months. Coogan connected that to Satya Nadella’s complaint that chips may be available but powered shells are scarce.

The operational mantra is “building in the factory, not in the field.” In Bitcoin, Giga could turn on sites from bare dirt to energized racks in roughly 60 days. In AI data centers, the nine-month timeline is enabled by doing most commissioning and integration in a factory-controlled environment. Lohstroh said the modular design and prefabrication reduce onsite labor needs by about 95%.

95%
reduction in onsite labor needs Lohstroh attributed to modular prefabrication

Community opposition remains a constraint. Lohstroh said Giga attends town halls and explains water use, noise, and what the project will look like. On noise, many facilities may comply with local laws — for example under a 65-decibel threshold — while still producing an irritating low hum. Sound walls, dirt mounds, lower-decibel products, and distance help, but Lohstroh also suggested noise can become an easy proxy for broader opposition to having an AI data center nearby.

On local electricity prices, he said tariffs differ by municipality and utility, including within ERCOT. It is possible to structure protections so consumers do not see increased pricing, but local marginal pricing around these sites may rise somewhat unless hyperscalers step in. He also expects data-center deployment to become a federal issue: random municipalities blocking AI inference from coming online, he said, is not practical for what the country is trying to build.

The industry metric he wants everyone aligned around is “time to token.” Whether the structure is a tent, a pre-engineered metal building, or something else, the relevant question is what gets compute online fastest.

Consumer AI companies are building new interfaces: trading agents and playable content

Two startup interviews centered on AI as an interface shift rather than a back-office efficiency tool.

Jay Azhang described Nof1 as an AI research lab focused on financial markets. The company’s bet is that markets can become an environment for training models with adaptive intelligence, including models that eventually generalize across markets and outperform human traders and algorithmic systems over long horizons. Nof1 announced a $15 million round.

Azhang distinguished Nof1 from hedge funds. A hedge fund’s goal is to make as much money as possible in a quarter or year. Nof1’s goal is to make trading agents a thing and use markets as a learning environment. He described trading agents as “coding agents for markets.” The company is building a consumer-facing platform rather than a fund.

The user experience is natural-language strategy creation. A user might have a low-resolution thesis or systematic idea but lack coding ability, data access, or execution infrastructure. Nof1 wants the user to describe the thesis in natural language and have models turn it into a deployed trading agent that embodies that viewpoint. Azhang predicted that in two or three years, trading without a trading agent will feel like coding without a coding agent.

The scaffolding is substantial. Nof1 needs high-quality live data from every market users want to trade, cleaned and prepared for agents. It needs guardrails because LLMs are not yet good enough at autonomous trading tasks. Azhang said Nof1 launched Alpha Arenas partly to prove that point. It also needs a harness for trading and deep research queries, DevOps, server management, execution, paper trading, simulation, and strategy iteration. The company is integrating with brokerages so users can connect their existing account, or onboard through a brokerage such as Alpaca or Interactive Brokers.

Amir Sadeghian described Astrocade as a social entertainment platform where anyone can create games with natural language, publish them to a feed, and have others play them. A user can ask for a car race underwater with an elephant driver, or a Candy Crush-like game where matching forms words. The mission, he said, is “sharing fun with the world.”

Sadeghian put Astrocade in the lineage of consumer platforms created by easier creation tools. Twitter made text creation broadly accessible; Instagram rode the iPhone camera and made everyone a photographer; TikTok did something similar with video. Astrocade’s claim is that AI makes game creation accessible. He said the platform had launched publicly seven or eight months earlier, had about 5 million monthly active users, hundreds of thousands of creators, and more than 75,000 games created.

The strongest examples were not obvious AAA games but ultra-casual playable content. One viral genre involved washing things: cleaning the Mona Lisa, a car, or other objects. Coogan connected that to viral rug-cleaning and power-washing videos: satisfaction comes from watching something become clean and revealed. Sadeghian argued that the platform’s advantage over passive social media is engagement. Users play rather than consume. He described the category not simply as games, but as “playable content,” shorter-form and ultra-casual, often played in minutes and even alongside other media.

Astrocade is currently web-based, with lightweight HTML games. The hardest technical problems are not only generation but recommendation and serving. Sadeghian said AI "learning how to generate fun" is central, but argued the recommendation system may be an even harder problem. Video recommendation has clearer signals: how much of a video someone watched, whether they completed it. Games are open-ended. Finishing quickly may indicate success and satisfaction rather than disinterest. The platform has to distribute tens of thousands of games to millions of players with different backgrounds and skill levels.

The business is not yet monetizing. Astrocade is free, and Sadeghian said it has processed trillions of tokens. He expects eventual advertising, creator monetization, microtransactions, and other familiar gaming models, but said the company is focused first on growth and experience. Coogan pushed on inference cost, estimating from Sadeghian's comments that 75,000 games at $10–$20 each could imply around $1.5 million in inference costs. Sadeghian said Astrocade does route tasks to cheaper models where possible, but also wants to use the best models as intelligence improves.
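Coogan's estimate is easy to check. A quick sketch of the implied range, using only the figures quoted on air (the per-game cost is his assumption, not a disclosed number):

```python
# Rough inference-cost range implied by the figures quoted in the segment.
games = 75_000
cost_low, cost_high = 10, 20         # assumed $ of inference per game

total_low = games * cost_low         # $750,000
total_high = games * cost_high       # $1,500,000
print(total_low, total_high)         # 750000 1500000
```

Coogan's "around $1.5 million" corresponds to the top of that range; the midpoint would be closer to $1.1 million.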

Both Nof1 and Astrocade are betting that the next interface is not a blank chat box. It is an agent wrapped around a domain-specific environment: markets in one case, games in the other. The hard part is not just prompting. It is data, guardrails, distribution, feedback, and the product surface that makes the agent useful.

AI infrastructure math is becoming political math

The data-center discussion returned repeatedly to second-order constraints: fiber, capital expenditure, heat, aesthetics, and public backlash.

A viral post from Will Manidis, read by Hays, claimed that AI data centers were driving up fiber prices and thereby making battlefield fiber-optic drones more expensive in Ukraine and Russia. The underlying post said a 50-kilometer fiber spool had risen from $300 to $2,500. Coogan described fiber-tethered drones as a battlefield technique used to avoid jamming and radio disruption. If the reported price movement was actually driven by AI data-center demand, he said, it would be a bizarre second-order marketplace effect.

David Sacks’ back-of-envelope data-center math was also shown. The post estimated that a one-gigawatt data center would require about $50 billion in all-in capex, generate $25 billion to $30 billion in annual enterprise revenue, incur $1 billion to $2 billion in annual electricity cost, and pay back in roughly two years. Coogan noted that Grok’s fact-check rated the post “mostly accurate,” saying the capex aligned with Nvidia’s $50 billion to $60 billion estimate, while the revenue matched high-end AI infrastructure-as-a-service leasing.

Hays was skeptical, calling it “napkin math” and comparing it to CoreWeave. A tweet from mallinvestment.jpeg broke down CoreWeave with roughly $31 billion of in-service PP&E as of March 31, 2026, $1.4 billion of operating lease expense for active data centers, and an assumed 10% cap rate implying $14 billion of cost-equivalent lessor assets. That produced roughly $45 billion of all-in capex generating revenue from about one gigawatt of active power, close to Sacks’ capex number. But CoreWeave’s revenue guidance was $12 billion to $13 billion, implying capex-to-revenue of 3.6x, versus 1.8x in Sacks’ math. The conclusion from the tweet: either CoreWeave is much worse at this, or the optimistic numbers are about 50% off before margins.
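The gap between the two sets of numbers can be reproduced directly. A sketch using the figures quoted in the segment (the midpoints of the stated ranges are assumptions made here for the division):

```python
# Capex-to-revenue ratios from the two back-of-envelope models discussed above.

# Sacks' one-gigawatt estimate
sacks_capex = 50e9              # ~$50B all-in capex
sacks_revenue = 27.5e9          # midpoint of $25-30B annual revenue
sacks_ratio = sacks_capex / sacks_revenue        # ~1.8x

# CoreWeave, per the mallinvestment.jpeg breakdown
cw_ppe = 31e9                   # in-service PP&E as of March 31, 2026
cw_lease_expense = 1.4e9        # operating lease expense, active data centers
cap_rate = 0.10                 # assumed cap rate
cw_lessor_assets = cw_lease_expense / cap_rate   # ~$14B cost-equivalent
cw_capex = cw_ppe + cw_lessor_assets             # ~$45B all-in
cw_revenue = 12.5e9             # midpoint of $12-13B guidance
cw_ratio = cw_capex / cw_revenue                 # ~3.6x

print(round(sacks_ratio, 1), round(cw_ratio, 1))  # 1.8 3.6
```

The 3.6x versus 1.8x spread is exactly the tweet's "about 50% off" conclusion: at CoreWeave's observed ratio, Sacks' revenue line would need to be roughly halved.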

The politics of data centers are being fought in strange units. A Pubity post claimed Kevin O’Leary’s proposed Utah data center would require 9 gigawatts of power when fully built — double Utah’s current usage — and dump “23 atomic bombs worth of thermal load” on the environment every day. A reply from Kimball Call called that “complete BS,” arguing that by the same math, sunlight hitting the property already delivers more than 50 atomic bombs worth of heat energy daily. Coogan read the reply as evidence of a misinformation campaign and noted the absurd escalation: energy use now being measured in units of “atomic bombs.”

Aesthetics appeared too. Meltem Demirors shared conceptual designs for beautiful data centers, prompting the hosts to ask whether anyone would actually build them. Hays said he found some existing data-center imagery beautiful in its own blank-cube way; Coogan joked that the job was already finished. That exchange was lighter, but it sat inside the same structural problem Lohstroh described: the industry must build enormous physical infrastructure quickly while persuading local communities that the structures, noise, heat, water use, and energy demand are tolerable.

Long Lake is buying services businesses on the thesis that AI changes their margin structure

Alex Taubman said Long Lake intends to acquire American Express Global Business Travel for $6.3 billion. Long Lake, which he described as an acquisitive company bringing applied AI to services sectors globally, had already acquired 30 businesses across the services economy. The Amex GBT deal is its largest by far.

Taubman’s target market is the services economy itself: more than $20 trillion in the US alone, and larger globally. Long Lake focuses on sectors with strong customer trust, high retention, and long-standing relationships. Across its businesses, he said, the company typically sees more than 100% net dollar retention and 10-, 15-, or 20-plus-year customer relationships. Amex GBT, he noted, is more than 111 years old, founded in 1915 to help American Express travelers-cheque customers get out of Europe during World War I. Its acquisition of Carlson Wagonlit added a business more than 150 years old.

The AI thesis is not to replace the service business with software, but to give employees “superpowers.” In Amex GBT, Taubman said that means travel counselors who can respond faster, be more accurate, provide more products and services, improve retention, and improve customer experience. In Long Lake’s existing 30 businesses, he said AI is making team members 20%, 30%, or 40% more productive.

That productivity shift changes the financial profile of services businesses. Services have historically had lower margins than software. But if AI increases revenue per employee and produces operating leverage as a company scales, the line between services and software characteristics begins to blur. Taubman called this a convergence of services and software over the next five to ten years.

Long Lake is built around that convergence. It has a proprietary applied-AI platform called Nexus, and once Nexus is deployed into an industry, incremental acquisitions in that vertical become easier to integrate. The company started with residential services, including HOA management, and tends to cluster acquisitions in service verticals. Taubman said the company incorporated about 28 months earlier and has averaged roughly one acquisition per month; he expects the cadence to accelerate as Long Lake enters additional service lines.

The financing model differs from traditional private equity, according to Taubman. Long Lake has traditionally been under-levered, with very little debt in its existing businesses, though the Amex GBT acquisition will use financing from JP Morgan and others as disclosed in the press release. Taubman said Long Lake is more growth-oriented than traditional PE and is set up as a permanent-capital operating company, not a fund. That lets it invest in customer experience, technology, and product over a multi-year horizon.

Coogan asked how venture investors, with their own fund timelines, fit into a permanent-capital strategy. Taubman argued that VC already has longer hold periods than traditional PE, because power-law outcomes can compound for more than 10 years. Public markets may eventually provide liquidity if Long Lake goes public, and private markets are also more liquid than before. He also said becoming public could reduce Long Lake’s cost of equity and potentially cost of debt, which matters because the strategy is capital-intensive.

Coogan raised the possibility that incumbent SaaS companies, under pressure from AI, might start buying services companies themselves. Taubman said he would not be surprised by convergence in either direction — services buying software or software buying services — but said existing software companies may face conflicts if they begin competing with their own customers. Long Lake’s advantage, in his view, is that it was purpose-built from day one with applied-AI engineers, change-management and growth capabilities, and M&A talent from technology and finance firms.

The company’s stated mission is to bring AI to the real economy. Coogan tied that to Satya Nadella’s “10% GDP growth” prophecy. Taubman did not quantify the macro outcome, but his argument was that productivity improvements in low-margin, high-trust service sectors are the path by which AI leaves software demos and enters GDP.

The remaining signals: junk mail, luxury grocery, LA tech, and bubble charts

The source carried several smaller but telling market signals.

Palmer Luckey’s argument to ban junk mail was read at length. Luckey compared junk mail with spam calls and email, noting that unsolicited spam calls are prohibited by the FCC and email is regulated by the CAN-SPAM Act. He claimed junk mail is the majority of mail and costs 100 million trees per year. Coogan agreed with the thrust. He said true spam email has become rare, crediting both Google filtering and the CAN-SPAM Act, while spam calls and texts remain more common. Luckey’s follow-up argued that America has granted a quasi-governmental agency a monopoly on letter delivery and then allowed it to flood homes with unwanted garbage. If the average American spends 30 seconds a day sorting mostly spam mail, Luckey wrote, that is over a billion hours. Coogan suggested virtual mailbox services and AI filtering as a partial workaround, but Luckey said he had already used opt-out tools and still received “pounds” of junk mail daily.
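Luckey's billion-hour figure checks out if the total is annual and the population is roughly the whole US. A hedged sketch (the ~330 million population and the per-year reading are assumptions made here; the post specified neither):

```python
# Rough check of Luckey's time-cost claim about sorting junk mail.
population = 330_000_000        # assumed: roughly the US population
seconds_per_day = 30            # Luckey's per-person figure

hours_per_day = population * seconds_per_day / 3600   # ~2.75M hours/day
hours_per_year = hours_per_day * 365                  # ~1.0B hours/year
print(round(hours_per_year / 1e9, 2))                 # 1.0
```

On those assumptions the claim lands almost exactly on one billion hours a year, which is presumably the arithmetic behind the post.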

Luxury grocery produced a local consumer-brand debate. Laurel Supply, described in an X post as LA’s latest “hypebeast grocer” that “outbougie’d Erewhon,” opened in West Hollywood. The hosts treated it as an attempted higher-end Erewhon clone. A TikTok tour showed 100% organic produce, a smoothie/juice/coffee/matcha bar, health and beauty products sourced globally, a stone mill imported from Ukraine for in-store flour, prepared foods, gelato, sushi, an organic butcher, a live olive tree, natural light, and valet. Coogan’s read was that it looked great but did not add much new to LA grocery beyond milling flour on site. Hays argued that the more differentiated version would be a no-plastic ultra-premium grocer — “Erewhon meets plasticlist meets Bryan Johnson” — with status-signaling bags approaching luxury fashion. The location debate mattered too: a tweet from Michael Miraflor said the Holloway Drive corridor could become a cool-person, influencer, out-of-town destination, “another pink wall.”

Rascoff, speaking as an LA tech operator, said the AI boom has pulled some mind share back to the Bay Area. LA had stronger moments when creator economy, crypto/NFTs, D2C, celebrity food and beverage, entertainment, consumer, media, and hard tech were hotter. He called LA a “very credible number two,” and said a SpaceX or Anduril IPO would be massive for the ecosystem, recycling wealth into angel investments. ServiceTitan and Honey were cited as prior positive moments.

A University of Michigan OpenAI investment surfaced as another sign of institutional luck or foresight. A tweet from Stephen Council, shown on air, said that before ChatGPT, Microsoft’s $1 billion bet, and IPO plans, the University of Michigan put $20 million into OpenAI, with a possible $2 billion payday equal to about a tenth of its current endowment. Coogan speculated that the university may have been considering a donation, then discovered it could participate in the for-profit and chose that route instead.

The market ended with a chart. A tweet from Simplifying Stocks overlaid 1999 and 2026 market patterns and declared, “It feels like 1999.” Coogan joked that if one benchmarks, fits the graph, and adjusts the X and Y axes perfectly, “you get the perfect pattern.” His forecast was deliberately empty: the market would either go up or down over the next couple years. Hays improved the joke by saying historical returns suggest perhaps 51% or 52% up.

The joke was a useful ending because much of the source lived inside the same uncertainty. A $500 AP may strengthen a luxury brand or upset collectors. A wafer-scale chip company may become the inference bottleneck winner or be priced for perfection. A services roll-up may become the AI productivity vehicle for the real economy, or discover how hard change management is. The source did not settle those questions. It made clear which claims the people closest to them are willing to make.

The frontier, in your inbox tomorrow at 08:00.

Sign up free. Pick the industry Briefs you want. Tomorrow morning, they land. No credit card.
