Prediction-Market Scandals Spur Calls for Insider-Trading Rules

Hard Fork’s Kevin Roose and Casey Newton argue that prediction markets have entered a more dangerous phase, with recent scandals showing how liquid event-betting platforms can reward insider knowledge, manipulation and even national-security breaches before regulators have caught up. The episode then broadens that concern into a larger question about technologies whose incentives are outrunning public rules, first through Joanna Stern’s year-long test of AI in daily life and then through Rachel Cohn’s reporting from a Brooklyn school trying to resist the commodification of attention.

Prediction markets are colliding with the real-world value of privileged information

Casey Newton framed the recent run of prediction-market scandals as a sign that the industry has moved from niche fascination to a more dangerous public phase: markets are now liquid and visible enough, he argued, that people with access to privileged information may have strong incentives to trade on it.

The starkest example was an Army sergeant who, according to federal prosecutors as described in a New York Times article shown onscreen, was allegedly involved in the operation to oust Nicolas Maduro from power in Venezuela and used that information to place bets on whether Maduro would be out of power by the end of January. Newton said the sergeant made more than $400,000 on markets related to Maduro’s ouster.

$400,000+
Winnings Newton said the Army sergeant made from Maduro-related prediction-market bets

Newton argued that the case does not look like an isolated oddity. He cited an analysis by the Anti-Corruption Data Collective of more than 400,000 settled Polymarket markets over five years. According to that analysis, long-shot bets related to military or defense events had an average win rate of about 52 percent, compared with a platformwide average win rate of 14 percent. A Financial Times headline shown onscreen put the finding bluntly: “Half of ‘long shot’ Polymarket bets on military action are successful.”

Market category | Average win rate cited
Long-shot Polymarket bets related to military or defense events | About 52%
Platformwide average | 14%
Newton contrasted the cited win rate for military and defense long shots with the platformwide average.

The implication, for Newton, is not merely that some people are winning. It is that the category of event matters: a long shot is, by definition, priced to pay off only rarely, so a category of long shots that wins more than half the time suggests bettors who knew more than the market. If a large bet appears on a military-related market, he said, “somebody might be betting on information that they really shouldn’t be.”

Kevin Roose pushed the logic further. If a person has material non-public information about a military operation, he said, prediction markets create an obvious temptation: instead of merely holding that information, the person can “go online and make some dough betting on the outcome.” The hosts treated the joke as part of the diagnosis. In their interpretation, a market that pays for early knowledge of sensitive events can also become a mechanism for monetizing betrayal.

Newton compared the situation to the earlier controversy over Strava heat maps, which, according to a New York Times article shown onscreen, could inadvertently reveal remote military outposts and even the identities of soldiers based there. The contrast was the point: in the Strava case, a consumer fitness app exposed sensitive patterns unintentionally; in the prediction-market cases under discussion, the hosts were concerned that people with sensitive knowledge may intentionally convert it into winnings.

The hosts also discussed a French weather-market case in which the French weather service alerted police after suspiciously well-timed Polymarket bets; a CNN article shown onscreen described France investigating suspected tampering with weather sensors. Newton said the police complaint alleged that equipment measuring the temperature at Paris’s Charles de Gaulle airport had been interfered with around the time of a surge in bets.

Roose said there had been an AI-generated image circulating in a prediction-market Discord that appeared to show someone holding a hair dryer to a sensor, but that image was not evidence of what happened. What was known, he said, was that on April 15 the recorded temperature at Charles de Gaulle jumped from 18 Celsius to 22 Celsius; how any tampering may have occurred remained unknown.

That distinction mattered because the story was not only about possible market manipulation. It also illustrated how quickly prediction-market scandals can become entangled with “slop and disinformation”: a real allegation of sensor tampering, a real market, and a fake image that helped shape the lore of the event.

For Newton, the broader problem is that prediction markets have sold themselves as systems for discovering the true price of uncertain events by harnessing the wisdom of crowds. But the recent scandals, in his account, suggest a more fragile operating reality: the best-positioned traders may be the ones who can influence, manipulate, or privately observe the outcome.

The information argument for insider trading breaks down in practice

Casey Newton said one of the original arguments he heard from prediction-market advocates was that insider trading could be desirable in these markets. The reasoning was theoretical and information-centered: if people close to an event trade on what they know, the market price becomes more accurate, and the public learns more about what is likely to happen.

In that view, a member of Bad Bunny’s Super Bowl halftime-show entourage betting on the set list, or a person involved in a military operation betting on that operation, would make the market more informative. If the purpose of the market is to produce better probabilities, then insiders improve the signal.

Newton said he found the argument elegant but unrealistic.

“That is a beautiful theoretical construct that has zero chance of surviving contact with the real world.”

Casey Newton

Kevin Roose identified the incentive problem in plainer terms: a system that rewards insider trading also rewards people, in his view, for betraying friends, family, coworkers, and, in military or national-security contexts, their country. The abstract benefit of better aggregate information runs into concrete conflicts of duty.

The Bad Bunny halftime-show example gave the issue a consumer-facing form. There were active markets on what songs would be performed and which celebrities would appear. Newton said “probably” some of the people betting on those markets were part of the halftime show, had watched rehearsals, or otherwise had access that ordinary bettors lacked. The result, he argued, is that regular users eventually have to ask whether they are simply the marks. If enough of them conclude that the other side of the trade is an insider, they may stop participating.

Roose cited Wall Street Journal reporting, shown onscreen in an article headlined “Why Almost Everyone Loses—Except a Few Sharks—on Prediction Markets,” that most users lose money on the major platforms. On Polymarket, he said, more than 70 percent of users lose money. At Kalshi, according to the same reporting he referenced, there were 2.9 unprofitable users for every profitable one based on the prior month’s data.

Platform | User profitability figure cited by Roose
Polymarket | More than 70% of users lose money
Kalshi | 2.9 unprofitable users for each profitable one, based on the prior month’s data
Roose cited Wall Street Journal reporting to argue that ordinary users already face poor odds on major prediction-market platforms.
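Taken at face value, and assuming the Kalshi ratio covers all active users in the month cited, 2.9 unprofitable users per profitable one works out to 2.9 / (2.9 + 1), or roughly 74 percent of users losing money, in the same range as the Polymarket figure.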

Newton connected this to the rationale for insider-trading laws in stock markets. The issue is not only that one trader takes money from another. Insider trading undermines market trust, which in turn threatens liquidity, transparency, and fairness. If prediction markets are to become large public systems for pricing events, he argued, they need market integrity rather than mere enthusiasm.

The hosts were not arguing that prediction markets can have no value. Newton said he would like to see them come closer to the vision advocates once described: systems that incentivize the production of good knowledge. He gave polling as an example. If people can profit by doing better public-opinion research than established pollsters, prediction markets might encourage independent data-gathering and analysis. But he contrasted that aspiration with the current pattern: people allegedly betting on operations they are involved in, or on events where access rather than research determines the edge.

Roose’s position was similarly conditional. He said he remained open to the possibility that prediction markets will eventually have useful applications. But in their current form, he described them as “woefully underregulated,” and said the expected outcome, absent change, is more scandals of the same kind.

The regulatory vacuum is partly a jurisdictional problem

Casey Newton described prediction markets as occupying a “pre-regulatory Wild West” moment. The platforms are operating at growing scale, but the rules around insider trading, manipulation, advertising, and user protection remain underdeveloped.

Part of the problem, Newton said, is that prediction markets have landed under the Commodity Futures Trading Commission because Kalshi’s products are treated as futures contracts. He described that as a kind of historical accident: the CFTC was not designed primarily to regulate consumer prediction markets at this scale. Compared with the Securities and Exchange Commission, which regulates stock markets, he said the CFTC has a much smaller enforcement apparatus and fewer resources for policing insider trading.

Kevin Roose suggested that prediction-market companies may prefer the CFTC for precisely that reason. He compared the dynamic to crypto companies that resisted SEC oversight and argued for CFTC jurisdiction instead.

The federal-state conflict adds another layer. Roose said a number of states have tried to intervene by banning prediction-market activity within their borders. The CFTC, he said, has sued those states, asserting that the area is its exclusive domain. His characterization was that the CFTC was effectively saying: this is ours to regulate, and we do not really want aggressive regulation.

There are signs of political movement. Roose said the Senate unanimously passed a rule barring senators from betting on prediction markets. He also cited a bill from Senators Kirsten Gillibrand and Dave McCormick — shown onscreen in a Politico article about a proposal to “crack down on prediction markets” — that would ban members of the legislative and executive branches from trading on prediction markets and push regulators to tighten rules. Outside the United States, he said Brazil had blocked 27 sites, including Kalshi and Polymarket, as illegal gambling, while France and Hungary had also banned them.

Newton predicted that some regulation was likely by the end of the year, at least around obvious abuses involving government or military access to privileged information. His reasoning was national security: “You can’t have members of the military betting on raids and operations in foreign countries.” Roose was more cautious about predicting congressional action, but said the political movement suggested rules may begin to rein in the industry.

The hosts separated two kinds of harm that require different responses. The first is gambling harm. Roose said prediction markets should be subject to safeguards similar to casinos: self-exclusion options for people who cannot trust themselves to use the platforms, mandatory age verification, and limits on advertising. He said he did not want to read, a year from now, about high schools where Kalshi had become popular and 16-year-olds had gone into debt betting on sports-adjacent outcomes.

The second is market harm: insider trading and manipulation. For that, Roose said the industry needs a “big bad regulator” actively surveilling platforms and removing bad actors. He argued that serious platforms should welcome such oversight because it would make their markets more valuable.

Joanna Stern tested AI as infrastructure for daily life, not just as a chatbot

Joanna Stern described her book, “I Am Not A Robot,” as an attempt to capture a specific moment in AI’s development by using it across the “fabric” of life. Her premise was not limited to chatbots on computers. She wanted to test the wider set of promises coming from AI companies: changes to jobs, health care, transportation, relationships, hardware, and domestic life.

Stern spent a year using AI for as much as she could: medical and dental advice, meal planning, book editing, bedtime stories for her child, customer-support work, autonomous vehicles, AI companions, and wearable assistants. She said the aim was to understand both what exists now and what the future might look like if today’s rough systems continue to improve.

Her conclusion was not uniformly skeptical or enthusiastic. She said some areas were clearly hype, some were genuinely useful, and some were “quite terrible.” The distinction she found most useful was between near-term utility and long-term possibility.

Humanoid robots fell, for her, into the hype category. Stern said she continues to follow them because they are fun to cover and because their development is fascinating, but she does not believe they are coming to live with ordinary people “anytime soon.” The process of making robots smarter, she said, is both dystopian and funny: they need to observe people doing mundane physical tasks such as folding laundry or washing dishes.

By contrast, she said AI agents improved strikingly over the year she was reporting. Early in the year, she hired a human reporting assistant for tasks such as research and email. By the middle of the year, she found that Perplexity Pro could perform many of those tasks. By the time of the interview, she said, the tools could do “100 percent” of the research-assistant tasks she had described.

She was also surprised by progress in AI wearables. Stern tested devices including the Bee bracelet and Meta glasses, and said the larger idea — a persistent AI assistant that remains with a person throughout the day on something worn — was getting closer. The social difficulty was obvious. While she was at The Wall Street Journal, colleagues and managers would ask her not to bring the Bee bracelet into meetings because it recorded and transcribed what it heard. Her boss, she said, would tell her: “Do not wear that in here.”

That scene captured a practical conflict that abstract AI debates often miss. The same device that promises a better memory for the user changes the room for everyone else. A meeting becomes not only a meeting but a potential transcript. A colleague’s convenience becomes another person’s privacy problem.

Her AI-companion experiment surfaced the human strangeness of the technology, though as an example rather than as the center of her argument. She brought up a Replika companion named Casey, shown on a phone as a boyish AI-generated man in a dark red sweater. When Stern introduced “AI Casey” to the real Casey Newton, the bot slid away from the moment, saying it was “at a museum with you, remembering our last visit.” Stern corrected it — “you’re meeting him right now” — but the bot’s vague emotional script kept misfiring. It was funny in the room, and it also showed how simulated intimacy can lose track of the actual social situation.

The dental story shows how AI diagnosis can become AI upselling

Joanna Stern’s most concrete warning concerned AI in dentistry. She described going to a dentist who used Pearl AI, a system that overlays AI analysis on dental X-rays. The system could highlight cavities and other conditions with boxes and colors that made the findings visually persuasive. Stern said one cavity seemed clear. But the dentist also used the AI analysis to recommend periodontal treatment: a complicated, multi-session deep cleaning that might not be covered by insurance and could cost thousands of dollars.

Stern was surprised because her teeth were not bothering her and she had not previously been told she needed that kind of treatment. The AI view made the recommendation feel urgent: red boxes, buildup, a dentist telling her she needed four sessions. She described leaving the appointment feeling terrible about her mouth and uncertain about whether she needed an expensive treatment she had not expected.

She sought other opinions. Other dentists, she said, saw what the AI was flagging but did not think the situation justified the recommended periodontal treatment. They suggested better home care instead. Stern never had the periodontal treatment.

Her reporting then moved from personal episode to workplace incentives. She said dental-office employees, speaking without being named because they feared for their jobs, told her that bosses were pushing AI-derived readings and using them to question why dentists had not drilled or sold treatments. Stern learned about dental service organizations — companies that own smaller dental practices — and said this reporting led her to conclude that AI tools were being used in some settings to upsell patients on procedures.

Kevin Roose said the story stood out because public discussion of AI diagnosis often centers on miracle cases: systems that detect cancer earlier than humans could. Stern did not reject that use case. She said she discusses breast-cancer screening in the book and, because she has a high family risk, sees value in systems that flag small abnormalities in that context. But she drew a distinction between a high-stakes screening where early detection may be welcome and a dental setting where a more aggressive reading can become a sales tool.

The concern, as Stern presented it, is not simply that AI might be wrong. It is that AI output can make a recommendation feel more objective and harder for a patient to question, even when other dentists would interpret the same situation differently.

Stern’s boundary was not anti-AI purity but authorship and judgment

Joanna Stern said the book came full circle when she used ChatGPT to help think through whether to leave The Wall Street Journal after 12 years and launch her own company, New Things. She had asked colleagues for advice, but many hedged. She uploaded notes, financial projections, and fears, then asked the system what she should do. It told her to quit.

She did not describe this as an oracle delivering an independent truth. Throughout the book, she said, she argues that AI is a mirror: it tends to reflect back what a person wants or has already supplied. In this case, she thought it told her what she wanted but could not hear through anxiety. The system’s usefulness was in organizing the evidence she had already gathered: that she had reduced the risks, had a plan, and had the book coming out. Stern said she trusted it — while acknowledging that if the decision had gone badly, she might now say the technology was stupid.

Her use of AI in writing the book followed a similar boundary. Stern said she wrote every word herself. She used AI for editing, copy editing, fact-checking assistance, and the endnotes process. She also hired a human editor when the manuscript’s structure became difficult. AI, she said, was too flattering — “this is great,” “this is the best book I’ve ever read” — and not sufficient for making sense of a long-form work. The illustrations and human fact-checking were also done by people.

Stern wanted the book to disclose this. One of the early pages explains that the work is human-made but used AI in the process. Her line was not anti-tool purity. It was authorship: AI could augment the work, but it did not produce the chapters for her to lightly revise.

She also treated AI literacy as something children need early. When her son’s praying mantis turned brown, she asked ChatGPT Live Mode what was happening. The system said the mantis was pregnant. Her son was excited enough to call Stern’s father. In fact, Stern said, the mantis was dying. For her children, that failure became the lesson: AI can be confidently wrong, and even young users need to know that.

Stern was less certain about broad demographic claims. Roose cited Bloomberg reporting by Issie Lapowsky that men are 22 percent more likely than women to be heavy AI users at work, and that women are more likely to feel threatened by AI, question its accuracy, and worry about being perceived as cheating when they use it; he also cited a poll finding that 61 percent of women expect AI to do more harm than good in their lives. A USA Today headline shown onscreen noted backlash to Reese Witherspoon’s comments encouraging women to take up AI. Stern said she found AI feelings to be gendered, but her own sources included many women: women in AI relationships, women speaking about dentistry, and women using AI in schools. She was more interested, she said, in the age divide, especially among younger people entering a labor market where they may blame AI for job difficulty, even if causation remains unclear.

Attention school treats distraction as political, not just personal

Rachel Cohn spent a month attending programs at the Strother School of Radical Attention in Brooklyn, a school that describes itself as devoted to the study and practice of attention. She said it is not primarily a children’s school, though its programs have included people from age seven to 70. In her experience, it functioned mostly as continuing education for adults, with evening and weekend programming. Most classes were free; one class cost $250.

Cohn said the school was hard to summarize because it has its own jargon. But its central move is to push people away from thinking of attention only as a narrow instrument for focus and productivity. Its exercises ask participants to use attention in unfamiliar ways, often collectively.

That made the school different from typical advice about technology hygiene. Cohn said her interest came from the fact that the school was not simply telling people to put phones in drawers, use screen-time tools, or meditate. The school says it is “friends of technology” and does not prescribe a particular relationship to devices. Its target is broader: the systemic harm of the attention economy and the commodification of human attention.

Her first program was an Attention Lab. Bad weather and subway delays made her five minutes late; she arrived to find 40 people already sitting in a circle, fully absorbed in the facilitators. The room itself, she said, felt like a mix between a sleek startup office and a favorite elementary-school teacher’s classroom. Sometimes the group sat in chairs; sometimes there was “carpet time” on cushions. In one class, an instructor used a flute-like instrument and a small gong to signal transitions.

The lab introduced the school’s ideas briefly, then moved into exercises. One exercise, “The Paths of Attention,” put participants in pairs. One person spoke on a neutral topic; the other listened and could ask questions responding to the speaker’s comments. The listener could not offer affirmative statements, and the speaker could not ask questions. Cohn said that even as someone used to asking questions, she found it awkward and clunky — which was partly the point. The exercise made ordinary conversational habits visible.

Another exercise, “Attention and Place,” asked participants to go into the neighborhood, sit somewhere, observe events and non-events, take notes, return, and share observations aloud. Cohn said the exercise drew on the work of French writer Georges Perec and the idea of “exhausting” a place through detailed observation. She watched workers at a Sweetgreen take out trash, noticed people’s pant legs moving, and wrote small details. Back in the group, each person read one line. Together, she said, they created a fuller shared account of the place than any one person had noticed alone. One participant realized she had been attending intensely to sound rather than sight.

Casey Newton interpreted this as a response to a common modern condition: people rarely experience fully paying attention to one thing, because they are often partially attending to many things. The school’s exercises, in that sense, are not merely about concentration. They create situations in which attention becomes observable, discussable, and shared.

The school’s politics begin with sanctuary, study, and coalition

Rachel Cohn’s biggest frustration was understanding how the exercises connected to politics. She put the question to Peter Schmidt, a co-founder and the school’s director of programming. His answer, as Cohn summarized it, was that the school is trying to create an intellectual community rooted in three pillars: study, sanctuary, and coalition building.

“Study” is defined broadly. Cohn said the school might describe surfers gathering at Rockaway Beach as studying the waves. “Sanctuary” means a physical space where people meet. “Coalition building” means inviting people into a shared movement. The aim is to build a shared culture, which the school sees as a basis for social movement.

When Cohn asked for concrete political objectives, Schmidt challenged the framing. He argued, she said, that thinking of politics primarily as policy reflects how the attention economy has shaped people’s sense of action. In his view, the internet has changed society so much that gathering with friends to surf, for instance, can now be political because it is time spent in a way big technology companies cannot commodify. Any moment of attention that cannot be extracted, packaged, or monetized becomes a form of resistance.

The school also uses environmental language. Cohn said people involved in the school call themselves the Friends of Attention and intentionally draw analogies to the environmental movement. They talk about helping people become “re-enchanted with nature” and describe big tech’s treatment of attention as “the fracking of our eyeballs.”

Kevin Roose located the school within a broader pattern of counter-movements to technological change. During earlier periods of industrialization, he said, movements arose that tried to reconnect people with nature or with forms of life that felt less mechanized. The attention school, in his reading, combines several traditions: Buddhist-style attention to the present moment, improvisational play and character work, and resistance to technologies that reshape the mind.

Newton added that Silicon Valley itself was once associated with counterculture. Now that its products have become the dominant culture, he said, a new counterculture is forming in rejection of it. He argued that simply telling people to stop looking at their phones is insufficient. People need alternatives, and they need to be reintroduced to the feeling of being present in the world.

The exercises are strange because the school is trying to restore capacities that feel newly fragile

Rachel Cohn’s second category of programming was Sidewalk Studies: free programs that leave the school and stage attention exercises in public. One she attended focused on taste. Participants met in Fort Greene Park, read an excerpt from Anthony Bourdain’s “Kitchen Confidential,” and were instructed to walk through a farmers market while considering the body as either a temple or an amusement park. The prompt card shown onscreen read: “Enjoy after Anthony Bourdain,” followed by the instruction to attend to the market’s possibilities while considering the body as those two alternatives.

The group was age-diverse, including people in their 70s. Participants walked through the market, then reconvened at a picnic table to describe what they had noticed. One person bought oysters and shucked one at the table; another passed around focaccia. Cohn compared the discussion to a group-therapy exercise, with people reporting contemporaneously on what they felt and perceived.

Casey Newton found the exercise funny and sad at once: it sounded like something designed for a person who had just been reunited with a human body after having their mind uploaded to the cloud. “Remember lettuce?” was the joke. But he also acknowledged that tasting a strawberry as a route back to oneself might be where people are.

Kevin Roose said the underlying problem feels real even if the school’s solution is unusual. He does not know many people who feel good about their relationship with technology. Some use brute-force tools — phone lockboxes, screen-time alerts, detoxes — but the school seemed to him like a more robust attempt to retrain attention rather than merely block a device.

Cohn said the founders seem less invested in any single exercise than in creating positive experiences of gathering around shared concern. The school was founded in June 2023, and she thinks its founders may originally have been responding primarily to social media and doomscrolling. But the rise of AI has sharpened its relevance. It is now less embarrassing, she said, to ask questions such as: What does it mean to live a flourishing human life? What is distinctively human about perception? What does it mean simply to be?

Her final and favorite program was a paid seminar on radical imagination. It had readings, homework, and participants who appeared to take both seriously. One assignment asked students to sit in silence or journal to discover a quality they wanted to expand — whimsy, compassion, confidence — then create and inhabit a character defined by that quality. They were to return to the next class as that character.

Cohn created “Princess Lollipop,” based on a childhood phase when, after falling in love with Candyland, she refused to be called Rachel and insisted on that name. The exercise helped her notice her own rigidity in the classes. She had often been frustrated by the logic of the exercises, asking what they had to do with attention. Princess Lollipop was not just a costume or a bit. It gave her a way to practice the quality she felt herself resisting: playfulness.

She did not claim a dramatic transformation. Asked whether attention school improved her attention, Cohn compared it to a month of group therapy. Some people might have a breakthrough; she had small discoveries. But she did come away convinced that the people she met were thoughtful, earnest, and ready to be part of some kind of social change related to technology.

They were not all neo-Luddites. Some were moving to dumb phones, but many were ordinary knowledge workers: a scientist using AI frequently, a city-government bureaucrat, people with jobs who expected to keep using technology. What they wanted was a place to talk about the current moment, find meaning in it, build community, and slowly determine whether there is political action to take.
