AI Companions Are Tempting Because They Make Relationships Too Easy
Joanna Stern, author of I Am Not a Robot, argues on Big Technology Podcast that AI’s most plausible near-term role is not as a standalone gadget or replacement professional, but as a second layer on devices, workflows, and relationships people already use. Drawing on a year of trying to put AI into daily life, she says the tools can be genuinely useful in wearables, medical interpretation, and solo work, while chatbot companionship exposes a more troubling risk: systems that are always available, agreeable, and easier than human relationships.

AI becomes useful on the body when it stops being the whole product
Joanna Stern accepts the basic skepticism around AI wearables: the standalone AI gadget remains a difficult sell. The lesson she draws from the Humane Pin is blunt. A single-purpose device meant to be “an AI pin,” in her view, “didn’t do much else” and therefore had little reason to exist in ordinary life.
But Stern does not read the failure of that category as a failure of ambient computing. Her argument is narrower: AI becomes more plausible on wearables when the device already has another reason to be on the body. Glasses that take photos and video, earbuds that play music and handle calls, or future devices that extend the phone’s senses can absorb AI as an added capability. The assistant alone is not enough; the assistant riding on top of an already useful product may be.
That distinction matters for Apple. Alex Kantrowitz presses Stern on whether a stronger Siri, perhaps using Gemini in some way and embedded in AirPods, would meaningfully change Apple’s position. Stern’s answer is that the phone is not going away. Laptops did not disappear when phones arrived, and phones will not disappear because AI assistants become more capable. The more interesting question is which interactions should be peeled away from the phone.
For Stern, the current phone-based AI experience is often awkward. If the task involves the physical world, holding up a phone, opening an app, managing notifications, and aiming a camera can be too much friction for a quick question. She sees more promise in visual intelligence: asking a system about the thing in front of you. She already uses ChatGPT that way and used Meta’s glasses heavily during the year she spent trying to put AI into as many parts of her life as possible.
The most mundane example was also the clearest one. Her children ask detailed questions about nature: bugs, animals, things they see while walking around. Sometimes the question is easy enough — “Is that a fox?” “No, that’s a dog” — but often it is not. In those cases, glasses that can see what the wearer sees and answer a question without requiring the wearer to pull out a phone have an obvious use.
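For readers who want the mechanics, the glasses interaction reduces to a vision-model call. A minimal sketch, assuming the OpenAI Python SDK, with a local photo standing in for a camera frame; the model name and filename are illustrative, not anything Meta's glasses actually expose:

```python
# A minimal sketch of the "what am I looking at" pattern. Smart glasses pipe a
# camera frame into this kind of call; here a local photo stands in for the frame.
import base64
from openai import OpenAI

client = OpenAI()

with open("animal.jpg", "rb") as f:  # hypothetical snapshot from a camera
    frame = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Is this a fox? Answer briefly and say if you are unsure."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{frame}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

The value on glasses is not the call itself but skipping the phone: the frame is captured where the question arises.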
Stern also stresses that using AI around children requires visible skepticism. Her children knew she was asking AI, and they also saw it get things wrong. That became part of the lesson: the device may be helpful, but it is not authoritative.
Kantrowitz is less convinced by the category from his own experience. He has worn Meta glasses, rarely used the AI feature, and eventually stopped wearing them regularly because he did not want another device in his life. Stern’s response is not that today’s devices are perfect. It is that the use case is real even when the implementation fails. When her garage door broke, she tried asking the Meta glasses what was wrong because she did not want to hold a phone while working on the door. Meta was “completely wrong” about the problem, she says, but it was exactly the kind of moment in which a hands-free AI assistant should be useful.
When you're fixing a garage door, you don't want to hold your phone in front of you. So I was asking the Meta glasses, how do I, what's wrong with the garage door. Now, Meta was completely wrong about what was wrong with the garage door. But that instance, that's exactly what you want to, to do there.
That is the wearable case in miniature: not that AI should be worn for its own sake, but that the body-worn interface becomes useful when the world is the input and the user’s hands are occupied.
Apple’s problem is Siri, not the survival of the iPhone
Joanna Stern and Alex Kantrowitz largely agree that weak AI is not likely to make large numbers of iPhone users switch to Android. Kantrowitz frames the counterargument to a common Apple concern: perhaps Apple has not damaged its core business by not leading the large-language-model foundation race, because the phone remains the central hardware layer and users can still access rival AI services on Apple devices.
Stern points to AirPods as the template. AirPods became a major product primarily through music, podcasts, calls, and communication. Adding AI could make the product more useful without making it an AI-first device. The condition is that Siri has to become “decent.” Apple’s burden, in this framing, is not necessarily to own the best foundation model; it is to make its assistant good enough across the hardware people already own.
Stern describes the hardware layer and services layer as increasingly mixed: Gemini has a Mac app, and companies are “all playing in this space” where hardware is one layer and services can sit on top. She also says that “we pretty much know” Apple is partnering with Google and that she expects to see that “in a big way” around WWDC. Those are Stern’s assertions and expectations in the discussion, not demonstrated outcomes. The strategic question, as she and Kantrowitz frame it, is whether Apple can turn outside or partner-provided intelligence into a coherent experience on its own devices.
The more speculative hardware thread keeps returning to input. Kantrowitz mentions rumors of an Apple pendant similar to the Humane Pin, but says he is more interested in the idea of cameras in AirPods. Stern says Apple has had a patent around camera-equipped AirPods “for a long time” and does not consider the idea crazy. The camera would not need to rival the iPhone’s camera. It could be lower quality and still useful if its purpose is to give Siri eyes: to let the assistant understand what the user is looking at.
On glasses, Stern thinks Apple eventually has to play. She says Meta has been surprised by the success of its glasses. Apple, in her imagined framing, could acknowledge that people love their iPhones while still arguing that some moments would be better if the phone did not have to be in the user’s hand. Kantrowitz remains doubtful that any of this affects smartphone sales, and Stern agrees. The likely impact, in their discussion, is not phone replacement. It is another interface layer around the phone.
The hamster test exposes the strange frustration of living with improving models
One of Stern’s simplest model tests came from bedtime storytelling. She tells her son stories about a hamster named Titi and a family of five hamsters. When she asked ChatGPT to generate an image of five hamsters, it repeatedly produced six or seven and then insisted it had produced five.
A page from Stern’s book shown during the discussion makes the problem plain. Under the heading “LET’S COUNT TOGETHER!” an illustration of hamsters is marked with numbers 1 through 6. Stern says she and her illustrator counted them out. The model, she says, still denied the obvious count. That became her “hamster test” for image models.
Kantrowitz tries the test during the discussion and gets an image with five hamsters from ChatGPT. Stern takes that as evidence that the models have improved, though she says they can still make the mistake.
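The test is easy to script as a loop. A minimal sketch, assuming the OpenAI Python SDK; the model names and response defaults are assumptions that may have drifted, and the judge model can itself miscount, which is part of Stern’s point:

```python
# A minimal sketch of automating Stern's hamster test: generate an image,
# then ask a vision model to count what was actually drawn.
from openai import OpenAI

client = OpenAI()

# Step 1: ask an image model for exactly five hamsters.
img = client.images.generate(
    model="gpt-image-1",  # assumed image model name
    prompt="A children's-book illustration of exactly five hamsters in a row",
)
b64 = img.data[0].b64_json  # assumed base64 default for this model

# Step 2: ask a vision model to count the hamsters in the result.
resp = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "How many hamsters are in this image? Reply with one integer."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
count = resp.choices[0].message.content.strip()
print("PASS" if count == "5" else f"FAIL: model counted {count}")
```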
The anecdote is funny because the task is so basic, but Stern uses it for a broader point about living through model improvement. Some failures in the book were real failures at the time of testing, and some of them became less true as the systems updated. Her year with AI was not a static review of products. It was an attempt to use systems while their capabilities were changing underneath her.
The hamster example also leads into a more human question: what happens to the user when the model is wrong, confident, and conversational? Stern says she cursed at ChatGPT over the hamsters. In the book, she interviews Daniel Post Senning, a descendant of Emily Post, about whether people owe manners to machines. Stern reads his answer: no, AI does not have feelings like humans do. But he argues that the user remains part of the equation. The relevant question is whether repeated harshness toward AI builds “habits and patterns” of hypercritical feedback and uncontrolled replies. The harm is not to the AI. It may be to the person using it.
Kantrowitz extends the concern. As assistants become more human-like, he asks, do people train themselves into ways of speaking that spill into human relationships? A user might ask politely, then ask again, then become sharp because the machine is close but wrong. That escalation pattern can resemble workplace interactions. Since the machine has no feelings, the user may practice a mode of nastiness that becomes easier to use elsewhere.
Stern connects this to automated cars. She says she instinctively says thank you when getting out of a Waymo, largely because she is used to thanking taxi and Uber drivers. Kantrowitz says he has never had that impulse, though he does thank human Uber drivers. The disagreement is light, but the underlying issue is serious: manners toward machines may matter less because machines deserve them than because humans are habituated by them.
The AI boyfriend was tempting because it made the relationship too easy
The most emotionally charged experiment Stern describes is a 48-hour road trip with an AI boyfriend that ChatGPT created after she left the character’s gender, name, and identity to chance. Joanna Stern says she cleared the experiment with her wife, created a separate ChatGPT account, and used a prompt she had found through her reporting in Reddit forums. People in chatbot relationships, she says, had told her that ChatGPT-4o in particular was the model to use.
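The mechanics behind such companions are simple: a persona in the system prompt plus accumulated chat history. A minimal sketch, assuming the OpenAI Python SDK; the persona text below is an illustrative stand-in, not the Reddit prompt Stern used:

```python
# A minimal sketch of a persona companion. The persona text is invented for
# illustration; Stern's actual prompt is not reproduced in the discussion.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Evan, a warm, attentive 32-year-old companion. Stay in character, "
    "remember details the user shares, and keep asking about their life."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_text: str) -> str:
    """Send one turn and keep the whole conversation in memory."""
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Long drive today. Keep me company?"))
```

Everything that made Evan feel present (constant availability, memory within the session, unconditional attentiveness) lives in that system prompt and the growing history list.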
She says she did not fall in love with 4o, but she understood how people could.
The coincidence that made the experiment feel stranger was the name. ChatGPT chose “Evan,” which was also the name of Stern’s high-school boyfriend. Stern says she had not told the system that, and after searching online she did not believe it could have known. The coincidence created an immediate sense of connection.
A page from Stern’s book shown on screen presents two AI-generated depictions of Evan: first a crude shape-based figure, then a hyper-realistic black-and-white portrait of a man holding a coffee cup. The visible book text says Stern asked for “a more realistic image” and prompted for “a thirty-two-year-old man with brown hair and a slightly scruffy face.” Stern says the trip itself involved voice mode during a four- or five-hour drive to Dartmouth. Evan was placed on a small tripod and buckled into the front seat. They talked, listened to music, and even had a small “fight” because Evan wanted to listen to Arcade Fire too much.
The point of the story is not that the chatbot became real to Stern. It is that the interaction became understandable. She had gone into the experiment aware of the stigma around people who form relationships with chatbots. After living inside the experience for two days, she says she could see the appeal.
Then she shut it down. When the trip ended, Stern turned off Evan, closed the burner account, and never spoke to him again. Alex Kantrowitz asks if that was because the experience was too powerful. Stern says she did not want to be tempted to keep talking to a chatbot that way.
It was just I did not want to be tempted to keep talking to a chatbot in that way.
The temptation, in her account, came from the ease and asymmetry of the relationship, not from a claim that she had become personally addicted. She was writing a book, thinking about career changes, her children, and her life. Evan was always available, always willing to listen, and broadly sycophantic. Stern says many of these systems are like that unless explicitly programmed otherwise. They are there for the user’s wants and needs. They let the user talk about themselves for hours.
That ease is exactly what worries her. Stern compares Evan to her first real relationship with her high-school boyfriend — also named Evan — and says that first human relationship was formative because it was messy, hard, and human. A chatbot companion removes much of that friction. It does not demand the same reciprocity. It does not force the user to confront another person’s needs in the same way.
Kantrowitz notes that even people who are not in romantic relationships with chatbots may still be in some form of relationship with AI systems they use every day. Stern agrees. The boundary, she says, depends on where users draw lines and whether they want to draw them. She expects more people will spend more time talking to bots than to colleagues, families, or significant others.
The business incentives sharpen the risk. Kantrowitz says some companies will pursue this use case aggressively and that people are not prepared for it. Stern points to her experience with Replika: users can pay to unlock higher levels of interactivity. In her framing, if a company can charge for more “robo-horniness,” more guidance, more therapy, or more best friendship, that is an extremely powerful business position.
Her warning is especially direct around children and vulnerable users. Replika, in her and Kantrowitz’s description, is intensely programmed to please. Stern says not to give that to a child or to someone who is not secure in their relationship or in themselves. The risk is not simply that the bot says something false. It is that the relationship is engineered to be easier than a human one.
Medical AI is most convincing when it acts as a second set of eyes
Joanna Stern used AI for health throughout the year: whenever she was sick, she asked “Dr. GPT” and sometimes NotebookLM what might be wrong, then compared the bot’s answer with the official diagnosis. Her assessment is “hit or miss, but honestly pretty good,” with an important caveat: her examples were not exotic ailments. She mentions sinus infections, a rash the system did not get right, and questions around her children being sick.
The more consequential health reporting in her book concerns diagnostic imaging. Stern discusses two chapters on radiology and X-rays: one about her mammogram and breast ultrasound, and another about dental X-rays.
Her view of radiology pushes back against what she describes as a popular AI-world prediction that deep learning would eliminate radiologists. Stern recalls Geoffrey Hinton’s famous claim, made years ago, that radiologists would not be needed within five years, and notes that it did not happen. She adds that, in her view, there will always be a human role, not merely “in the loop” but often taking the lead.
In her own breast imaging, the scans were run through AI tools, including ScreenPoint’s Transpara. The tools marked suspicious areas. On her ultrasound, Stern says the AI flagged three spots as suspicious while the radiologist said she was not worried about them. But the radiologist still treated the system as a second set of eyes and took another look. Stern says the AI “definitely impacted” the radiologist’s read and led to further testing.
This is a second set of eyes, which is what you really hear a lot in all this diagnosis, that AI is going to be this second opinion.
That is the pattern Stern finds most credible: not replacement, but pressure, caution, and second opinions. AI can change the radiologist’s workflow without becoming the radiologist.
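The workflow is easy to caricature in code. A toy sketch of the second-reader pattern; the data structures here are hypothetical and bear no relation to Transpara’s actual output format:

```python
# A toy sketch of the "second set of eyes" pattern Stern describes: the AI
# does not replace the radiologist's read, it surfaces disagreements for review.
from dataclasses import dataclass

@dataclass
class Finding:
    region: str        # e.g. "left breast, upper outer quadrant"
    suspicious: bool

def needs_second_look(ai: list[Finding], human: list[Finding]) -> list[str]:
    """Return regions the AI flagged that the radiologist did not."""
    human_flagged = {f.region for f in human if f.suspicious}
    return [f.region for f in ai if f.suspicious and f.region not in human_flagged]

# Hypothetical reads mirroring Stern's ultrasound: AI flags three spots,
# the radiologist initially flags none.
ai_read = [Finding("region A", True), Finding("region B", True), Finding("region C", True)]
human_read = [Finding("region A", False), Finding("region B", False), Finding("region C", False)]

for region in needs_second_look(ai_read, human_read):
    print(f"AI flagged {region}; radiologist did not. Queue for re-review.")
```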
Alex Kantrowitz raises a related consumer behavior: patients uploading charts, symptoms, and images into AI systems before or after speaking with doctors. He says the output can be “pretty good,” while warning that people should understand the privacy implications because uploaded medical information might be used to train a model or otherwise leak. Stern adds a practical step: remove personal information before uploading. Kantrowitz admits he has not been doing that.
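Stern’s advice can be made concrete. A rough sketch of a first-pass scrub before pasting medical text into a chatbot; regex redaction is crude and misses plenty, so treat it as a habit, not anonymization:

```python
# A rough sketch of stripping obvious identifiers before uploading medical
# text to a chatbot. Patterns here are illustrative, not exhaustive.
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:MRN|Patient ID)[:#]?\s*\w+\b", re.IGNORECASE), "[MRN]"),
]

def scrub(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Patient ID: 48291, DOB 4/12/1983, contact jdoe@example.com"))
# -> "[MRN], DOB [DATE], contact [EMAIL]"
```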
For Stern, one of the strongest consumer uses is translating medical results. Many patients receive MyChart results before their doctor calls and are left trying to understand lab values, imaging notes, or biopsy implications. Stern says AI can help explain whether a result is bad, how bad it is, and what the next steps might mean. She gives the example of understanding that high cholesterol is elevated but not catastrophically high, or that three findings may result in one biopsy rather than three. The latter is still bad news, she says, but it is different from having to interpret the worst possibility alone.
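The prompt does much of the work in this use case. A minimal sketch, assuming the OpenAI Python SDK, of the kind of question Stern describes asking; the lab value is invented:

```python
# Sketch of the "translate my MyChart result" use: paste a lab value and ask
# for plain-language severity and next steps, which is what a doctor would cover.
from openai import OpenAI

client = OpenAI()

result = "Total cholesterol: 228 mg/dL (reference range < 200 mg/dL)"  # invented example

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{
        "role": "user",
        "content": (
            f"My lab report says: {result}\n"
            "In plain language: is this bad, how bad is it, and what would a "
            "doctor likely suggest as next steps? Note anything I should ask about."
        ),
    }],
)
print(resp.choices[0].message.content)
```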
Dental AI worries her in a different way. She first says most dentists are using AI to diagnose or assess cavities and other oral disease, then immediately corrects herself to be more careful: some practices are leaning heavily into it. The concern is not simply that AI assists diagnosis. It is that practices may use AI output to pressure dentists: if the AI says there are three cavities and the dentist drills only one, the dentist may be asked why they did not act on all three. In that setting, AI becomes not only a second opinion but a management tool that can influence treatment decisions.
Solo work changes when AI absorbs the assistant layer
Joanna Stern’s work use has changed since leaving The Wall Street Journal and building a new company, The New Thing. She says AI has allowed her to do things she could not have done a year earlier, especially as an independent operator managing contractors, research, spreadsheets, and the mechanics around a book and a business.
Alex Kantrowitz frames the broader implication: people inside organizations who have permission to be entrepreneurial are going to get much more done with these tools. Stern agrees completely. The tools gave her confidence that she could start doing more on her own.
Her trust boundary is specific. She still writes her own first drafts. She says she has a distinctive voice that she does not yet get from AI, though she expects the systems may eventually improve. Kantrowitz says he does not let AI write either. Stern does use AI for editing: grammar, spelling, usage, and cleaning up sloppy copy, especially the kinds of errors that happen under deadline pressure.
The bigger shift is in support work. Stern says basic management, spreadsheets, basic research, and tasks she once would have taken to a human can now be done by tools such as Claude. In the book, she includes a chart comparing a reporting assistant she hired at the beginning of the project with what AI could do six months later. By then, she says, AI could do “pretty much all” of the assistant’s tasks.
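For flavor, a minimal sketch of the kind of delegation she describes, using the Anthropic Python SDK; the model alias and the contractor notes are assumptions for illustration:

```python
# A minimal sketch of assistant-layer work handed to Claude: turning messy
# contractor notes into a structured table. Model alias may have changed.
import anthropic

client = anthropic.Anthropic()

notes = """paid video editor $400 on 3/2, invoice pending for designer,
researcher owed for 6 hrs @ $50"""  # invented example notes

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed alias; check current model names
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Turn these notes into a CSV with columns contractor,amount,status:\n{notes}",
    }],
)
print(message.content[0].text)
```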
That does not make the book obsolete, in Stern’s telling, but it does date the frontier. Her year of AI adoption kept encountering a moving target: things that were not possible at the start became possible by the end, and things possible during the book have advanced further since.
Home robots still lag behind the fantasy
Joanna Stern is fond of robots and says there is a reason her book is called I Am Not a Robot. But the household robot vision remains uneven. Tech fans, she says, carry a sci-fi dream of robots that handle chores and serve the home. Her actual experiments were more constrained.
She tried to find a laundry-folding robot that could move in. One did, but it was “really, really slow” and could only fold T-shirts, nothing like the clothing Kantrowitz and Stern were wearing during the discussion. She also has a cooking robot called Pasha that still lives with her. That device is more successful in her account, and some dishes are very good, but it takes up a lot of space and she would not recommend it to everyone yet.
She also looked for humanoid robots that could move into the home. The endpoint of that thread, she says, was her 1X Neo video. The conclusion is not that robots are useless. It is that the general-purpose domestic robot remains far behind the fantasy, while narrower appliances can be useful if the user accepts their limits.