AI Tools Could Shrink Car Development From Six Years to Three
AI is entering car development first as a way to compress the years of sketching, 3D modeling, simulation, testing and software work behind a new vehicle, not as a prompt that produces a finished car, Verge contributor Tim Stevens argues on The Vergecast. Stevens says that could move automakers toward three-year development cycles and lower costs, but warns it may also encourage homogenized design and erase the junior work that trains future designers and developers. The Verge’s Hayden Field applies the labor question to the wider AI business, where coding tools such as Claude Code and Codex are gaining traction while companies cite AI in layoffs without clear evidence that the productivity gains or ROI are there.

AI is compressing car development before it redesigns the car
Tim Stevens described traditional car development as a long chain of sketches, digital models, clay models, virtual and physical testing, engineering work, and production preparation. It often starts with a business brief: a company decides it needs, for example, a two-door or four-door vehicle with a certain capacity and purpose. Designers sketch options, the company narrows them, and the winning ideas move from paper into digital sketches, then 3D models, then physical models.
That pipeline can take “upwards of five or six years,” Stevens said. The practical consequence is that a car arriving this summer may have been designed when the market looked very different. He pointed to EV strategy and touch-heavy interfaces as examples. Around 2020 and 2021, automakers were designing into a market that looked more enthusiastic about EVs and more accepting of large touchscreens. Now, he said, some EV regulations are being rolled back, anti-EV initiatives have grown in the US, and many drivers are asking for knobs and buttons again.
“The next-gen amazing new technology cars that are hitting the roads this summer were in design and development in the early 2020s, you know, back around when COVID was still going on,” Stevens said.
The main near-term promise is not that General Motors or another automaker feeds a prompt into a chatbot and receives a radical new car. Designers are not ready to give up creative control. The immediate work is process compression: moving faster between the same stages.
Stevens gave a concrete example from GM. A designer can provide several angles of a sketch, and AI can turn that sketch into a 3D model in about five minutes. He said that task used to take a designer a couple of weeks in a 3D modeling suite. The GM-attributed examples included a gray 3D model of an aerodynamic car and sketches of a futuristic black-and-silver Chevrolet concept car from multiple angles. The use case was visualization and iteration, not handing the entire design process to a model.
The same logic applies to aerodynamics. Automakers already use computational fluid dynamics to simulate how air moves around a vehicle, but Stevens said those simulations can require specialized training, large computing resources, and sometimes weeks to run. A company such as Neural Concept, he said, is trying to bring computational-fluid-dynamics-style work into AI systems so simulations that would have taken hours and hours on a supercomputer can happen in minutes. A GM-attributed aerodynamic simulation used in the discussion showed a blue SUV surrounded by colorful airflow lines — the kind of work AI could make faster to iterate.
AI is also being applied further away from visible design. Stevens pointed to EV battery chemistry, where different cathode and anode compositions affect charging speed, battery lifetime, and temperature sensitivity. Testing all possible combinations physically would be slow and expensive. A machine learning system can evaluate many permutations and return candidate options without the company building and testing 20 different batteries.
The ambition, as Stevens summarized it, is to pull a five- or six-year development cycle toward something closer to three years. That would not remove wind tunnels, crash testing, or human review. It would reduce the time between question and answer.
Faster iteration does not settle the question of taste
David Pierce pushed on the tension between engineering tasks that almost everyone would like to speed up and artistic tasks that are part of why car design exists at all. Fewer wind tunnel runs sound like an easy win. But hand-tuning a full-scale clay model, or painting a clay concept so it looks like a real car onstage, is not clerical work.
Tim Stevens said automakers still make full-size clay models, though fewer than before. Some of the process has already been mechanized: a 3D milling machine can turn a digital model into a full-scale clay form. But the finishing remains human. There is still “a lot of hand tuning and tweaking,” he said, and many concept cars revealed onstage are essentially clay objects built around a frame, without engines or complete interiors.
That distinction matters because the most visible risk in AI car design is homogenization. The question posed to Stevens was whether companies that start feeding decades of cars, sci-fi references, and market data into generative tools might converge on a lowest-common-denominator version of “the future”: swoopy, space-age, and familiar. A GM-attributed concept image during that exchange — a silver, covered-wheel car against a city skyline — fit the kind of future-car visual language being questioned.
Stevens did not dismiss the risk, but he argued the tools could also make designers bolder. If AI can quickly turn a sketch into a 3D model and then into a rolling video of how the car might look in the real world, designers can test more provocative ideas earlier. They can ask whether a design works on the highway or in a morning commute scene before committing to expensive downstream work.
“The human beings are still in control, they're just having their tasks sped up right now,” Stevens said.
The optimistic version is that AI lowers the cost of trying something strange. The pessimistic version is that it makes every company more responsive to the same signals and therefore more alike. Stevens said the answer will not be visible immediately; even on accelerated timelines, it will take years before AI-assisted designs show up on roads.
The missing answer is what happens to the entry-level ladder
The sharpest labor concern in car design, in Tim Stevens’s account, is not an immediate mass layoff. Stevens said the automakers he spoke with generally say they are not looking to cut staff, and that AI is about efficiency rather than headcount. His concern is the pipeline.
The tasks most likely to be automated first are often the tasks given to junior people: turning a sketch into a 3D model, improving a sketch, making something more realistic, writing documentation, building tests, learning by doing the work around senior people. If those tasks disappear, it becomes harder for new graduates to enter the profession and learn the process.
Stevens connected this to his own path into tech journalism. He said he started by rewriting press releases into easier-to-parse versions for casual readers — a task AI can now do well. David Pierce made the broader point that many careers have depended on “being near” someone doing the work: sweeping floors, answering phones, doing low-level tasks, and absorbing the craft. If AI removes those tasks, the apprenticeship structure breaks.
Stevens said he had not heard a good answer from anyone about how car companies would keep the design pipeline healthy. He raised a practical question: will design houses hire people out of college into more senior roles, skipping entry-level work? Will they need the same number of people?
The same problem appears in software. Stevens described modern cars as increasingly “software-defined vehicles,” a term both he and Pierce said many people dislike even as it spreads through the industry. The example was simple: a turn signal used to rely on a physical relay clicking a bulb on and off. Now the timing can be defined in software. Horns, active safety systems, and other components that were once discrete hardware are being pulled into larger chips and software systems.
That creates huge software integration work, more cybersecurity requirements, and more long-term update obligations. AI can help with documentation, unit testing, and other chores Stevens described as important but not the work a developer necessarily looks forward to. But those are also the chores assigned to new developers.
The most concrete labor worry, then, is not that AI immediately replaces the senior designer deciding what a Cadillac or Chevrolet should be. It is that AI erases the low-status work that produces the next senior designer.
Cars still have to pass physical tests
The automotive industry has one constraint that much of the AI industry does not: the product has to pass physical tests. Tim Stevens said crash testing, emissions testing, and other real-world requirements do not disappear because AI helped design the car. A vehicle either meets those requirements or it does not.
That does not make AI irrelevant to compliance. Software regulation is becoming more important as vehicles require regular updates, quick cybersecurity fixes, and support over the life of the vehicle, which Stevens put at probably around 10 years. AI could help companies track issues, push patches, and keep increasingly complex vehicle software current.
But Stevens drew a line between assistance and proof. AI may help an automaker manage software obligations or find problems faster. It will not make a car pass fuel-economy or crash standards if the engineering is wrong. Eventually, the industry still has to put the thing on the road — or, as the exchange put it, drive it into a wall.
If AI does shrink development cycles, Stevens said it could make cars cheaper by lowering R&D costs. Buyers are paying not only for metal, circuitry, and components, but also for years of development. Shorter cycles could also let automakers respond more quickly to tariffs, supply-chain problems, geopolitical disruptions, and changing consumer demand.
The tradeoff is that faster feedback can become trend-chasing. Stevens worried that automakers might make fewer “Hail Mary” vehicles: cars that run against market fashion but later become icons. He compared the risk to algorithmic entertainment, where a product appears assembled from trend inputs rather than from a point of view. His example was a hypothetical “holiday movie plus action movie” that somehow results in Dwayne Johnson as an elf.
For car companies, Stevens’s business case for AI is straightforward: reduce expensive iteration, reduce mistimed bets, and get closer to current demand. The cultural risk he identified is just as straightforward: fewer strange bets, more market-shaped sameness.
Slate is a counterexample to algorithmic car logic
David Pierce framed the Slate truck as almost the opposite of an AI-generated futurist concept. Slate’s pitch, as Pierce summarized it, was extreme minimalism: no radio, few default features, everything treated as an add-on, and a very low effective price. The truck visuals were correspondingly plain: a gray, boxy pickup, sometimes shown with wood planks in the bed, a covered bed, or a front cargo area.
Tim Stevens said that pitch became more complicated after the loss of the $7,500 federal EV tax credit. The truck had been positioned as a vehicle in the mid-$20,000s that could effectively cost around $18,000 or $19,000 with incentives. Without that credit, he said, the mid-$20,000s price is a harder sell.
Slate still hopes to undercut the cheapest truck on the market by several thousand dollars, Stevens said, but the comparison is less clean than it once looked. The Ford Maverick XL starts a little over $28,000 and includes power windows, a stereo, and paint. Slate’s appeal depends on buyers who want DIY customization: 3D-printed accessories, vinyl wraps, and a community built around modifying a simple vehicle.
Stevens said Slate had recently raised $650 million, which the company says will be enough to get the truck into production later this year. He also noted a leadership change: Peter Faricy, a former Amazon executive, became CEO, while Chris Barman remained at the company overseeing vehicle development, design, and production. Slate had previously downplayed the Amazon connection, framing Jeff Bezos as an investor rather than Slate as an Amazon company, Stevens said, but Faricy’s arrival makes that connection harder to wave away.
His view was mixed. He still sees a need for a cheap small truck and remains interested in Slate’s customization model. But at the higher effective cost, with less EV enthusiasm and without the federal incentive, he does not see it currently shaping up as a mass-market success. Ford is also promising a sub-$20,000 electric truck, though Stevens said that is probably at least another year off.
The Slate detour matters because it is a useful stress test for the AI-design premise. A faster, more market-responsive car industry might be better at giving buyers what the spreadsheet says they want. Slate is betting on something narrower and stranger: a cheap, unfinished-feeling truck that asks buyers to supply part of the product’s identity themselves.
The AI business is moving toward the boring place where money is made
David Pierce called Claude Code versus Codex the most interesting and consequential product fight in AI right now. His setup was that Claude Code has become a major part of Anthropic’s momentum, while OpenAI has made a large pivot toward coding, including Codex and new coding-focused models.
Hayden Field assessed the response this way: everyone she talks to “loves Claude Code unequivocally,” while Codex is starting to grow on people. OpenAI has seen a usage spike, she said, and is putting major marketing effort behind Codex, but it does not yet have Claude Code’s position.
Codex is being positioned less like a general chatbot than a developer workspace. The OpenAI-attributed interface shown during the discussion resembled a code editor or IDE: a file navigation sidebar, an AI chat thread labeled “Create Codex app CTA,” a split pane showing edits to src/hero.tsx and tools/build.py, and controls to open or commit changes. That matched the broader product reality Field and Pierce were discussing: these tools are not mainly about casual conversation. They are about getting work done inside software projects.
Claude Code’s prominence is also why it is attracting criticism. Field described viral posts about Claude Code bans or failures that can damage Anthropic’s public reputation before the company responds. That is what happens, she said, when a product is at the top and users feel they do not have a strong alternative.
Pierce laid out the broader strategic progression: start with a developer tool like Claude Code, move to a more accessible interface that can operate on folders and files, then try to expand into an “everything app” around accomplishing tasks. Field was skeptical of the glamour of the everything-app idea, noting that versions of it have circulated for years, including Elon Musk’s stated ambitions for Twitter. But in her reporting on AI, she said, the money-making work has consistently been less glamorous backend and enterprise work.
Consumer products may come along for the ride, but Field’s account of the business increasingly looks like business software. Anthropic pursued that direction from the beginning, while OpenAI is now pivoting after gaining broad consumer brand recognition through ChatGPT.
Pierce compared the strategic choice to a familiar software split. One path is product-led growth: people adopt a tool because it is useful, then companies pay for it. The other is Microsoft Office-style enterprise distribution: businesses buy the tool, employees use it, and consumer side uses are incidental. In his reading, focusing on Codex and Claude Code means embracing the second path. Field agreed.
OpenAI wants a better story, but job fear is hard to PR away
Hayden Field said OpenAI’s internal and public mood looked “a little better” than when she wrote her Verge analysis saying “the vibes are off,” but “still not great.” Codex fanfare helped. So did the fact that OpenAI did not fare badly in the Musk v. Altman trial, though she characterized that partly as Musk floundering rather than OpenAI winning outright.
She also pointed to a recent OpenAI blog post by Sam Altman about principles, democratization, and empowerment. The opening lines shown from the post said AI could improve many aspects of society, give people more capability and agency, and support “widespread flourishing.” Field read that as part of a PR push to rebrand public perception.
David Pierce raised a larger question: AI companies spent years selling products through a mix of promise and doom — job disruption, new world orders, the need to reorganize society around AI — and are now trying to sound more optimistic. He asked whether Altman’s more positive framing could work at a moment when public opinion toward AI has become more negative.
Field’s answer was blunt: no. People have seen enough already, she said, including software engineering jobs beginning to be automated and creative tools being trained on artists’ work. She mentioned Anthropic’s Claude Design as an example of creative-tool pressure, while noting similar products from other companies. Her view was that people are not likely to be reassured by an AI CEO saying job displacement will not happen.
The broader mood, as Field described it, is practical anxiety. When people outside her reporting life ask her about AI, the main question is whether it will take jobs and replace people. She also referred to a New York Times guest essay by Jasmine Sun titled “Silicon Valley Is Bracing for a Permanent Underclass.” Field said Sun spoke with people across industries and expected to be comforted, but was not.
The point was not that every worst-case claim is certain. It was that optimism without a mechanism no longer lands. People want to know how society will deal with the fallout, Field said, not hear that “everything’s fine.”
Anthropic’s government fight is about usable deployment
Hayden Field described a fast-moving conflict around Anthropic, the Department of Defense, and government use of AI tools. The Pentagon struck a deal with seven AI companies, she said: OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection. Anthropic was pointedly absent.
The deal allows the Defense Department to use those companies’ tools for any lawful use and to deploy them on classified networks, according to Field. That second point mattered because Anthropic had previously been the first AI company to receive that kind of clearance for its tools. The new deal reduced one of Anthropic’s practical advantages: if government users need a basic AI tool inside classified systems, they now have other options.
But many government users still appear to prefer Anthropic’s models. David Pierce compared the dynamic to a workplace software switch: if a company forces people from one familiar tool into another, the replacement may work, but users still want what they know. Field agreed that the vibe was similar.
She did not characterize the situation as existential for Anthropic. Being cut out of a large deal does not help the company’s profit goals, but the government is still using Mythos, Anthropic’s cybersecurity model. She also noted that Anthropic hired the former head of the Pentagon’s think tank as strategist in residence and retained a Trump-linked lobbying firm, moves she read as signs the company is working to get back in with the administration and the DoD.
Mythos itself remains partly opaque. Outsiders cannot see or use it, so its real performance is difficult to assess. Field described it as a powerful cybersecurity technology rather than something to be terrified of. Its significance is that it can crawl through important systems and flag gaps or vulnerabilities on its own. Competing models may require a user to ask about a specific issue they already know to investigate.
That autonomy is what makes Mythos powerful and potentially risky if released without guardrails. Field expected similar or better models could emerge from another lab within six months to a year, possibly open-sourced. Pierce agreed that model leads do not last long and that similar capabilities may soon be available from companies with fewer scruples about release.
AGI is losing its usefulness as a milestone
Hayden Field has been covering changes to OpenAI and Microsoft’s agreement, including the removal of a famous AGI clause. Her Verge article on the subject was titled “Microsoft and OpenAI’s famed AGI agreement is dead,” with the subtitle that the two companies’ deal had become less exclusive and the AGI clause had been dropped.
Asked whether AGI is dead, Field did not declare it definitively over. Her view was that the term is dying “a slow gradual death” because no one knows exactly what it means and it is not easily defined. Companies have been rebranding around alternative terms such as human-centered AI, powerful AI, and superintelligence.
The rough definition she offered was powerful AI systems equal to or surpassing human knowledge across a wide variety of tasks. But Field and Pierce emphasized that this is difficult to measure. In some senses, Field said, that capability is already here; in others, it is far off. That makes AGI a poor line for business deals, public communication, or policy.
Pierce said he was glad the idea of AGI as a singular moment is fading. In his view, treating AI as a technology that steadily improves is more useful than imagining one night when AGI is achieved and the world changes on a dime. He argued that the milestone framing has made AI sound more sudden and apocalyptic than technological change usually is.
Field added that focusing attention on an opaque future milestone can distract from present consequences. If the debate is about a far-off AGI date, powerful people can avoid scrutiny of how current AI systems are already affecting workers, vulnerable populations, minorities, and specific industries.
The shift away from AGI language does not make AI less consequential. It makes the conversation less dependent on a vague threshold that companies can define when convenient and abandon when inconvenient.
Layoff claims are running ahead of the ROI evidence
When asked whether companies have shown that AI-driven layoffs are actually worth it, Hayden Field said there has not been enough research on the ROI question, likely because companies do not want to look foolish if they make the wrong move.
She described a gap between perceived productivity and actual output. Some people using AI heavily feel extremely productive, especially engineers, but the few studies available suggest heavy users may not always be as productive as they feel.
Field described “productivity maxing” as a current workplace behavior: people running agents, generating outputs, and feeling powerful, while the actual value of what they are creating may be unclear. Some companies are also creating metrics that require employees to use AI, ranking them lower if they do not.
The more serious problem comes after layoffs. If a company cuts staff and expects AI to replace their jobs, the remaining workers often have to use AI tools to do the departed workers’ tasks in addition to their own. Field compared AI in that setting to an intern: it still requires training, prompting, correction, and supervision. It does not simply do its thing in the background.
Her expectation is that many companies will overcorrect, as they have in past layoff cycles. They will cut too many people because it looks good on the balance sheet, overwork the remaining staff, and eventually rehire some roles — not necessarily everyone, but enough to undo part of the cut.
David Pierce and Field both saw pandemic overhiring as part of the story. Pierce recalled a period when companies behaved as if pandemic-era digital behavior would be permanent: everyone at home, metaverse enthusiasm, digital transactions, NFTs. Field agreed that companies overhired and are still reckoning with it, though she noted that many rounds of layoffs have already addressed some of that.
The answer, in their exchange, was not either AI efficiency or overhiring. It was both. AI gives companies a business rationale and a shareholder-friendly story about efficiency. But the actual efficiency gains are uneven, sometimes unclear, and often dependent on remaining employees doing more work with tools that still need management.
Field drew a similar distinction around AI investment and FOMO. A year earlier, she said, saying “AI” on an earnings call could attract enormous investor interest. Now the mood is more balanced. For large enterprises, she sees real value in the boring backend work: querying data, using information companies already have, and making sense of it in new ways. That is not merely hype.
But companies still chase flashier AI demonstrations to look cutting-edge. Field used the example of a Disney Plus category of Sora-made videos and said she was not sure she would watch that. The durable value, in her account, is not the shiny demo. It is the unglamorous work that helps companies use data and complete tasks.