Orbital Compute Becomes Cheaper If Launch Costs Fall Below $500/kg
Philip Johnston, Starcloud’s co-founder and chief executive, argues that AI data centers could become cheaper in orbit than on Earth if launch costs fall to about $500 per kilogram. His case rests on continuous solar power in a dawn-dusk orbit, avoiding land and battery costs, and using constellations of optically linked satellites for inference workloads. Starcloud’s plan, he said, starts with an orbital GPU proof point and points toward an 88,000-satellite network delivering roughly 20 gigawatts of compute capacity.

The economic case turns on launch cost
Philip Johnston’s case for space-based data centers begins with energy. Starcloud is building orbital compute, he argued, because the economics of powering AI workloads may soon favor space over Earth.
He framed the comparison against a new solar-powered data center on Earth. A terrestrial solar project has three main costs: permitted land, which he said can be the largest cost for new North American solar projects; battery storage and backup power, because peak solar output lasts only about four hours a day; and the solar cells themselves.
In Johnston’s model, orbit removes the first two costs from the comparison. There is no permitted land to buy, and a dawn-dusk sun-synchronous orbit can keep the system in sunlight continuously. He also said one square meter of solar panel in space produces eight times the energy of one square meter on Earth, reducing the solar area required for the same output.
The swing factor is launch. Johnston put the break-even point at roughly $500 per kilogram: below that, he argued, launch becomes cheaper than the terrestrial combination of land, batteries, and solar infrastructure. He described that as about a 10x reduction from current launch costs, but within reach of coming vehicles. Starship, he said, is designed for launch costs of roughly $10 to $20 per kilogram.
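The break-even logic can be sketched as a per-watt comparison: launch becomes the cheaper path once launch cost per watt of orbital solar undercuts the terrestrial costs orbit avoids. The ~200 watts per square meter of in-orbit panel output is Johnston's figure from the talk; the panel mass and avoided-cost numbers below are illustrative assumptions, not Starcloud data.

```python
# Sketch of Johnston's break-even argument. Only the 200 W/m^2 panel
# output figure comes from the talk; panel mass and the avoided
# terrestrial cost per watt are illustrative assumptions.

PANEL_OUTPUT_W_PER_M2 = 200.0   # Johnston's in-orbit output figure
PANEL_MASS_KG_PER_M2 = 1.5      # assumed thin-film space solar panel
KG_PER_WATT = PANEL_MASS_KG_PER_M2 / PANEL_OUTPUT_W_PER_M2

def launch_cost_per_watt(dollars_per_kg: float) -> float:
    """Launch spend attributable to each watt of continuous orbital power."""
    return dollars_per_kg * KG_PER_WATT

def break_even_dollars_per_kg(avoided_cost_per_watt: float) -> float:
    """Launch price at which orbit matches the terrestrial costs it avoids."""
    return avoided_cost_per_watt / KG_PER_WATT

# If permitted land, storage, and oversizing add an assumed $3.75 per watt
# on Earth, break-even lands at $500/kg under these assumptions.
print(break_even_dollars_per_kg(3.75))   # ~500 $/kg
print(launch_cost_per_watt(500.0))       # ~3.75 $/W
```

The structure, not the specific numbers, is the point: halving panel mass per watt halves the break-even launch price, which is why the argument is sensitive to both launch costs and panel engineering.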
Starcloud’s first proof point was an H100 in orbit
Before laying out the economics, Philip Johnston showed the deployment of Starcloud-1, a satellite carrying five Nvidia GPUs, including an H100. The video showed a satellite module separating from a spacecraft above Earth, with telemetry visible on screen: “T+ 01:14:19,” speed “26143,” and altitude “514.”
The H100 mattered, Johnston said, because many people believed state-of-the-art data-center GPUs could not run in space. The two expected blockers were thermal dissipation and radiation tolerance. GPUs are power-dense and produce a lot of heat; radiation raises the risk of bit flips.
Starcloud-1 was meant to answer those objections in hardware. Johnston said Starcloud was the first to train a model in space, using Andrej Karpathy’s nanoGPT. He also said the company ran a version of Gemini and performed high-powered inference on SAR satellite data.
> It was a very significant step in proving that we can actually run the state-of-the-art terrestrially.
If orbit can supply cheaper energy but cannot host data-center-grade chips, the cost case does not matter. Starcloud-1 is Johnston’s proof point that the chip side is technically plausible.
The planned constellation is built for inference
Philip Johnston said Starcloud has filed with the FCC for a constellation of 88,000 satellites. Each satellite would be about 200 kilowatts, and together the constellation would provide “on the order of 20 gigawatts” of new compute capacity.
The intended workload is inference. Johnston showed a concept video in which a user asks for a 3D video tour from five house photos; the request travels by optical link to the constellation, which processes it and returns a result. He said the same architecture could serve back-office business-processing agents and code-generation agents.
The orbit is central to the design: a dawn-dusk sun-synchronous orbit that is “always in the sun,” with 24/7 power. Johnston also claimed sub-50-millisecond latency to anywhere on Earth, with satellites connected by optical links.
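A rough physics floor for the latency claim can be computed from light travel time alone. This is a sketch under assumed geometry: an assumed ~700 km altitude (not a Starcloud figure), vacuum light speed, and the optical-link path approximated as a great-circle arc at altitude; real routing, switching, and processing would add overhead.

```python
# Light-speed floor on one-way latency through an optically linked
# constellation. The 700 km altitude is an assumption, not a Starcloud
# figure; routing and switching overhead are ignored.
EARTH_RADIUS_KM = 6371.0
C_VACUUM_KM_PER_S = 299_792.458
ALTITUDE_KM = 700.0  # assumed dawn-dusk orbit altitude

def one_way_latency_ms(ground_distance_km: float) -> float:
    """Travel time: up to the constellation, along the arc at altitude, down."""
    arc_km = ground_distance_km * (EARTH_RADIUS_KM + ALTITUDE_KM) / EARTH_RADIUS_KM
    path_km = 2 * ALTITUDE_KM + arc_km
    return path_km / C_VACUUM_KM_PER_S * 1000

print(one_way_latency_ms(6_000))    # ~27 ms for a 6,000 km route
print(one_way_latency_ms(20_000))   # ~79 ms for a near-antipodal route
```

At light speed alone, a near-antipodal one-way path exceeds 50 milliseconds, so the sub-50-millisecond figure presumably describes typical routes rather than a worst-case guarantee.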
| Planned feature | Johnston’s description |
|---|---|
| Satellites | 88,000 |
| Power per satellite | About 200 kilowatts |
| Total compute capacity | On the order of 20 gigawatts |
| Primary workload | Inference |
| Network | Optically linked |
| Latency | Sub-50 milliseconds to anywhere on Earth |
| Estimated CapEx | About $100 billion |
Johnston put the capital expenditure for the 88,000-satellite constellation at about $100 billion, which he said is “much lower” than doing the same thing terrestrially. He described it as the start of the largest infrastructure project ever and, in his opinion, the beginning of a Kardashev type 2 or Dyson-sphere-type civilization, potentially even Kardashev type 3.
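The headline figures are internally consistent, as quick arithmetic on the numbers Johnston gave shows:

```python
# Sanity check on the constellation figures from the talk.
SATELLITES = 88_000
KILOWATTS_PER_SATELLITE = 200
CAPEX_DOLLARS = 100e9

total_gigawatts = SATELLITES * KILOWATTS_PER_SATELLITE / 1e6  # kW -> GW
print(total_gigawatts)  # 17.6 -- i.e. "on the order of 20 gigawatts"

# Implied capital cost per watt of power capacity.
print(round(CAPEX_DOLLARS / (total_gigawatts * 1e9), 2))  # ~5.68 $/W
```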
> This really is the start of the largest infrastructure project ever.
The hard engineering problems are heat, debris, and radiation
In the question-and-answer session, Philip Johnston was pressed first on heat. He agreed that heat dissipation is harder in space despite the cold: a vacuum allows no convective or conductive cooling, so the only path out is radiation. The solution, as he described it, is radiator surface area that emits the heat as infrared.
His rough math: solar panels generate about 200 watts per square meter, while a radiator kept around 50 degrees Celsius dissipates about 800 watts per square meter. That implies a radiator area of around one quarter of the solar-panel area; a 400-square-meter solar array would need about 100 additional square meters of radiator. He pointed to the Stefan-Boltzmann equation, under which radiated power rises with the fourth power of absolute temperature, so running chips hotter can cut radiator area and mass. Starcloud, he said, is working with Nvidia on a chip he called “space Rubin 1,” designed to run hotter without a higher failure rate.
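The fourth-power effect Johnston invoked can be shown with a minimal Stefan-Boltzmann sketch. The emissivity, two-sided radiation, and the choice to ignore the ~3 K deep-space sink and any absorbed sunlight are simplifying assumptions here, not Starcloud parameters.

```python
# Stefan-Boltzmann sketch of radiator sizing. Emissivity, two-sided
# radiation, and ignoring the cold-space sink and absorbed sunlight are
# simplifying assumptions, not Starcloud parameters.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_flux(temp_c: float, emissivity: float = 0.9, sides: int = 2) -> float:
    """Radiated power per square meter of radiator panel."""
    t_kelvin = temp_c + 273.15
    return emissivity * sides * SIGMA * t_kelvin ** 4

def radiator_area_m2(heat_watts: float, temp_c: float) -> float:
    """Radiator area needed to reject a given heat load."""
    return heat_watts / radiative_flux(temp_c)

# Fourth-power scaling: going from 50 C to 100 C boosts flux ~1.8x,
# shrinking the radiator accordingly -- the motivation for hotter chips.
print(radiative_flux(50))                         # ~1113 W/m^2 (ideal, two-sided)
print(radiative_flux(100) / radiative_flux(50))   # ~1.78
print(radiator_area_m2(200_000, 50))              # ~180 m^2 for a 200 kW satellite
```

Johnston's ~800 watts per square meter at 50 degrees Celsius sits below this idealized two-sided figure, consistent with real-world emissivity, view-factor, and sunlight-absorption losses.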
Debris risk came next. Johnston said Starcloud takes Kessler-type risk seriously and that “everybody needs to be a responsible user of space.” For early satellites, he pointed to low altitude as one mitigation: Starcloud’s first satellite flies around 400 kilometers, where it would naturally de-orbit within a few months. At that altitude, he argued, the chance of a runaway Kessler effect is “very, very low.”
He also argued that congestion is often visually exaggerated. Satellite maps can draw each dot at about the width of California, he said, making space appear more crowded than it is. He cited SpaceX operating around 10,000 satellites in low Earth orbit without a collision, attributing that to sophisticated collision avoidance. On that basis, he said dawn-dusk sun-synchronous orbit could fit “on the order of terawatts of compute” without major collision-avoidance problems.
Radiation remains an active design issue. Johnston said Starcloud has done four rounds of testing at a cyclotron in Knoxville and uses the resulting telemetry to guide shielding choices. For heavy ions, the company goes to Brookhaven National Lab. In a 24-hour test, he said, chips can be exposed to five years’ worth of radiation dose, with results informing both shielding and software choices.
Training in space is possible, but not the near-term market
Asked whether the architecture would provide contiguous compute, Philip Johnston said Starcloud’s foreseeable market is “almost exclusively” inference. His first reason was market composition: he expects inference to become 99% of the compute market very soon, with large training runs shrinking to a small percentage of AI workloads over a five-to-10-year horizon. His second reason was engineering difficulty: training needs contiguous compute, which in orbit would require docking modules together into a much larger structure.
When the audience asked to see it, Johnston showed an animation of a five-gigawatt orbital data center: Starship launches delivering rectangular Starcloud modules, which dock and release around a central spine to form a large solar-panel and radiator structure. The on-screen scale labeled the structure as a “5 GW Data Center” and “4 km.”
Johnston said each Starship launch vehicle could carry 40 megawatts. The concept design connected those modules to a central spine and an enormous solar panel, with a one-kilometer-by-four-kilometer radiator on the back. That, he said, is how a large model would be trained in space. But he placed it well beyond the immediate plan: “at least 15 years” before Starcloud reaches anything like that.
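The concept's figures can be cross-checked with simple arithmetic. The ~800 watts per square meter radiator flux is Johnston's earlier figure; the launch capacity and radiator dimensions are from the animation.

```python
# Back-of-envelope on the 5 GW orbital data center concept.
MEGAWATTS_PER_LAUNCH = 40   # Johnston: one Starship carries 40 MW of modules
TARGET_GIGAWATTS = 5

launches_needed = TARGET_GIGAWATTS * 1000 / MEGAWATTS_PER_LAUNCH
print(launches_needed)  # 125 Starship flights' worth of modules

# The 1 km x 4 km radiator, at Johnston's earlier ~800 W/m^2 figure:
radiator_m2 = 1_000 * 4_000
gw_rejected_per_side = radiator_m2 * 800 / 1e9
print(gw_rejected_per_side)  # 3.2 GW per side; two sides cover the 5 GW load
```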


