Data centres in space: all the pros and cons

Data centres in space — crazy but maybe not quite as crazy as you think. Here are all the pros and cons of putting data centres in space so you can impress your colleagues and friends the next time this comes up in conversation.

People started talking a lot about data centres in space around 6 months ago. Starcloud and SpaceX have both submitted FCC proposals to put satellite-based data centres in space, and Google wants to play too with Project Suncatcher. I went from thinking it was totally crazy (mostly because I know that space is hard and putting stuff in orbit is really expensive) to thinking it’s possible and maybe even smart within 3 years with the right scaling circumstances.1

Feel free to share this with anyone who is really confident data centres in space definitely will or won’t be a thing.

The case for

Effectively free, 24/7 solar power

In a sun-synchronous orbit, solar panels receive near-constant sunlight — no night, no clouds, no atmosphere. The capacity factor is above 95%, compared to ~24% for terrestrial solar in the US. Solar irradiance is also about 36% higher above the atmosphere, and Google claims panels can be up to 8x more productive in orbit. And as Elon Musk pointed out on a recent Dwarkesh podcast, solar cells built for space can actually be cheaper to manufacture — they don’t need heavy glass or robust framing because there’s no weather to survive. I mean, space solar is about 1,500-3,000 times more expensive than terrestrial solar today, but maybe someday, sure.
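If you want a rough sense of where a multiple like that comes from, here’s a back-of-envelope sketch (mine, not Google’s) that just multiplies the capacity-factor ratio by the irradiance uplift quoted above. It ignores panel temperature, pointing, degradation and the specific orbit, so treat it as illustrative only.

```python
# Back-of-envelope: how much more energy the same panel produces in a
# sun-synchronous orbit vs on the ground, using the figures quoted above.

orbital_capacity_factor = 0.95      # near-constant sunlight in SSO
terrestrial_capacity_factor = 0.24  # typical US utility-scale solar
irradiance_uplift = 1.36            # ~36% more irradiance above the atmosphere

multiple = (orbital_capacity_factor / terrestrial_capacity_factor) * irradiance_uplift
print(f"An orbital panel produces roughly {multiple:.1f}x the energy of the same panel on the ground")
# ~5.4x with these inputs; Google's "up to 8x" presumably uses a less
# favourable terrestrial baseline, but the multiple is large either way.
```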

No need for batteries

On Earth, solar-powered data centres need battery storage (or need to draw from the grid) to cover nighttime and cloudy periods. In a sun-synchronous orbit, solar is near-continuous, eliminating the need for batteries entirely. This significantly increases the cost advantage of orbital solar over terrestrial solar-plus-storage.

Fewer permitting and land use constraints

This is the “Abundance” argument. On Earth, getting permits for large-scale energy and data centre projects can take years. As Musk puts it: “it’s harder to scale on the ground than it is to scale in space.” In space, no one can hear you(r) scream(ing data centre). There’s no social licence to manage, no local planning regimes, no environmental reviews, no connection agreements with utilities. That said, orbit isn’t unregulated — you still need FCC approval2 (or equivalent national authority) for satellite deployment, and the International Telecommunication Union coordinates frequency bands and orbital positions internationally to prevent interference. SpaceX’s million-satellite orbital data centre filing is currently in FCC review. But the regulatory burden is arguably much lighter than siting a gigawatt power plant and data centre on Earth — no land acquisition, no grid interconnection studies, no water rights.

No water needed for cooling

Schematic of a typical evaporative cooling system in a data centre (US Department of Energy)

Terrestrial data centres that use evaporative cooling consume water — it circulates through the cooling system but eventually evaporates. That said, as Andy Masley has written, the water issue is often overstated: only about 10% of the water attributed to AI use is consumed directly onsite at the data centre or the associated power plants through evaporative cooling, and the remaining ~90% is non-consumptively withdrawn by power plants and returned to the source.3 On the other other hand, water used to cool open-loop power plants before being returned to source isn’t zero impact — it’s returned warmer than it was drawn4, and this has to be managed to ensure the environmental impacts don’t breach certain limits.5 Also, fish are frequently killed by impingement, where they are trapped against the water intake filters, and early-life-stage fish are killed by entrainment,6 where they are drawn through pumps and heat exchangers.7, 8

Fish stuck against an intake structure (NRDC)

In space, cooling is done via radiators that reject heat as infrared radiation (see appendix). This isn’t necessarily easier than terrestrial cooling — “the radiator mass and area are hypothesised to dominate the entire spacecraft”9 — but it does mean you don’t need water for cooling.

The case against

Launch costs

Getting mass to low Earth orbit (LEO) today costs around $1,500/kg on a Falcon Heavy and $2,720/kg on a Falcon 9. Google’s Project Suncatcher team estimates costs need to fall below $200/kg for orbital data centres to be cost-competitive with terrestrial energy costs, which they project could happen by the mid-2030s. It may be a question of when, not if — but it isn’t cheap today.
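To see why ~$200/kg is the number everyone fixates on, here’s a toy amortisation. Every parameter is an assumption of mine rather than a figure from the Suncatcher paper: pick a specific power (watts of compute per kilogram launched), a launch price and a lifetime, then convert the launch cost into an equivalent cost per kWh of energy delivered to the payload.

```python
# Toy model: amortise launch cost over the energy the satellite delivers to
# its payload, so it can be compared to terrestrial electricity prices.
# All parameters are illustrative assumptions, not Project Suncatcher figures.

def launch_cost_per_kwh(launch_usd_per_kg, watts_per_kg=200, lifetime_years=5):
    """Launch cost per kWh of energy delivered over the satellite's life."""
    hours = lifetime_years * 8760 * 0.95        # ~95% capacity factor in SSO
    kwh_per_kg = watts_per_kg / 1000 * hours    # energy per kg launched
    return launch_usd_per_kg / kwh_per_kg

for price in (2720, 1500, 200, 100):            # Falcon 9, Falcon Heavy, targets
    print(f"${price:>5}/kg -> ${launch_cost_per_kwh(price):.3f}/kWh of launch cost")
```

With these made-up numbers, $200/kg works out to a couple of cents per kWh, i.e. in the same ballpark as cheap terrestrial power, while today’s launch prices land an order of magnitude higher.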

Low or zero serviceability

It’s hard and almost certainly not cost-effective to swap out damaged or end-of-life hardware in orbit.10 These may be essentially disposable data centres, replaced every 5–6 years, compared to data centres on Earth that last longer but can have their GPUs swapped out at end-of-life. Musk counters this by arguing that GPUs tend to fail right after they’re made or not at all, meaning you could test each one before putting it on the satellite and mitigate losses that way.

Radiation

Radiation can lead to cumulative degradation of electronics over time. However, most proposed orbital data centres would sit below the inner Van Allen belt (starting at around 640 km). At these altitudes the radiation environment is relatively mild and commercial off-the-shelf components should be viable with appropriate screening. The main concerns would be galactic cosmic rays, solar particle events, and the South Atlantic Anomaly where the inner belt dips closer to Earth. Google tested its Trillium TPUs in a proton beam simulating five years of LEO radiation — the logic survived fine, but high-bandwidth memory was the most sensitive component.11 Their two prototype satellites in partnership with Planet will test this in orbit in 2027.

The Van Allen belts (Booyabazooka)

Latency and bandwidth

For many AI workloads, fibre connections on the ground will probably always be faster than bouncing data to and from orbit. Laser inter-satellite links can achieve multi-Gbps to 100+ Gbps per beam, and Google hit 1.6 Tbps in lab tests, but the bandwidth for returning the signal to Earth would likely be the bottleneck.12 This probably means orbital data centres are best suited to training and batch inference rather than real-time applications.
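To put rough numbers on that bottleneck (my own illustrative figures, not anyone’s proposal), here’s how long it takes to move a hypothetical multi-terabyte checkpoint at the link speeds mentioned above.

```python
# Illustrative: time to move data at the link speeds mentioned above.
# The checkpoint size is a made-up example, not from any of the proposals.

def transfer_time_s(size_tb, link_gbps):
    bits = size_tb * 8e12              # terabytes -> bits (decimal TB)
    return bits / (link_gbps * 1e9)

checkpoint_tb = 2                      # hypothetical model checkpoint size
links = [("laser inter-satellite link", 100),
         ("Google lab demo", 1600),
         ("TBIRD-class downlink", 200)]
for name, gbps in links:
    print(f"{checkpoint_tb} TB over {name} ({gbps} Gbps): ~{transfer_time_s(checkpoint_tb, gbps):.0f} s")
```

Moving checkpoint-sized blobs is fine; it’s sustained, latency-sensitive traffic that the downlink struggles with, which is part of why batch workloads look like the better fit.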

Space debris

More satellites means more collision risk. At scale, this contributes to orbital congestion and the potential for a Kessler syndrome cascade (Gravity was a terrible movie but it did introduce this risk to a mainstream audience). Orbits are a shared, finite resource. Space is big, but the space around our planet isn’t that big, especially if you’re as scale-pilled as Musk.

Environmental impact of launches

Building and launching thousands of rockets per year has its own carbon and atmospheric footprint. An EU-funded study (ASCEND) found that space data centres only beat terrestrial ones on carbon if the launcher is reusable and emits less than 370 kgCO2/kg of payload over its lifespan.

Net assessment

Most of these cons are engineering problems that get cheaper over time. It’s not going to happen this year13, but I wouldn’t bet against it happening this decade.

Launch costs are falling fast. Falcon Heavy is at $1,500/kg today; Starship could plausibly get this below $100/kg within a few years.

But battery costs are falling fast too — and that goes the other way. One of the biggest advantages of orbital solar is that you don’t need batteries. On Earth, solar requires batteries to cover nighttime and cloudy periods, and those batteries are expensive. Except they’re getting dramatically cheaper, and people keep underpredicting the pace of renewable energy development. Even optimistic forecasters keep underestimating future battery price reductions.14

Lithium-ion pack prices hit a record low of $108/kWh in 2025, down 93% since 2010. Stationary storage specifically has plunged to $70/kWh, a 45% drop in a single year. The cheaper batteries get, the less it costs to pair terrestrial solar with storage — and the weaker the “no batteries needed” advantage of space becomes.

So the question is which curve wins: the cons getting smaller, or batteries getting cheaper?
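One way to frame that race is a toy model with two exponential cost declines. The decline rates below are guesses of mine, not forecasts; the point is the shape of the comparison, not the specific crossover year.

```python
# Toy comparison of the two cost curves. Decline rates are illustrative
# guesses, not forecasts.

launch_2025 = 1500        # $/kg to LEO (Falcon Heavy, quoted above)
launch_target = 200       # $/kg threshold from Project Suncatcher
launch_decline = 0.20     # assume launch gets 20% cheaper per year

battery_2025 = 70         # $/kWh stationary storage (quoted above)
battery_decline = 0.12    # assume storage gets 12% cheaper per year

for year in range(2025, 2041, 5):
    t = year - 2025
    launch = launch_2025 * (1 - launch_decline) ** t
    battery = battery_2025 * (1 - battery_decline) ** t
    flag = "  <- below the Suncatcher target" if launch < launch_target else ""
    print(f"{year}: launch ~${launch:,.0f}/kg, storage ~${battery:.0f}/kWh{flag}")
```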


Appendix: How do you cool something in a vacuum?

Some folks may not know how stuff can cool down in space with no air or water around it. A satellite, like any other body in space, cools down via blackbody radiation. Any object above absolute zero will radiate heat, but spacecraft and satellites have radiators designed to do this more efficiently. Radiators reflect visible light and sunlight while radiating out infrared. Heat from individual components (like GPUs, one day) is transferred to the radiators via fluid circulating in heat pipes or through highly thermally conductive pathways like metal.

The ISS is a good example. Note how the radiators are perpendicular to the solar arrays (and the direction of the sun) so they absorb as little sunlight as possible.

Solar arrays and radiators on the International Space Station (Xie and Burger 2016)

A sun-synchronous orbit means constant sunlight, which means more heat to deal with, but thanks to the magic of radiators it’s not an insurmountable problem.
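To put rough numbers on it (my own back-of-envelope, not from any of the proposals above): the Stefan-Boltzmann law gives the heat a radiator can reject per square metre, which lets you estimate how much radiator area a given compute load needs.

```python
# Back-of-envelope radiator sizing using the Stefan-Boltzmann law.
# Parameters are illustrative assumptions, not from any specific proposal.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.9         # typical for radiator coatings
radiator_temp_k = 300    # ~27 C radiating surface
sink_temp_k = 4          # deep-space background (ignores Earth/sun view factors)

# Net heat rejected per square metre of double-sided radiator
w_per_m2 = 2 * emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)

it_load_w = 1_000_000    # a 1 MW orbital data centre; waste heat ~ power draw
area_m2 = it_load_w / w_per_m2
print(f"{w_per_m2:.0f} W/m^2 rejected -> ~{area_m2:.0f} m^2 of radiator for 1 MW")
```

On these assumptions a megawatt of compute needs over a thousand square metres of radiator, which is why the radiator mass and area question in footnote 9 matters so much.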


  1. Smart from one point of view, which doesn’t necessarily mean I support it. ↩︎
  2. I enjoyed seeing the FCC need to explain what a Kardashev II-level civilisation is in a regulatory document. ↩︎
  3. See this SemiAnalysis post for more about data centre cooling systems. Also, Microsoft’s latest data centres use closed-loop cooling that requires zero water for evaporation. ↩︎
  4. Increased river/lake water temperature can stress or kill fish and other wildlife. This is doubly harmful because elevated temperatures typically decrease the level of dissolved oxygen while also increasing metabolic rates, so organisms need more oxygen. ↩︎
  5. Even small temperature increases can cause decline in bottom-dwelling organisms. Organisms already in warmer environments are even more vulnerable to additional thermal stress. ↩︎
  6. In 2005-06 a coal plant in Ohio killed ~46 million fish and ~2 billion fish eggs and larvae. ↩︎
  7. Going even deeper down the rabbit hole: impingement and entrainment don’t affect fish population levels, because many of those fish would have died anyway, other pressures like pollution have a much greater effect, and “only” ~10% of the wild population died due to the plant in the case of the Ohio Bay Shore coal plant and the Maumee River. I’m also not sure what the net effect is on wild-animal suffering of a fish or larva dying in a water intake vs dying naturally. ↩︎
  8. Note that these are problems that any new electricity load would cause, and as we’re trending away from thermal generation (coal and nuclear — gas too but it uses less water for cooling) and towards renewables, this problem may reduce in scope. ↩︎
  9. As a counterview, Mach33 Research say that for small spacecraft radiators are only 10-20% of total mass and ~7% of total planform area. ↩︎
  10. You may as well just launch a new satellite instead of maintaining an old one. ↩︎
  11. The primary radiation effect at LEO altitudes is single-event bit flips in memory caused by energetic particles. For traditional software, even a single bit flip can be catastrophic, but large neural networks may be inherently resilient: as Musk put it on the same Dwarkesh episode, “if you’ve got a multi-trillion parameter model and you get a few bit flips, it doesn’t matter.” Memory shielding can further reduce the risk, and Google’s proton beam tests suggest that with appropriate screening of the most vulnerable components (particularly high-bandwidth memory), radiation is a solvable engineering problem. ↩︎
  12. The current top downlink for space-to-ground is 200 Gbps with NASA’s TBIRD system, compared to a modern submarine fibre cable with 200+ Tbps. Maybe that’s not a fair comparison. ↩︎
  13. It could happen today if someone wanted to, approval aside, but what I really mean is it’s not going to be better than terrestrial data centres this year. ↩︎
  14. The solar hedgehog graph is one of my top 10 favourite graphs. They just keep underestimating solar growth. ↩︎

Data centre demand forecasts, phantom load, and who pays?

I’ve started a Substack. Feel free to subscribe there as well.

A few interesting things on data centres and energy this week: one analysis puts data centre demand on the Australian National Electricity Market (NEM) materially lower than headline figures suggest, Australia hasn’t yet figured out who should pay for that demand, and Anthropic has promised to cover the costs its US data centres impose on other ratepayers.

Most Australian data centre connection requests are phantom demand

AEMO’s prospective data centre growth in the NEM – source

Oxford Economics has done useful work estimating how much of the data centre connection pipeline in the NEM will actually translate into grid demand. The answer: roughly 6 out of every 7 MW requested won’t materialise. Of the prospective projects expected to proceed — around 6 GW of capacity — actual grid draw at maturity is estimated at around 2.8 GW, less than half.

Phantom demand is primarily driven by connection requests considered unlikely to hit the grid – source

The gap has two sources. Nearly 90% comes from projects that simply won’t proceed, i.e. connection requests that won’t materialise. The remaining gap reflects the difference between what a data centre requests and what it actually draws: built-in redundancy, peak-based connection requests rather than average load, and an additional buffer on top. In practice, facilities draw roughly half of their requested capacity.
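A toy version of that derating logic, with illustrative inputs of my own (Oxford Economics’ actual assumptions will differ), looks like this:

```python
# Toy derating of a data-centre connection pipeline, following the logic above.
# The pipeline size and proceed probabilities are illustrative placeholders;
# the draw ratio reflects the "roughly half" figure quoted in the post.

def expected_grid_draw(pipeline_mw, p_proceed, draw_ratio=0.5):
    """Expected steady-state grid draw from a pile of connection requests."""
    return pipeline_mw * p_proceed * draw_ratio

pipeline_mw = 20_000              # hypothetical total connection requests
for p in (0.2, 0.3, 0.5):         # share of requested projects actually built
    draw = expected_grid_draw(pipeline_mw, p)
    print(f"{pipeline_mw/1000:.0f} GW requested, {p:.0%} proceed -> ~{draw/1000:.1f} GW of actual demand")
```

With a ~30% proceed rate and the ~50% draw ratio, a hypothetical 20 GW pipeline shrinks to roughly 3 GW of real demand, the same shape as the “6 in 7 won’t materialise” result.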

Connection requests not linked to prospective projects – source

This matters because AEMO’s forecasts and the Integrated System Plan are necessarily informed by connection request data. If planners treat that pipeline as a reliable proxy for future demand, they risk building infrastructure that gets socialised across all electricity consumers. Connection requests are a leading indicator of investment interest, not a forecast.

Anthropic commits to covering electricity cost impacts

Anthropic, the creator of Claude, published a voluntary commitment to cover electricity cost increases their US data centres impose on other ratepayers. The commitments are substantive: paying 100% of grid upgrade costs that would otherwise be passed to consumers, procuring net-new generation to match consumption, investing in curtailment systems to reduce strain during peak demand, and working with utilities to estimate and cover any residual demand-driven price effects.

Keep in mind that energy is a relatively small share of the cost of training frontier models (around 2-6%, compared to staff costs at 29-49%, chips at 23-32%, server components at 15-22%, and cluster-level interconnect at 9-13%), so wearing all the energy costs of training and inference doesn’t seem likely to affect the margin much. It’s probably the bare minimum.
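A quick illustrative calculation using rough midpoints of those shares shows why: even a large electricity price increase barely moves the total cost of a training run.

```python
# Illustrative: how much a big electricity price increase moves total training
# cost, using rough midpoints of the cost shares quoted above.

cost_shares = {                  # approximate share of total training cost
    "staff": 0.39,
    "chips": 0.27,
    "server components": 0.18,
    "interconnect": 0.11,
    "energy": 0.04,
}

energy_price_increase = 0.5      # suppose electricity gets 50% more expensive
total_increase = cost_shares["energy"] * energy_price_increase
print(f"A {energy_price_increase:.0%} electricity price rise adds ~{total_increase:.1%} to total training cost")
# ~2% with these midpoints, which is why absorbing these costs is plausible
# as a voluntary commitment.
```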

Percentage of costs for training and experiments of ML models – source

For Australia, the direct mechanism doesn’t transfer cleanly. We don’t have the same vertically integrated utility structure, and the AER and AEMC are already working through cost recovery frameworks for large loads. But the underlying question is identical: should data centres pay the full cost of grid upgrades they necessitate, or should those costs be spread across all consumers? In Australia, how connection assets are categorised by the AER significantly affects whether everyday consumers end up subsidising that infrastructure.

Bring-Your-Own tariffs: interesting, but largely a US story

RMI published a useful explainer on “Bring-Your-Own” tariffs — mechanisms that let large electricity users fund new generation themselves in exchange for faster grid access and differentiated rates. A “Clean Transition Tariff” variant restricts eligible resources to clean energy, turning corporate demand into a driver of new clean firm capacity; the most prominent example is Google’s tariff in Nevada, where it contracted enhanced geothermal capacity to meet some of its demand.

The appeal is intuitive: large loads get speed-to-power, utilities get risk protection, and other ratepayers are insulated from the cost of generation that might otherwise be stranded. In the US context, where data centres are running into multi-year interconnection queues, this solves a real problem.

Australia’s competitive wholesale market and different utility structure make a direct equivalent unlikely. We don’t have the same vertically integrated utilities with integrated resource plans to build alongside. But the underlying principle — that large loads should finance the incremental generation they require, rather than free-riding on existing capacity — is directly relevant to current policy discussions here, and it’s worth watching how the US experiments play out.

The consumer perspective

In a previous role (my views are not necessarily representative… etc. etc.) at Energy Consumers Australia I wrote a piece on how data centre growth could affect household electricity bills. The international evidence is sobering: in Ireland, data centres accounted for 88% of increased electricity demand between 2015 and 2024. In Virginia, unconstrained data centre growth is projected to add $40/month to household bills by 2040. In Australia, network costs already account for nearly half of household electricity bills, and they’re rising.

Data centre load also has implications for non-bulk energy costs such as system strength and frequency control ancillary services (FCAS). Data centres can shift load rapidly in ways that may not be visible to the Australian Energy Market Operator in real time — illustrated vividly by an incident in Virginia where 60 data centres simultaneously switched to backup power during a grid disturbance, requiring the operator to rapidly curtail generation. If AEMO lacks real-time visibility of data centre demand, ancillary service costs increase and may be recovered broadly across consumers. The AEMC is working on rule changes to address this, which is worth watching.