every few months someone floats the idea of putting a data center in space. it sounds futuristic. the renderings always look cool. and then you run the numbers and it falls apart so quickly that it starts to feel like a rite of passage in tech: you propose it once, you learn what physics does to your dreams, and you move on.

google just did this with project suncatcher, and they’re not alone. starcloud (backed by nvidia) is launching satellites, the EU’s ASCEND study has thales alenia space planning demos for 2028, and china’s already launched 12 satellites for a 2,800-satellite constellation. even elon and bezos are making noises about it.

the short version is that space gives you none of the things a data center actually needs. the longer version is why this idea keeps resurfacing even though the underlying problems have not changed in fifty years.

heat: the thing that actually kills you

the first and most basic issue is heat. data centers are giant heat machines that happen to compute things. on earth, you dump that heat into air or water. in space you get a vacuum: there is no air to carry heat away, so the only way to cool anything is to radiate it, and the radiators you need are enormous.

here’s the math that breaks everyone’s brain: the ISS has six deployed radiator arrays covering about 6,500 square feet total. they reject 70 kilowatts of heat. that’s it. 70kW for a structure the size of a football field.

a hyperscale data center draws 100 megawatts or more. microsoft’s chicago facility can pull 198MW — enough to power 150,000 homes. to run something even a fraction of that size, you’d need radiator wings larger than city blocks. most spacecraft radiators reject between 100 and 350 watts per square meter. they are heavy, fragile, and expensive to launch.
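
to make that concrete, here’s the back-of-envelope version in python. the only inputs are the numbers above (6,500 square feet and 70kW for the ISS, 100-350 W/m² for typical radiators, 100MW for the facility); everything else is arithmetic.

```python
# rough radiator sizing, using only the figures quoted above
ISS_RADIATOR_AREA_M2 = 6500 * 0.0929   # ~6,500 sq ft of deployed radiators, about 604 m^2
ISS_HEAT_REJECTED_W = 70_000           # ~70 kW rejected by those arrays

iss_flux = ISS_HEAT_REJECTED_W / ISS_RADIATOR_AREA_M2
print(f"ISS radiators: ~{iss_flux:.0f} W rejected per square meter")

DATA_CENTER_LOAD_W = 100e6             # 100 MW facility; essentially all of it becomes heat

# typical spacecraft radiator performance: 100-350 W/m^2
for flux_w_m2 in (100, 350):
    area_m2 = DATA_CENTER_LOAD_W / flux_w_m2
    print(f"at {flux_w_m2} W/m^2: ~{area_m2/1e4:.0f} hectares ({area_m2/1e6:.2f} km^2) of radiator")
```

even at the optimistic end of that range you’re looking at roughly thirty hectares of radiator panel, before you count the coolant loops, pumps, and the structure to hold it all.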

benjamin lee from upenn put it well in scientific american: orbital data centers need large radiators that dump heat into the vacuum of space, adding significant mass that has to be launched on rockets. mass is the enemy of everything in space.

power: the thing everyone handwaves

then comes power. a modest data center can draw tens of megawatts. a big one can push hundreds. in orbit you’re stuck with solar. you’d need solar arrays the size of neighborhoods, along with the structure to hold them, track the sun, and survive constant radiation damage.

google’s proposal puts satellites in a dawn-dusk sun-synchronous orbit so they’d see almost constant sunlight. sounds great until you do the math on how much solar panel area you need for 100+ megawatts, how much that weighs, and how you keep it all pointed the right direction while dealing with thermal expansion and micrometeorite impacts.
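
here’s that math, roughly. the assumptions are mine, not google’s: ~1,360 W/m² of sunlight in orbit, ~30% efficient space-grade cells, some margin for conversion losses and degradation, and an ISS-class areal density for the array structure.

```python
# rough solar array sizing for a 100 MW orbital facility
# all constants below are my assumptions, not numbers from any published design
SOLAR_FLUX_W_M2 = 1360        # sunlight intensity in earth orbit
CELL_EFFICIENCY = 0.30        # decent space-grade multi-junction cells
SYSTEM_MARGIN = 0.85          # conversion losses, degradation, pointing error
ARRAY_DENSITY_KG_M2 = 2.5     # panels plus supporting structure, roughly ISS-array class

LOAD_W = 100e6                # 100 MW of IT load

usable_w_m2 = SOLAR_FLUX_W_M2 * CELL_EFFICIENCY * SYSTEM_MARGIN
area_m2 = LOAD_W / usable_w_m2
mass_kg = area_m2 * ARRAY_DENSITY_KG_M2

print(f"usable power: ~{usable_w_m2:.0f} W/m^2")
print(f"array area:   ~{area_m2/1e4:.0f} hectares")
print(f"array mass:   ~{mass_kg/1e3:.0f} tonnes, panels and structure only")
```

that comes out to roughly thirty hectares and several hundred tonnes of array, and that’s with the generous parts already granted: a dawn-dusk orbit means no batteries for eclipse, and i haven’t added any power for cooling.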

the proponents wave away this problem by saying launch costs are dropping. and they are! falcon heavy is down to about $1,400/kg, and starship might eventually hit $100-200/kg. google’s own research suggests launch costs need to fall to $200/kg by the mid-2030s for this to pencil out. but even cheap launches don’t solve the physics. you still need to get all that mass up there, assemble it, and maintain it.
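
“cheap” is still doing a lot of work there. multiply any plausible station mass by those per-kilogram prices and you get the launch bill; the mass below is my own order-of-magnitude guess, the prices are the ones quoted above.

```python
# what the launch bill looks like at different $/kg, for an assumed station mass
STATION_MASS_KG = 2_000_000   # ~2,000 tonnes: a loose guess for a 100 MW facility
                              # (arrays, radiators, racks, shielding, structure)

launch_prices_usd_per_kg = {
    "falcon heavy today (~$1,400/kg)": 1400,
    "starship target (~$200/kg)": 200,
}

for label, usd_per_kg in launch_prices_usd_per_kg.items():
    cost_usd = STATION_MASS_KG * usd_per_kg
    print(f"{label}: ~${cost_usd/1e9:.1f}B just to reach orbit")
```

at today’s prices that’s billions before you’ve built anything; at the target price it starts to look survivable on a slide, which is exactly why every pitch assumes the target price.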

bandwidth: the real problem!!

you also need to send and receive data. real data centers push terabits per second between each other. memphis alone saw demand for long-haul and metro bandwidth grow from 0.3 to 13.2 terabits per second in just one year.

getting that kind of throughput from orbit is not something we can do at scale today. google claims they can achieve 800 Gbps to multiple terabits per second between satellites using free-space optical links. that’s optimistic engineering speak for “lasers in space, which work great until something gets between them.”

even starlink with its 45ms average round-trip latency from LEO is measurably slower than terrestrial networks. for a data center that needs to talk to other data centers and serve millions of users simultaneously, latency and throughput matter enormously.
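
the latency floor is easy to sanity-check. even with perfect routing and zero queuing, light has to make the round trip to orbit. the altitude and fiber-distance numbers below are my assumptions; the 45ms figure is the measured average mentioned above.

```python
# best-case round-trip times: LEO satellite vs. fiber to a nearby terrestrial data center
C_VACUUM_M_S = 299_792_458
C_FIBER_M_S = C_VACUUM_M_S * 0.67    # light travels at roughly 2/3 c in optical fiber

LEO_ALTITUDE_M = 550_000             # starlink-style shell, ~550 km (assumed)
METRO_FIBER_M = 100_000              # ~100 km of fiber to a nearby facility (assumed)

leo_rtt_ms = 4 * LEO_ALTITUDE_M / C_VACUUM_M_S * 1000   # up and down, there and back
fiber_rtt_ms = 2 * METRO_FIBER_M / C_FIBER_M_S * 1000

print(f"physics floor, satellite directly overhead: ~{leo_rtt_ms:.1f} ms")
print(f"100 km of metro fiber, round trip:          ~{fiber_rtt_ms:.1f} ms")
print("measured starlink average quoted above:      ~45 ms")
```

and that ~7ms floor assumes the satellite is straight overhead; realistic slant ranges and the hops to reach a ground station near the destination only push it up.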

my favorite quote on this comes from tom morgan of the centre for space domain awareness, who told live science: “putting the servers in orbit is a stupid idea, unless your customers are also in orbit.”

radiation: the slow cooker for electronics

space slowly cooks anything electronic. cosmic rays, solar particle events, and the van allen belts all conspire to flip bits, degrade components, and eventually kill hardware. single event effects (when a high-energy particle hits just the right transistor) are a constant threat. moore’s law makes this worse: smaller transistors run on less charge, making them more vulnerable to disruption.

you need shielding, which adds mass, which makes launches even more expensive. the ISS uses aluminum and multi-layer insulation, but that’s not enough for the dense compute you’d want in a data center. google tested their TPUs in a 67MeV proton beam and found they’d last about five years with shielding. five years for hardware that costs enormous amounts to launch.
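
a five-year ceiling also changes the economics: every refresh cycle means re-launching the mass. a rough sketch, where the per-rack mass is my own guess and the lifetime and launch prices come from above:

```python
# annualized launch cost of hardware that only survives ~5 years in orbit
RACK_EQUIVALENT_MASS_KG = 2000      # assumed: ~1 tonne of IT gear plus ~1 tonne of shielding/structure
HARDWARE_LIFETIME_YEARS = 5         # from the proton-beam testing mentioned above

for label, usd_per_kg in [("falcon heavy, ~$1,400/kg", 1400),
                          ("starship target, ~$200/kg", 200)]:
    launch_cost = RACK_EQUIVALENT_MASS_KG * usd_per_kg
    per_year = launch_cost / HARDWARE_LIFETIME_YEARS
    print(f"{label}: ~${launch_cost:,.0f} per rack-equivalent to launch, "
          f"~${per_year:,.0f}/year amortized, hardware cost not included")
```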

and once something breaks, you cannot roll a cart down an aisle and replace a drive. maintenance becomes a robotics problem or a crewed mission problem. both are slow and wildly expensive.

the actual use case

here’s where i’ll be less cynical: there are narrow use cases where space computing makes sense.

processing satellite imagery in orbit before downlinking saves bandwidth. if you’re collecting terabytes of earth observation data, running inference on “is this a ship? is this a forest fire?” and only sending back the results is genuinely useful. starcloud is actually doing this: they’ve trained an LLM in space and run google’s open gemma model on their satellite.

but that’s “space-based edge computing,” not “data center in space.” it’s a few kilowatts of processing for space-native workloads, not a hyperscale facility serving earth’s internet.

the real problem this is trying to solve

the reason this keeps coming up is that data centers now consume about 1.5% of global electricity and that could double by 2030. in some places they’re already straining local grids. ireland’s data centers account for over 20% of national electricity consumption. utilities in virginia and silicon valley are turning away new projects.

space data centers are what happens when an industry looks at a terrestrial infrastructure problem and says “what if we just… left?”

the actual solutions are boring: better cooling tech (liquid cooling, immersion cooling), more efficient chips, building data centers near cheap renewable power, maybe accepting that we can’t train infinitely larger AI models forever. but boring doesn’t get keynotes.

so, maybe next cycle

by the time you run the full cost model, you realize everything is working against you. radiators dominate the mass budget. solar arrays dominate the footprint. every refresh cycle takes another rocket launch. and a structure that large becomes a debris hazard, which means you need propulsion and a plan for when something hits you at ten kilometers per second.
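
ten kilometers per second is hard to picture, so here’s the kinetic energy of a single small fragment (the one-gram mass is my choice of example):

```python
# kinetic energy of a small piece of debris at a typical LEO closing speed
DEBRIS_MASS_KG = 0.001        # a one-gram fragment (paint fleck, bolt shard), assumed
CLOSING_SPEED_M_S = 10_000    # ~10 km/s, the figure above

energy_kj = 0.5 * DEBRIS_MASS_KG * CLOSING_SPEED_M_S ** 2 / 1000
print(f"a 1 g fragment at 10 km/s carries ~{energy_kj:.0f} kJ")
print("for scale, a high-powered rifle round delivers roughly 3-4 kJ at the muzzle")
```

and hectares of radiator and solar wing make for a very large target.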

people usually end these conversations with one of two conclusions. either the idea never made sense, or it might make sense one day for a very narrow set of workloads where latency doesn’t matter and earth is somehow out of power and room. neither of those describes the world we live in right now.

so yes, space data centers are fun to imagine. they just aren’t feasible with the physics, economics, or networking realities we have. we get to build plenty of ambitious things. this one stays in the bucket with space hotels and orbital monorails.

maybe next cycle.


if you’re interested in the actual near-term future of data center cooling, the boring answer is liquid cooling and immersion systems. microsoft is working on microchannels etched directly into silicon. TSMC is doing direct-to-chip cooling. the future is wet, not orbital.