The race to build data centers in orbit isn’t really about computing in space. It’s about who controls the next era of computing on Earth. SpaceX has filed an application with the US Federal Communications Commission to launch a constellation of orbital data centers. Google is reportedly planning a test constellation of data-crunching satellites. Amazon, already dominant in cloud infrastructure and increasingly active in launch services, is positioning itself for the same frontier. The premise is seductive: unlimited solar power, free thermal dissipation into the vacuum of space, and no strain on terrestrial energy grids already buckling under AI workloads. But the deeper logic is strategic — the companies capable of solving four fundamental engineering barriers will lock in an advantage that no terrestrial competitor, and no nation without its own launch capability, can easily overcome.
As Silicon Canals has reported, the US and China already control 90% of AI data center capacity. Moving compute to orbit wouldn’t just extend that dominance — it would harden it into something approaching permanence, raising the barrier to entry for every nation without its own rockets. The engineering challenges are real, but so is the power consolidation they enable. Whoever solves these problems first doesn’t just build a better data center; they reshape the geopolitics of computation itself.
The heat paradox
Space is cold, but cooling electronics there is harder than on Earth. In a vacuum there is no air for convection and no water to pump through a chiller, so waste heat can leave only by radiation. In constantly illuminated sun-synchronous orbits, equipment temperatures can remain high enough to threaten long-term electronics reliability. Industry experts consistently flag thermal management as one of the hardest problems in orbital computing. The European Space Agency has been developing mechanically pumped fluid loop systems for satellite heat rejection, but scaling these to data-center size remains unproven.
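To see why, a back-of-envelope sketch using the Stefan-Boltzmann law (the only heat-rejection mechanism available in vacuum) shows how much radiator area even a modest cluster would need. The power, temperature, and emissivity values below are illustrative assumptions, not figures from any of the proposals above:

```python
# Back-of-envelope radiator sizing, assuming purely radiative cooling.
# All parameter values here are illustrative assumptions, not published specs.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject power_w watts at surface temperature
    temp_k, ignoring absorbed sunlight and Earth's infrared (optimistic)."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A hypothetical 1 MW compute cluster with radiators running at 300 K (~27 C):
area = radiator_area_m2(1e6, 300.0)
print(f"{area:,.0f} m^2 of radiator")  # roughly 2,400 m^2
```

Under these assumptions a radiator sheds only about 400 watts per square meter, so one megawatt, a tiny fraction of a terrestrial hyperscale facility, already demands thousands of square meters of radiator surface. That is why ESA's pumped fluid loops matter: they carry heat from dense chips out to those large panels.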
Radiation-hardened chips
Beyond the magnetosphere’s protection, cosmic radiation degrades semiconductor performance and introduces errors. Aircraft crews already face an elevated cancer risk from radiation exposure at cruising altitude; orbital hardware faces far greater bombardment. Nvidia has recently touted hardware designed for orbital AI compute, and startups have launched satellites fitted with advanced GPUs. But researchers at Carnegie Mellon University note that the redundancy requirements are severe: systems must not only meet current needs but also carry spare capacity, extra parts, and reconfigurability to keep working as components fail.
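The redundancy those researchers describe has a classic form: triple modular redundancy, in which three replicas compute the same result and a majority vote masks a single radiation-induced bit flip. A minimal sketch of the idea, not any vendor's actual implementation:

```python
# A minimal sketch of triple modular redundancy (TMR), one classic way
# radiation-tolerant systems mask single-event upsets: run a computation
# on three independent units and take a majority vote on the outputs.
from collections import Counter

def tmr_vote(results):
    """Majority vote over three replica outputs; raises if no two agree."""
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority -- flag for recomputation")
    return winner

# One replica returns a bit-flipped value; the vote masks the fault.
print(tmr_vote([42, 42, 46]))  # -> 42
```

The cost is the point: tolerating one fault per computation triples the hardware, which is why orbital compute budgets look so different from terrestrial ones.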
Orbital congestion
Proposals for massive satellite constellations run headlong into physics. As of early 2026, there are roughly 15,000 active satellites in orbit — more than triple the number five years ago — with SpaceX’s Starlink constellation alone accounting for over 7,000 of them. Approved filings with the ITU and FCC would add tens of thousands more: Amazon’s Project Kuiper plans 3,236 satellites, and SpaceX has approval for up to 12,000 Starlink units with applications pending for 30,000 beyond that. Researchers have estimated that low Earth orbit can safely support somewhere between 60,000 and 100,000 active satellites across all orbital shells, given the minimum spacing needed for collision avoidance — a ceiling that begins to look uncomfortably close when data-center constellations are added to the manifest. Starlink satellites already perform extensive collision-avoidance maneuvers in an increasingly crowded low Earth orbit. This congestion problem contains its own concentrating logic: extremely large constellations may not be feasible unless controlled by a single entity capable of coordinating thousands of orbital maneuvers in real time — which means, in practice, SpaceX or a company very much like it.
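That ceiling can be sanity-checked with simple geometry: divide a shell's surface area by the exclusion zone each satellite needs around it. The 50 km spacing below is an illustrative assumption, and the naive geometric answer comes out far higher than the researchers' estimate of between 60,000 and 100,000, which is itself instructive: the real limits come from crossing orbits, maneuvering margins, and debris, not raw surface area.

```python
# Naive capacity of one orbital shell: surface area of the shell divided
# by the exclusion square each satellite needs. The 50 km spacing is an
# illustrative assumption, not a regulatory or published standard.
import math

EARTH_RADIUS_KM = 6371.0

def shell_capacity(altitude_km: float, min_spacing_km: float) -> int:
    r = EARTH_RADIUS_KM + altitude_km
    shell_area = 4.0 * math.pi * r ** 2
    return int(shell_area / min_spacing_km ** 2)

# A Starlink-like shell at 550 km with ~50 km separation between satellites:
print(shell_capacity(550.0, 50.0))  # naive ceiling: a few hundred thousand slots
```

The gap between this geometric upper bound and the published 60,000-satellite estimate is a measure of how much capacity orbital dynamics, not area, actually takes away.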
Can it actually pencil out?
A 2024 feasibility study led by Thales Alenia Space concluded that gigawatt-scale orbital data centers could exist before 2050, though it would require solar arrays far larger than the International Space Station’s. Industry estimates suggest the crossover to cost-competitiveness with terrestrial data centers may come within the next couple of decades.
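Whether it pencils out reduces, in the crudest framing, to a comparison between launch-dominated orbital costs and energy-dominated terrestrial costs. A toy model, in which every number is a hypothetical assumption rather than an industry figure:

```python
# A toy model of the cost crossover. Every number here is an illustrative
# assumption: compare the cost of launching a rack's mass to orbit against
# the electricity that same rack would consume on the ground over its life.

def orbital_launch_cost(mass_kg: float, usd_per_kg: float) -> float:
    return mass_kg * usd_per_kg

def terrestrial_energy_cost(power_kw: float, years: float, usd_per_kwh: float) -> float:
    return power_kw * 24 * 365 * years * usd_per_kwh

# Hypothetical 1,000 kg rack drawing 100 kW over a 10-year service life,
# against grid power at an assumed $0.10/kWh:
for usd_per_kg in (5_000, 500, 100):  # falling launch prices
    launch = orbital_launch_cost(1_000, usd_per_kg)
    energy = terrestrial_energy_cost(100, 10, 0.10)
    print(f"${usd_per_kg}/kg: launch ${launch:,.0f} vs grid energy ${energy:,.0f}")
```

Under these made-up numbers the crossover sits somewhere between $500 and $5,000 per kilogram of launch cost, which illustrates why launch price, not chip price, is the decisive variable in such estimates. A real model would also need radiation-hardening overhead, radiator mass, and the cost of servicing hardware that cannot be touched.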
Each of these four barriers — thermal management, radiation hardening, orbital congestion, and raw economics — is a genuine engineering challenge. But none of them is abstract. They are filters, and they favor incumbency. The companies most aggressively promoting orbital compute — SpaceX, Amazon, Google — are also the dominant players in launch services, cloud infrastructure, and AI training. Every barrier that persists is a moat that deepens their advantage. If orbital data centers become viable, the question won’t simply be whether the engineering works. It will be whether the rest of the world can afford to participate — or whether computation itself becomes a resource controlled from orbit by a handful of firms with the rockets to get there.

Feature image by Chris Lyo on Pexels