Google’s Space Data Center Plan Is Absolutely Wild


According to DCD, Google is partnering with Planet Labs on Project Suncatcher to launch AI chips into space, with the first two satellites planned by early 2027. The company’s research paper outlines a vision for 81-satellite clusters forming kilometer-wide arrays in low Earth orbit at around 650km altitude. Google tested its Trillium-generation TPUs in particle accelerators to simulate space radiation and found they survived, though thermal management and reliability remain challenges. CEO Sundar Pichai acknowledged this “moonshot” will require solving complex engineering problems. The project comes as multiple companies including SpaceX and Blue Origin also pursue space data centers, creating a new space race for computing infrastructure.

Why even do this?

Here’s the thing – we’re running out of power and space on Earth for these massive AI data centers. Google’s basically looking at the ultimate expansion plan: putting compute where there’s unlimited solar power and no neighbors complaining about the noise. Their research suggests that if launch costs drop to $200/kg (way below current rates), space computing could become cost-competitive with terrestrial data centers by 2035.

But that’s a huge “if.” Current launch costs run more like $1,500-$2,900 per kilogram, and hitting that target would take something like SpaceX’s Starship flying 180 times per year. Meanwhile, terrestrial power costs won’t stand still either. It’s a massive gamble on future economics.
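To see how that crossover could work, here’s a minimal back-of-the-envelope sketch in Python. Only the $200/kg target and the $1,500-$2,900/kg range come from the reporting above; the mass per kilowatt, satellite lifetime, and electricity price are illustrative assumptions, not Google’s figures.

```python
# Back-of-the-envelope launch economics. Only the $/kg figures come from
# the coverage above; everything else is an assumed placeholder.

def launch_cost_per_kw_year(cost_per_kg, kg_per_kw=10.0, lifetime_years=5.0):
    """Amortized launch cost per kW of compute per year of satellite life.

    kg_per_kw: assumed satellite mass needed per kW of delivered compute
    lifetime_years: assumed operational lifetime before replacement
    """
    return cost_per_kg * kg_per_kw / lifetime_years

# Terrestrial comparison point: powering 1 kW around the clock at an
# assumed $0.08/kWh industrial rate.
terrestrial_per_kw_year = 0.08 * 24 * 365  # ~$700/kW-year

for cost_per_kg in (2900, 1500, 200):
    space = launch_cost_per_kw_year(cost_per_kg)
    print(f"${cost_per_kg}/kg -> ~${space:,.0f}/kW-year in orbit "
          f"vs ~${terrestrial_per_kw_year:,.0f} on the ground")
```

With these toy numbers, today’s launch prices put orbit at several times the terrestrial power bill, while $200/kg dips below it. The entire bet lives in that one input.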

The technical nightmares

Let’s talk about what could go wrong. First, networking – Google’s ground data centers use high-bandwidth optical links between chips. In space, they’re proposing satellites flying in far tighter formation than anything previously attempted so the laser links can work at all. We’re talking about 10Tbps of bandwidth between satellites, using technology that demands far more optical power than traditional satellite crosslinks.
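For a rough sense of why distance matters so much for those laser links, consider pure beam spread: the transmitted spot grows with separation, so received power falls off with the square of the distance once the spot outgrows the receiver. The aperture and divergence numbers below are illustrative assumptions, not values from Google’s paper.

```python
import math

# Geometric capture fraction of a free-space laser link: the beam spreads
# to a spot of roughly (divergence * distance), and only the light hitting
# the receive aperture counts. Numbers here are illustrative assumptions.

def capture_fraction(aperture_m, divergence_rad, distance_m):
    """Fraction of transmitted power landing on the receive aperture,
    assuming a uniform far-field spot of diameter divergence * distance."""
    spot_diameter = divergence_rad * distance_m
    return min(1.0, (aperture_m / spot_diameter) ** 2)

# Same 10 cm aperture and 20 microradian beam at two separations:
for distance_km in (1, 100):
    frac = capture_fraction(0.1, 20e-6, distance_km * 1e3)
    print(f"{distance_km:>4} km separation: {10 * math.log10(frac):.1f} dB geometric loss")
```

At kilometer-scale spacing essentially all the light lands on the receiver; at the hundreds of kilometers typical of existing constellations, the same hardware eats tens of decibels of loss. That gap is exactly why the formation has to be so tight.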

Then there’s radiation. Their TPUs survived proton beam tests simulating five years in space, but the high-bandwidth memory had uncorrectable errors. Basically, your AI models might get corrupted by cosmic rays during training. Google says it’s “likely acceptable for inference” – meaning running existing models might work, but training new ones could be problematic.
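Here’s a toy model of why that training/inference split makes sense. Treat uncorrectable memory errors as a Poisson process: a month-long training run across a whole cluster accumulates vastly more exposure than a one-second inference call. The error rate below is a made-up placeholder; neither Google nor DCD publishes one.

```python
import math

# Uncorrectable-error odds under a simple Poisson model. One silent
# corruption can poison a long training run, while an inference request
# is short and stateless. The rate below is an assumed placeholder.

def p_at_least_one_error(rate_per_chip_hour, chips, hours):
    """P(at least one uncorrectable error) given independent arrivals."""
    expected_errors = rate_per_chip_hour * chips * hours
    return 1.0 - math.exp(-expected_errors)

rate = 1e-5  # assumed uncorrectable errors per chip-hour in orbit
print(f"1-second inference, 1 chip:   {p_at_least_one_error(rate, 1, 1 / 3600):.2e}")
print(f"30-day training run, 81 sats: {p_at_least_one_error(rate, 81, 30 * 24):.1%}")
```

With these invented numbers the inference call is essentially never hit, while the big training run faces better than 40% odds of at least one silent corruption.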

The cooling problem nobody’s talking about

Think about how much heat those TPUs generate on Earth, even with massive cooling systems behind them. Now imagine shedding that heat in the vacuum of space, where there’s no air for convection and everything must ultimately leave by radiation. Google’s paper mentions needing “advanced thermal interface materials” and preferably passive cooling systems, because if your cooling fails up there, you can’t send a technician to fix it.
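The governing physics is the Stefan-Boltzmann law: in vacuum, rejected power scales with radiator area times the fourth power of temperature. Here’s that arithmetic with an assumed 10kW heat load; the model ignores sunlight and Earth’s infrared glow, both of which make real radiators bigger.

```python
# Radiator sizing from the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * (T^4 - T_sink^4)
# Heat load and temperatures are illustrative assumptions, and sunlight
# plus Earth's infrared (which warm a real radiator) are ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9, sink_temp_k=4.0):
    """Radiator area needed to reject power_w at radiator temperature temp_k."""
    return power_w / (emissivity * SIGMA * (temp_k**4 - sink_temp_k**4))

# Rejecting 10 kW of TPU heat with the radiator held at 60 C vs 20 C:
for temp_c in (60, 20):
    print(f"radiator at {temp_c} C: ~{radiator_area_m2(10_000, temp_c + 273.15):.0f} m^2")
```

Running hotter shrinks the radiator fast (that fourth-power law), but only if the chips tolerate the higher temperatures, which is the tension behind Google’s “advanced thermal interface materials” line.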

And that’s the real kicker – when things break in orbit, you can’t just replace them. Google’s solution? “Redundant provisioning.” Translation: launch way more satellites than you need because some will inevitably fail. That drives up costs even more.
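Redundant provisioning is easy to quantify with a binomial model: assume each satellite independently survives its design life with some probability, then solve for how many you must launch to keep 81 alive. The 90% survival figure below is an assumption for illustration.

```python
from math import comb

# How many satellites to launch so that at least 81 survive?
# The per-satellite survival probability is an assumed placeholder.

def p_at_least_k_alive(n, k, p_survive):
    """P(at least k of n satellites survive), independent Bernoulli trials."""
    return sum(comb(n, i) * p_survive**i * (1 - p_survive)**(n - i)
               for i in range(k, n + 1))

def satellites_to_launch(k=81, p_survive=0.90, confidence=0.95):
    """Smallest launch count n with P(>= k survivors) above the target."""
    n = k
    while p_at_least_k_alive(n, k, p_survive) < confidence:
        n += 1
    return n

print(f"Launch {satellites_to_launch()} satellites to keep 81 alive "
      f"with 95% confidence")
```

Under those assumptions it works out to roughly 95 launches for 81 working satellites: call it 17% pure overhead, before a single chip computes anything.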

The new space race is here

What’s fascinating is how many players are jumping into this. Elon Musk says SpaceX “will be doing” space data centers. Jeff Bezos predicts gigawatt data centers in space within 10+ years. Even former Google CEO Eric Schmidt bought a rocket company specifically for this purpose. And Starcloud just launched its Starcloud-1 satellite carrying an Nvidia H100, with a pitch for an eventual 5GW data center built around a 4km solar array.

Google’s approach is different though – they’re going for many small satellites instead of massive monolithic structures. Their research paper argues that assembling huge structures in space is too complex, while smaller satellites are more manageable. But can they really coordinate 81 satellites flying in tight formation without collisions?
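Some quick orbital mechanics shows why that coordination is genuinely hard. Two satellites whose orbital altitudes differ by even 100 meters have slightly different periods, so they drift apart along-track every revolution; the standard result is a drift of 3π times the altitude error per orbit. The 100m offset below is an arbitrary illustration.

```python
import math

# Along-track drift between two satellites with slightly different
# semi-major axes. A lower orbit is faster, so any altitude mismatch
# turns into steady along-track separation: ~3*pi*delta_a per orbit.

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378e3     # equatorial Earth radius, m

a = R_EARTH + 650e3                      # semi-major axis at ~650 km altitude
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60

delta_a = 100.0                          # assumed 100 m altitude mismatch
drift_per_orbit = 3 * math.pi * delta_a  # classic along-track drift result

print(f"Orbital period at 650 km: ~{period_min:.1f} minutes")
print(f"Drift from a {delta_a:.0f} m altitude error: ~{drift_per_orbit:.0f} m per orbit")
```

So a 100-meter altitude error becomes nearly a kilometer of along-track separation every orbit, about 98 minutes. Holding 81 satellites in a kilometer-wide array means fighting that drift continuously, for every satellite, for years.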

This feels like one of those ideas that’s either brilliant or completely insane. The technical challenges are enormous, the economics are speculative, and we’re talking about building infrastructure where failure means losing millions of dollars in hardware that you can’t retrieve. But if anyone has the resources to try this crazy experiment, it’s Google. Whether this becomes the future of computing or an expensive lesson in orbital physics remains to be seen.
