According to DCD, Google is launching its TPU AI chips into space through Project Suncatcher in partnership with Planet Labs, with the first two satellites scheduled to launch by early 2027. The company published a research paper detailing plans for massive 81-satellite clusters forming 1km-radius arrays in low Earth orbit. Google tested its Trillium-generation TPUs in particle accelerators to simulate space radiation and found they survived, though significant challenges remain around thermal management and reliability. CEO Sundar Pichai acknowledged this “moonshot” requires solving complex engineering problems, while the company theorizes these orbital clusters could eventually scale to terawatts of compute capacity.
Why even do this?
Here’s the thing – everyone’s suddenly obsessed with space data centers. Elon Musk says SpaceX “will be doing” them, Jeff Bezos predicts gigawatt orbital data centers within a decade, and even former Google CEO Eric Schmidt bought a rocket company specifically for this purpose. But Google’s approach is different from what startups like Starcloud are proposing. Instead of building massive single structures that would need space assembly, Google wants swarms of smaller satellites flying in tight formation. Basically, they’re thinking modular rather than monolithic.
The engineering nightmare
So how do you make satellites talk to each other fast enough to function as one big computer? Current inter-satellite links max out around 100Gbps, but Google’s terrestrial data centers move hundreds of gigabits per second per chip. Their solution involves flying the satellites in an extremely tight formation, within hundreds of meters of each other, and linking them with dense wavelength division multiplexing over free-space optics. Because received optical power falls off with the square of the separation distance, the closer they fly, the less power each link needs and the more independent data streams they can establish. But we’re talking about formations tighter than anything ever attempted before.
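To see why the distance matters so much, here’s a back-of-envelope link-budget sketch in Python. The transmit power, beam divergence, and telescope aperture below are made-up illustrative numbers, not anything from Google’s paper; the point is only the inverse-square scaling of received power with separation.

```python
# Back-of-envelope free-space optical link budget between two satellites.
# All numbers below are illustrative assumptions, not Project Suncatcher's design.

import math

def received_power_w(tx_power_w, divergence_rad, rx_aperture_m, distance_m):
    """Power collected by the receiver aperture, assuming the beam spreads
    as a cone with the given half-angle divergence."""
    beam_radius = divergence_rad * distance_m         # beam footprint radius at the receiver
    beam_area = math.pi * beam_radius ** 2
    rx_area = math.pi * (rx_aperture_m / 2) ** 2
    capture_fraction = min(1.0, rx_area / beam_area)  # can't collect more than was sent
    return tx_power_w * capture_fraction

# Assumed hardware: 1 W transmitter, 15 microradian divergence, 8 cm receive telescope.
for distance_km in (1, 10, 100, 1000):
    p_rx = received_power_w(1.0, 15e-6, 0.08, distance_km * 1e3)
    print(f"{distance_km:>5} km separation -> received power ~ {p_rx:.2e} W")
```

Under these toy numbers, pulling the separation in from 1,000 km to under a kilometer recovers several orders of magnitude of received power, which is the headroom that makes hundreds of gigabits per second per link plausible with modest optics and laser power.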
Radiation is another huge problem. Google tested its chips by blasting them with proton beams equivalent to five years’ worth of space radiation. The TPUs survived, but the high-bandwidth memory showed some uncorrectable errors, which might be tolerable for AI inference but could wreck training runs. And then there’s cooling: you can’t just open a window in space, and with no air there’s no convection, so every watt ultimately has to leave by radiation. They’ll need advanced thermal materials and passive cooling systems to move heat from the chips to radiators without any moving parts.
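To get a feel for the cooling side, here’s a minimal radiator-sizing sketch using the Stefan-Boltzmann law. The 10 kW heat load, radiator temperatures, and emissivity are illustrative assumptions, not figures from Google’s paper.

```python
# How much radiator area does it take to reject waste heat in vacuum?
# Purely radiative cooling: P = emissivity * sigma * A * T^4 (Stefan-Boltzmann),
# ignoring absorbed sunlight and Earth albedo for simplicity.
# The heat load and radiator temperatures below are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w, radiator_temp_k, emissivity=0.9):
    return heat_load_w / (emissivity * SIGMA * radiator_temp_k ** 4)

# Example: a satellite dissipating 10 kW of TPU waste heat.
for temp_k in (300, 330, 360):
    area = radiator_area_m2(10_000, temp_k)
    print(f"radiator at {temp_k} K -> about {area:.1f} m^2 needed")
```

Even at a toasty 360 K, a satellite dumping 10 kW needs on the order of ten square meters of radiating surface, and moving that heat from the dies to the radiator without pumps or fans is exactly where the advanced thermal materials come in.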
When does this make sense?
The economics are still completely insane. Current launch costs run from $1,500 to $2,900 per kilogram, and Google calculates prices would need to fall to around $200/kg before orbit even starts to be competitive with terrestrial data centers. The optimistic scenario? Maybe by 2035, if SpaceX’s Starship gets up to 180 flights per year. But by then ground-based data center costs will have changed too, so it’s a moving target that requires believing in some pretty aggressive cost reduction curves.
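To make the shape of that comparison concrete, here’s a toy amortization in Python that weighs the launch cost behind each kilowatt of orbital compute against what a kilowatt costs to power on the ground. The mass-per-kilowatt, satellite lifetime, and terrestrial cost figures are hypothetical placeholders, not Google’s actual inputs.

```python
# Toy comparison: amortized launch cost vs. terrestrial power cost, per kW of compute.
# Every number here is an assumption for illustration; Google's paper uses its own figures.

def launch_cost_per_kw_year(launch_usd_per_kg, kg_per_kw, lifetime_years):
    """Amortize the cost of launching the mass behind 1 kW of compute over the satellite's life."""
    return launch_usd_per_kg * kg_per_kw / lifetime_years

ASSUMED_KG_PER_KW = 10             # satellite mass (solar, radiators, compute) per kW delivered to chips
ASSUMED_LIFETIME_YEARS = 5         # useful life before deorbit or obsolescence
TERRESTRIAL_USD_PER_KW_YEAR = 800  # rough ground-side cost to feed 1 kW around the clock

for price in (2900, 1500, 200):
    space = launch_cost_per_kw_year(price, ASSUMED_KG_PER_KW, ASSUMED_LIFETIME_YEARS)
    print(f"${price}/kg launch -> ~${space:,.0f} per kW-year vs ~${TERRESTRIAL_USD_PER_KW_YEAR} on the ground")
```

With these made-up inputs, $200/kg is roughly the point where the launch bill stops dwarfing the ground-side power bill, which is the gist of the crossover argument even if Google’s exact numbers differ.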
What this actually means
Look, this is classic Google moonshot thinking. They’ve published their research paper and they’re being transparent about the massive challenges. The fact that they’re even seriously researching this tells you something about where they think computing is headed. We’re hitting physical limits on Earth: power availability, thermal constraints, real estate costs. Space offers near-continuous solar power and the chance to radiate waste heat straight to deep space, but it introduces a whole new set of problems.
Is this practical anytime soon? Probably not. But Google’s playing the long game here. They’re thinking about what computing infrastructure looks like in 20-30 years, not next quarter. And honestly, with the insane power demands of AI models growing exponentially, maybe looking beyond our atmosphere isn’t completely crazy after all.
