According to Bloomberg Business, Apple’s AI strategy underwent a major crisis in 2025, culminating in a pivotal partnership with Google. After disappointing internal progress and delays to its “Apple Intelligence” platform, Apple entered talks with Anthropic and OpenAI before finalizing a deal with Google in November 2025 to use its Gemini models. The first phase, labeled internally as “Apple Foundation Models version 10,” will power a Siri update in iOS 26.4, set for beta next month and a public release in March or April. A fully reimagined Siri, codenamed “Campos,” will debut in iOS 27 this fall, relying on a more advanced “version 11” of the Gemini-based model. This strategic shift triggered a management reshuffle, effectively ending AI chief John Giannandrea’s tenure and consolidating power under software head Craig Federighi.
The internal meltdown
Here’s the thing: Mark Gurman’s reporting paints a picture of a company caught flat-footed. The emergency all-hands meeting where leadership called the initial reports “bulls–t”? It clearly didn’t work. The talent drain, starting with Ruoming Pang’s departure, tells you everything. Engineers vote with their feet, and they were leaving for “higher pay and more stable environments.” That’s a brutal indictment of the internal project’s morale and trajectory. So when talks with Anthropic stalled over a price tag in the billions and OpenAI was seen as a strategic competitor poaching Apple’s staff, Google became the only viable lifeline. It’s awkward, but it was probably the only move left.
The two-phase Siri plan
Now, Apple is basically running a two-track salvage operation. The imminent iOS 26.4 update is a stopgap: it gets a more capable Siri out the door using Google’s tech but hosted on Apple’s own Private Cloud Compute servers, and it lets Apple say it has delivered on some of its 2024 promises. But the real bet is “Campos” in iOS 27. That’s the full architectural reboot, and it’s fascinating that they’re considering running it directly on Google’s cloud TPUs. That’s a huge concession from a company famed for controlling the entire stack. It basically admits that for raw AI model power, they can’t compete with the scale of Google’s infrastructure. At least, not yet.
What got left behind
And this pivot has casualties beyond just people. Ambitious projects like the AI-era Safari browser and the “World Knowledge Answers” search competitor have been paused or scaled back. The vision of standalone chatbots in every app is being rethought in favor of a more integrated Siri. It feels like a necessary consolidation, but it also shows how scattered the initial plan was. Federighi seems to be applying a classic Apple focus: nail the core interface (Siri) and user experience first, and worry about the underlying commodity—the AI model itself—later. But is treating advanced AI as a commodity, like storage, a sustainable long-term strategy for a company that vertically integrates everything from chips to retail stores? I’m skeptical.
The long-game question
So where does this leave Apple’s own AI ambitions? The report says device-side models will still be developed in-house, and they’re building better servers for next year. But the momentum is clearly with the partnership model for the cloud. The big question is whether this is a permanent state or just a bridge. Apple famously hates relying on partners for core tech. Remember the multibillion-dollar dramas of replacing Intel chips and Qualcomm modems? They’ll want to own this eventually. But for now, the priority is simply catching up; they can’t afford to miss another cycle. The February preview will be the first real test of whether this awkward, necessary marriage with Google can actually deliver a Siri that doesn’t feel years behind.
