Apple Confirms Deal With Google To Use Gemini Models

Google:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

This deal is sensible and practical. Apple needs LLM technology that it doesn’t yet have, and Google is willing to license it on terms Apple is comfortable with. Apple can leverage Google’s technology to meaningfully improve Siri sometime this year, possibly delivering features as soon as April. Evidently, whether through hubris or stinginess, that improvement would not have been possible had Apple relied only on the output of its internal model efforts. Put simply, striking this deal results in a better outcome for Apple’s customers, and I’m glad they have done it.

This announcement shows Apple is finally serious about making Siri better. I have to assume this is the biggest financial commitment Apple has ever made to Siri. Behind the scenes, we know Mike Rockwell is leading these efforts, after reportedly growing frustrated by the lack of progress while he was developing the Vision Pro. So far, so good.

Obviously, the biggest short-term focus is updating Siri on the iPhone. That means not just shipping the long-delayed personal context features infamously first demoed at WWDC 2024, but also bringing Siri up to par with the modern LLM-powered voice assistant experiences available on Android and through apps like ChatGPT’s voice mode. The rollout will be piecemeal, constrained by Apple’s own frontend engineering resources and the expansion of its server capacity. Nevertheless, I think they are aiming to have something tangible to show before the end of the iOS 26 cycle, with even more Gemini-backed AI features underpinning iOS 27.

I’m also hopeful that Gemini running in Private Cloud Compute will provide an AI story for the Apple Watch and the HomePod, products that previously had no Apple Intelligence roadmap at all. With Gemini models now running in Apple’s cloud, it isn’t hard to imagine how HomePods could work better without needing any new hardware.

The risk with a partnership like this is that the priorities of the third party begin to dictate the contours of your own product roadmap; you don’t want to be limited by what your suppliers can give you. This is the motivation behind the so-called ‘Cook doctrine’. On a 2009 quarterly earnings call, then-COO Tim Cook said “we believe that we need to own and control the primary technologies behind the products that we make”. Apple silicon is the epitome of this strategy working, of course.

Artificial intelligence really feels like a core technology Apple should own. I don’t subscribe to the idea of LLMs as homogeneous commodities. Eventually, someone will innovate something in this space so compelling that it pulls customer attention away from the incumbent mobile device form factors, and there’s no guarantee they would then want to license it out. The Gemini deal buys Apple a lot of breathing room, but it must redouble its efforts on its own internal AI research, for the sake of the company’s long-term future.