Key Takeaways
- Google’s AI-focused event showed that the gap between Apple and its competitors in the AI race is widening.
- Google’s Pixel 9 offers advanced AI features today, while Apple Intelligence will roll out gradually over the next year.
- Despite the similarities, Google’s vision for Gemini on phones is ultimately more ambitious.
Despite the heavy emphasis on the Pixel 9 lineup launching within the next few weeks, it was hard not to think about Apple during Google’s event. That’s not so much because of the iPhone-like hardware Google showed off, but because the real star of the event, Google’s decision to make AI the new center of Android, made clear just how far behind Apple Intelligence has fallen.
Google and Apple have a similar problem to solve: translating AI models that are most useful to computer engineers and medical researchers into consumer products that regular people are comfortable using. The key difference between what Google demoed at its event and what Apple unveiled at WWDC 2024 is timing. Apple may present a brighter, safer, and more user-friendly vision of how AI works on the iPhone, but Google is ready to offer nearly all of the features Apple showed off essentially now, rather than through an extended beta program, along with some ideas Apple hasn’t even tried.
Google may have held its Made by Google event to showcase new smartphones and smartwatches, but the big takeaway was the widening chasm between Apple and its competitors, who are better positioned to capitalize on the growing attention generative AI is receiving. That chasm stays the same no matter how new the iPhone is.
Google and Apple have similar ideas about AI in mobile phones
A central, contextual assistant for all your apps
Though Google and Apple are in fundamentally different businesses (Google focuses on services, Apple on hardware), they ultimately landed on very similar ideas about how AI should work on a smartphone. Both have assistants (Gemini and Siri) with direct access to on-device information for handling common requests, and when needed, each assistant can pull contextual information from other apps to tackle more complex questions and tasks.
The companies are also pursuing a combination of on-device processing and sending requests to the cloud. Google has long relied on its own servers for some of the more demanding tricks the Pixel can perform, such as Video Boost, which color-corrects and smooths even the grainiest video footage. But with the Pixel 9, one of its flagship features, Pixel Screenshots, runs entirely on-device, thanks to an updated version of Google’s smaller Gemini Nano model. The app organizes the screenshots you take and lets you search them in natural language, something Apple hasn’t even attempted yet.
Google and Apple are spreading transcription and summarization, two tasks AI is typically good at, across their operating systems. Google offers Call Notes, which transcribes and summarizes phone calls; Apple is similarly adding call recording and transcription to iOS 18. Gemini can summarize the contents of your Gmail inbox, while the Mail app in iOS 18 shows summaries at the top of individual emails. Both companies also offer on-device image generation tools that create images you can use anywhere on your phone.
Gemini is technically more flexible than Siri in the types of questions it can answer, and Apple plans to compensate by letting Siri hand more complex requests off to ChatGPT. But the two companies largely agree on the extent to which AI can currently be used on smartphones: only with Gemini Live does Google offer more advanced tricks, like combining a photo and a text request into a single prompt (something Siri can’t do) or holding a lifelike conversation with its AI assistant.
The key is that these things can be done right now. Google held the event live, with plenty of live demos of the new features. Not everything worked, and it was a bit choppy overall, but it illustrated the point. Apple famously used to do attention-grabbing live keynotes before switching to pre-recorded, heavily edited video presentations early in the COVID-19 pandemic, and it has never looked back. Google’s “doing it live” approach was one of several ways the company tried to differentiate itself from Apple throughout the event. More importantly, it showed that these new AI features work right now, not in months or years.
Apple Intelligence is still months away
It’ll be a while before we meet the new Siri
A quick look at Apple’s web page explaining the features of Apple Intelligence reveals two key details that the company hasn’t said much about publicly.
- Apple Intelligence will be released as a “beta” this fall alongside iOS 18, iPadOS 18, and macOS Sequoia.
- “Several features, additional languages and platforms will be available over the next year.”
The vague language describing Apple Intelligence as a beta, and the suggestion that the full feature set won’t be available until 2025, gives Apple a lot of flexibility to ship something very different from the experience it showed off in its video presentation. If the developer beta is any indication, some of Apple Intelligence’s biggest features likely won’t be included when Apple’s new software ships later this year. Pocket-lint was able to go hands-on with Writing Tools for generating text, Apple’s new summarization and transcription features, and Siri’s visual redesign, but other features Apple demoed, like Image Playground and Siri’s ability to work across apps to pull contextual information from the screen, are still missing.
Bloomberg reports that Apple plans to release Apple Intelligence with iOS and iPadOS 18.1, but the features will arrive over time in “multiple updates to iOS 18 throughout late 2024 and early 2025.” The revamped Siri features will reportedly be included in one of the 2025 updates, while the new visuals will arrive in the 18.1 update. That means one of the selling points of Apple’s new iPhones won’t be available at launch, and the secret sauce that would most closely match what Google’s Gemini does with the Pixel 9 is probably still a year away.
This isn’t necessarily a catastrophe, but timing could matter: Apple likes to take its time, but when the average flagship Android phone can do some pretty big things that theirs can’t, new iPhone 16 owners might lose patience.
AI in smartphones is still in its early stages
The jury is still out on whether Google’s new AI features are worthwhile or will work as expected. Multiple errors during the live event suggest there are still plenty of rough edges to work out, but I can’t deny I was a little excited by what Google showed off. I’m primarily an iPhone user, but it was still fun to see Google deliver even a small part of the AI assistant dream it’s been touting for years. I can’t say yet how well it works in practice, but it at least looks usable.
We don’t know whether the slow rollout of Apple Intelligence is due to (entirely reasonable) caution or because Apple is genuinely lagging behind its competitors, but the fact is that the company is on the back foot, at least for the rest of 2024. That’s not a good position, given that many of the Pixel’s underlying AI ideas are similar to Apple’s, and some are even more ambitious.