Consumers and investors are tired of the AI hype, and Google knows it.
“When it comes to AI, there’s been too many promises and ‘coming soon’ and not enough real-world usefulness,” said Google senior vice president Rick Osterloh at the “Made by Google” event announcing the new Pixel phones in Mountain View on Tuesday. “That’s why today we’re turning to reality. We’re going to answer the biggest question people have about AI: What can it do for me?”
So, did Google deliver on that promise? Strip away all the frills from the keynote (the celebrity appearances, the “Tensor Processing Unit” jargon, the Pixel phone tech specs, and the long-term vision for what Gemini could eventually deliver) and you’re left with a short list of new features.
Here’s a complete list of everything that was shown as a working demo during the 90-minute event. In other words: real-world usefulness, not ads or promises.
1. Gemini can read a concert poster and check your Google Calendar to see if you’re free for the show
Poor Dave Citron. During the keynote’s most awkward moment, the Google product leader had to summon the “demo genie” and switch his phone before Gemini could actually show the answer to “check your calendar to see if she’s free when she comes to San Francisco this year.” (The “she” was the artist Sabrina Carpenter; Citron had just sent Gemini a photo of her concert poster.)
“Sabrina Carpenter will be in San Francisco on November 9, 2024,” Gemini finally replied. “There are no events on your calendar for that time period.”
AI reading text in images and understanding its context is nothing new, but the Google Calendar integration gives Google an edge. In theory, Apple Intelligence will be able to do the same thing when it debuts.
Citron’s next demo showed how Gemini could write a letter to your landlord about a broken air conditioner or to a professor about a class, familiar territory for any AI assistant.
2. Gemini Live offers free-flowing conversation
Next, Google VP Jenny Blackburn showed off the Gemini Live voice assistant. The two discussed science experiments that Jenny’s niece and nephew might like, and after some back and forth, they decided on making invisible ink. The conversation just flowed.
All well and good, but OpenAI demoed its GPT-4o voice assistant, which also features interruptible conversation, back in May. That feature is currently available only to a small group of ChatGPT Plus users, though. Does that mean Google is the first to ship it widely?
3. Gemini Nano provides on-device call summaries
One feature that might not be as creepy as it sounds is Call Notes, which “follows up after a call with a totally private summary of your conversation.” The privacy claim rests on Gemini Nano, an AI model that runs entirely on-device on the Pixel 9, no cloud access required. (The on-device part isn’t new; Samsung does the same thing with Galaxy AI.)
4. Screenshots are searchable
Another win for Gemini Nano: what we’re calling the most useful AI feature of 2024 lets you search the contents of your screenshots, with the processing done entirely on the phone.
After that came a lot of visual features that AI assistants have offered plenty of times before: creating party invitations with Pixel Studio, auto-framing with Magic Editor, adding generative AI imagery to your photos, and inserting yourself into family photos or photos with celebrities (the new, embarrassingly named “Add Me” feature). There was even a cute feature that has nothing to do with AI: “Made You Look,” which directs your child’s attention to the Pixel’s rear screen.
So, is this feature set enough to overcome the skepticism surrounding the AI bubble? Don’t expect Gemini to answer that question anytime soon.