Despite regulatory challenges in Europe, Meta is moving forward with the next phase of its AI development plans, today announcing the expansion of the Meta AI chatbot to seven more languages, new creative features within the chat stream, and the ability to choose which Meta AI models to use for various features.
This actually points the way to the future of AI interaction, but we’ll get to that later.
First, Meta is expanding access to Meta AI to seven more regions: Argentina, Chile, Colombia, Ecuador, Mexico, Peru, and Cameroon become the latest countries to gain access to Meta’s in-app chatbot.
In addition, users can now prompt the bot in seven new languages: French, German, Hindi, Romanized Hindi, Italian, Portuguese, and Spanish.
Meta’s built-in AI bot has received mixed reviews, but Meta CEO Mark Zuckerberg said the bot is on track to become the world’s most used AI assistant.
This isn’t all that surprising, considering that half the connected world uses Facebook, WhatsApp, and IG, and Meta surfaces Meta AI in each app the moment people open it. That makes it hard to say whether the usage reflects popularity or usefulness, as opposed to sheer ubiquity. Either way, Zuckerberg is clearly using it as an indicator that the company’s AI development is moving in the right direction.
Now, millions more people will see Meta AI prompts every time they open an app. Welcome to the future.
Meta is adding even more functionality to its chatbot, now allowing users to create AI-generated images of themselves directly from the chat stream.
As Meta explains:
“Have you ever wanted to be a superhero, a rock star, or a professional athlete? Meta AI’s ‘imagine me’ prompts will help you see yourself in a whole new light. The feature will start rolling out as a beta in the US. ‘Imagine yourself’ creates an image based on a photo of you and prompts like ‘imagine me surfing’ or ‘imagine me on vacation at the beach,’ using a new, cutting-edge personalization model.”
In essence, it’s the same as Snapchat’s “Dreams” feature, offering a way to create your own fantasy images. Like Dreams, I expect the novelty will wear off quickly, but it could be another way to at least get more people to try out Meta’s AI tools.
Meta is also adding new editing tools for generative AI images, allowing you to customize them right in your stream.
Meta says this process makes it easy to add, remove, change or edit objects while preserving the main image.
“Let’s say you prompt, ‘Imagine a cat snorkeling in a fishbowl,’ and you want to make it a corgi. To adjust the image, you can simply write, ‘Change the cat to a corgi.’ Next month, we’ll be adding an ‘edit with AI’ button that will allow you to further tweak the images you imagine.”
This could be a valuable addition: one of the main pain points with AI-generated images is the inability to adjust or correct them when they aren’t what you expected. Depending on how well it works, it could meaningfully improve Meta’s AI art tools.
However, I don’t really like this:
As this example shows, Meta is also giving users the ability to add AI-generated images to their Facebook posts. That in itself isn’t too bad, but the example in question involves creating fake images of places you’ve actually been to.
Why, exactly, would you want that?
My biggest concern about Meta’s broader integration of AI content is that it could dilute and potentially replace the human element, the actual “social” aspect of “social media.” For years, people have complained that bot-generated content on social apps ruins the experience of connecting, and now we’re being encouraged to use bots ourselves.
Would that be beneficial? Is that really what we want from an interactive community?
I don’t know. I understand that these features can expand your creativity in new ways. But I don’t think this is it.
Meta is also introducing new options that allow users to choose which AI models to use for different tasks within the app.
This is a slightly more technical option, but the idea is that by giving users access to a range of Llama models, they’ll have more scope to ask complex, technical questions, especially around math and coding topics.
Most people would obviously choose the most powerful option if they could, but this feature is unlikely to be widely used, since it isn’t something you can select up front; you have to go into the AI model settings and pick it manually.
Finally, Meta will also be releasing a beta version of Meta AI in VR to select users in North America.
“Meta AI replaces your current voice commands on Quest, allowing you to control your headset hands-free, get answers to your questions, get real-time information, check the weather, and more.”
This is an important update because perhaps the real value of Meta’s AI tools lies in VR creation, which is currently limited by the technical complexities of building immersive worlds, requiring a lot of expertise, development time and investment.
But what if Meta’s AI tools could create VR experiences from simple text prompts, letting you instantly step into anything you can imagine, right then and there, simply by speaking?
That’s the ultimate promise of Meta’s AI tools, and why Meta is so keen to push AI chatbots: as a way to get people comfortable with the process of asking the tools for what they need.
This is a long-term vision, but it is the next step, and bringing Meta AI to VR will enable users to shift their behavior in this direction.
And that’s why the current features that undermine the “social” aspect aren’t really the point: they weren’t designed for the present landscape in the first place. Sure, there’s some utility, or at least novelty, and Meta is intent on building the best version of its chatbot that it can. But really, this is all about shaping future habits, getting users, especially new ones, accustomed to interacting through chat prompts.
Sure, Meta’s AI bots might be a bit annoying for some right now, but a decade from now, when we’re all interacting in VR, chat prompts could well be the primary way the next generation engages with digital tools.