Saturday, July 6, 2024

Google guidelines, Meta GDPR controversy, Microsoft recall backlash

The AI debate

Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features responsibly.

The search and advertising giant’s new guidelines are an effort to combat problematic content, including sexual content and hate speech, that is generated through these tools.

As such, apps that use AI to generate content must ensure that they don’t create restricted content, have mechanisms for users to report or flag offensive material, and market themselves in a way that accurately represents the app’s functionality. App developers are also encouraged to rigorously test their AI models to ensure they respect user safety and privacy.

“Please test your apps across different user scenarios and make sure to protect your apps from prompts that may manipulate generative AI features to create harmful or offensive content,” said Prabhat Sharma, director of trust and safety for Google Play, Android and Chrome.

The move comes after a recent investigation by 404 Media uncovered several apps in the Apple App Store and Google Play Store advertising the ability to create non-consensual nude images.

Meta’s use of public data for AI raises concerns

The rapid adoption of AI technologies in recent years has raised widespread concerns about the privacy and security of training data and models, and has also opened avenues for malicious actors to extract sensitive information and tamper with the underlying models so that they return unexpected results.


Additionally, Meta’s decision to use publicly available information across its products and services to improve its AI services, enabling “the best recommendation technology in the world,” led the Austrian privacy organization noyb to file complaints in 11 European countries alleging violations of the region’s GDPR privacy laws.

“This information may include public posts and public photos and their captions,” the company said in a statement late last month. “In the future, we may also use information you share when interacting with our generative AI features, such as Meta AI, or with businesses to develop and improve our AI products.”

Specifically, noyb accuses Meta of shifting the burden onto users (i.e. making it opt-out instead of opt-in) and of not providing adequate information about how the company plans to use customer data.

Meta, meanwhile, said it relies on a “legitimate interest” legal basis to process certain first-party and third-party data in the EU and the UK to improve its AI and build better experiences. EU users can opt out of the processing by submitting a request by June 26.

The social media giant stressed that its approach is in line with how other European tech companies are developing and improving AI experiences, but Norway’s data protection authority, Datatilsynet, said it had “doubts” about the legality of the process.

“In our view, it would have been most natural to ask for consent before users’ posts and photos were used in this way,” the agency said in a statement.

“The European Court of Justice has already stated that Meta has no ‘legitimate interest’ to override users’ data protection rights when it comes to advertising,” said noyb’s Max Schrems, “but the company is now trying to use the same argument when it comes to training undefined ‘AI technologies.'”

Microsoft Recall Faces More Scrutiny

Meta’s latest regulatory flurry also comes at a time when Microsoft’s own AI-powered feature, “Recall,” is receiving swift backlash due to potential privacy and security risks that result from capturing screenshots of users’ activity on their Windows PCs every five seconds and turning them into a searchable archive.


In a new analysis, security researcher Kevin Beaumont found that bad actors could deploy information stealers to exfiltrate the database that stores the information parsed from screenshots – the only prerequisite being administrative privileges on the user’s machine to access the data.

“Recall allows threat actors to automatically scrape everything a user has viewed within seconds,” Beaumont said. “[Microsoft] should recall Recall and rework it to be a legitimate feature at a later date.”

Other researchers have similarly demonstrated that tools like TotalRecall make it easier to exploit Recall and extract highly sensitive information from the database. “Windows Recall stores everything locally in an unencrypted SQLite database, and screenshots are stored directly in a folder on your PC,” says Alexander Hagenah, who developed TotalRecall.
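Hagenah’s point is that once a database sits on disk unencrypted, any process that can read the file can query everything with plain SQL. The following minimal sketch illustrates this; note that Recall’s actual schema and file location are not described here, so the table and column names below are hypothetical stand-ins built into a throwaway database purely for demonstration.

```python
import os
import sqlite3
import tempfile

# Build a tiny stand-in database to illustrate the risk. The real Recall
# store is a local SQLite file on the PC; its schema is undocumented in
# this article, so "captures", "ts", "window_title" and "ocr_text" are
# hypothetical names.
path = os.path.join(tempfile.mkdtemp(), "recall_standin.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE captures (ts TEXT, window_title TEXT, ocr_text TEXT)")
con.execute(
    "INSERT INTO captures VALUES (?, ?, ?)",
    ("2024-06-06T12:00:00", "Online Banking", "account number 1234"),
)
con.commit()

# Because the file is unencrypted, reading it requires no keys and no
# decryption -- a single SELECT returns whatever the user has viewed.
rows = con.execute(
    "SELECT ts, ocr_text FROM captures WHERE ocr_text LIKE '%account%'"
).fetchall()
print(rows)
con.close()
```

Tools like TotalRecall automate exactly this kind of extraction against the real database file once they can read it.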

As of June 6, 2024, TotalRecall has been updated to no longer require administrative privileges, adopting one of two methods outlined by security researcher James Forshaw to sidestep that requirement when accessing Recall data.

“It’s only protected by (access control lists) to SYSTEM, and privilege escalation (or non-security boundary *cough*) could lead to information disclosure,” Forshaw said.

The first technique involves obtaining the token of a program called AIXHost.exe and impersonating it; the second, simpler approach leverages the privileges of the current user to modify the access control lists and gain access to the entire database.

That said, it’s worth pointing out that Recall is currently in preview, and Microsoft may make changes to the application before it’s rolled out broadly to all users later this month. It will be enabled by default on compatible Copilot+ PCs.
