Saturday, July 6, 2024

Apple launches privacy-focused private cloud computing for AI processing


Apple has announced the launch of a “breakthrough cloud intelligence system” called Private Cloud Computing (PCC), designed to process artificial intelligence (AI) tasks in the cloud in a privacy-preserving manner.

The tech giant described PCC as “the most advanced security architecture ever deployed for large-scale cloud AI computing.”

PCC coincides with the arrival of new generative AI (GenAI) features, collectively known as Apple Intelligence (or AI for short), that the iPhone maker has announced for its next-generation software, including iOS 18, iPadOS 18, and macOS Sequoia.

All Apple Intelligence features (both those that run on-device and those that rely on PCC) leverage in-house generated models trained on “licensed data, including data selected to power specific features, as well as public data collected by our web crawler, AppleBot.”


Alongside Apple Intelligence, OpenAI’s ChatGPT will be integrated into Siri and system-wide writing tools to generate text and images based on prompts the user types in. Apple points out privacy protections built into the process for users who choose to access the virtual assistant.

“IP addresses are hidden and OpenAI does not store your requests,” Apple said. “ChatGPT’s data usage policies apply to users who choose to connect their accounts.”

The idea behind PCC is essentially to offload complex requests that require more processing power to the cloud, while at the same time ensuring that the data is never retained or exposed to any third party, including Apple itself, via a mechanism the company calls stateless computation.
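The offloading decision and the stateless handling described above can be illustrated with a toy sketch. Everything here is hypothetical: the threshold, the function names, and the routing rule are illustrative stand-ins, not Apple's actual implementation.

```python
# Illustrative threshold for what fits on-device; not an Apple value.
ON_DEVICE_TOKEN_BUDGET = 512


def route_request(prompt: str) -> str:
    """Decide where an AI request runs (conceptual sketch only)."""
    if len(prompt.split()) <= ON_DEVICE_TOKEN_BUDGET:
        return "on-device"
    # Larger requests are offloaded to cloud nodes that process them
    # "statelessly": the data exists only for the duration of the
    # request and is discarded once the response is returned.
    return "private-cloud"
```

The key property the sketch gestures at is that the routing target, not the user, determines where data may flow, and the cloud path is designed to hold no per-request state afterwards.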

PCC’s underlying architecture is built on custom server nodes that combine Apple silicon, Secure Enclave, and Secure Boot, backed by a hardened operating system customized to run Large Language Model (LLM) inference workloads.

Apple says this not only “ultra-narrows the attack surface,” but also leverages code signing and sandboxing to ensure that only approved, encrypted code can run in its data centers and that user data never crosses trust boundaries.

“Technologies such as pointer authentication codes and sandboxing serve to thwart such exploits and limit an attacker’s lateral movement within a PCC node,” the company said. “The inference control layer and dispatch layer are written in Swift and use separate address spaces to ensure memory safety and isolate the initial processing of requests.”

“Memory safety combined with the principle of least privilege eliminates any kind of attack on the inference stack itself, and limits the level of control and functionality gained in the event of a successful attack.”

Another notable security and privacy measure is the routing of PCC requests through Oblivious HTTP (OHTTP) relays operated by independent parties, which conceal the origin of requests (i.e., IP addresses) and effectively prevent an attacker from using IP addresses to attribute requests to specific individuals.
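The split of trust that OHTTP provides can be shown with a toy sketch: the relay learns the sender's IP but cannot read the request, while the gateway can read the request but never learns who sent it. Real OHTTP (RFC 9458) uses HPKE public-key encryption; the reversed string below is a deliberately trivial stand-in, and all names are illustrative.

```python
def client_seal(request: str, client_ip: str) -> dict:
    # The client seals the request body so the relay cannot read it.
    # (Toy "encryption": string reversal stands in for HPKE.)
    return {"from_ip": client_ip, "sealed_body": request[::-1]}


def relay_forward(envelope: dict) -> dict:
    # The relay strips the client's IP before forwarding, so the
    # gateway never sees where the request came from.
    return {"sealed_body": envelope["sealed_body"]}


def gateway_open(envelope: dict) -> str:
    # The gateway opens the sealed body but has no sender identity.
    return envelope["sealed_body"][::-1]
```

Neither party alone can link a request's content to its origin, which is the property the article attributes to PCC's relay routing.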

It’s worth noting that Google also uses OHTTP relays as part of its Privacy Sandbox initiative and in Safe Browsing in the Chrome web browser to protect users from visiting potentially malicious sites.


Apple further noted that independent security experts can inspect the code running on its Apple silicon servers to verify its privacy guarantees, adding that PCC cryptographically ensures devices refuse to communicate with a server unless its software has been publicly logged for inspection.

“All production Private Cloud Computing software images, including the OS, applications, and all associated executables, will be available for independent binary inspection so that researchers can validate them against the transparency log measurements,” the company said.


“The software will be released within 90 days of being logged or after the relevant software update is available, whichever is earlier.”
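The verification step described above amounts to checking a released binary against the log's published measurements. A minimal sketch of that idea, assuming a log that simply publishes SHA-256 digests (the log format and all names here are illustrative, not Apple's actual scheme):

```python
import hashlib


def measure(image_bytes: bytes) -> str:
    # A "measurement" here is just a SHA-256 digest of the image.
    return hashlib.sha256(image_bytes).hexdigest()


def is_logged(image_bytes: bytes, published_measurements: set) -> bool:
    # Conceptually, a device would refuse to talk to a PCC node whose
    # software measurement does not appear in the public log.
    return measure(image_bytes) in published_measurements
```

The point of the scheme is that any deviation between the running software and the publicly logged images changes the measurement and is therefore detectable.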

Apple Intelligence, available later this fall, will be limited to the iPhone 15 Pro, iPhone 15 Pro Max, and iPads and Macs with M1 or later chips, with Siri and the device language set to U.S. English.


Other new privacy features introduced by Apple include the option to lock and hide certain apps behind Face ID, Touch ID, or a passcode; the ability to choose which contacts to share with an app; a dedicated Passwords app; and an updated privacy and security section in Settings.

According to MacRumors, the Passwords app also features a setting to automatically upgrade existing accounts to passkeys. Additionally, Apple has replaced the Private Wi-Fi Address switch for Wi-Fi networks with a new Rotate Wi-Fi Address setting to minimize tracking.
