Apple has agreed to adopt a series of artificial intelligence safety measures set out by the Biden-Harris Administration.
The administration announced the move on Friday; Bloomberg was first to report the news.
By adopting the guidelines, Apple joins the likes of OpenAI, Google, Microsoft, Amazon and Meta.
The news comes ahead of Apple’s highly anticipated release of Apple Intelligence, the company’s suite of AI features, which is expected to become broadly available with the public release of iOS 18, iPadOS 18, and macOS Sequoia in September. The features, which Apple announced in June, aren’t yet available even in beta, and will be rolled out gradually over the coming months.
Apple was already a signatory to the AI Safety Institute Consortium (AISIC), which the Biden-Harris administration launched in February. The company has now committed to a further set of safeguards: testing AI systems for security flaws and sharing the results with the U.S. government, developing mechanisms for users to know when content was generated by AI, and developing standards and tools to ensure the safety of AI systems.
The safeguards are voluntary and unenforceable, meaning companies will not face any penalties if they do not comply with them.
By contrast, the European Union’s AI Act, a set of regulations designed to protect citizens from high-risk AI, is legally binding: most of its provisions will apply from August 2, 2026, although some, including bans on prohibited AI practices, will apply from February 2, 2025.
Apple’s upcoming AI features include an integration with ChatGPT, OpenAI’s powerful AI chatbot. When that integration was announced, Elon Musk, owner of X and CEO of Tesla and xAI, threatened to ban Apple devices from his companies, calling the move an “unacceptable security breach.” Musk’s companies are conspicuously absent from the AISIC signatory list.