Because the development of generative AI poses significant risks on many fronts, it seems like every other week major companies are setting up new agreements or their own forums to monitor, or give the impression of monitoring, AI development.
This is a good thing insofar as it establishes a collaborative discussion around AI projects and what each company should monitor and manage within the process. At the same time, these groups can also feel like a way to head off further regulatory restrictions, which would increase transparency and impose firmer rules on what developers can and cannot do with their projects.
Google is the latest company to launch a new AI guidance group, forming the Coalition for Secure AI (CoSAI), which it says will "promote comprehensive security measures to address risks specific to AI."
According to Google:
"AI requires security frameworks and enforcement standards that can keep up with its rapid growth. That's why we introduced the Secure AI Framework (SAIF) last year, knowing that it was only a first step. Of course, operationalizing any industry framework requires close collaboration with others, and above all, a forum to make that happen."
This isn't an entirely new initiative; it's an expansion of a previously announced effort focused on guiding AI security development and defensive measures to help avoid hacks and data breaches.
The new effort includes a variety of leading technology companies, including Amazon, IBM, Microsoft, NVIDIA, and OpenAI, with the goal of creating a collaborative open source solution to strengthen the security of AI development.
And, as mentioned above, this is the latest in a growing list of industry associations focused on sustainable and safe AI development.
For example:
- The Frontier Model Forum (FMF) aims to establish industry standards and regulations for AI development. Meta, Amazon, Google, Microsoft, and OpenAI have signed onto the effort.
- Thorn's "Safety by Design" program focuses on responsibly sourcing AI training datasets in order to safeguard them against child sexual abuse material. Meta, Google, Amazon, Microsoft, and OpenAI have all signed on to the initiative.
- The US government has established its own AI Safety Institute Consortium (AISIC), which includes over 200 companies and organizations.
- Representatives from nearly every major technology company have signed a tech accord to combat deceptive uses of AI, committing to "reasonable precautions" to prevent AI tools from being used to subvert democratic elections.
Essentially, a growing number of forums and agreements are designed to address different elements of safe AI development. That's a good thing, but these are not laws and are therefore not enforceable in any way; they are simply agreements by AI developers to abide by certain rules on certain aspects.
And the skeptical view is that they are only being introduced as a safeguard to thwart clearer regulation.
EU authorities have already begun measuring the potential harms of AI development and what will and will not fall under the GDPR, and other jurisdictions are doing the same, with the threat of real financial penalties behind government-agreed standards.
That seems like what's actually needed, but government regulation takes time, and real enforcement systems and structures may not be in place until after the fact.
Only once the harm is known does the need for regulation become concrete, and regulators gain momentum to push for official policy. Until then, industry groups will ask companies to pledge to follow established rules, enforced by mutual agreement.
I don’t know if that will be enough, but for now it seems this is what we have.