The government’s intentions to promote cybersecurity for cloud-based AI are good, but its track record of successfully managing technology is poor.
The U.S. Commerce Department announced on Monday that it is proposing detailed reporting requirements for developers of advanced artificial intelligence models and for cloud computing providers. In an era when artificial intelligence and cloud computing are driving innovation, the latest regulatory proposals have caught the attention of many enterprise cloud users.
The initiative, led by the Bureau of Industry and Security, calls for a significant shift in developing and deploying advanced AI systems. The goal is to ensure the technologies are safe and can withstand cyberattacks. So, what does that mean for businesses that rely on cloud-based solutions, and how can enterprises prepare for these changes?
Understanding the proposal
At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate.
One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity methods, but it does introduce new layers of complexity and cost for cloud users. Given the cost and burden red-teaming imposes on enterprises, I suspect the requirement may be challenged in the courts.
The regulation does increase the focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving it requires investments in advanced security tools and expertise, which typically stretch budgets and resources. My “back of the napkin” calculation puts the added cost at roughly 10% of a system’s total cost.
Balancing risk and innovation
Although these regulations aim to mitigate risks associated with AI, they also present some expensive challenges. Cloud users must balance adhering to compliance requirements with keeping their innovation pipelines flowing. I suspect they will quickly learn to work around the rules, and the regulators will complain that enterprises are living up to the letter of the law (regulations) but not the spirit. It’s code for, “We found a loophole, guys!”
With legislative efforts on AI stalling in Congress, the Commerce Department’s proposal could set the groundwork for future regulations. That said, you must account for the lag in finalizing these regulations, the inevitable court cases, and enterprises learning to move workloads out of the United States if needed. I suspect that will be the move most enterprises make, since that’s how they’ve dodged other regulations. The government does not seem to understand that clouds exist in most countries. Corporations will take full advantage of their offshore options, just as they do with taxes.
The immediate concern for enterprise cloud users is how these new regulations will impact their current workflows and future innovation pipelines. As businesses increasingly rely on AI to streamline operations and enhance customer experiences, new AI regulations could disrupt existing processes.
It’s also important for enterprises to collaborate with cloud service providers to prevent compliance from hindering progress. Don’t forget that the providers are also working hard to keep up.
For enterprise cloud users, staying ahead will mean reacting to regulatory changes and proactively evolving alongside them. Get ready for new consulting practices to emerge and the cost of building AI systems to rise as enterprises have to keep up with this stuff.
How helpful is the government?
I doubt these new regulations will help. I try to see the positive in everything, but most governments have consistently failed to successfully regulate technology. This latest set of rules is expected to raise costs for businesses and not reduce risks enough to justify the expense. Unless the government is willing to spend billions of dollars each year to improve these regulations, including developing best practices and tools, they won’t be useful.
The reasons are simple. The government is not set up to provide dynamic and applicable rules around technology, and this latest attempt will have many unintended consequences. Some organizations may avoid using certain cloud services due to regulatory concerns or may relocate development operations to countries with more favorable AI policies.
Just remember that we’ll see many more AI regulations at the state and federal levels. Other countries are also playing the AI regulation game, with new European regulations emerging. Enterprises need to keep their systems and mindsets flexible enough to absorb the coming regulations.
Also, when the government dictates “technology safety,” it is often ill-informed and years behind the current state of technology. If you’re as old as me, you may remember the consternation surrounding the rise of the web. The government looked to regulate the use of the Internet, but its ideas were outdated before they left lawmakers’ hands. In which countries should you expect to encounter this type of “dinosaur” regulation problem, you may ask? Most of them.
I suggest governments stay out of things for now. There is always the potential to create useful laws for certain types of technology. At this point, AI isn’t one of them. However, enterprises should be prepared if this latest proposal becomes a reality. The way forward for enterprise cloud users is clear: Embrace the opportunity to build more resilient and trustworthy AI systems. Learn your elected representatives’ names and offer real-world advice about this type of regulatory legislation.