The U.S. Commerce Department has proposed new reporting requirements for advanced artificial intelligence developers and cloud providers to ensure the safety and security of the technology.
The Bureau of Industry and Security (BIS) made it clear that frontiers-creating companies and engineering cluster management business houses that invest in the creation of super-computers will have an obligation to submit certain essential development activities to the federal authorities. This trend is intended to decrease security threats associated with cybersecurity, and the wrong application of artificial intelligence resources.
According to the new proposal, developers will be required to disclose detailed information regarding their artificial intelligence systems, particularly focusing on cyber security measures and red-teaming efforts. Red-teaming is a technique used in cyber security that evaluates the harmful uses of an AI, such as enabling cyberterrorism or facilitating the creation of weapons of mass destruction.
AI developers face new mandates to report cybersecurity measures and risk testing
The proposal by the Commerce Department would also need AI developers to include in their reports the results of red-teaming activities, especially those related to dangerous capabilities like cyberattacks. This method, originating from Cold War simulations identifies risks by simulating attacks or breaches.
Organizations would be required to update how well their AI systems are withstanding attacks even from foreign interests or non-state actors who may want to abuse the system for critical malicious purposes.
Generative AI, which can create text, images, or videos based on user prompts, has sparked concerns. Although the technology offers advancements in many areas, it has also increased fears about possible threats like job losses, election interference, and the risk of artificial intelligence domination over humans.
The new reporting guidelines seek to ensure the safe and reliable development of AI models, reducing the chance of these risks materializing.
Cloud providers are required to meet new reporting standards on security and AI development
In addition to artificial intelligence developers, the proposed rules will also relate to cloud service providers such as Amazon Web Services (AWS) Google Clouds and Microsoft Azure. These platforms, essential in advancing the development of artificial intelligence, will have to adhere to stringent standards on the cybersecurity of their infrastructure. The Commerce Department highlighted that this information is important to avoid the abuse of U.S. technology by other countries.
The regulatory push follows President Joe Biden’s Executive Order signed in October 2023. The order requires the artificial intelligence technology developers to submit their safety testing results to the Government of the US before releasing their technologies to the public. The draft issued by the Commerce Department is in line with this broader goal that seeks to protect the country’s national security, public health and the economy from the threats posed by artificial intelligence.
This proposal also follows the previous efforts initiated by the Biden administration to further employ measures that will prevent China from accessing U.S. technology for artificial intelligence development, suggesting increased security concerns within this ever-expanding industry.
Source: https://www.cryptopolitan.com/commerce-department-new-ai-reporting-rules/