The California Senate passed an AI bill that would introduce safeguards to the technology’s development. While some leading industry figures feel it is necessary, others are not sold.
California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) on August 28, leaving industry figures divided on whether it will positively or negatively impact the industry. The bill aims to ensure the safe development of AI models and requires a “kill switch” for models that are developed, or at risk of being developed, without regard for ethics and the public interest.
The bill passed the California Senate 29-9 and now awaits review and approval from the state governor, Gavin Newsom. It was passed a few days after a group of OpenAI whistleblowers wrote a letter to Newsom warning about the risks AI technology could pose if left unregulated by the government and calling out their employer for lobbying against the bill’s passage.
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” the letter read. “Along the way, they may create systems that pose a risk of critical harms to society, such as unprecedented cyberattacks or assisting in the creation of biological weapons. If they succeed entirely, artificial general intelligence will be the most powerful technology ever invented,” it added.
Elon Musk weighed in on the bill, calling it a “tough call” but saying he favored SB 1047. “I think California should probably pass the SB 1047 AI safety bill,” Musk wrote. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”
OpenAI’s chief strategy officer, Jason Kwon, wrote a letter to Newsom on August 21 arguing that the bill would hamper the growth AI technology is seeing in the US, and in California specifically. “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere,” Kwon wrote.
Decentralized AI Projects Can Facilitate Safe AI Development
Many have compared the potential effects of the bill to the way government regulation has stifled crypto and blockchain innovation. Nonetheless, some feel decentralized AI projects hold the key to developing safe and ethical AI models through DAO-based decision-making: with active community participation, dangerous AI use cases can be weeded out while ethical ones are allowed to flourish.
Source: https://www.livebitcoinnews.com/californian-ai-bill-splits-industry-figures/