First US safety bill for AI vetoed

The legislation would have required tech firms to test artificial intelligence models before release

California Governor Gavin Newsom has vetoed a landmark bill on artificial intelligence that would have established the first safety measures for the industry in the US. The bill, California Senate Bill 1047 (SB 1047), was aimed at reducing potential risks posed by AI.

The proposed regulation would have obliged tech companies with powerful AI models to subject them to safety testing before releasing them to the public, and to publicly disclose the models’ safety protocols. The aim was to prevent the models from being manipulated into causing harm, such as attacks on strategically important infrastructure.

In a message accompanying the veto on Sunday, the governor said that while the proposal was “well-intentioned,” it wrongly focused on the “most expensive and large-scale” AI models, while “smaller, specialized models” could potentially cause more harm. Newsom also argued that the bill did not take into account the environment in which an AI system is deployed, or whether it involves critical decision-making or the use of sensitive data.

“Instead, the bill applies stringent standards to even the most basic functions... I do not believe this is the best approach to protecting the public from real threats posed by the technology,” the governor stated. Newsom stressed that he agrees the industry must be regulated, but called for more “informed” initiatives based on “empirical trajectory analysis of AI systems and capabilities.”

“Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself… Given the stakes – protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good – we must get this right,” he concluded.

As California governor, Newsom is seen as playing an important role in the nascent AI regulation process. According to his office’s figures, the state is home to 32 of the world’s “50 leading AI companies.”

The bill’s author, state Senator Scott Wiener, called the veto “a setback” for those who “believe in oversight of massive corporations that are making critical decisions” affecting public safety. He pledged to continue working on the legislation.

The bill had drawn mixed reactions from tech firms, researchers, and lawmakers. While some viewed it as paving the way for nationwide regulation of the industry, others argued it could stifle the development of AI. Former US House Speaker Nancy Pelosi branded the proposal “well-intentioned but ill informed.”

Meanwhile, scores of employees at leading AI firms, including OpenAI, Anthropic, and Google’s DeepMind, supported the bill because it included whistleblower protections for those who speak up about risks in the AI models their companies are developing.
