
OpenAI executive warns California's AI safety bill could slow AI progress




Dear Senator Wiener,

I hope this letter finds you well. I am writing to you as the Chief Strategy Officer of OpenAI to express our concerns regarding SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. While we appreciate the intentions behind this bill and the goal of ensuring the safety and security of AI models, we respectfully oppose its implementation in California.

Our main concern with SB 1047 is that it may hinder progress and innovation in the AI field. By imposing additional regulations and requirements, the bill could potentially slow down the development of AI technologies and discourage companies from conducting research and development in the state. As a result, California may lose its position as a global leader in AI advancements.

We believe that a federal approach to AI regulations is the most effective way forward. Rather than a patchwork of state laws, a unified set of policies driven by the federal government will provide consistency and clarity in the regulations surrounding AI. This approach will enable companies to navigate the regulatory landscape more easily and focus on driving innovation and technological advancements.

Furthermore, a federally driven set of AI policies will position the United States to lead the development of global standards in AI. As AI becomes increasingly prevalent and integrated into various industries and applications, it is crucial for the U.S. to have a strong presence in setting the standards and shaping the responsible use of AI technology on a global scale.

While we recognize the importance of ensuring the safety of AI models, we believe that existing measures and collaborative efforts within the AI community are already addressing these concerns. OpenAI and other AI labs have been actively working on developing safety protocols and conducting safety testing for AI models. These efforts are driven by our commitment to responsible AI development and the well-being of society.

It is also worth noting that the proposed requirements of SB 1047, such as pre-deployment safety testing and whistleblower protections, are already being undertaken voluntarily by AI labs. Therefore, it seems unnecessary to impose these requirements through legislation when the AI community is already taking these responsibilities seriously.

In addition, the provision that grants California’s Attorney General the power to take legal action if AI models cause harm raises concerns about potential duplicative legal actions and conflicting regulations. A federal approach would help streamline legal processes and avoid unnecessary complexities.

Lastly, the establishment of a public cloud computing cluster called CalCompute, as outlined in SB 1047, may have unintended consequences. While the intention behind this proposal is to provide access to computing resources for AI research, it could potentially create a centralized system that limits competition and inhibits innovation. It is important to consider alternative approaches that foster a diverse and competitive AI ecosystem.

In conclusion, we respectfully oppose the implementation of SB 1047 in California and advocate for a federal approach to AI regulations. A unified set of policies driven by the federal government will provide consistency, foster innovation, and position the United States as a global leader in AI development and responsible use. We appreciate your attention to this matter and welcome the opportunity for further discussion.

Thank you for considering our perspective.

Sincerely,
Jason Kwon
Chief Strategy Officer, OpenAI


