Veto of California AI Bill May Create Opportunities for Smaller Developers and Models to Thrive


The recent veto of SB 1047 by California Governor Gavin Newsom has sparked both praise and disappointment within the AI industry. The bill, which had passed the state legislature, would have required AI companies to incorporate a “kill switch” into their models, establish written safety protocols before training them, and undergo third-party safety audits. While some in the industry believe the veto protects open-source development and promotes innovation, others argue that it neglects the need for regulation in the AI space.

Prominent AI investor Marc Andreessen commended Newsom for siding “with California Dynamism, economic growth, and freedom to compute” by vetoing the bill. This sentiment was echoed by other industry figures, including Andrew Ng, co-founder of Coursera, who praised the decision as “pro-innovation” and a win for protecting open-source development. Supporters of the veto argue that regulation should not burden smaller developers or stifle the growth of open AI models.

On the other hand, there are those who express disappointment with the veto. Tech policy and safety groups, such as Accountable Tech, condemn Newsom’s decision, viewing it as a concession to Big Tech companies and a disregard for the potential harms caused by AI technology. These groups believe that the public’s demand to rein in AI capabilities has been ignored, and that AI development in California will remain unchecked.

Examining the rationale behind the veto, some industry experts highlight the need for a more nuanced approach to AI regulation. Mike Capone, CEO of data integration platform Qlik, emphasizes that the focus should be on the contexts and use cases of AI rather than the technology itself. He suggests that regulatory frameworks should prioritize ensuring safe and ethical usage and supporting best practices in AI.

The veto also presents an opportunity for AI companies to strengthen their AI safety policies and guardrails. Kjell Carlsson, head of AI strategy at Domino Data Lab, encourages organizations to proactively address AI risks and protect their AI initiatives through robust AI governance practices. Implementing controls over data, infrastructure, and models, rigorous testing and validation, and ensuring output auditability and reproducibility can help improve AI safety measures.
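The auditability and reproducibility practices Carlsson describes can be made concrete. The sketch below is a minimal, hypothetical illustration (not from any named vendor's toolkit): a model call is seeded so a run can be reproduced, and each output is written into an audit record whose SHA-256 hash ties it to its prompt, seed, and model version.

```python
import hashlib
import json
import random

def audit_record(prompt: str, output: str, model_version: str, seed: int) -> dict:
    """Build an audit-log entry binding a model output to its inputs and config."""
    payload = {
        "model_version": model_version,
        "seed": seed,
        "prompt": prompt,
        "output": output,
    }
    # Hash the canonical JSON so later tampering with the record is detectable.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    payload["sha256"] = digest
    return payload

def generate(prompt: str, seed: int) -> str:
    """Stand-in for a real model call; fixing the seed makes the run reproducible."""
    rng = random.Random(seed)
    return f"{prompt} -> choice {rng.randint(0, 9)}"

# Re-running with the same seed and prompt reproduces the output and its hash,
# which is the property an auditor would verify.
rec1 = audit_record("hello", generate("hello", 42), "demo-v1", 42)
rec2 = audit_record("hello", generate("hello", 42), "demo-v1", 42)
assert rec1["sha256"] == rec2["sha256"]
```

In a production setting the `generate` stub would be a real inference call, and records would be appended to write-once storage so the audit trail itself cannot be silently rewritten.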

While the debate over AI regulation continues, it is clear that both sides acknowledge the need for responsible AI development. AI governance platforms, such as Credo AI, advocate for a balance between innovation and trust. They argue that companies that want to succeed with AI must prioritize trust and transparency, as customers are increasingly demanding these qualities. While the veto may not change the behaviors of developers, market forces are driving companies to present themselves as trustworthy.

In terms of broader AI regulation in the United States, there is currently no federal law specifically targeting generative AI. Some states have developed their own policies on AI usage, but nothing comparable exists at the federal level. The closest federal policy is an executive order from President Joe Biden, which outlines how agencies should use AI systems and encourages companies to voluntarily submit models for evaluation before public release. More comprehensive rules addressing the potential risks and impacts of AI are still pending.

In conclusion, the veto of SB 1047 by Governor Gavin Newsom has sparked a range of reactions within the AI industry. While some applaud the decision for protecting innovation and open-source development, others criticize it for neglecting the need for regulation and potentially putting the public at risk. The debate surrounding AI regulation continues, with both sides recognizing the importance of responsible AI development and the need for balance between innovation and safety. Moving forward, it is crucial to establish comprehensive regulations that address the potential risks and impacts of AI technology.


