by Wen Tsui
SACRAMENTO, the United States, Oct. 1 (Xinhua) -- California Governor Gavin Newsom's recent veto of a bill on artificial intelligence (AI) safety has ignited a nationwide debate over how to effectively govern the rapidly evolving technology while balancing innovation and safety.
On Sunday, Newsom vetoed SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, saying the bill is not "the best approach to protecting the public from real threats posed by the technology."
In his veto message, the governor said the bill "magnified" the potential threats and risked "curtailing" innovation that drives technological development.
The vetoed bill, introduced by California State Senator Scott Wiener, had passed the California legislature with overwhelming support. It was intended to be one of the first in the country to set mandatory safety protocols for AI developers.
If signed into law, it would have placed liability on developers for severe harm caused by their models. Designed to prevent "catastrophic" harms by AI, the bill would have applied to all large-scale models that cost at least 100 million U.S. dollars to train, regardless of their actual risk of causing harm.
Before training began, the bill would have required AI developers to publicly disclose their methods for testing the likelihood of a model causing critical harm, as well as the conditions under which the model would be fully shut down.
Violations would have been enforceable by the California Attorney General, with civil penalties of up to 10 percent of the cost of the computing power used to train the model for a first violation, and up to 30 percent for any subsequent violation.
According to an analysis by Pillsbury Winthrop Shaw Pittman LLP, a law firm specializing in technology, the bill could have a "significant" impact on large AI developers, entailing "significant testing, governance, and reporting hurdles" for those companies.
The bill's broad scope has sparked a debate around whose behavior should be regulated: the developers or the deployers of AI models.
Some in the tech industry urge lawmakers to focus on the contexts and use cases of AI rather than the technology itself.
Lav Varshney, associate professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, told technology website VentureBeat that the vetoed bill would have unfairly penalized original developers for the actions of those using the technology.
He advocated for "a shared responsibility" among original developers and those who fine-tune AI for specific applications.
Many experts raised concerns about the bill's potential "chilling effect" on open-source AI, a collaborative approach to AI development that allows developers to access, modify, and share AI technologies.
Andrew Ng, co-founder of Coursera, a U.S. online course provider, praised Newsom's veto as "pro-innovation" in a social media post, saying it would protect open-source development.
In response to the veto, Anja Manuel, executive director of the Aspen Strategy Group, said in a statement that she advocated for "limited pre-deployment testing, focused only on the largest models."
She pointed to a lack of "mandatory, independent and rigorous" testing to prevent AI from doing harm, which she called a "glaring gap" in the current approach to AI safety.
Drawing parallels to the Food and Drug Administration's regulations in the pharmaceutical industry, Manuel argued that AI, like drugs, should only be released to the public following thorough testing for safety and efficacy.
Following the veto, Governor Newsom outlined alternative measures for AI regulation, calling for a more focused regulatory framework that addresses specific risks and applications of AI rather than broad rules that could affect even low-risk AI functions.
The outcome of California's AI regulation efforts is expected to have far-reaching implications, given its leading position in tech-related legislation, such as data privacy.
"What happens in California doesn't just stay in California; it often sets the stage for nationwide standards," said the Pillsbury analysis.
Whether companies develop or deploy AI systems, Pillsbury advised them to build a comprehensive compliance strategy and take a proactive approach, given the fast-evolving regulatory landscape around the world.
"Safe and responsible AI is essential for California's vibrant innovation ecosystem," said Fei-Fei Li, professor in the Computer Science Department at Stanford University and co-director of Stanford's Human-Centered AI Institute. "To effectively govern this powerful technology迪士尼彩乐园手机旧版, we need to depend upon scientific evidence to determine how to best foster innovation and mitigate risk." ■