A divide has opened up in Silicon Valley between artificial intelligence (AI) behemoths and legislative efforts to rein in the technology’s rapid growth. One example is a bill currently being debated by the California legislature that would require an AI kill switch, compelling the shutdown of powerful AI models that present an imminent risk. The outcry from leading AI companies over the proposed bill reveals a clash between innovation and regulation.
Legislation that could drastically curtail the operational liberties of tech companies in California is crystallising around the issue of AI. The bill has passed the Senate and is now staged for discussion in the State Assembly. It requires the creators of AI systems to provide a ‘kill switch’: a mechanism, now seen by many as indispensable, for deactivating a rogue AI before irreparable damage is done.
Not only does this bill threaten the businesses of AI companies – giants like OpenAI, Anthropic and Cohere, as well as the behemoths that operate large language models, such as Meta – but its requirement that they prove to a state authority that their AI models have no ‘extremely hazardous capability’ is seen by many in Silicon Valley as a bureaucratic nightmare that threatens to kill the entrepreneurial spirit that made the Valley great.
The criticism is frank and loud, and cuts across company lines to include some of the most respected names in AI research and development. One of those names is Andrew Ng, who formerly led AI operations at Google and Baidu, and who has dubbed the regulatory attempt a ‘masterclass in innovation-killing’. The bill, its detractors allege, ‘creates massive liabilities for science-fiction risks’ and ‘instills chilling effects that can paralyse boldness’.
But there is also a very real pragmatic conviction underpinning the pushback against the bill. There is a sense that strict laws could drive AI startups out of the Golden State to territories with more welcoming regulatory regimes, and a fear that pioneering Silicon Valley firms such as Meta would be hobbled in releasing open-source models.
The question is not whether to regulate AI, but how to strike a balance that protects us from its dangers without shutting down its transformative potential. Regulating AI in a way that both encourages innovation and keeps us safe will be difficult, but it is vital if we are to sustain the pace of technological advancement.
The history of the tech industry is full of stories of well-considered regulation coexisting with vibrant innovation, and that is the lesson we should be drawing on to create an AI regulatory framework that encourages development of the technology while putting up barriers to its potential abuses.
A possible solution lies in a more cooperative relationship between technologists and regulatory bodies: a process of co-devising what I call ‘smart’ regulation, which might generate more dynamic, nuanced and responsive policies that adapt to iterative changes in the technology.
Just as AI is itself an innovation frontier, so too are mechanisms for its safe use. A ‘kill switch’ might be just the first of a suite of safety features that the industry designs to mitigate particular risks introduced by AI technologies.
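The bill does not specify how such a shutdown should work, but to make the idea concrete, here is a minimal sketch in Python of the general shape a software-level kill switch could take. The KillSwitch and ModelServer names are illustrative assumptions, not drawn from the legislation or from any real system; a production mechanism would also need out-of-band triggering and enforcement below the application layer.

```python
import threading


class KillSwitch:
    """A simple, thread-safe kill switch for a model-serving process.

    Illustrative sketch only: once tripped, the switch cannot be reset
    from inside the process, and every inference call checks it first.
    """

    def __init__(self) -> None:
        self._tripped = threading.Event()

    def trip(self, reason: str) -> None:
        # Record the shutdown and flip the flag for all threads.
        print(f"KILL SWITCH TRIPPED: {reason}")
        self._tripped.set()

    @property
    def tripped(self) -> bool:
        return self._tripped.is_set()


class ModelServer:
    """Wraps a model so every request passes through the switch."""

    def __init__(self, model, switch: KillSwitch) -> None:
        self._model = model
        self._switch = switch

    def generate(self, prompt: str) -> str:
        if self._switch.tripped:
            raise RuntimeError("Model is shut down: kill switch engaged")
        return self._model(prompt)


if __name__ == "__main__":
    switch = KillSwitch()
    # A stand-in model; a real deployment would wrap an actual LLM.
    server = ModelServer(lambda p: f"echo: {p}", switch)

    print(server.generate("hello"))           # served normally
    switch.trip("operator-initiated shutdown")
    try:
        server.generate("hello again")        # now refused
    except RuntimeError as err:
        print(err)
```

Even this toy version illustrates the design question regulators and engineers would face: an in-process flag is easy to build but easy to circumvent, which is why critics and supporters alike argue over what a legally mandated shutdown mechanism would actually require.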
The debate in California, however, isn’t happening in a vacuum. Governments around the world are wrestling with very similar issues, trying to develop rules that protect citizens without unduly hampering a national tech sector’s ability to compete internationally. The outcome of California’s legislative process, in other words, might well prove to be a bellwether for global trends in AI regulation.
The operative word in the California AI legislation debate is force: the ability of government to make companies do something they would not otherwise do, such as delivering a kill switch. Force expresses the sovereign power of the state to act in the public interest, and in regulating AI it is a valuable but hazardous tool. Overused, it can hamper innovation and stunt progress; underused, it leaves the public vulnerable to the risks of AI innovation.
Whether we will still be asking these questions in another decade remains to be seen. But as long as the debate over how much regulation, when, and with what force it should be applied to the unfolding AI revolution continues to grip the public imagination, it is a discussion worth having.
If regulators do not slow down AI today, then whatever they choose to do tomorrow, we will already be living daily at the limit of its potential. We will be living inside that curve. And if we choose a path that values innovation over safety and other human values, we will be living inside that curve forever. The California fight over the kill-switch bill is only the first skirmish in a long saga that will determine the future of AI development, not just in Silicon Valley but everywhere else.