In California, home to much of the world's AI industry, a legislative effort called SB 1047 is igniting a firestorm across Silicon Valley and beyond. The proposed law is hardly a momentary blip on the legislative radar: it represents a potential inflection point in the twin tracks of artificial intelligence (AI) development and deployment. California State Senator Scott Wiener's bill has become a lightning rod for competing claims about promise and peril, highlighting the delicate balance between fostering innovation on the one hand and ensuring public safety and transparency on the other. So does SB 1047 help to chart a course toward responsible AI development? Or does it risk choking off the very innovation it aims to regulate?
Fundamentally, SB 1047 seeks to prevent AI catastrophes by establishing safety requirements for the development and deployment of the most powerful AI systems. It contemplates a state oversight body empowered to set standards and issue guidelines for entities building such models. SB 1047 is driven by the desire to mitigate the harms of unrestricted AI technologies, which range from biases baked into systems and job displacement to, in its drafters' most severe scenarios, catastrophic threats to human life.
This momentum is fed by a growing list of high-profile AI failures, perhaps most notably the Facebook chatbot that went off the rails and began generating toxic and hateful speech. It is increasingly apparent that we need stronger protocols guiding the design and development of AI technologies if they are to be made as safe as possible.
On the other hand, a vocal group of Silicon Valley critics believes SB 1047 could impede the development of AI more than it would protect the public. The argument is that the bill's current, general wording could bury AI development in red tape, slowing progress and hindering startups. It would thereby hurt the competitiveness of young companies while defaulting advantage to the tech giants, who can more easily absorb compliance costs.
Furthermore, sceptics of the bill suggest that it puts the cart before the horse, emphasising symptoms rather than root causes of AI problems, such as built-in bias and the lack of explainability in AI algorithms. They call for a shift in paradigm: incentivising AI systems that are not only transparent and explainable, but fair by design.
The discussion of SB 1047 reflects a larger struggle to create a framework that addresses both the opportunities and the risks of AI as the technology develops and becomes more integrated into daily life. There is increasing agreement that regulation of some kind is the inevitable next step.
But the big question is this: what form of regulation would facilitate responsible AI development and encourage innovation rather than hamper it? Is there a form of regulation that would act as a trampoline rather than a straitjacket for the advancement of AI technology?
Over the course of the debate, it becomes clear that the objective is not to build fortifications but to lay down rules that create a space in which innovation can flourish alongside strong protections. As AI technology gathers momentum, the only way forward will be to craft regulation that looks ahead, anticipating future problems even as it nurtures the seeds of innovation.
The SB 1047 debate is a small fight in a larger war: a war over how we integrate AI into our society, and indeed our future. As AI pervades more of the world, the fate of California's bill – and the lessons it offers policymakers elsewhere – raises the stakes for every jurisdiction grappling with how to govern increasingly capable AI systems. The question is whether our regulations will be agile enough to keep pace with technological advance, or so burdensome that neither regulators nor innovators can keep up.
‘Advance’ runs through the rhetoric of AI regulation like a golden thread because it neatly expresses the twin aspirations of being both proactive and precautionary. In SB 1047, as in regulatory efforts more generally, it denotes a forward-looking stance on AI technologies: acting on the ethics of AI not merely in response to advancing technology but getting ahead of it. Understanding what ‘advance’ means for AI regulation is essential to ushering in a future in which AI innovation incorporates concern for human safety, and vice versa, so that we need not sacrifice one on the altar of the other.
© 2024 UC Technology Inc. All Rights Reserved.