The departure of former OpenAI executive Jan Leike for Amazon-backed Anthropic is the latest sign of a shift that tech insiders and enthusiasts have quietly suspected for a while: parts of the AI community are moving their focus from speed to safety. A growing school of thought holds that safety now matters more than ever.
The tech press had barely caught its breath when news of yet another departure from OpenAI hit their feeds. This time it was Jan Leike, who had been co-leading the lab’s safety work. This was no short-term contractor; Leike, one of the more vocal critics of AI labs from within the field itself, had publicly faulted his former employer for letting its ‘safety culture and processes’ take a back seat. He singled out OpenAI’s pursuit of shiny products at the expense of its stakeholders – us, the lay public. ‘This hurts,’ he said at the time. So it was fitting when Leike, one of OpenAI’s most committed safety advocates, jumped ship to join Anthropic, as his X (formerly Twitter) post put it: ‘I am joining a team where safety is at the core of the development of AI. It’s great being back in the game!’
Anthropic’s rise as a viable alternative to OpenAI is not just because of its safety-first philosophy, but because the company is backed by the titan that is Amazon. Amazon recently infused Anthropic with $4 billion to help the company build not just large language models, but safe large language models, using Amazon Web Services’ cloud. Amazon clearly sees the transformative power of AI, and wants to tip the scales in favor of safer AI technologies.
News of Leike’s departure comes hot on the heels of several others from the company, including policy researcher Gretchen Krueger, who recently left OpenAI’s San Francisco headquarters. In her announcement she said she couldn’t ‘confidently attest’ that the ‘safety-related issues I flagged’ were being taken seriously. OpenAI has since formed a safety committee. Will this shakeup address the concerns about the company’s de-emphasis on safety?
OpenAI is now publicly acting to reassure users and staff: the formation of a board-level safety committee was its direct response to the resignations of Leike and Krueger. The accompanying announcement that OpenAI has begun training its next ‘frontier model’ – a first sketch of coming attractions – may be a sign that it intends to build more safety measures into future iterations of its AI development. We’ll have to see whether those measures deliver the desired change in its culture of innovation.
The coverage of Leike’s move to Anthropic reflects a meaningful debate in AI: how can the field balance innovation and safety? The debate has been brewing for some time but has now been supercharged by the high-profile nature of the departures and the involvement of Amazon, one of the industry’s largest players. The argument is evolving, and more AI professionals than ever are pushing for a safety-first approach – a change that could transform how AI is built, at scale.
Amazon’s investment in Anthropic – and its participation in the AI safety debate more broadly – demonstrates the company’s bigger play in the technology sector. Amazon Web Services (AWS) is best known as a cloud infrastructure provider, but it has also become a core foundation for developing and deploying large-scale AI models. Investing in Anthropic is more than a financial bet on one company; it is also a bet on the importance of safety in AI, and a mission-critical strategy for Amazon’s position in the technology’s future. If the technology giants set the tone here, the AI future could be safer than it otherwise would be.
The combination of Amazon’s resources and Anthropic’s safety-first mission points to a promising path forward for AI – one where innovation proceeds alongside rigorous safety, and the industry moves towards an era where technology works for us without compromising safety or ethics.
In the end, Jan Leike’s decision to leave OpenAI for Anthropic signals more than a simple career move. It is a statement about what AI safety should really mean. Backed by Amazon’s financial muscle, Anthropic stands at the doorstep of AI’s future, signposting the road ahead – a road that must be walked with caution. As the industry evolves, the conversation about how to push forward while ensuring safety will play a growing role in shaping the future of AI. Hopefully, that future will allow AI to reach its fullest potential without compromising the standards of safety and ethics the public deserves.
© 2024 UC Technology Inc. All Rights Reserved.