It should come as no surprise that OpenAI, one of the biggest artificial intelligence (AI) labs in the world, has announced another bold step forward. The company, led by CEO Sam Altman and its team of AI researchers, is preparing the successor to GPT-4 – and it recognises that it first needs to shore up its foundations to make that possible. Putting AI safety and security front and centre, the board has formed a new dedicated Safety and Security Committee led by Bret Taylor (chair of OpenAI’s board and former co-CEO of Salesforce), Adam D’Angelo (CEO of Quora) and Nicole Seligman (former president of Sony Entertainment).
At the centre of OpenAI’s new push is the start of training for what the company has described as its next frontier model, which is intended to go well beyond what GPT-4 can do on the path towards artificial general intelligence (AGI) – capable of powering a wide range of AI applications, from image generators to ChatGPT. It signals that OpenAI believes it is genuinely on the road to AGI, as many have long hoped, while rivals watch closely and race to catch up.
The creation of the Safety and Security Committee gives OpenAI a formal structure for protecting its increasingly powerful technology. Alongside the board members, it includes technical and policy experts such as Aleksander Madry and John Schulman, and over the next 90 days it will evaluate OpenAI’s processes and safeguards before delivering its recommendations on AI safety and security.
OpenAI has never officially named its next model, which is still in training, but it is widely referred to as GPT-5, and its arrival looks likely to come sooner rather than later. OpenAI positioned GPT-4o as an incremental update, which suggests the real breakthrough is being held back for GPT-5. Its timing will also be shaped by fierce competition from the latest models from Meta, Google and Anthropic – most directly Meta’s Llama 3, released in April, alongside Google’s Gemini and Anthropic’s Claude 3.
As evidence of this, OpenAI’s most recent release, GPT-4o, adds capabilities for serving as a voice-driven digital assistant along the lines of Amazon’s Alexa. If it works as intended – and early demonstrations seemed promising – it will make the model more usable by offering the features consumers have come to expect from voice assistants. As these dynamics evolve in unpredictable ways, policymakers worldwide need to consider the ethics of how knowledge is produced and made available to consumers, and how to keep that production ethical and trustworthy as AI continues to advance. One particular concern is transparency, especially where voice technology has been made to sound like known and recognisable personalities.
Under Sam Altman’s leadership, OpenAI has combined cutting-edge AI research with a professed awareness of the ethical and social impact of these advances. Altman’s wider investment portfolio – spanning crypto and biotech – along with his ongoing commitment to OpenAI’s vision (in which he holds no equity) reflects a larger ambition for the future of AI.
Google, meanwhile, remains a leading force in AI and machine learning, shaping the field through its ongoing research, its massive resources, its collaboration across the AI ecosystem and its widely deployed AI applications – and, in doing so, helping set the agenda for how AI is developed responsibly.
‘To tell the story of the human adventure,’ says OpenAI’s website, ‘AI will be a central character.’ The next instalments of that story, written under Sam Altman and his company’s direction, will arrive with GPT-5 and whatever follows it. As a nonhuman intelligence begins to reshape how human knowledge is produced, questions of governance, ethics and safety matter more than ever – and the work of OpenAI’s new Safety and Security Committee will be an early test of how seriously they are taken.