In today’s fast-moving AI landscape, two names stand out: Microsoft and OpenAI. Both are building cutting-edge AI at a breakneck pace, and both are devoting major resources to making sure that this ground-breaking technology is safe. In this piece, we look at some recent moves from OpenAI, with Microsoft behind them, to improve the safety of AI as it continues to develop.
In one of its first concrete steps to address growing worries about the risks posed by advanced AI, OpenAI this month announced the creation of a new Safety and Security Committee. The move came shortly after the company disbanded its team dedicated to long-term safety work. OpenAI was co-founded by Elon Musk, Sam Altman and a number of other prominent figures; the new committee at Microsoft’s partner is chaired by OpenAI board chair Bret Taylor and includes some of the company’s most influential board members, among them CEO Sam Altman himself. While safety sits at the heart of the committee’s remit, its composition does raise questions about the effectiveness of what some might call self-policing.
The committee’s formation follows several high-profile resignations from OpenAI, including those of co-founder Ilya Sutskever and safety researcher Jan Leike. The departures point to internal tensions and to the delicate balancing act required to keep advancing products in a way that serves both human interests and human safety. As AI systems grow more powerful, the problem of ensuring they remain benevolent only becomes harder.
Microsoft’s relationship with OpenAI is another key thread: its backing supports both the development of new technologies and the effort to address ethics and safety in AI. Thanks to Microsoft’s funding, OpenAI’s safety work has a strong institutional backstop, ample resources, and a wider stage from which to advocate for safer AI development.
OpenAI also drew the ire of the actress Scarlett Johansson over one of its ChatGPT voices, which many listeners felt closely resembled her own. According to the public record, the company paused the voice soon after encountering pushback. As noteworthy as OpenAI’s innovations have been, the Johansson episode illustrates the delicate balance the company must strike between staying at the leading edge of innovation and avoiding encroaching on the rights of others. In that sense, Microsoft’s and OpenAI’s willingness to address the ethics of impersonation head-on is laudable.
The exodus of safety advocates from OpenAI underlines the need to examine how organisational leadership keeps safety front and centre even as momentum builds toward the next groundbreaking ‘shiny product’. Microsoft and OpenAI now stand at a crossroads where technological progress and ethical responsibility converge.
Meanwhile, OpenAI’s newly created Safety and Security Committee will spend its first 90 days reviewing and strengthening the company’s safety and security processes. This marks a new phase in thinking about safety, and the commitments from Microsoft and OpenAI point to serious engagement with responsible development, with Microsoft well placed to help steer its partner toward a responsible AI future. OpenAI has also said it will share the committee’s conclusions and recommendations once the review is complete – an important moment for the field, and one we can all learn from.
Once they train their next frontier model – the successor to GPT-4 – they will remain at the cutting edge of AI. Their investment in world-leading capabilities, and their determination to make AI safe, will become the benchmark for others. Their transparency, and their commitment to staying accountable to critics and to the community, will set a standard that other companies will find hard to ignore.
But Microsoft’s investment in OpenAI is about more than technological innovation; it is also a commitment to safe and ethical AI. As we all navigate a world reshaped by this technology in ways we have never experienced before, Microsoft’s and OpenAI’s work to keep the conversation around safety and security grounded can serve as a beacon of hope and a model for responsible innovation well into the future.
To grasp the significance of Microsoft’s role in this story, one should recognise that, through its partnership with OpenAI, the company is helping to lead a new movement: not just in AI itself, but in reframing cultural expectations of what safe and responsible technological development should look like, for fellow technologists and for the generations that follow.
Making AI safe and useful is a journey full of pitfalls and opportunities. Given the stakes, the fate of our digital age may hinge on a sustained commitment to responsibility as Microsoft’s and OpenAI’s technological revolutions unfold. Together, they are building a future in which ethical and creative technology come together to produce tools that are both safe and exciting.