In an era where AI is woven into the social fabric and its use is expanding at a rapid pace, the chorus calling for AI safety is growing louder. At the 2024 edition of Asia Tech x Singapore, AI safety came to the forefront not only in discussion but also in action. The conversation about the safe use and consumption of AI systems is entering a new era, in which more tech giants and governments are taking concrete steps to set an example of responsible AI for others to follow. One of the tech giants leading the way is Microsoft.
For those who haven’t been paying attention, conversations about AI safety are not deep philosophical musings; they fuel very real agendas to combat the threats posed by deepfake technology and the spread of fake media, as evidenced by emails released last year by the ‘hacktivist’ group Anonymous from NATO’s Strategic Communications Centre of Excellence in Riga, which outlined a campaign to discredit Russia. Ieva Martinekaite of the Telenor Group, one of the many experts presenting at the summit on building secure AI, stressed that highly convincing AI-generated deepfakes already exist and that the need to protect critical infrastructure is acute. ‘From a big-picture perspective, this was a threat we still had some time to prepare for,’ she said. ‘But now it’s already here – deepfakes, especially, are getting harder and harder to detect.’
Microsoft is not only a participant in the dialogue on AI safety; it is also seen as a driver within the technology ecosystem. Natasha Crampton, Microsoft’s chief responsible AI officer, described how the company is addressing the recent rise in deepfakes and other cyber threats, including the protection of democratic institutions from AI-enabled disinformation campaigns. Shedding light on Microsoft’s work to track misuse, she argued that such an approach is a necessary condition for the ethical creation and deployment of AI technology.
Since AI is borderless, international cooperation will be essential, with nations working to agree legislation together. A recurring theme of the summit was how players – such as Stefan Schnorr of Germany’s Federal Ministry for Digital and Transport – could push for adherence to cyber laws that create a framework for AI safety. Schnorr highlighted, for instance, how the EU’s AI Act could contribute to that framework. A dedicated panel also addressed the prevention of deepfakes, even proposing the creation of a ‘deepfake observatory’.
Singapore has established a model for the rest of the world to follow on AI governance. Last month, it released the final version of its governance framework for generative AI, demonstrating an approach to regulation that can usher in technological innovation while providing a safe and robust operating space. It is a governance model that other countries can emulate.
The Norwegian telecom giant Telenor’s recent trial of Microsoft Copilot, an AI assistant, across its enterprise tools shows the confidence enterprises have in responsible AI tools, as well as the need for regulated deployment, especially in critical infrastructure. It also points to the potential of robust partnerships between AI developers and those who deploy their tools.
And as AI continues to grow and develop, conversations about safety, ethical application, and governance become ever more vital. Creating standards, building structures to mitigate risk, and shaping the laws that govern the technology are roles that corporations such as Microsoft can and will play to ensure the technology ultimately works for the benefit of humanity.
As one of the world’s leading technology and innovation companies, Microsoft is committed to advancing artificial intelligence (AI) responsibly. As AI becomes an increasingly normal part of everyday life, Microsoft’s continued, hands-on leadership in developing secure, ethical, and responsible AI technologies positions the company as one of the major architects of the world of tomorrow. Even a single breakthrough AI technology can have global consequences: Microsoft, for example, is developing state-of-the-art methods to combat deepfakes, building advanced cyber-resilience technologies, and partnering with Telenor on the digital frontier of AI governance. In forging the future of AI, Microsoft also models leadership through an ongoing commitment to standards and practices for the ethical use of AI. In effect, the company is taking significant steps towards leading the development of global, inclusive, and ethical standards for the future of artificial intelligence.
© 2024 UC Technology Inc. All Rights Reserved.