Apple plays a major role in shaping the future of AI, yet the company is surprisingly reluctant to let us in on how the sausage gets made. Among those who follow artificial intelligence, there has been a lively and disquieting debate over Apple’s announcement that it will develop AI for the HomePod, a move that promises to humanize the voice assistants Apple has come to be identified with. What is striking is how little we are told about how that work is actually done: Apple is so secretive that even the people using its products get no view of the systems shaping their experience.
With clear progress in language processing, computer vision and other fields, Apple has plainly invested a great deal of effort in AI. However, the secretive nature of its approach, and its lack of transparency, raises the question of whether Apple and other companies that keep their algorithms under wraps are holding humanity back from the benefits AI has to offer.
The back-and-forth about Apple’s silence highlights a larger issue: the need for greater transparency in AI development. There are questions about accountability when the company declines to reveal precisely how it has trained and deployed its models for specific purposes. Since what goes into Apple’s AI systems remains hidden, it’s harder for the outside world to judge how fair or effective those models are.
If we are on a path toward integrating AI into the fabric of everyday life, understanding our AI models is a necessary part of that. Keeping systems opaque, as Apple is currently doing, makes it harder to identify and address biases or inaccuracies in AI models. Transparency could make AI more trustworthy and usable in these kinds of applications.
Apple’s degree of secrecy is also the opposite of what we have seen from Google and Facebook, which are more open about their specific AI projects. Those companies have made it easier for outsiders to criticise or collaborate with them by opening their algorithms up to external scrutiny. This openness is not just about making information available to the public; it is a commitment to improving the fairness and integrity of their AI technologies.
The mystery around how Apple does AI feeds a key discussion about the need for openness and accountability in AI design. As AI deepens its influence on society, transparency is not merely nice to have; it is essential to building fair and effective AI systems.
The story of how Apple works in secret reveals something important about wider debates over openness in the technology sector. It is a reminder that, as AI’s reach grows, the sector needs to embrace greater openness and a culture of collaboration and scrutiny.
Apple, a company with a long legacy of innovation, remains a major player in tech. With products known for both ease of use and design, it has kept pace with many of the industry’s most cutting-edge developments. Its foray into AI seems to reinforce that message: Apple will continue to set the bar for technological advances, albeit via methods that remain inscrutable to the public eye. As the conversation about AI and transparency ramps up, Apple’s story provides a valuable case study of what can happen when a company’s purposes are put at odds with the need to maintain a dialogue with the rest of the world. It shows that, as we pursue the technologies of the future, the global community needs to be in conversation with the creators of those technologies to ensure we move forward in a way that is principled and fair.