With the advent of artificial intelligence (AI), the digital landscape has undergone a paradigm shift. In just a matter of days, thousands of people around the globe have been introduced to a new technology that allows them to write, perform and share songs at the touch of a button. Although this seemingly harmless AI aims to expand the creative potential of the internet, it’s already been hijacked by malicious users seeking to harm victims with their creations. The ramifications of this issue go far beyond the ethical use of AI – they threaten to amplify hate to unprecedented levels.
Research is beginning to reveal that AI music generators are being exploited to produce scurrilous songs containing homophobic, racist, and extremist diatribes against specific minority groups, as well as songs glorifying terrorism and inciting violence. Because the technical barriers to music generation keep falling, creating and disseminating this kind of vitriol is now within reach of many who previously lacked the expertise or resources.
Sites such as Udio (now defunct) and Suno were built as free public tools to democratise music composition, but in the hands of online provocateurs they have become tools for something else entirely. The artist JSanchez (Jesikah Jaoud) describes the method as a 'language hack': using alternative phonetic spellings and deliberately mangled words to circumvent the built-in filters that screen for illegitimate content. These programmes are now being used to produce and circulate lyrics that are neither poetic nor complimentary. Instead, online spaces are being flooded with threatening songs rife with homophobic, antisemitic and misogynistic lyrics, many of them advocating violence and, in some cases, serving as crude vehicles for recruitment or radicalisation.
The problem isn't just that such hateful messages exist, but that music's emotional heft can make them more persuasive and memorable. Hate groups have long used music to bind members together in solidarity around their worldview, and to recruit, or intimidate, those beyond the group's immediate reach. AI music production makes this process far easier, enabling the mass production of propaganda at a scale never before possible.
In turn, there are urgent calls for music-generation platforms to implement appropriate moderation measures and perform thorough safety checks. 'Red teaming', a practice derived from military exercises in which a simulated adversary probes for genuine weaknesses, has been proposed as one potential method. Yet because malicious users continually find new ways to bypass existing detection systems, this is an ongoing struggle that the platforms must be willing to sustain.
However, perhaps the biggest obstacle to fighting AI hate music lies in the evasion tactics of those creating it: several songs spreading terrorist messages were composed in languages and dialects that are less likely to be identified by AI filters. This reveals a crucial gap in moderation technologies, especially for non-English content, and the need to extend content safety to a larger and more diverse online space.
The potential ramifications of hate music created by AI go beyond any given platform or community. As is the case with AI-generated misogyny and hate speech more generally, there is a significant possibility that these songs will spread across the internet, to potentially massive audiences, and disrupt social cohesion even further. High-profile expert and governmental advisory bodies have voiced concerns about the possibility that such a trend will gain traction and cause racist, antisemitic and xenophobic attitudes to intensify.
This scenario may well serve as a warning of the importance of thinking through the ethical implications of how AI technologies are developed and deployed as they continue to evolve. The creative potential of such tools is limitless: they provide users with an unprecedented opportunity to express themselves in new and exciting ways. In the absence of ethical use and rigorous moderation, however, the repertoire of innovation that they enable could very well revert to the monotone hatred of programmed prejudice.
At the centre of this debate about AI-composed hate music is the force of music itself: its emotional intensity, here harnessed by AI, being used to drive extremist ideologies. It is a reminder of how technological know-how, developed without ethical boundaries, can turn creative tools into instruments of division.
These discussions underscore the Janus-faced quality of this force: on one hand it can power creative forms of human expression; on the other it can be leveraged for malicious and harmful ends. We now face the task of channelling it towards positive, socially constructive outcomes while minimising the dangers that arise when technology is exploited by those whose goals are at odds with the ideals of a humane and just world.
In short, the magic of AI may yet transform the music industry, but it will take industry institutions, tech companies and content creators working together to ensure that the harmony of creativity outweighs the cacophony of hate.
© 2024 UC Technology Inc. All Rights Reserved.