The visual representation shows how Clearview AI’s surveillance machine utilises individuals’ image data to track and identify them through facial recognition. First, the app scans and downloads the photos uploaded by its users. The photos are then analysed and converted into mathematical codes, enabling the platform to quickly match individuals’ faces against its existing database. This matching is possible because Clearview AI already stores millions of images obtained from various social media platforms, including Facebook, Twitter, Venmo, and YouTube. The app also facilitates the sharing of identified individuals’ identity information with other users. Notably, Clearview AI utilises personal images of individuals without their consent, violating their right to privacy. Its functionality has limits, however: it can only find individuals whose faces have been uploaded to the database. For instance, someone who has deleted their Facebook account, or who has never used social media and has not provided any pictures to third parties, may evade Clearview AI’s surveillance.
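The matching step described above – reducing each photo to a numeric vector (a “mathematical code”) and comparing a probe face against stored vectors – can be sketched in a few lines. This is a minimal illustration of embedding-based nearest-neighbour matching in general, not Clearview AI’s actual implementation; the vectors, names, and threshold below are invented for demonstration.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two face vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the name of the stored vector most similar to the probe,
    or None if nothing clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, vector in database.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of pre-computed face vectors (entirely made up).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

probe = [0.88, 0.12, 0.31]  # vector computed from an uploaded photo
print(best_match(probe, database))  # → person_a
```

Real systems use high-dimensional embeddings produced by a neural network and approximate nearest-neighbour indexes to search millions of faces quickly, but the principle – vectorise, compare, threshold – is the same.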
This power/responsibility mix matters more than ever today, as technology grows ever more powerful. I want to highlight an example from Indiana that shows how easily this technology can be abused – Clear The Red Line: How One Officer Crossed It with Clearview AI’s Facial Recognition.
Facial recognition technology, the tech-bro’s dream solution for reimagining policing and public safety, has been exposed as the salacious stalker it actually is. With Clearview AI – a name synonymous with facial recognition – in hot water time and again, the story of an Indiana officer using it to stalk women raises the criticism to emergency-red-alert level.
At the core of the problem is an officer from the Evansville Police Department who used Clearview AI’s tools without authorisation to search for individuals. The technology is supposed to help police find people by drawing on ‘the most extensive publicly available facial network in the world’, but it was instead used to look up people unconnected to any investigation – a misuse that is both ethically wrong and contrary to the tool’s stated purpose. The abuse came to light through another tool: audits. Reviewers noticed a discrepancy between the officer’s heavy use of the software and the few results tied to his investigations.
What makes the abuse so brazen in these cases isn’t just the unauthorised searches, but the use of official case numbers to mask the personal agendas behind them. That breach of ethics and confidence was met with a quick response: the officer resigned from the agency before any formal adjudication could take place. The episode shows that we cannot gloss over the importance of trust in the proper use of law-enforcement technology, at every stage of criminal investigation and prosecution.
What Clearview AI’s technology promised in the name of public safety turned out to be easy prey for individual mischief. Though the platform comes equipped with compliance features designed to deter such uses, this incident casts doubt on their efficacy. It is a salient example of what more systematic forms of oversight might be called on to prevent.
The incident doesn’t occur in a vacuum; it is part of a broad national conversation about privacy, surveillance, and the appropriate uses of facial recognition technology. Because Clearview AI is front and centre in so many of those conversations, the possible – or rather, probable – misuse of its technology raises urgent questions about upholding the rights of privacy and the ethical use of such tools.
A worst-case scenario like the Indiana officer’s is a clear warning to implement stronger privacy laws and regulations. Reformers and critics alike argue for a framework that mitigates the potential for abuse, preserves the ability of such technologies to assist in legitimate investigations, and maintains an appropriate balance with an individual’s right to privacy.
At the heart of this conversation is the figurative meaning of red, a theme that recurs throughout the saga: urgent, dangerous, attention-demanding. That last term seems apt for the kind of attention that should be paid when mixing technology with ethics and policing. As we move further into the digital age, the lessons of this debacle must help us draw new parameters for technology’s place in the public realm – ones that balance security with privacy.
But that incident – an Indiana police officer granted an unprecedented glimpse into his neighbours’ private lives by the magic of a smartphone and the formidable database of Clearview AI – is a reminder of just how close to the edge we might come. As we stride towards the future, this is technology we should be wary of.
© 2024 UC Technology Inc. All Rights Reserved.