NAVIGATING THE AI MINEFIELD: WRITERS CAUGHT IN THE CROSSFIRE

The shift from purely human-created texts to a hybrid landscape that includes artificial intelligence (AI) written content has sparked both hope and dread for the future of the written word. AI offers the possibility of increased efficiency and productivity, but many also see it as a shadow cast over authenticity and originality. At the heart of this paradigm shift sits a central problem for writers everywhere: many of them, perhaps even you, are having their reputations damaged online by errors in AI-detection software. Text-analysis mistakes mislabel genuine work all the time, yet the accused rarely have their names cleared by the companies whose tools caused the damage. It is a strange paradox, and it colours the whole debate over AI's role in the written word, for better or worse.

THE STRUGGLE FOR AUTHENTICITY IN THE AGE OF AI

Writers have become collateral damage in the battle against AI. AI detectors were, after all, created by humans to preserve the role of human authors and human creativity in writing. But like any automated system that has to be trained, these tools, designed to catch AI-generated text, are producing human casualties as AI-authored content proliferates. In their efforts to protect human writers, AI detectors end up stigmatising human writing, wrongly labelling genuine work as AI-authored. Allegations of using AI to write one's work are now wielded to discredit writers, often unfairly.

THE FLAWED SENTINELS: AI DETECTORS' INACCURACY

At the root of the problem are the AI detectors themselves. Trained to recognise patterns of artificial authorship, they sometimes get it wrong, snaring human writers in nets meant for their AI counterparts. Their learned eye for pattern is not infallible: it can read human writing as not quite human, producing a steady stream of false positives and leaving writers wrongly accused of cheating when they relied on nothing but their wit, talent, craft and skill.
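To make the false-positive mechanism concrete, here is a minimal, hypothetical sketch of a threshold-based detector. The perplexity scores, the 60.0 cutoff and the score_text function are illustrative assumptions, not a description of any real detection product.

```python
# Hypothetical sketch: a threshold-based "AI detector".
# Real detectors are far more complex; all numbers here are illustrative.

def score_text(perplexity: float, threshold: float = 60.0) -> str:
    """Label text as 'AI-generated' when its perplexity falls below a cutoff.

    Low perplexity means the text is highly predictable to a language model.
    Plain, formulaic, or non-native human prose can also score low, which is
    exactly how false positives arise.
    """
    return "AI-generated" if perplexity < threshold else "human"

# Illustrative scores only (not measured values).
samples = {
    "florid literary essay": 95.0,    # unpredictable phrasing -> 'human'
    "plain human-written summary": 48.0,  # predictable phrasing -> flagged (false positive)
    "actual LLM output": 35.0,        # predictable phrasing -> flagged
}

for name, perplexity in samples.items():
    print(f"{name}: {score_text(perplexity)}")
```

In this toy setup the plain, human-written summary is flagged alongside the genuine LLM output: the detector rewards unpredictability, not humanity.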

BIAS AND MISTRUST: THE HIDDEN ENEMIES

Beyond these technical shortcomings, AI detectors have a more pernicious flaw: they are biased. Trained on data sets that reflect, wittingly or not, the prejudices of their creators or the limitations of their source material, they are more prone to flagging work from traditions, or written in styles, that are under-represented in the training data. This not only skews the detection process itself but, more importantly, fosters an atmosphere of suspicion: writers, increasingly monitored and pressed to scrutinise their every word, find the joy of writing supplanted by the fear of being wrongly accused.
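One way such bias can be surfaced is by auditing false-positive rates per writer group. The sketch below does this over invented records; the group names and counts are purely demonstrative assumptions, not real audit results.

```python
# Hypothetical audit: compare false-positive rates across writer groups.
# All records below are invented to illustrate the evaluation, not real data.
from collections import defaultdict

# Each record: (writer_group, truly_human, flagged_as_ai)
records = [
    ("native English, literary style", True, False),
    ("native English, literary style", True, False),
    ("non-native English", True, True),
    ("non-native English", True, False),
    ("non-native English", True, True),
    ("plain technical style", True, True),
    ("plain technical style", True, False),
]

stats = defaultdict(lambda: [0, 0])  # group -> [false positives, human samples]
for group, truly_human, flagged in records:
    if truly_human:
        stats[group][1] += 1
        if flagged:
            stats[group][0] += 1

for group, (false_positives, total) in stats.items():
    print(f"{group}: false positive rate {false_positives / total:.0%}")
```

Even a crude audit like this makes the disparity visible: groups whose writing styles are scarce in the training data are flagged far more often than others.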

THE HIGH STAKES OF ERRONEOUS ACCUSATIONS

Being falsely flagged by an AI detector can mean much more than a dent to one's ego. A single erroneous flag can trigger formal accusations of misconduct and derail a career. The financial cost can be steep too: one study suggests that authors lost, on average, £16,000 in sales each time a bot flagged their work. And in the worst cases, a false accusation can push a writer into lasting depression as their reputation is stained and their cultural authority dissolves, costing them not only a career but a source of meaning and identity as much as income.

TOWARDS A SENSE OF FAIRNESS: THE ROAD AHEAD

It is an untenable situation. What is needed are two simultaneous steps: better AI detectors, and a fairer system for deciding what counts as an acceptable deviation from the 'human' norm. The point is not to sort writers into winners and losers on a crude human-versus-AI distinction, the diverse forms of human expression are too varied to fit such a simple binary, but to ensure writers are treated more justly when that distinction is drawn. Until such measures are in place, writers will remain at the mercy of the algorithms.

UNVEILING "SENST in the context of AI detection

I am using 'sense' here in two ways: first, to indicate the degree to which AI detectors can or cannot discriminate between human-written and AI-created text; and second, in the broader sense of judgment, the human faculty that registers the subtleties of individual expression that AI cannot yet fully reproduce. As we navigate this digital dilemma, it is this sense of equity and judgment that we will need to preserve and magnify in defence of the creative human spirit in the age of AI.

Yet in the middle of all this tech talk lies the plight of writers whose stories get flagged by AI detectors. It calls for a debate on the values we want to uphold in the online world. The nuances of human creativity must be protected from the cold, mathematical precision of the algorithms. Efficiency should never come at the cost of humanity, especially not when humanity is what makes the stories worth telling.

Jun 13, 2024