SPECIMEN #010: Detectus pseudoscientificus (The Algorithmic Witchfinder: Or How to Sell Someone a Problem, Then Charge Them to Make It Worse)
- Classification: Pseudoscience Merchant / Digital Protection Racketeer / Moral Panic Profiteer
- Habitat: University plagiarism policy pages, Reddit moderation tools, editor submission guidelines, browser extensions installed by people who have read one alarming article about ChatGPT
- Diet: Institutional anxiety, academic fear, and the persistent belief that a number generated by an algorithm is more reliable than a pair of functioning eyes
- Threat Level: To students, to writers, to anyone who constructs a clear sentence, and to Charles Dickens, who is 95% AI according to at least five separate detectors and has been dead since 1870

There is a detector.
It analyses text.
It produces a number.
The number tells you how much of the text was written by artificial intelligence.
The number is wrong.
Not occasionally wrong. Not wrong in edge cases. Systematically, demonstrably, hilariously wrong, in ways that would be funny if universities were not using the results to accuse students of cheating, editors were not using them to reject submissions, and platforms were not using them to ban writers who have committed no offence beyond writing clearly.
Charles Dickens wrote A Christmas Carol in 1843 with a quill pen.
Five detectors scored it 95% AI.
One scored it 100%.
Charles Dickens. Dead since 1870. Apparently a robot.
If that does not tell you everything you need to know, nothing will.
The Origin Story
It began, as these things often do, with a genuine problem and an entrepreneurial response that made the problem considerably worse.
ChatGPT arrived. Students used it. Teachers panicked. Institutions demanded solutions.
The solution, someone decided, was a detector. A piece of software that would analyse text and determine, with scientific authority, whether a human or a machine had produced it.
The detector was built. It was sold. It was adopted enthusiastically by institutions that needed to be seen to be doing something.
Whether it worked was a secondary consideration.
It did not work.
This was not allowed to interrupt the business model.
The Science
AI detectors claim to identify patterns associated with large language models. Certain phrases, sentence structures, transitions, rhythms that supposedly distinguish machine writing from human writing.
The problem is straightforward.
AI learned to write from humans. Good humans. Dickens, Orwell, Hemingway, generations of journalists, academics, and competent professionals whose work was absorbed, processed, and reproduced.
When AI writes well, it uses the patterns of good human writing.
When humans write well, they use the same patterns.
The detector cannot tell the difference because there is no difference to detect. The signal it is looking for exists equally in both places.
The result: thirteen detectors scoring the same human-written article anywhere from 0% to 80% AI. Not a range that suggests calibration. A range that suggests guessing.
A weather forecast written by a Labrador would show more consistency.
The Patterns
Here are phrases AI detectors flag as suspicious:
"Let's be clear." "It's important to note." "Furthermore." "In conclusion." "Delve into."
Standard English. Transition phrases in use for centuries. Flagged as evidence of artificial intelligence because ChatGPT also uses them.
If you write clearly, with proper structure and appropriate transitions, you will be flagged.
If you write in a disjointed, awkward, unnatural style, you will pass.
The detector is not identifying AI.
It is penalising competence.
The Protection Racket
Here is where the business model reveals itself.
The detection service is free. Very generous.
The humanisation service is not.
For a monthly subscription, typically ten to twenty pounds, the detector that flagged your writing as AI will rewrite it to pass its own test.
Let us be precise about what is happening here.
A company has built a detector that falsely identifies clear human writing as AI. This creates fear. The fear drives users to the paid humanisation service. The humanisation service charges you to fix the problem the detector invented.
"Nice essay you've got there. Shame if someone thought it was written by a robot. Pay us £20 a month and we'll make sure that doesn't happen."
This is not a service.
This is a protection racket.
Create the threat. Sell the solution. Profit from both ends.
The Humanisation
Curious about the service, one writer ran a passage through a humaniser and recorded the results.
The original: "We're living through a moral panic about artificial intelligence, and like all moral panics, it's making people stupid."
The humanised version: "We see a mad rush now 'bout smart minds made by tech. Like all such scares, this one makes folk quite dim."
The original: "AI detectors, which turn out to be about as reliable as a chocolate teapot."
The humanised version: "Those AI spot checks. Which turn out to be as good as a wax cup for hot tea."
A wax cup for hot tea.
This is what twenty pounds a month buys you.
Clear, readable prose converted into the linguistic equivalent of a stroke.
Competent English rendered unrecognisable by software specifically programmed to make good writing worse, because good writing is what the detector flags.
The truly insane part: the mangled version would pass.
Because the detector is not looking for quality. It is looking for clarity. And clarity, it has decided, is suspicious.
You are paying to make your writing worse. Deliberately, measurably, embarrassingly worse. So that an algorithm designed to generate revenue from your anxiety will leave you alone.
The Consequences
Universities are using these detectors to accuse students of cheating.
Editors are rejecting submissions.
Platforms are banning writers.
Reputations are being damaged. Work is being dismissed. People who have written every word themselves are being told, by a piece of software that cannot reliably distinguish Charles Dickens from ChatGPT, that they are frauds.
The software is not protecting academic integrity.
It is vandalising it.
Profitably.
The Actual Question
Somewhere in the moral panic, a more interesting question has been lost.
If you read something, and it informed you, entertained you, made you think, made you laugh, or made you glad you read it, does it matter how it was produced?
Nobody finishes a novel and thinks: yes, but what word processor did they use. Nobody puts down a well-argued essay and demands to know whether the author used a thesaurus, consulted an editor, or absorbed enough of other writers' styles to produce something worth reading.
Writing has always used tools. Editors, researchers, dictionaries, style guides, the accumulated influence of every writer who came before. AI is a more powerful tool. It is still a tool.
The question that matters is not: was a machine involved?
It is: was it worth your time?
If the answer is yes, the method of production is nobody's business and nobody's problem. The reader who enjoyed the piece and then discovered AI was involved has not been deceived. They have simply learned something about how it was made, which is about as consequential as learning the author used a fountain pen.
The detector does not ask whether the writing is good.
It asks whether it looks like a machine wrote it.
These are not the same question.
The Language
A brief glossary:
"AI-generated" - produced with AI involvement, used here as an accusation rather than a description.
"Humanised" - degraded to the point where the detector's own algorithm cannot recognise it as competent prose.
"Trusted by Cambridge, Stanford, and Harvard" - a claim appearing on the website of a detector that scored A Christmas Carol 95% AI.
"Perplexity score" - a measure of how predictable the text is, used as a proxy for AI involvement, which also penalises clarity, consistency, and good style.
"Protection plan" - a subscription that solves the problem the free tier created.
"A wax cup for hot tea" - a chocolate teapot, as improved by a humaniser charging £20 a month.
The Testimonials
"I submitted my dissertation. It was flagged as 73% AI. I had written every word. I spent three weeks rewriting it to pass the detector. It passed. It was also considerably worse. I got a 2:2. I believe I deserved a 2:1. The detector does not have a refund policy." - Cordelia, 41, Recently Graduated
"I ran my article through thirteen detectors. I received thirteen different scores. I concluded either that I contain multitudes or that the detectors are nonsense. I have published the article. Several people have told me it was worth reading. I have not refunded them." - Fenella, 38, Apparently 47% Artificial
"I posted the Dickens results to an academic forum. The response was instructive. Several people defended the detectors. One person suggested Dickens might have used AI. I did not engage further." - Gerald, 67, Banned From Two Subreddits
Field Notes
The detector industry has achieved something genuinely impressive.
It has monetised a moral panic it did not create, sold a solution to a problem it cannot solve, and constructed a business model in which the failure of the product drives revenue to the premium tier.
The free detector flags your writing.
The paid humaniser fixes it.
The fixed version is worse.
The worse version passes.
You have paid to make your writing worse.
The company has profited from your anxiety, your competence, and your entirely reasonable desire not to be accused of something you did not do.
Charles Dickens would have failed the detector.
He would not have paid for the humaniser.
He had better things to do.
Advisory
If you encounter Detectus pseudoscientificus in the wild, do not be alarmed.
The anxiety is real. The moral panic is real. The genuine concern about AI-generated content flooding academic and professional spaces is real and not unreasonable.
The detector is not the solution to this concern.
It is a business that profits from it.
If you want to know whether something was written by a human, read it. Does it have a voice? Does it reference specific knowledge? Does it have personality, inconsistency, a point of view that could only have come from somewhere in particular?
That is your detector.
It is free.
It has never scored Charles Dickens 95% AI.
It does not offer a humanisation service.
It cannot be fooled by a wax cup for hot tea.
Use it.