Be First, Be Smarter, or Cheat

Mar 23, 2026

In February 2023, Clarkesworld, a science fiction magazine that pays ten cents a word and has won four Hugo Awards, received over 500 AI-generated story submissions in under twenty days. About 700 human-written submissions arrived in the same window. Editor Neil Clarke shut the portal for the first time in the magazine's seventeen-year history. The generated works, he wrote, ranked "among the worst submissions we've ever received and sometimes bad in entirely new ways."

People had copied the magazine's submission guidelines, pasted them into ChatGPT, and sent in whatever came out. The motive: a handful of YouTube videos promising easy money from fiction markets. The slush pile was contaminated faster than the humans reading it could keep up.

I remember the week that story broke, and what it confirmed: something I'd spent a decade building had just lost its floor. The magazine that published stories about technology's consequences had become a story about technology's consequences.

In the 2011 film Margin Call, Jeremy Irons plays John Tuld, a CEO modeled on the kind of man who orders wine by pointing at a price on the menu. His firm has just discovered its mortgage-backed securities are worthless. Tuld listens to the math. He understands none of it. He doesn't need to. He survived forty years on Wall Street by understanding something else entirely.

"There are three ways to make a living in this business," he tells the room. "Be first, be smarter, or cheat."

Three strategies, and only three. What happened to writing after 2023 follows Tuld's taxonomy with an accuracy that would make him smile.

Be First

Speed paid first, the way it always does. AI tools could produce passable copy at scale, and within months a new class of operator built businesses around volume.

Content farms spun up thousands of articles a day. A single AI slop website could pull up to $40,000 a month in advertising revenue, according to media watchdog NewsGuard. Operators pumped out hundreds of articles daily, needing only a fraction to catch algorithmic traction. YouTube filled with AI-generated children's content. Horses hatching from eggs. Alphabet lessons taught by characters with the wrong number of fingers.

"Slop" became Word of the Year for 2025, crowned by both Merriam-Webster and Australia's National Dictionary Centre. Meltwater reported a ninefold increase in mentions of the term compared to 2024. Kapwing estimated that 21 to 33 percent of YouTube's recommended feed consisted of AI slop or "brainrot" videos, generating around $117 million annually in advertising revenue.

The speed advantage lasted about eighteen months. Google search traffic to news publishers dropped by a third globally in 2025. AI Overviews swallowed the informational queries that had driven traffic for two decades. The content farms that survived learned to generate and discard faster than any platform could moderate. But the margin keeps shrinking. When everyone owns a printing press, the press stops being the advantage.

I watched this happen to blog posts in my own niche. Writing advice, book reviews, craft essays. Overnight the search results filled with 2,000-word articles that said everything and meant nothing. The same points in the same order with the same examples. You could feel the absence of a person behind the words, the way you can hear the difference between a recording and a voice in the next room.

Be Smarter

The smarter play arrived late. It required something that defies automation: the willingness to be wrong in public, to commit to a position that a more cautious voice would soften, to write a sentence that could only come from one person's specific encounter with the world.

Publishers Weekly reported that 63 percent of publishers used AI in some capacity by late 2025. Most shrugged through it. Some editors turned their refusal of AI into a selling point. The Authors Guild launched a "Human Authored" certification program, drawing a line: you could use AI for spell-checking or research, but the literary expression had to come from a human.

The audiences moved faster than the institutions. A report from influencer marketing agency Billion Dollar Boy found that consumer preference for AI-generated content had dropped to 26 percent, down from 60 percent three years earlier. The Sprout Social Q4 2025 Pulse Survey confirmed it: audiences in 2026 treated authenticity as a condition for engagement.

In 2023, AI-generated content carried novelty. People shared it because the existence of the output seemed remarkable regardless of its quality. The novelty burned off. What remained was the content itself, judged on the only metric readers have ever used: whether it made them feel something, learn something, see something they hadn't seen before.

The Reuters Institute's 2026 report on journalism trends captured the strategic response. Publishers shifted toward original reporting, contextual analysis, and human-centered stories, specifically because this work resisted commoditization by AI chatbots and aggregators. When the cost of producing adequate text drops to zero, adequacy stops paying. What pays is the thing the machine cannot fake: a living intelligence that has been somewhere, seen something, and is willing to stake a claim on what it means.

I use AI in my own workflow, for research, for stress-testing logic. I don't use it for the sentences. The sentences are the thing I've spent years learning to write in a way that sounds like me and nobody else. Outsource that and you've handed over the one part of the work the reader actually came for. The readers can tell. They always could.

Or Cheat

The cheating started in the obvious places and migrated somewhere harder to see.

Carlos Chaccour, a physician-scientist at the University of Navarra, published a paper on controlling malaria with ivermectin in The New England Journal of Medicine in July 2025. Forty-eight hours later, the editors received a letter raising "robust objections." The letter cited two references to support its critique. Both had been authored by Chaccour himself. Neither said what the letter claimed they said.

Chaccour recognized the hallmark of a large language model: confident citation of sources it had never read. He and Matthew Rudd, a statistician at the University of the South, investigated the letter's author. A physician from Qatar, the author had published zero letters to the editor before 2024. In 2025, he published 84, across 58 different scientific topics. The author's initials, Chaccour noted with some amusement, were B.S.

Chaccour and Rudd then analyzed over 730,000 letters recorded in PubMed between 2005 and September 2025. They found a surge of "prolific debutantes," new authors who appeared in the top 5 percent of letter writers starting in 2023. One published 234 letters in a single year after none the year before. These authors made up only 3 percent of the active letter-writing population but contributed 22 percent of all letters published in that period.
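The screen Chaccour and Rudd describe can be sketched as a simple filter over per-author letter counts: for each year, find the authors in the top 5 percent of output, then keep only those whose first letter appeared that same year. The toy data, the nearest-rank quantile, and the 2023 window are my own illustrative assumptions, not the study's actual dataset or code.

```python
import math
from collections import defaultdict

# Toy letter records: one (author, year) pair per published letter.
# Purely illustrative; not the PubMed data the researchers used.
letters = [
    ("A", 2022), ("A", 2023),
    ("B", 2023), ("B", 2023), ("B", 2023),
    ("C", 2024), ("C", 2024), ("C", 2024), ("C", 2024),
    ("D", 2023),
]

# Letters per author per year, and each author's debut year.
per_year = defaultdict(lambda: defaultdict(int))
debut = {}
for author, year in letters:
    per_year[year][author] += 1
    debut[author] = min(debut.get(author, year), year)

def top_cutoff(values, q=0.95):
    """Nearest-rank quantile: smallest value at or above rank ceil(q * n)."""
    ordered = sorted(values)
    return ordered[math.ceil(q * len(ordered)) - 1]

flagged = set()
for year, counts in per_year.items():
    if year < 2023:  # the surge window the study describes starts in 2023
        continue
    cutoff = top_cutoff(list(counts.values()))
    for author, n in counts.items():
        # "Prolific debutante": top-5% output in the author's very first year.
        if n >= cutoff and debut[author] == year:
            flagged.add(author)

print(sorted(flagged))  # → ['B', 'C']
```

In this toy run, B and C are flagged: each lands in the top of a year's distribution in the same year they first appear, while A (an established author) and D (a low-volume newcomer) pass the screen.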

The motive wrote itself. In academic culture, publishing volume determines career survival. Letters to the editor count on a CV. As Chaccour put it: "As long as perish is the alternative, people will take whatever is on the other side. Do you want pear juice or perish? Pear juice. Do you want rotten tomatoes or perish? Rotten tomatoes, no doubt."

The same logic drove Clarkesworld's slush pile, and the flood of AI-generated books on Amazon's Kindle Store, and the SEO content farms churning out articles on topics they'd never researched. The system's incentives reward volume. Detection remains unreliable. The risk-reward calculus favors the machine.

And the cheating evolved. Clarke's 2025 submission data tracked the shift. The early AI submissions arrived fully generated and easy to spot. By 2025, the trend moved toward partially generated work, harder to identify, originating from established writing communities. The US, not the developing world, now produced the majority of slop submissions.

AI detection tools in 2026 claim around 94 percent accuracy at the paragraph level in English. Bruce Schneier and Nathan Sanders, writing for The Conversation in February 2026, called the detection effort a "no-win arms race." Neil Clarke offered the clearest comparison: email spam filters have been in development for over thirty years and still make mistakes. We've learned to live with the error rate.
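The arithmetic behind that caveat is worth running once. A minimal sketch, assuming the 94 percent figure covers both catching machine text and clearing human text, and assuming one in ten submissions is machine-generated; both numbers are my own assumptions for illustration, not measured figures.

```python
# Base-rate arithmetic for a detector advertised at "94 percent accuracy".
# Assumption: 94% is both sensitivity (flags AI text) and specificity
# (clears human text). Assumption: 10% of submissions are AI-generated.
sensitivity = 0.94
specificity = 0.94
ai_rate = 0.10

true_flags = sensitivity * ai_rate               # AI text correctly flagged
false_flags = (1 - specificity) * (1 - ai_rate)  # human text wrongly flagged

# Of everything the detector flags, how much is actually machine-made?
precision = true_flags / (true_flags + false_flags)
print(f"Share of flagged work that is actually AI: {precision:.0%}")  # → 64%
```

Under those assumptions, roughly one in three flagged submissions is a falsely accused human, which is why a spam-filter-style error rate is something editors have to live with rather than eliminate.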

What Remains

Nieman Lab's 2026 prediction essay named the real danger: the flood of machine-made text flattening everything, readers losing the ability to recognize the difference, the distinction between human-made and machine-generated ceasing to matter to the people consuming the work.

The data points the other way. Audiences can feel the absence of risk, the way generated prose hedges every claim and lands on the safest possible version of every sentence. They notice the missing texture.

I've read thousands of AI-generated paragraphs, in my own experiments, in slush piles shared by editor friends, in the SEO content that fills every search result. The tell lives in the stakes, or the lack of them. A human writer making a claim has something to lose if they're wrong. That vulnerability sits in the sentence, invisible and load-bearing, the way tension lives in a cable. Generated prose carries no such weight. It predicts the next word. It risks nothing by choosing it.

Google's AI Overview rewrote Clarkesworld's own description to falsely claim the magazine published AI-generated stories. Clarke spent weeks fighting to correct it. The machine generated a lie about the one publication most famous for refusing the machine's output. Tuld would have recognized the move. The system cheated, and it cheated against the people who had chosen to be smarter. Nobody programmed this irony. It emerged from the same logic that generates everything else: pattern-matching without comprehension, prediction without understanding, fluency without a single thing to say.