On the death of the author and the birth of the detector
AI detection is the latest in a long line of purity tests that pretend to protect a craft while excluding who gets to practice it. Dumas faced this in 1845. Jim Thorpe faced it in 1912. The pattern is older than AI, and it always collapses. Sometimes too late.
In 2023, Stanford researchers tested the most widely deployed AI-detection tools — GPTZero, Turnitin’s detector, and five others — against real writing samples. The tools flagged essays written by non-native English speakers as AI-generated 61% of the time. They flagged essays by native English speakers at 5%. A twelve-fold disparate impact, on tools being used to decide whether students get expelled, whether journalists get published, whether writers get hired.1
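The twelve-fold figure is simple arithmetic on the study's reported rates. A quick sketch (the per-hundred cohort is illustrative, not a number from the paper):

```python
# Flag rates reported in the Stanford study (Liang et al., 2023).
non_native_rate = 0.61  # essays by non-native English speakers flagged as AI
native_rate = 0.05      # essays by native English speakers flagged as AI

# The "twelve-fold" disparate impact is the ratio of the two rates.
disparity = non_native_rate / native_rate
print(f"{disparity:.1f}x")  # 12.2x

# In a hypothetical cohort of 100 writers per group, expected wrongful flags:
print(round(100 * non_native_rate), round(100 * native_rate))  # 61 vs. 5
```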
The detectors weren’t responding to AI. They were responding to the phrasing patterns that second-language English instruction produces — formal register, consistent tense use, textbook constructions. The same patterns that make academic writing “look polished” to one reader look “machine-made” to another. The tool didn’t need to be designed racist. It only needed to codify an existing aesthetic about what real English sounds like, and let the numbers fall where they fell.
This is what a purity test is for. Not protecting the craft. Protecting the aesthetic class that already owns the craft.
The contemporary panic about “AI slop” is prejudice wearing the mask of discernment. It’s the intellectual equivalent of refusing to read a book because you don’t like the author’s accent, or dismissing an argument because of who made it rather than what it says. And it’s particularly ironic given that literary theory solved the underlying problem sixty years ago, and history has already run this exact purity-test pattern twice in the last two hundred years. Both times it collapsed. Both times too late to save the people it hurt.
What Barthes actually meant
Roland Barthes told us in 1967 what should have been obvious: meaning doesn’t live in the author. It lives in the reader. “The birth of the reader must be at the cost of the death of the Author,” he wrote.2 The text is the thing. How it got there matters far less than what you do with it when it arrives.
Barthes’s “The Death of the Author” was a direct assault on biographical criticism — the practice of interpreting texts through the lens of who wrote them. What did the author intend? What experiences shaped them? Barthes called this a failure of reading. The text, he argued, is “a tissue of quotations,” a weaving of cultural references that no single author controls. Once written, the text becomes independent. It speaks for itself.
“The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination.”
This was a democratization of meaning. If the author doesn’t control interpretation, every reader becomes an equal participant in constructing what a text means. The New Critics had already established the intentional fallacy — Wimsatt and Beardsley’s argument that “the design or intention of the author is neither available nor desirable as a standard for judging the success of a work.”3 Stanley Fish pushed further: “Interpretation is not the art of construing but the art of constructing.” Meaning emerges through reading, not from some authorial intention we can never fully access.4
Apply this to AI and the implications are immediate. If meaning resides in readers rather than authors, then the author’s identity — human, AI, or human-AI collaboration — becomes irrelevant to interpretation. A text that helps you think clearly is valuable regardless of origin. A text full of errors is worthless regardless of whether a human produced it. The work is the thing. Judge it as work.
Yet here we are, in 2026, with “AI slop” selected as the Word of the Year by both Merriam-Webster and the American Dialect Society, and entire communities building elaborate detection systems to identify and exclude content based not on what it says but on who — or what — they suspect said it. We’ve regressed from Barthes’s liberation of the reader back to a primitive authorship fixation, only now in reverse: instead of privileging certain authors, we’re excluding suspected non-authors.
The prejudice exposed
Consider Ben Congdon’s definition of “slop” from his January 2025 essay “AI Slop, Suspicion, and Writing Back”: content that is “mostly-or-completely AI-generated that is passed off as being written by a human, regardless of quality.”5 Regardless of quality. The definition explicitly admits that origin, not value, is what’s being policed. Even if the writing is good — the origin disqualifies it.
Or look at the moderator of r/explainlikeimfive, who explained the subreddit’s AI ban by saying that “even if [AI] does give an accurate answer, the purpose of this site is for people to write in their own words.” Even if it’s accurate. Even if it serves the ostensible purpose of the forum. The origin disqualifies it.
This is the structure of prejudice: pre-judgment based on category membership rather than individual merit. We wouldn’t accept “I won’t hire you because of where you’re from” as legitimate reasoning. We shouldn’t accept “I won’t read this because of where it’s from” either.
I should be transparent here. I use AI tools in my writing. I’m using them now. Not as a replacement for thinking but as a thinking partner — something between a research assistant, a sounding board, and an occasionally annoying editor who points out when my arguments don’t hold together. Every sentence you’re reading has passed through my judgment about whether it says what I mean and whether it says it well. But many of those sentences were shaped in conversation with Claude, and I’d be lying if I claimed I could always tell you which phrases originated with me and which emerged from the collaboration.
This isn’t a confession. It’s an invitation to notice your own reaction. If learning that changes how you evaluate the argument, you should ask yourself why. The argument either holds or it doesn’t. The evidence either supports the claims or it doesn’t. My use of AI tools changes none of that. What it reveals is whether you’re evaluating the work or the worker.
The impossible purity test
The “AI slop” accusation fails on its own terms. If AI assistance disqualifies writing as authentic, where exactly do we draw the line?
Does using spellcheck count? Spellcheck is computational — a system making decisions about your text. Does grammar checking count? Grammarly uses machine learning to suggest rephrasing. Does using a thesaurus count? That’s outsourcing vocabulary decisions to an external tool. Does talking to someone about your topic before writing count? That’s incorporating non-self inputs into your thinking. Does editing count? Having someone else improve your prose means the final product isn’t purely yours.
By any strict definition of unaided human authorship, no one in history has ever written anything. Writing itself is a technology — Socrates criticized it for creating “forgetfulness in the learners’ souls” and offering the appearance of wisdom without true understanding.6 If writing is allowed, why not typewriters? If typewriters are allowed, why not word processors? If word processors with autocomplete are allowed, why not AI assistants?
There is no principled line. The purity test becomes absurd almost immediately because the premise is absurd. Every writer uses tools. Every writer incorporates external inputs. The question isn’t whether to use tools but which tools and how.
Paul Ricoeur warned about the hermeneutics of suspicion — reading texts skeptically to expose hidden meanings.7 Suspicion has its place. But Ricoeur also noted the need for a suspicion of that suspicion, because interpreters too easily substitute one prejudiced understanding for another. A categorical dismissal of anything suspected of being “AI slop” is not sophisticated judgment; it substitutes one naivety for another. Worse: the hermeneutics of suspicion, once adopted as a default, has no stopping rule built in. Any verification system becomes a new surface to suspect. Watermarks can be forged. Signed commits can be staged. Biometric typing patterns can be recorded and replayed. The regress doesn’t bottom out, because suspicion rejects the very concept of bottoming out. At some point, the choice becomes either to read the text or to perform the refusal to read — indefinitely.
This has happened before: Dumas and the factory
In 1845, a French journalist named Eugène de Mirecourt published a sixty-four-page pamphlet titled Fabrique de romans: Maison Alexandre Dumas et Compagnie — “Novel Factory: House of Alexandre Dumas & Co.”8 The pamphlet accused Dumas of not writing his own novels. Which was — by one definition of “writing” — true. Dumas had a stable of collaborators, most famously Auguste Maquet, who drafted plots and outlines for The Three Musketeers and The Count of Monte Cristo. Dumas did dialogue, polish, and the name on the cover. His output — more than a hundred thousand published pages in a single lifetime — was mathematically impossible to produce alone.
Two things happened.
Dumas sued Mirecourt for defamation and won. Mirecourt went to prison for six months. The Three Musketeers and The Count of Monte Cristo are still read a hundred and eighty years later. The factory didn’t corrupt them. The books are the books. Maquet is a footnote, credited in specialist literature, known to the kind of reader who looks for him.
The attack on Dumas had a second register that the defamation case couldn’t reach. Dumas was the grandson of a Haitian-born enslaved woman. His father was the first person of color to reach the rank of general in any modern European army. Mirecourt’s attack was class contempt with a racial undercurrent: how could a man of his background really have written these novels? The French word nègre, which at the time meant “slave” or was used as a racial slur, began shifting in popular usage toward its modern French meaning of “ghostwriter” partly through the Dumas controversy itself.9 The purity rule against collaborative authorship and the racial question about who gets to be a real author were not separate arguments. They were the same argument with different clothes on.
Mirecourt’s pamphlet could be published tomorrow with the word “AI” substituted for “Maquet” and almost nothing else would need to change.
This has happened before: the amateur
At the 1912 Stockholm Olympics, Jim Thorpe won gold in both pentathlon and decathlon. King Gustav V of Sweden told him, in a line that survives in every Thorpe biography: “You sir, are the greatest athlete in the world.” The following year the IOC stripped him of both medals. The violation: Thorpe had played two summers of semi-professional baseball for twenty-five dollars a week, to cover expenses. Under the amateurism rules of the time, that made him a professional. Professionals could not compete.10
Thorpe was Native American. He had attended the Carlisle Indian Industrial School. He had made the twenty-five dollars a week because he was not wealthy. The amateurism rule, imported into the Olympics by Pierre de Coubertin from the British sporting tradition, had been explicitly designed to keep working-class men out. The 1878 Henley Regatta rules declared: “No person shall be considered an amateur oarsman or sculler…who is or has been by trade or employment for wages, a mechanic, artisan, or labourer.” The same exclusion ran through the Amateur Athletic Club, founded 1866, which barred from amateur eligibility any man who was “a mechanic, artisan, or labourer.”11 It was not subtle about its purpose. The purpose was that rowing and athletics should remain the domain of gentlemen, who by definition did not work for money, who by definition were the kind of men who had access to rowing clubs in the first place.
Coubertin himself held “little allegiance to the concept of amateurism,” per the recent academic consensus — but he recognized that he would need to indulge his British contemporaries’ belief in it to build their support for his Olympic revival. He baked the class rule into the Games not because he believed in it but because it was the price of admission to the social world that would fund and populate the Olympics. The rule was always instrumental to gatekeeping, never to the sport.
The rule ran for roughly a century. In 1986 the IOC began allowing professionals in individual sports. By 1992, when the Dream Team played basketball in Barcelona, the amateurism rule was effectively dead. The sport did not collapse. The Olympics did not become less meaningful. The moral panic about “professionalism ruining the games” turned out to be about something other than the games.
Thorpe’s medals were restored in 1983, thirty years after he died. In 2022, the IOC went further and declared him the sole winner of both events — until then he had been listed as a co-champion with the athletes who’d been elevated when his victories were voided. Seventy years too late, and still, the correction had to happen.
The NCAA ran the same story to the same end. Amateurism was the rule for college sports until the name, image, and likeness (NIL) rule change in 2021 finally permitted athletes to earn money from their own labor. The moral panic before, the collapse during, the normalization after — same arc. Same kinds of athletes most affected by the rule. Same people insisting it was about the purity of the game.
What purity tests actually protect
The Olympics did not need amateurism to remain the Olympics. Dumas’s novels did not need him to have written every sentence alone to remain what they are. Writing does not need to be produced without assistance to be worth reading. The thing each purity test claimed to protect was never the thing being threatened.
What each test actually did was encode a social hierarchy as a quality rule. The people who could afford to train full-time without pay were the only ones who could pass the amateurism test, so amateurism became the standard of real athleticism. The authors who had the prestige to publish under their own name alone were the only ones who could pass the solo-authorship test, so solo authorship became the standard of real writing. The writers whose native language, class, and education produced the unmarked aesthetic of “good English” are the only ones who reliably pass the AI-detection test, so “good English” becomes — once again — the standard of real writing.
The tool announces itself as a protection of the craft. The tool is actually a protection of who gets to practice the craft.
The detection problem, and why it’s already worse than wrong
If the prejudice could be operationalized — if we could reliably identify AI content — it might at least be consistent. We could separate texts by origin and be done with it. But detection doesn’t work, and the specific way it fails is the argument for why the whole project is already doing harm.
The Stanford study is the headline, and it is damning, but the pattern runs wider. Professional writers who polish their prose, eliminate awkwardness, and achieve stylistic consistency are more likely to be flagged than sloppy writers. The qualities we value in good writing — clarity, precision, varied vocabulary, logical structure — trigger detection systems because they resemble AI output patterns. As one analysis noted, professional writers “may find their content flagged precisely because it lacks the imperfections that detection systems associate with human composition.”
Real people are already paying the cost.
Louise Stivers, a UC Davis political science major about to graduate and attend law school, had her work flagged by Turnitin as AI-generated. She proved her innocence by reconstructing her writing process from Google Docs version history. William Quarterman, also at UC Davis, had a history exam flagged by GPTZero, was referred to the university’s honor court, experienced panic attacks, and was eventually cleared after providing evidence he hadn’t used AI. Kimberly Gasuras, a working journalist, was kicked off a freelance platform when detection software flagged her human-written work.12 Vanderbilt University disabled Turnitin’s AI detector after recognizing its unreliability; applying Turnitin’s own stated 1% false-positive rate to the seventy-five thousand papers Vanderbilt submitted in 2022, roughly seven hundred and fifty would have been incorrectly flagged as AI-generated.13
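Vanderbilt’s estimate is worth spelling out, because it takes the vendor’s own best-case accuracy at face value. A minimal sketch of the arithmetic:

```python
# Vanderbilt's back-of-envelope estimate: apply Turnitin's own stated 1%
# false-positive rate to the 75,000 papers the university submitted in 2022.
papers_submitted = 75_000
stated_false_positive_rate = 0.01  # Turnitin's advertised figure

wrongly_flagged = int(papers_submitted * stated_false_positive_rate)
print(wrongly_flagged)  # 750 students falsely accused, even at the
                        # vendor's own advertised accuracy
```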
Those are the cases that surfaced. The accused-and-quietly-punished rate — students who didn’t fight, didn’t have Google Docs history, didn’t have the cultural capital to defend themselves — is the part of the iceberg we don’t see. The Stanford numbers suggest the accused-and-quietly-punished are disproportionately non-native speakers and students from institutions that don’t have the resources to appeal.
This is the structural argument and it holds without any appeal to intent. No individual enforcer has to be a conscious bigot for the tool to produce bigoted results. The disparate-impact numbers are the argument. A detector with a twelve-fold racial gap is not a detector that needs reform. It is a detector whose operating principle — polished writing equals machine writing — reproduces whichever social hierarchy already decides what polished writing sounds like. That was true when “polished English” meant “upper-class English” and it is true now that “polished English” includes the unmistakable fingerprint of professional editing and ESL instruction.
I will say this plainly because the soft version lets the hard truth off the hook: AI puritanism is racist in effect, whether or not it is racist in intent. It reproduces exactly the pattern the Olympic amateurism rule reproduced, exactly the pattern the attack on Dumas reproduced. It uses a neutral-sounding technical standard to exclude the people who would already have been excluded by the older, more openly prejudiced standards. The tool is the alibi. The tool is what lets the old exclusion speak in the language of quality assurance.
The assistive technology angle
There is another dimension that deserves attention. For many people, AI tools are assistive technology. They are the difference between being able to write at all and not being able to write.
People with dyslexia use AI to organize their thoughts and catch errors that spellcheck misses. People with ADHD use AI to maintain focus and structure. Non-native English speakers — who, remember, get flagged as AI-generated sixty-one percent of the time — use AI to express ideas they understand clearly in a language whose idioms don’t come naturally. People with disabilities affecting motor control use AI to transcribe and refine their thoughts.
When we dismiss “AI-assisted” writing categorically, we’re not just being lazy readers. We’re potentially excluding the voices of people who depend on these tools to participate in written discourse at all.
This should sound familiar. We don’t dismiss the work of writers who use dictation software, or screen readers, or any other assistive technology. We recognize that accessibility tools don’t invalidate the thinking behind the words. Why should AI assistance be different?
The answer, I suspect, is that we’ve drawn an arbitrary line based on an intuition about what counts as real thinking versus mere mechanical assistance. That intuition doesn’t survive scrutiny. What matters is whether the final product represents genuine intellectual engagement — and that can’t be determined by examining the tools used to produce it. It can only be determined by reading the work.
The pattern we keep repeating
We’ve been here before. Many times.
When Gutenberg invented the printing press in the 1450s, scribes formed guilds to resist it. Hand-copied texts were considered more authentic, more spiritually meaningful. Johannes Trithemius protested the “invasion of the library by the printed book” because mechanical reproduction lacked the devotion of “preaching with one’s hands.”14
When typewriters spread in the late 1800s, recipients of typewritten letters felt insulted. Martin Heidegger argued that the typewriter meant “the word no longer passes through the hand as it writes and acts authentically but through the mechanized pressure of the hand.”15
When photography emerged, critics insisted it couldn’t be art because it was “made by a machine rather than by human creativity.” Baudelaire warned that photography would “supplant or corrupt” art altogether. The Museum of Fine Arts, Boston didn’t collect photographs until 1924 — nearly a century after the medium was invented.
When calculators entered classrooms in the 1970s, educators warned that students would become dependent on machines, their computational abilities ruined. Parents believed their children would forget how to do math. The debate went on for decades.
When recorded music emerged, critics feared the phonograph would kill live performance. Theodor Adorno argued that recording distorts authenticity. Walter Benjamin worried that broadcasting removed music from the “concert ritual” that gave it meaning.
Notice the pattern. Every time a new tool emerges for making or manipulating symbols, we panic. We create impossible purity standards. We claim the new technology threatens authenticity, ruins cognition, destroys creativity. And then we accept the tool, evaluate its products on their merits, and forget we ever worried.
Photography is unquestionably art now. Recorded music is a dominant art form. Calculators are in every classroom. Word processors are how everyone writes. The work matters, not the tool.
And here is the part that connects the tool-panic pattern to the purity-test pattern we saw with Dumas and Thorpe: the cycle of tool panic has always been a cycle of gatekeeping. When printing came in, the question wasn’t just “is a printed book as good as a manuscript?” It was “do we let peasants read?” When typewriters came in, the question was whether secretarial work, increasingly done by women, really counted as intellectual labor. Photography’s “is it art?” question was inseparable from the fact that the photographer was often working-class while the painter was traditionally bourgeois. Each new tool made a new group of people capable of producing the craft — and the purity response was to move the craft’s definition to exclude them.
The AI debate follows this exact arc. We are in the panic phase. The resolution will come when we accept what should be obvious: evaluate the work.
When authorship legitimately matters
I am not arguing that authorship never matters. It clearly does in some contexts.
Attribution matters for compensation and credit. Writers deserve payment for their work. Academics need proper citation for career advancement. If AI generates content that someone claims as their own for payment or credit they didn’t earn, that’s a legitimate problem — not because the content is worse for being AI-generated, but because someone is claiming unearned reward.
Attribution matters for legal responsibility. If a text makes defamatory claims or incites violence, someone needs to be accountable.
Attribution matters for certain inherently autobiographical genres. A memoir’s value depends partly on being a genuine account of the author’s experience. First-person testimony matters because the witness actually witnessed what they describe.
These are legitimate concerns about authorship. They’re about ethics, fairness, accountability. They are not about meaning. Foucault, following Barthes, made this distinction explicitly: the author-function persists as a social, legal, and economic construct even when the author is dead as an interpretive category.16 Pay the human if you bought their labor. Credit the human if you’re citing. Hold the human liable if the text is defamatory. Those are separate questions with their own answers. None of them is an argument for reading the text with less attention.
The AI-detection panic is what happens when a society refuses to make this distinction. It collapses the hermeneutic question (does this text hold together?) into the administrative question (who typed it?) and handles neither well. It demands certainty of origin before it will read, which means it never reads. It treats every polished sentence as a suspicion instead of an argument. And it punishes, with remarkable reliability, the people whose polish came from somewhere other than the expected schools.
The work of being a reader
There is a deeper issue here, one that connects to what I think of as the work of being — the irreducible effort required to live a considered life rather than a reactive one.
The “AI slop” dismissal functions as an avoidance strategy. It offers a way to reject content without evaluating it, to sort the world into acceptable and unacceptable without doing the cognitive labor that actual discernment requires. It outsources judgment to origin labels instead of doing the hard work of reading and thinking.
Reading is work. Real reading — the kind that engages with arguments, tests claims against evidence, allows a text to change your mind — requires attention and effort. Checking the byline is easier. The author becomes a shortcut, a heuristic for quality that substitutes for thought.
But heuristics fail. Human authorship doesn’t guarantee quality — most human-written content is mediocre, much is wrong, some is actively harmful. AI involvement doesn’t guarantee worthlessness — AI can synthesize information, clarify ideas, catch errors, and generate text that serves readers well. Any particular text has to be evaluated as what it is, not what category it belongs to.
Stanley Fish was right: interpretation is construction, not discovery. We build meaning through engagement with texts. The meaning of a text isn’t determined before it is read — it is constituted in the act of reading. Dismissing content on the basis of suspected authorship is a refusal to participate in meaning-making. The dismissal hasn’t found the content wanting; the content hasn’t been read.
I spent twenty years in consulting before ChatGPT existed, and I watched this same pattern repeatedly. Good ideas rejected because of who proposed them. Bad ideas adopted because they came from the right person or the right firm. I’ve seen companies ignore transformative insights from junior employees while implementing mediocre recommendations from expensive consultants — purely based on the perceived authority of the source. The content didn’t matter. The label did.
This is the same cognitive failure. We use origin as a shortcut to avoid the hard work of evaluation. When those shortcuts become rigid categories that override evidence, they stop being useful heuristics and become prejudice. The question is never really “who said this?” The question is “is this true, and is it useful?”
Read it like a grown-up
Barthes told us sixty years ago that the author is dead, that meaning resides in readers, that the text’s unity lies not in its origin but in its destination. The purity test for “real” authorship is impossible — everyone uses tools. The historical pattern is clear — we’ve panicked about every new writing technology, and every time the panic was also about who got to use the technology, and every time we were eventually wrong. The detection systems don’t work — they harm non-native speakers and careful writers disproportionately. The prejudice is real — it substitutes origin for evaluation, avoidance for engagement, and it reproduces the same racial and class exclusions it has reproduced in every earlier round.
What remains is a choice — at the level of culture and at the level of each act of reading — about what to do with this inheritance.
The Barthesian move has always been available: kill the author, let the text speak for itself, do the work of reading with attention and charity and critical judgment. Evaluate arguments on their merits. Accept good thinking regardless of its source. Reject bad thinking regardless of its pedigree.
This is harder than dismissal. It requires engagement rather than categorical refusal. It requires forming judgments rather than outsourcing them to origin labels. It is the work of being a reader.
So here is the offer I want to make: judge this essay. Not by whether I wrote it or whether Claude helped me write it — I’ve told you the answer — but by whether the arguments hold. Does meaning reside in readers rather than authors? Does the historical pattern of purity tests (Dumas, Thorpe, the NCAA) suggest anything about our current moment? Is the prejudice I’m describing real, and is it harmful?
Those are the questions that matter. The question of who typed which words is a smaller one, with its own narrow answers in the narrow contexts where it applies — compensation, citation, legal responsibility, autobiography. For the rest, Barthes’s insight holds: a text’s unity lies in its destination, not its origin. In an age when the origin of text is increasingly uncertain, that insight isn’t a philosophical preference. It’s the only intellectually honest ground left to stand on.
Stop asking who wrote it. Start asking whether it is true. The author is dead. The detector is how we pretend otherwise. The prejudice has run its course twice already in living cultural memory — with Dumas, with Thorpe, with every gatekeeping purity standard that turned out to be about who got to cross the gate. This one will collapse too. The question is how many careers it ruins before it does.
Notes and citations
Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, James Zou. “GPT detectors are biased against non-native English writers.” Patterns (Cell Press), Vol. 4, Issue 7, July 2023. arXiv:2304.02819 · ScienceDirect · Stanford HAI summary. ↩︎
Roland Barthes, “La mort de l’auteur,” 1967. English: “The Death of the Author,” in Image-Music-Text (Hill and Wang, 1977). ↩︎
W.K. Wimsatt and Monroe Beardsley, “The Intentional Fallacy,” Sewanee Review, 1946. ↩︎
Stanley Fish, Is There a Text in This Class? The Authority of Interpretive Communities (Harvard University Press, 1980). ↩︎
Ben Congdon, “AI Slop, Suspicion, and Writing Back,” January 25, 2025. benjamincongdon.me/blog/2025/01/25/AI-Slop-Suspicion-and-Writing-Back/. ↩︎
Plato, Phaedrus, 275a-b. ↩︎
Paul Ricoeur, Freud and Philosophy: An Essay on Interpretation, 1965 (English edition Yale University Press, 1970). ↩︎
Eugène de Mirecourt, Fabrique de romans: Maison Alexandre Dumas et Compagnie, Paris, 1845. BnF catalogue record: ark:/12148/cb309526003.public. Dumas successfully sued for defamation; Mirecourt was sentenced to six months’ imprisonment. ↩︎
The 1845 Mirecourt pamphlet is the pivot point in the semantic history of nègre as “ghostwriter.” Before the Dumas affair, French used plume or prête-plume. Mirecourt’s pamphlet (and a companion pamphlet by Jean-Baptiste Jacquot the same year) explicitly compared Dumas’s collaborators to enslaved Black workers on plantations — the analogy being that the named author takes the credit while the uncredited laborers do the work. The derogatory usage stuck in French literary culture for over 170 years. In June 2017, on the recommendation of the General Delegation of the French Language and Languages of France, the Dictionnaire de l’Académie française removed the “ghostwriter” sense of nègre and restored prête-plume as the preferred term. See Wiktionary: nègre for a concise summary of the etymology and the 2017 Académie decision. Gallica’s BnF blog on the Dumas-Maquet collaboration provides additional context: gallica.bnf.fr/accueil/fr/html/auguste-maquet-ecrivain-et-collaborateur-dalexandre-dumas. ↩︎
Jim Thorpe’s disqualification: 1912 Stockholm Olympics gold medals stripped by IOC in 1913; restored 1983; reinstated as sole champion in 2022. NPR on the 2022 decision · Smithsonian Magazine on the 1983 restoration. ↩︎
The 1878 Henley Regatta exclusion rule and the class-based eligibility criteria of the Amateur Athletic Club (AAC, founded 1866) are documented in histories of the sport. See Kelly Charles Crabb, “The Amateurism Myth: A Case for a New Tradition,” Stanford Law Review 28.2, and Vice, “For Love or For Money: A History of Amateurism in the Olympic Games.” For the canonical academic treatment see Matthew P. Llewellyn & John Gleaves, The Rise and Fall of Olympic Amateurism (University of Illinois Press, 2016). The Coubertin “little allegiance” quotation and the strategic-concession account are drawn from Llewellyn & Gleaves, summarized in their idrottsforum.org review: Olympic amateurism from de Coubertin to Samaranch. Note: the Amateur Athletic Association (AAA, founded April 24, 1880) actually removed the mechanic/artisan/labourer exclusion from its constitution — the explicit class rule lived in the AAC and at Henley, not in the AAA. See World Athletics: Remembering the pioneering AAA on its 140th anniversary. ↩︎
UC Davis student cases (William Quarterman and Louise Stivers) plus journalist Kimberly Gasuras documented in Rolling Stone, “Student Wrongly Accused of AI Cheating By New Turnitin Detection Tool”. ↩︎
Michael Coley, Vanderbilt University Center for Teaching, “Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector,” August 16, 2023: vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/. The 750-false-positives estimate is derived from Turnitin’s own stated 1% false-positive rate applied to the 75,000 papers Vanderbilt submitted in 2022. The Vanderbilt announcement cites three reasons for disabling: lack of transparency from Turnitin about what patterns the detector flags, documented bias against non-native English speakers, and fundamental questions about detection effectiveness. ↩︎
Johannes Trithemius, In Praise of Scribes (De Laude Scriptorum), 1492. Famously, Trithemius had the text printed. ↩︎
Martin Heidegger, Parmenides lecture course, 1942-43. ↩︎
Michel Foucault, “Qu’est-ce qu’un auteur?” 1969. English: “What Is an Author?” in Language, Counter-Memory, Practice (Cornell University Press, 1977). ↩︎