AI Authorship Accusation
Dismissing someone's argument by accusing them of using AI to write it, rather than engaging with the content.
"Was that written by ChatGPT? It reads like AI."
"This has serious LLM energy."
"Okay Gemini."
"My AI detector flagged this, so I'm not engaging."
Why It's Unproductive
Whether a comment was written by a human, assisted by AI, or fully generated, the argument either holds up or it doesn't. Accusing someone of being a bot shifts the conversation from substance to authorship forensics, forcing them to defend their humanity instead of their point. It's becoming the new ad hominem: a way to dismiss anything you disagree with without doing the work of explaining why it's wrong.
The Better Move
Respond to what was said, not how it was written. If the argument is weak, say what's weak about it. If it's strong, engage with it. The origin of the words doesn't change whether the point is valid.
Caveat: if a comment really is incoherent AI slop, don't waste time engaging with it; ignore it, or simply note that it's AI slop and move on.
Why It's Better
Keeps the discussion on substance. If a comment really is low-quality AI slop, that will be obvious from the content itself, and you can point to the specific problems. If it turns out to be a real person making a real point, you haven't poisoned the conversation for nothing.
Examples
OP: "The proposed regulation could backfire because compliance costs would push smaller players out of the market, concentrating power among the companies it's supposed to regulate."
Antipattern: "This reads like ChatGPT wrote it. The paragraph structure is a dead giveaway."
Better: "Interesting point about compliance costs. Do you have examples of where this has happened before? Telecom maybe?"
OP: "Here's a breakdown of why end-to-end encryption matters for journalism."
Antipattern: "My AI content blocker flagged this article. Not worth reading."
Better: "The argument about source protection makes sense, but how does this square with the metadata issue? E2E doesn't help if the contacts are still visible."
OP: "I think the real bottleneck in self-driving isn't the model, it's the edge cases in infrastructure like faded lane markings and inconsistent signage."
Antipattern: "This comment has big LLM energy. Are you a bot?"
Better: "Faded markings are a real issue. I wonder how much of this could be solved with V2I communication instead of relying on visual parsing."