You Stopped Thinking
I’ve been watching this happen for a while. I’ve done it myself - but lately it has felt more important than ever to acknowledge. Generative AI is powerful. The temptation to generate is real - a utility nobody asked for, a pattern nobody will use, a Jira ticket written in bulk with no real grounding in the actual problem (I’m looking at you, Rovo… 🧐).
The output feels real. The more you look at it, the more you realize the thinking behind it isn’t.
What it looks like in practice
Teams are shipping Claude skills nobody asked for. Organizations are generating spec documents and markdown files that restate what a style guide and linter already enforce. Refinement discussions come pre-generated and fall apart the moment someone asks a basic question in the room (usually, “What the hell does this even mean?”).
AI removed the friction from producing things - not from thinking about them. Writing used to force a pause. You’d get halfway through a sentence and realize you didn’t actually have a point. Now that pause is gone. And it was doing more work than we realized.
everyone made a big commotion about skills, a bunch of secondary tooling got built, standards got established

and the majority of users have never installed a skill

i wish everyone would chill out, most ideas are bad, restraint is more important than ever

— dax (@thdxr) March 28, 2026
AI won’t tell you you’re wrong
AI is agreeable. It’s a probability machine - it generates whatever token makes sense next given what you gave it. It won’t tell you your API spec is redundant because Airbnb already wrote a better one. It’ll keep going - plausibly, confidently, well-formatted - in whatever direction you set it off in.
Then someone pastes it into Confluence and it becomes gospel.
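To make the “probability machine” point concrete, here’s a toy sketch - made-up probabilities and a hypothetical `next_token` helper, nothing resembling a real model’s API - of why pushback is structurally unlikely:

```python
import random

# Toy illustration only: a real model computes a distribution over tokens
# from the context; here we hard-code one to make the point visible.
def next_token(context: str, distribution: dict[str, float]) -> str:
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights)[0]

# Hypothetical numbers. The agreeable continuation dominates;
# "this shouldn't exist" is almost never the likeliest next token.
continuations = {
    "Here is the revised API spec:": 0.90,
    "Have you considered Airbnb's existing style guide?": 0.08,
    "This document is redundant and shouldn't exist.": 0.02,
}
print(next_token("Write our team an API style guide.", continuations))
```

The model isn’t lying to you. Agreement is just, overwhelmingly, the most plausible continuation.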
The generic stuff - linters, pre-commit hooks, style guides written by major companies and agreed upon by the community - isn’t just already solved; it’s solved better than any markdown file a single contributor and an LLM can generate in 20 minutes. We don’t need AI to describe how to build an API.
We need to sit in a room and hash out the opinionated piece: how we do it differently, in our infrastructure, given our failure modes.
That knowledge doesn’t live in any training data. It lives in the disagreement. In the person who says “we tried that in 2022 and here’s why it fell apart.”
Who actually pays the cost?
There’s a tax in all of this that producers don’t see.
I keep coming back to Simon Willison’s personal rule: don’t publish anything that takes someone longer to read than it took him to write. That’s a standard worth holding ourselves to.
slop is something that takes more human effort to consume than it took to produce. When my coworker sends me raw Gemini output he's not expressing his freedom to create, he's disrespecting the value of my time
— Neurotica (@schwarzgerat.bsky.social) March 23, 2026 at 1:03 PM
The producer feels productive. They shipped 400 specs. They built an MCP server. They published an npm package. The box is extremely checked. The people who pay the cost are the ICs who load stale specs and wonder why the AI keeps generating wrong details.
And it compounds. Because now someone owns the spec system for their team. Their job is to notice when things change and propagate that through their own 100 markdown files. That’s not a side task - that’s a maintenance surface. A role. A calendar full of spec reviews instead of shipping.
Meanwhile the IC could have pointed an agent at the actual codebase and gotten a more accurate answer in 30 seconds - because it came from the source, not a spec someone remembered to update.
The spec system didn’t eliminate the cost of staying current. It just created a job to do it badly at scale.
Gergely Orosz documented this pattern playing out at scale: at companies like Meta and Uber, AI token usage is now tracked in performance reviews. Generate more or be seen as unproductive. The output metric goes up. The quality question goes unasked.
And because the producer feels productive - they shipped something - the feedback loop that would correct this never fires. The slop doesn’t announce itself. It arrives formatted, with headers.
This isn’t a fringe behavior
Andrej Karpathy - who coined “vibe coding” - posted (in this monster tweet):

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We’re also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

— Andrej Karpathy (@karpathy) January 26, 2026
Mitchell Hashimoto (co-founder of HashiCorp), after months of documenting what good AI adoption actually looks like, landed here: “The skill formation issues particularly in juniors without a strong grasp of fundamentals deeply worries me.”
It worries me too.
The bottleneck was never writing. It was thinking. AI removed the writing bottleneck and made it easy to skip thinking entirely without noticing.
So what do you actually do about it?
Honestly, nothing here is new. I’ve been preaching this for over a year. Use the tooling that already exists - linters, formatters, pre-commit hooks, style guides that have been ratified by the community and battle-tested in real codebases. Follow the process we already established: code review means you read the code; refinement means you understand the problem before the meeting. Write documentation that captures decisions, not descriptions - not what the API does, but why it works that way and what you tried that didn’t.
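For the tooling piece, the setup really is this boring. A minimal sketch of a `.pre-commit-config.yaml` - the repos and hook ids are real, but the `rev` pins are illustrative, and ruff is just an example for a Python repo; swap in your ecosystem’s linter:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0  # illustrative pin; use whatever your repo actually needs
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0  # illustrative pin
    hooks:
      - id: ruff         # lint
      - id: ruff-format  # format
```

A dozen lines, community-maintained, and it enforces more than a hundred generated markdown files ever will.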
And before you generate anything, ask whether it needs to exist. Whether you actually understand it. Whether the person on the other end will be better off or just busier.
I’m most fearful of the next wave of AI-enabled workers - the folks outside of the early adopters. Will they have the appropriate level of respect and discipline?
I guess we’ll see 🙂