You Stopped Thinking
You’ve done it. I’ve done it. We’re all doing it.
The temptation to generate endlessly with AI is real. It comes in so many forms. A utility nobody asked for. A pattern nobody would read. The output feels real. The thinking is always thin.
That’s why what I’m watching right now worries me.
The Pattern
I’ve seen teams ship Claude skills nobody asked for. I’m watching organizations create spec documents that restate what a style guide and linter already enforce. I’ve seen Jira tickets generated in bulk with no grounding in the actual problem. I’ve sat through AI-generated refinement discussions that fall apart the moment someone asks a basic question in the room.
The output is real. The thinking isn’t.
AI removed the friction from producing things - not from thinking about them. Writing used to force a pause. Now it doesn’t. And the pause was doing more work than we realized.
everyone made a big commotion about skills, a bunch of secondary tooling got built, standards got established, and the majority of users have never installed a skill
i wish everyone would chill out, most ideas are bad, restraint is more important than ever
— dax (@thdxr) March 28, 2026
The Averaged-Out Answer
AI is agreeable. It’s a probability machine - it generates whatever token makes sense next given what you gave it. It won’t tell you your API spec is redundant because Airbnb already wrote a better one - especially when your tone is positive. It’ll keep going - plausibly, confidently, well-formatted - in whatever direction you steered it.
Then someone pastes it into Confluence and it becomes gospel.
The generic stuff - linters, pre-commit hooks, style guides written by major companies and agreed upon by the community - those aren’t just already solved, they’re solved better than any markdown file generated by a single contributor and an LLM in 20 minutes. We don’t need AI to describe how to build an API.
We need to sit in a room and hash out the opinionated piece: how we do it differently, in our infrastructure, given our failure modes.
That knowledge doesn’t live in any training data. It lives in the disagreement. In the person who says “we tried that in 2022 and here’s why it fell apart.”
Karpathy said it well on the Dwarkesh podcast last October:
They know, but they don’t fully know. They don’t know how to fully integrate it into the repo and your style and your code and your place.
The Cost Shifts
There’s a tax in all of this that producers don’t see.
slop is something that takes more human effort to consume than it took to produce. When my coworker sends me raw Gemini output he’s not expressing his freedom to create, he’s disrespecting the value of my time
— Neurotica (@schwarzgerat.bsky.social) March 23, 2026 at 1:03 PM
The producer really feels productive. They shipped 400 specs. They built an MCP server. They published an npm package. The box is extremely checked. The people who pay the cost are the ICs who load stale specs and wonder why the AI keeps generating wrong details.
And it compounds. Because now someone owns the spec system for their team. Their job is to notice when things change and propagate that through their own 100 markdown files. That’s not a side task - that’s a maintenance surface. A role. A calendar full of spec reviews instead of shipping.
Meanwhile the IC could have pointed an agent at the actual codebase and gotten a more accurate answer in 30 seconds - because it came from the source, not a spec someone remembered to update.
The spec system didn’t eliminate the cost of staying current. It just created a job to do it badly at scale.
The producer checked a box. Everyone else paid for it.
And because the producer feels productive - they shipped something - the feedback loop that would correct this never fires. The slop doesn’t announce itself. It arrives formatted, with headers.
This Is Just the Beginning
This isn’t a fringe behavior. Everyone is doing it.
Andrej Karpathy - who coined “vibe coding” - said on the Dwarkesh podcast last October:
Overall, the models are not there. I feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it’s not. It’s slop.
The person who told everyone to embrace the vibes is worried about the output. Mitchell Hashimoto, after months of documenting what good AI adoption actually looks like, landed here: “The skill formation issues particularly in juniors without a strong grasp of fundamentals deeply worries me.”
It worries me too.
The bottleneck was never writing. It was thinking. AI removed the writing bottleneck and made it easy to skip thinking entirely without noticing.
I don’t have a clean answer. But the question worth sitting with is simple: what actually changed for the people who were supposed to benefit?
If nothing - if the skill went unused, the spec got ignored, the ticket got groomed into something unrecognizable - then it wasn’t adoption.
It was performance.
The Answer Isn’t New
Software engineers have been figuring this out for decades. The answer isn’t buried in a prompt somewhere - it’s in the practices we already have.
Use the tooling. Linters, formatters, pre-commit hooks, style guides ratified by the community and battle-tested in real codebases. These exist so you don’t have to generate the same answer twice.
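As a concrete sketch of what “use the tooling” can mean in practice, here is a minimal, hypothetical `.pre-commit-config.yaml` that wires up community-maintained hooks instead of a generated style document. The hook repos and ids below are real and widely used; the `rev` versions are placeholders you would pin to current releases:

```yaml
# Hypothetical minimal pre-commit setup: the community's answer,
# already written, versioned, and battle-tested.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0  # placeholder - pin to a current release
    hooks:
      - id: trailing-whitespace   # fix stray whitespace
      - id: end-of-file-fixer     # ensure files end with a newline
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0  # placeholder - pin to a current release
    hooks:
      - id: ruff                  # lint
      - id: ruff-format           # format
```

A dozen lines of config, maintained upstream by the community, replaces the 20-minute markdown file an LLM would generate describing the same rules - and it actually enforces them.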
Follow the process. Code review means you read the code. Refinement means you understand the problem before the meeting. Planning means someone thought about the work before the ticket was written.
Write documentation that reflects decisions, not descriptions. Not what the API does - why it works the way it does, and what you tried that didn’t.
And think critically. Before generating anything, ask whether it needs to exist. Whether you understand it. Whether the person who receives it will be better off or just busier.
AI is a powerful tool. But it doesn’t replace judgment - it exposes how much of our work depended on it.