How I Reworked My Agents To Stop AI Slop
I didn’t set out to become the guy ranting about “AI slop.” Honestly, I thought I was above it. I was building agents—shiny ones, clever ones, the kind that make your inner Solution Architect do the smug eyebrow raise.
And then one day I realized they were talking like they’d been raised on a diet of SEO oatmeal and LinkedIn cringe.
My fault. I’d trained them that way.
Not explicitly—no one says “yes, please write like a chirpy HR bot at a corporate retreat.” It just happens. A thousand tiny defaults, a hundred tiny shortcuts, and suddenly every output smells faintly of PowerPoint.
So I tore them down. Reworked the whole stack. Rules, tone, templates, scaffolding—the works. And in the wreckage, I found the real enemy: not the AI, but the bland gravitational pull of “safe” language.
Let me tell you the exact moment I knew things had gone off the rails.
One of my agents was supposed to summarize a messy internal thread. High-context. Sharp edges. A little emotional even. Instead, it coughed up something like:
“It is important to acknowledge that communication plays a vital role in collaborative environments…”
I actually groaned. Out loud. In my office. Where my dog looked at me like: “You built this?”
That’s when it clicked: AI slop isn’t a glitch—it’s the default. If you don’t actively fight it, your agents slide straight into that uncanny-zone tone that sounds like an NPR host trying to sell SaaS.
Reworking them became less about “tuning settings” and more about giving them a backbone. Real constraints. Real personality. Real expectations.
I stripped out the generic hedges. The pseudo-official tone. The training wheels.
I gave them permission to have opinions—bounded ones, human-ish ones.
I forced them to be concrete (the phrase blocklist after this list shows how literal that got).
No more “in today’s fast-paced world.”
No more “leveraging synergies.”
No more frantic over-explanations trying to pad safety.
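In practice, that ban started as nothing fancier than a blocklist check on every draft before it shipped. Here’s a minimal sketch, assuming the agent’s output arrives as a plain string; the phrase list and function name are illustrative, not from any particular framework:

```python
# Hypothetical post-processing gate: reject drafts that contain slop phrases.
SLOP_PHRASES = [
    "in today's fast-paced world",
    "leveraging synergies",
    "it is important to acknowledge",
    "plays a vital role",
]

def find_slop(draft: str) -> list[str]:
    """Return every banned phrase that appears in the draft."""
    lowered = draft.lower()
    return [phrase for phrase in SLOP_PHRASES if phrase in lowered]

draft = "It is important to acknowledge that communication plays a vital role."
hits = find_slop(draft)
if hits:
    # Send it back to the agent with the offending phrases named.
    print(f"Rejected. Slop detected: {hits}")
```

Crude? Absolutely. But a rejected draft with the offending phrases named teaches the agent more than any amount of polite tone guidance.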
I wrote rules like I was training a new hire I actually liked (there’s a prompt sketch right after this list):
- If you can say it in eight words, don’t take twenty.
- Don’t lecture me.
- Don’t pull the emotional fire alarm over trivial stuff.
- Don’t hide behind abstraction when a specific example exists.
- You’re allowed to sound human; you’re not allowed to sound like a sentient brochure.
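Concretely, those rules live in the system prompt as hard requirements, not suggestions. A minimal sketch of how that might be wired up; the chat-message format is the usual role/content shape, and the commented-out client call is a placeholder for whatever SDK you actually use:

```python
# Illustrative system prompt: the rules read as requirements, not vibes.
SYSTEM_RULES = """You are a writing agent. Hard rules, not suggestions:
1. If you can say it in eight words, don't take twenty.
2. Don't lecture the reader.
3. Don't pull the emotional fire alarm over trivial stuff.
4. Never hide behind abstraction when a specific example exists.
5. Sound human. Never sound like a sentient brochure.
Violating a rule means the draft is rejected and you rewrite it."""

def build_messages(task: str) -> list[dict]:
    """Assemble a chat payload in the standard role/content shape."""
    return [
        {"role": "system", "content": SYSTEM_RULES},
        {"role": "user", "content": task},
    ]

messages = build_messages("Summarize this messy internal thread: ...")
# response = your_llm_client.chat(messages)  # swap in your real SDK call
```

The design choice that mattered: every rule names a concrete behavior and a consequence. “Be concise” did nothing; “eight words, not twenty, or you rewrite it” did everything.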
And something shifted.
Agents that used to cough up oatmeal started giving me actual texture. A little unpredictability. A little humanity. Real writing instead of mashed-together content paste.
Funny thing is: I wasn’t trying to make them “more human.”
I was trying to make them less bland.
Turns out those two things are cousins.
If there’s a moral here, it’s this:
AI slop doesn’t happen when the model is too strong—it happens when the instructions are too weak. When everything is “okay” and nothing is required. When you let the system write like it’s afraid of getting in trouble.
Give an agent constraints, and it finds a voice.
Give it boundaries, and it becomes sharper.
Give it a point of view, and suddenly it stops generating slop.
Which, funny enough, is the same advice I wish someone had hammered into me back when I started writing for actual readers: Be specific. Have a take. Don’t be boring.
Everything else is fixable.