The Work Slop Problem Isn't AI — It's Misuse of AI Modes
AI has two modes: Cognitive and Operational.
The work slop problem comes from running everything in Cognitive Mode.
I use AI as a thinking partner all the time: to organize ideas and challenge my own assumptions. That’s high-leverage work, and it should stay flexible and conversational.
But once it’s time to turn thought into something that touches money or decisions, everything changes.
Most teams prompt some version of: “Here’s a deal. Build me a deck.”
And the AI generates a nice-looking document from vibes and probability. This is how work slop is born: content that sounds right but is grounded in nothing verifiable.
Here’s the dangerous part: that content then gets reused as input for more AI generation. Slop compounding slop. Teams end up making decisions based on second- and third-generation output with no verifiable source behind it.
Here’s how I’ve built those same automations instead (a minimal sketch follows the list):
- Pull actual data from our warehouse
- Feed it into a structured deck template with defined fields and logic
- Only then allow AI to generate narrative that is strictly bound to the data
Result: narrative that can be audited line by line against the source of truth.
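Here is a minimal Python sketch of that shape. Everything in it is illustrative: the deal facts, the `build_prompt` and `audit` helpers, and the stand-in narrative are placeholders, assuming the facts come from a real warehouse query and the narrative from a real model call.

```python
import re

# Hypothetical deal metrics; in a real pipeline these come from a
# warehouse query, never from the model.
DEAL_FACTS = {
    "purchase_price": "4,200,000",
    "net_operating_income": "310,000",
    "cap_rate": "7.4%",
}

# The structured template: each narrative field is bound to a source key.
TEMPLATE = [
    ("Purchase Price", "purchase_price"),
    ("NOI", "net_operating_income"),
    ("Cap Rate", "cap_rate"),
]

def build_prompt(facts: dict[str, str]) -> str:
    """Constrain the model: narrate these figures, add nothing."""
    lines = [f"- {label}: {facts[key]}" for label, key in TEMPLATE]
    return (
        "Write one sentence per item. Use only the figures given; "
        "do not introduce any number not listed.\n" + "\n".join(lines)
    )

def audit(narrative: str, facts: dict[str, str]) -> list[str]:
    """Flag every figure in the narrative that has no source in the data."""
    known = {f.strip("%").replace(",", "") for f in facts.values()}
    flagged = []
    for line in narrative.splitlines():
        for num in re.findall(r"\d[\d,.]*\d|\d", line):
            if num.replace(",", "") not in known:
                flagged.append(f"unsourced figure {num!r}: {line.strip()}")
    return flagged

if __name__ == "__main__":
    prompt = build_prompt(DEAL_FACTS)
    # Stand-in for the model call; imagine this line came back from the LLM.
    narrative = "Purchase price is 4,200,000 with NOI of 310,000 at a 9.9% cap rate."
    print(audit(narrative, DEAL_FACTS))  # -> flags the fabricated 9.9%
```

The audit step is the point: because every template field is bound to a source key, any number the model invents is caught mechanically, not by a reviewer's gut.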
That’s AI in Operational Mode. Deterministic, source-linked, fully inspectable.
I apply a simple filter before plugging AI into any workflow:
- Cognitive Mode: invite AI into the conversation
- Operational Mode: constrain AI to data-backed execution
- Mixing the two blindly: work slop
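If it helps to see the filter as code, here is a toy Python version; the names are mine, not any real API:

```python
from enum import Enum, auto

class Mode(Enum):
    COGNITIVE = auto()    # brainstorming, critique, outlining: free-form chat
    OPERATIONAL = auto()  # output touches money or decisions: data-bound only

def pick_mode(touches_money_or_decisions: bool) -> Mode:
    """The filter: anything that feeds a real decision runs in Operational Mode."""
    return Mode.OPERATIONAL if touches_money_or_decisions else Mode.COGNITIVE
```

The code is trivial on purpose. The choice is binary, and it gets made before the prompt is written.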
This will be the philosophy behind the Agent Mode we’re building into our underwriting platform. Not “AI, think like an underwriter,” but “AI, run our underwriting logic on our data, and return a live model.”
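To make that concrete, here is a simplified, hypothetical sketch; it is not our platform's actual API, just the shape of the idea: the underwriting math is deterministic code running on real inputs, and the model is only allowed to narrate the result.

```python
# Hypothetical Agent Mode shape: underwriting ratios are computed by code,
# never generated by the model.
def underwrite(noi: float, purchase_price: float,
               loan_amount: float, annual_debt_service: float) -> dict[str, float]:
    """Standard ratios, computed from source inputs."""
    return {
        "cap_rate": noi / purchase_price,        # NOI / price
        "ltv": loan_amount / purchase_price,     # loan-to-value
        "dscr": noi / annual_debt_service,       # debt service coverage
    }

live_model = underwrite(noi=310_000, purchase_price=4_200_000,
                        loan_amount=2_900_000, annual_debt_service=240_000)
# The AI's only job from here: narrate `live_model`. Every figure in the
# output traces back to this dict, line by line.
```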
AI should not replace human judgment. It should compress the distance between truth and action, if we design it that way.