A senior consultant is preparing for a major client presentation. One key section is a competitive analysis—who are the main players in the client's market, what are their strategic positions, where are the vulnerabilities? She assigns this to her junior analyst, figuring it should be straightforward enough for someone with decent research skills.
The analyst delivers a 12-page document that looks impressive. Clean formatting, proper framework structure, competitor profiles with logos and data tables. The senior consultant glances through it, thinks "great, this looks solid," and plans to drop it into the presentation deck.
Two days before the client meeting, she sits down to review it carefully.
That's when she realizes it's useless.
The analysis is entirely generic—synthesized from publicly available information with no actual insight about competitive positioning. It includes companies that aren't really competitors. It misses the competitive dynamics that matter most to this specific client. The "strategic recommendations" are textbook platitudes that could apply to any industry.
She's now scrambling to redo the entire section from scratch, pulling internal research and applying actual strategic judgment. The "finished" work she received didn't save time—it cost time.
This is "workslop": content that appears polished but lacks real substance.
The Core Problem
AI removes content creation as a signal of quality.
Before AI, producing a comprehensive competitive analysis required effort that demonstrated some level of understanding. If a junior analyst could produce a solid 12-page analysis, they probably knew something about competitive strategy and research methodology. The work itself was evidence of capability.
After AI, anyone can produce a 12-page document that looks professional in 60 seconds. The effort signal is gone.
The key insight: AI gives coherence, not expertise.
It can make shallow research sound comprehensive. It can organize generic observations into strategic frameworks. It can produce outputs that pass surface-level review but collapse when an expert examines them closely.
This creates a specific failure mode:

- Someone without sufficient expertise produces AI-generated work
- It looks polished enough that recipients delay careful review
- Problems only emerge late (near deadline) when experts finally dig in
- Now it's too late to fix properly, and senior people waste time redoing work that appeared finished
And it's not just junior people. 18% of AI users admitted sending AI-generated content that was "unhelpful, low effort or low quality." Even experienced people produce workslop when they take shortcuts—hitting "generate" without adding judgment or context.
Why This Matters
Research from BetterUp Labs and Stanford found that 41% of workers have encountered workslop, costing nearly two hours of rework per instance.
But the costs go beyond time:

- 54% of recipients viewed workslop senders as less creative, 42% as less trustworthy
- Trust between colleagues erodes
- Productivity "gains" evaporate because work just moves from "create" to "fix"
This connects to broader AI adoption challenges I've written about:

- Shadow AI: When people hide their AI use to avoid scrutiny, workslop becomes invisible until it's too late
- The AI Dividend: Organizations can't capture productivity gains if senior people spend their time fixing junior people's AI-generated output
Workslop is what happens when AI adoption outpaces organizational readiness.
The Solution
This isn't about banning AI or being dismissive ("I knew AI was just a fad"). It's about building infrastructure for AI adoption that actually works.
1. Name it
Make "workslop" part of your company vocabulary. When people can name the problem, they can identify and discuss it before it becomes a crisis.
Be clear: AI can be a massive workslop generator, though people have been producing it without AI for decades. What's new is how easy AI makes it to produce work that appears polished but lacks substance.
2. Set expectations clearly
- AI gives coherence, not expertise
- Polished output ≠ quality output
- Everyone can produce workslop (not just junior people)
- It's not acceptable to submit AI output without adding genuine value
3. Teach the value-add test
If someone can't articulate the value they're adding beyond "I put my supervisor's request into AI," they're not adding value. They're just an expensive middleman.
Example: That competitive analysis from the opening story.
Pre-AI, it might have taken a junior analyst 4 hours to create:

- 1 hour researching competitors using internal databases and industry reports
- 1 hour identifying relevant strategic frameworks from their training
- 1 hour analyzing positioning specific to this client's market
- 1 hour synthesizing into a coherent document
With AI, it should take about 2 hours:

- 30 minutes gathering the right internal research and context
- 30 minutes working with AI to structure the analysis
- 45 minutes reviewing and editing to ensure insights are specific and relevant
- 15 minutes quality-checking that it actually addresses the client's situation
That's a 50% productivity gain. That's good AI use.
If it took 60 seconds—just dropping the assignment into ChatGPT and hitting send—it's probably workslop. The analyst added no value beyond acting as a middleman between the senior consultant and the AI.
4. Normalize disclosure and humility
Make it safe—expected, even—for people to say: "I used AI for this and I'm not confident in the quality. Can you review?"
This signals self-awareness and professionalism, not weakness. It also gives reviewers a heads-up to check more carefully before relying on the work.
5. Distinguish the cause
When workslop happens, diagnosis matters:
Lack of expertise? → This is a coaching opportunity. The person may not understand what "good" looks like in your context, or they may overestimate what AI can do for them. Help them develop judgment about when they're out of their depth.
Laziness or shortcut-taking? → This is a performance issue. The person knows better but chose the easy path. Address it directly: this isn't productivity gains, it's a net drag on organizational productivity.
Different causes require different responses. Don't confuse the two.
The Bottom Line
AI adoption is accelerating whether we're ready or not. The question isn't whether to use AI—it's whether to use it well.
Companies rolling out AI tools without addressing workslop are setting themselves up for:

- Eroded trust between colleagues
- Wasted time fixing AI-generated work
- Productivity "gains" that never materialize
- Frustrated senior people who become bottlenecks or leave
The companies that win won't be the ones with the most AI adoption. They'll be the ones who figured out how to adopt AI thoughtfully:

- Clear expectations about what AI can and can't do
- Culture that values expertise and judgment over output volume
- Processes that catch workslop early
- Training that teaches people to add value, not just generate content
This requires more than tools. It requires strategic guidance on the organizational infrastructure needed to make AI adoption actually work—the governance frameworks, cultural norms, and quality standards that prevent workslop before it starts.
Because right now, AI isn't making your workforce more capable. It's just making incompetence harder to detect—until it's too late.
---
*Research citations:*

Niederhoffer, K., Rosen Kellerman, G., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025, September 22). AI-generated "workslop" is destroying productivity. *Harvard Business Review*.
Let's Talk About Your AI Strategy
If these ideas resonate with challenges you're facing in your organization, I'd welcome a conversation about how to address them.
Get in Touch