Most AI Slop Is a Workflow Failure, Not a Model Failure
2026-05-06, Ballroom

Most teams blame model quality when AI output is weak, but the bigger issue is usually workflow design. In this Ignite talk, I use a fast compare-and-contrast format to show how common AI usage patterns create slop, then pair them with lightweight workflow shifts that produce better outcomes. The focus is practical: clearer task framing, better handoffs, verification loops, and one extra iteration before giving up. This is not a deep implementation tutorial; it is a concise, field-tested framework to help teams reduce rework and improve output quality quickly. Attendees will leave with simple behavior changes they can apply immediately.


This talk will open with a simple compare-and-contrast: how teams commonly use AI today and where slop shows up, versus how small workflow changes can produce materially better outcomes. I am not trying to cram a full implementation tutorial into five minutes. The goal is to give people a practical lens they can apply immediately: define outcomes before prompting, break work into clearer stages, tighten handoffs, and add lightweight verification before accepting output. The core argument is that weak AI output is usually not proof that the model failed; it is usually a workflow design issue. Attendees should leave with a few concrete workflow adjustments they can test right away to improve quality and reduce rework.

Patrick is a Staff Software Engineer at Lean Techniques, specializing as an AI Technical Coach who helps teams use AI tools such as GitHub Copilot and Claude Code to improve developer effectiveness and product outcomes. He focuses on practical workflows that help engineers think more clearly, move faster, and deliver real business value. Patrick advocates for the Full Product Developer mindset, encouraging engineers to take ownership of outcomes, not just code.