The Systemic (Security) Failures of AI-Built Software
2026-05-06, Ballroom

Coding agents are already committing code, opening pull requests, and pushing features into production across modern development workflows. AI coding tools promise velocity but introduce a repeatable class of security failures that evades traditional detection. This talk examines the systemic risks quietly propagating across codebases and draws on findings from multiple studies, including original research conducted at DryRun Security using real development workflows. Across this body of work, consistent failure patterns emerge regardless of model, framework, or application design, revealing structural weaknesses rather than isolated mistakes.

These failures stem from how coding agents construct software. They optimize for functional correctness along the happy path and lack a coherent understanding of how security controls must persist and evolve across the lifecycle of an application. Protections applied early are not reliably enforced as new features are introduced, and security assumptions erode across successive modifications. This talk distills the practical principles leading AI-driven engineering teams are using to close these gaps, with an emphasis on verification models and practices.

James Wickett is CEO and Co-Founder of DryRun Security. His work building end-to-end delivery pipelines convinced him early that security only works when it is native to the development process, not bolted on after the fact. He has spent 15+ years at the intersection of application security and software delivery, taught more than a million professionals through his DevOps and DevSecOps courses on LinkedIn Learning, and spoken at RSA, OWASP, and SXSW. He founded DryRun Security to operationalize that belief for the age of AI-powered engineering. The company builds the industry's first AI-native code security intelligence platform, designed to find the risks that pattern-based scanning cannot see, at the speed modern teams actually ship.