How to Get Started With Automated Testing In the Age of AI
2026-05-05, Ballroom

AI is notorious for "solving" test failures in questionable ways. Ask AI to fix a failing test and watch it delete the test entirely; ask it to write new tests and get hardcoded values that always pass. Teams often waste time "babysitting" the AI to get it to work right, or stop trusting AI altogether, never realizing its full potential as a collaborator. At worst, teams may accept faulty tests from AI that give false confidence in their production system.

The good news is that there are techniques you can use to get consistently high-quality test results from AI. Better yet, the same practices also make your code more maintainable.

This session covers strategies that make test generation with AI actually work. You'll leave with test structures, coding standards, prompts, and specific AI features you can apply immediately to your own projects, turning AI from a liability into a lifeline.

Daniel is a Microsoft .NET MVP and software consultant at Lean TECHniques. He helps teams deliver high-quality software while adopting modern practices such as effective CI/CD, automated testing, AI usage, and product management.

With experience spanning multiple industries, including finance, retail, and agriculture, he has served as a technical coach, agile coach, and tech lead, with a primary background as a software developer.

Outside of work, he enjoys playing piano and guitar, swing dancing, and game development.