What happens when AI writes code faster than anyone can test it?
In this episode of Eye on AI, Craig Smith sits down with Dan Faulkner, CEO of SmartBear, to explore one of the most underappreciated risks of the AI coding boom. As tools like Claude Code and Codex push software development to unprecedented speed, the systems built to validate that software are being left behind. Dan makes a distinction every engineering leader needs to hear: clean code that passes unit tests is not the same as an application that actually works.
Dan introduces the concept of application integrity: continuous, measurable assurance that your software does everything it was intended to do and nothing it was not. He explains why the gap between what AI builds and what teams actually validate is already creating hidden risk in production, and why that risk compounds the faster you ship.
We also get into the new failure modes that agentic AI is introducing: slop squatting, instruction inversion, and cascading errors. These are not theoretical. They are happening now, at scale, in codebases no human has fully read.
Dan also walks through SmartBear's autonomy ladder framework and its newest product, BearQ: a team of AI agents that explores your application, builds a knowledge graph, authors tests, runs them, and updates everything as your app evolves. The key distinction: BearQ is built to augment human teams, not replace them.
Finally, Dan shares his honest take on the future of software engineering. The fallacy was always that coding was the hard part. The hard part is knowing what to build. That skill is not going anywhere.
Subscribe for more conversations with the people shaping the future of AI and emerging technology.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) Introduction and Dan Faulkner's Background
(01:05) What SmartBear Does: Testing and API Lifecycle Management
(03:27) AI Is Outpacing Application Testing
(07:51) Slop Squatting, Instruction Inversion and New AI Failure Modes
(17:31) Black Boxes, Technical Debt and the Expertise Crisis
(22:00) How to Avoid Self-Validating AI Systems
(24:11) The Autonomy Ladder and BearQ
(31:30) Why Testing Must Be Continuous and Everywhere
(36:31) Infrastructure Risk and Automation Bias
(44:11) The Future of QA and New Specialist Roles
(50:44) How Teams Use SmartBear Tools Today
(58:57) The Future of Software Engineering and Human Roles