LLMs and AI are everywhere in testing, and there's no shortage of innovation. From open-source frameworks like Selenium and Playwright adding MCP-style interfaces, to established vendors embedding LLM prompts into their tools, the momentum is real and exciting.
But amid all this energy and experimentation, one question remains: are these AI-powered capabilities actually useful for test automation in practice? Can they deliver the determinism, speed, maintainability, and cost-efficiency teams need to run tests at scale? And can non-developers really rely on these tools day-to-day, without constant developer oversight?
In this talk, we’ll explore where LLMs meaningfully enhance testing, and where they introduce risk, overhead, or false confidence. We’ll outline the core requirements any effective AI-assisted automation tool must meet, and discuss how human feedback and careful design are essential to avoiding flaky or brittle outcomes.
We’ll conclude by showcasing Applitools Autonomous, an industry-leading autonomous testing platform, built from the ground up with LLMs in mind. Already used successfully by numerous enterprise teams, Autonomous combines LLMs, Visual AI, and intelligent orchestration to deliver robust, scalable automated tests that anyone can author and maintain. No code. No hallucinations. Just real-world automation you can trust.