How to train your AI dragon

  • As AI-powered tools rapidly reshape the testing landscape, they can feel as formidable—and unpredictable—as dragons. Left unchecked, they may overwhelm teams with complexity, bias, or misplaced trust. But when guided with discipline and human intelligence, AI becomes a powerful ally in delivering faster, smarter, and more reliable testing outcomes.
    This talk explores practical strategies for “training your AI dragon” in the world of software testing: how to tame large language models, generative test design, and predictive analytics so they serve testers rather than replace them. We’ll cover common pitfalls (hallucinations, overfitting, loss of human judgment), show frameworks for safely integrating AI into test workflows, and highlight real-world examples of AI augmenting—not supplanting—the creativity and critical thinking of testers.
    Attendees will walk away with actionable insights on balancing automation and human oversight, setting governance guardrails, and cultivating the skills testers need to harness AI responsibly. By the end, you’ll not only understand how to make peace with your AI dragon—you’ll know how to ride it.