Session: Defining Success: Measuring Human Impact in AI Products
AI is reshaping how products get built, but who's defining success? Too often, design and research teams watch from the sidelines as engineering metrics (speed, accuracy, efficiency) become the default measures of AI product performance. Meanwhile, the metrics that matter most (usability, trust, meaningful human outcomes) go unmeasured or get deprioritized.
Design and research teams can change this dynamic by taking responsibility for how AI success is measured, ensuring human impact is central from the start. Drawing on an approach I presented at the Federal Reserve Bank's Design Summit, this talk provides three essential practices and one critical mindset shift for measuring what actually matters in AI-driven experiences. We'll cover the four parts of this approach:
Three essential practices:
• Define success upfront - Establish human-centered success criteria before AI shapes the solution
• Know your models - Understand what your AI optimizes for, its limitations, and where it might fail users
• Build rigor rituals - Create regular practices for measurement, learning, and course-correction
A key mindset:
• Advocate for transparency - Champion visibility into how AI systems work and how their impact gets assessed
Each practice addresses a specific risk: when teams don't define success early, engineering metrics become the default; when teams don't understand their models, AI can optimize for the wrong outcomes; and without rigor rituals, there's no learning or course-correction. I'll share real examples from organizations that automated too quickly and lost customer trust, teams that measured their AI's limitations as rigorously as its capabilities, and others navigating these challenges.
This 40-minute session will give you practices you can apply immediately and the language to advocate for measuring what matters in your organization.
Bio
Stacy is a design and research leader with more than 25 years in the field and a Master's in Human-Computer Interaction from Carnegie Mellon. She has led design and research organizations at Walmart, Wish, and SurveyMonkey, managing teams of up to 35 people and shaping product strategy and measurement practices in large-scale, data-driven organizations.
Her work focuses on experience strategy, design leadership, and helping teams define what success actually means, especially as organizations navigate AI-enabled products. She advocates for keeping human impact central to how products are built and measured.
Stacy currently works as an independent consultant, supporting companies with experience strategy, design leadership, and responsible measurement in AI systems.