Women in Tech Conference

12-15 May 2026
Virtual & In-Person*

WOMEN IN TECH GLOBAL CONFERENCE 2026

Stacy All

Design & Research Director | Independent Consultant



"Defining Success: Measuring Human Impact in AI Products"


Session: Defining Success: Measuring Human Impact in AI Products

AI is reshaping how products get built, but who's defining success? Too often, design and research teams watch from the sidelines as engineering metrics (speed, accuracy, efficiency) become the default measures of AI product performance. Meanwhile, the metrics that matter most (usability, trust, meaningful human outcomes) go unmeasured or get deprioritized.

Design and research teams can change this dynamic by taking responsibility for how AI success is measured, ensuring human impact is central from the start. Drawing on an approach I presented at the Federal Reserve Bank's Design Summit, this talk offers three essential practices and one critical mindset shift for measuring what actually matters in AI-driven experiences. We'll cover the four parts of this approach:

Three essential practices:
• Define success upfront - Establish human-centered success criteria before AI shapes the solution
• Know your models - Understand what your AI optimizes for, its limitations, and where it might fail users
• Build rigor rituals - Create regular practices for measurement, learning, and course-correction

A key mindset:
• Advocate for transparency - Champion visibility into how AI systems work and how their impact gets assessed

Each practice addresses a specific risk: when teams don't define success early, engineering metrics become the default; when teams don't understand their models, AI can optimize for the wrong outcomes; and without measurement rituals, there's no learning or course-correction. I'll share real examples from organizations that automated too quickly and lost customer trust, from teams that measured their AI's limitations as rigorously as its capabilities, and from others navigating these challenges.

This 40-minute session will give you practices you can apply immediately and the language to advocate for measuring what matters in your organization.


Key Takeaways

  • Establish human-centered success criteria before AI shapes the solution
  • Develop practices for tracking AI model behavior and limitations as they evolve
  • Build evaluation into regular design and research practice with transparency as a guiding principle


Bio

Stacy is a design and research leader with more than 25 years in the field and a Master's in Human-Computer Interaction from Carnegie Mellon. She has led design and research organizations at Walmart, Wish, and SurveyMonkey, managing teams of up to 35 people and shaping product strategy and measurement practices in large-scale, data-driven organizations.

Her work focuses on experience strategy, design leadership, and helping teams define what success actually means, especially as organizations navigate AI-enabled products. She advocates for keeping human impact central to how products are built and measured.

Stacy currently works as an independent consultant, supporting companies with experience strategy, design leadership, and responsible measurement in AI systems.

