Women in Tech Conference

12-15 May 2026
Virtual & In-Person*

WOMEN IN TECH GLOBAL CONFERENCE 2026

Swati Tyagi

Senior Manager, AI/ML at Tredence, Inc.



"Model Calibration: The Hidden Key to Trustworthy AI"


Session: Model Calibration: The Hidden Key to Trustworthy AI

In high-stakes domains like finance and healthcare, getting the right answer isn't enough—your AI system needs to know how confident it should be. This presentation explores model calibration, the critical but often overlooked bridge between statistical predictions and real-world decision-making.

We'll examine why a model with 95% accuracy can still cause catastrophic harm when its probability estimates are unreliable. Through concrete examples from credit risk management, fraud detection, clinical decision support, and cancer screening, attendees will understand how miscalibration leads to billions in financial losses, unnecessary medical interventions, and life-threatening delays in treatment.

The talk covers:

  • Core Concepts: Understanding calibration vs. accuracy vs. discrimination, and why all three matter
  • Visualization Techniques: Reading reliability diagrams and interpreting calibration curves
  • Measurement Metrics: Brier Score, Expected Calibration Error (ECE), and Maximum Calibration Error (MCE)
  • Real-World Impact: Case studies from Basel-compliant credit risk models, fraud detection systems, sepsis prediction, and hospital readmission forecasting
  • Practical Implementation: Step-by-step Python code examples using scikit-learn, with before/after comparisons showing dramatic improvements (ECE reduction from 0.184 to 0.012)
  • Calibration Techniques: When to use Platt Scaling, Isotonic Regression, or Temperature Scaling, with pros/cons for each approach
  • Production Best Practices: Data splitting strategies, monitoring for calibration drift, stratified fairness checks, and recalibration schedules
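To give a flavor of the metrics and visualization topics above, here is a minimal sketch of ECE and a reliability diagram. The ECE function below uses one common equal-width-binning variant; the function name and the synthetic data are illustrative, not taken from the talk.

```python
import numpy as np
from sklearn.calibration import calibration_curve

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE with equal-width bins: the bin-size-weighted average of
    |observed event rate - mean predicted probability| per bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    # Map each prediction to one of n_bins equal-width confidence bins.
    bin_ids = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if in_bin.any():
            gap = abs(y_true[in_bin].mean() - y_prob[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Reliability-diagram points come straight from scikit-learn:
rng = np.random.default_rng(0)
probs = rng.uniform(size=2000)
labels = (rng.uniform(size=2000) < probs).astype(int)  # calibrated by construction
frac_positive, mean_predicted = calibration_curve(labels, probs, n_bins=10)
```

Plotting `mean_predicted` against `frac_positive` gives the reliability diagram; a perfectly calibrated model sits on the diagonal, and ECE summarizes the deviation from it as a single number.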
Attendees will leave with actionable knowledge to audit their existing models for calibration failures, implement calibration fixes in production systems, and establish monitoring frameworks to maintain calibration over time. Whether you're deploying ML for regulatory compliance, clinical decisions, or customer-facing applications, this presentation provides the tools to build AI systems that know what they don't know.
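As one hedged sketch of that before/after workflow: scikit-learn's `CalibratedClassifierCV` wraps Platt scaling (`method="sigmoid"`) and isotonic regression (`method="isotonic"`). The dataset and model choices below are placeholders, not the case studies from the talk.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Uncalibrated baseline.
raw = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Platt scaling: fits a logistic map on held-out folds via cross-validation.
platt = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=100, random_state=0),
    method="sigmoid", cv=3).fit(X_train, y_train)

# Isotonic regression: nonparametric and monotone; generally needs more data.
iso = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=100, random_state=0),
    method="isotonic", cv=3).fit(X_train, y_train)

for name, model in [("raw", raw), ("platt", platt), ("isotonic", iso)]:
    p = model.predict_proba(X_test)[:, 1]
    print(f"{name:9s} Brier = {brier_score_loss(y_test, p):.4f}")
```

Temperature scaling, the third technique named above, is not built into scikit-learn; it is usually described as a one-parameter variant of Platt scaling applied to a neural network's logits.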

Target Audience: Data scientists, ML engineers, risk managers, healthcare AI practitioners, and anyone deploying machine learning in regulated or high-stakes environments.

Key Takeaway: Discrimination tells you who is at risk. Calibration tells you how much risk there actually is. In finance and healthcare, you need both.
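The asymmetry in that takeaway is easy to demonstrate: any strictly monotone distortion of a model's scores leaves AUC (discrimination) untouched while wrecking calibration. A small synthetic illustration, with all data invented for the sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
p = rng.uniform(size=5000)                    # true event probabilities
y = (rng.uniform(size=5000) < p).astype(int)  # outcomes drawn from them

# A monotone squash toward 1: the ranking of cases is unchanged,
# but every predicted risk is inflated.
overconfident = p ** 0.25

# Identical discrimination (AUC depends only on ranks)...
print(roc_auc_score(y, p), roc_auc_score(y, overconfident))
# ...but very different calibration: mean predicted risk vs. the base rate.
print(p.mean(), overconfident.mean(), y.mean())
```

Both score sets flag the same people as highest-risk, yet only the first tells you how much risk there actually is.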


Key Takeaways

  • Trustworthy AI
  • Model Calibration


Bio

Swati Tyagi is an AI/ML leader and researcher specializing in responsible AI, generative AI, and data-driven decision systems for highly regulated industries. With a PhD in Statistics and extensive industry experience, she has led impactful work in bias mitigation, model evaluation, and large-scale AI deployment. Swati is an active speaker, mentor, and community builder, contributing to global tech forums, academic research, and professional communities to advance ethical and trustworthy AI.

