Session: Overcoming Gender Bias in AI
Artificial Intelligence refers to a set of concepts, tools, and techniques that allow machines to simulate human intelligence and perform human-like tasks by analyzing data and identifying patterns and insights. Although it may seem like these machines have a mind of their own, they actually mirror human experience and behavior, since AI models are trained on real-life data. Because that historical data reflects human perception, it is not free from inherent social biases, and more data is not always better, especially when it is skewed. In this talk, I will discuss one of the major biases internalized by such models: gender bias. This is a crucial problem because there is a tendency to assume that machines are perfect and that their decisions are free from human flaws, which is clearly not the case here.
Most AI-powered voice assistants use women's voices. Popular apps and services built on Natural Language Processing (NLP) routinely assign feminine pronouns to inferior attributes. Image recognition services often associate appearance-based attributes with women's photos but personality-based attributes with men's photos, even when both belong to the same profession. Women are 47% more likely to be seriously injured and 17% more likely to die than men in a car accident, simply because headrests and airbags are designed using data collected from crash-test dummies with male physiques, and women's bodies differ substantially from these so-called "standard" measurements. Health apps tend to suggest to female users that their arm or back pain results from depression, since cardiovascular disease has historically been considered a men's disease. These examples show how biased data can harm women rather than help them. Misinformed algorithms can actually widen gender gaps and impact women negatively.
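The NLP bias described above can be quantified: the Word Embedding Association Test (WEAT) measures whether a word's vector sits closer to one gender's attribute vectors than the other's. The sketch below is a minimal illustration of that idea, not part of the talk; the function names and the toy two-dimensional vectors in the test are purely illustrative stand-ins for real learned embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association_bias(word_vec, female_attr_vecs, male_attr_vecs):
    """WEAT-style association score for a single word vector.

    Positive result: the word is, on average, closer to the female
    attribute vectors; negative: closer to the male ones; near zero:
    roughly neutral. With real embeddings, occupation words scoring
    far from zero signal the kind of learned gender association
    discussed in the talk.
    """
    female_mean = sum(cosine(word_vec, f) for f in female_attr_vecs) / len(female_attr_vecs)
    male_mean = sum(cosine(word_vec, m) for m in male_attr_vecs) / len(male_attr_vecs)
    return female_mean - male_mean
```

In practice the same score would be computed over vectors from a trained model (e.g. word2vec or GloVe embeddings) for attribute sets like {she, her, woman} versus {he, him, man}.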
The first step towards overcoming gender bias in AI is acknowledging the problem. Although the training data merely reflects a bias that already exists in the real world, we need to use technology deliberately to bring about positive change and start afresh. We have to remember that numbers cannot speak for themselves, especially when they derive from outdated views shaped by an unjust status quo. Data is called the "new oil" because, although raw data is not valuable in itself, accurately processed and refined data has the power to influence an entire civilization. Hopefully, women, together with men, can play a vital role in shaping the future of a bias-free AI world.
Bio: Tahmida Mahmud
Tahmida Mahmud is currently working as a Deep Learning Engineer at Nauto, Inc. She is responsible for developing and implementing AI-based algorithms for external safety features and driver behavior improvement, and she currently leads the Predictive Collision Alerts (PCA) project. She received her Master's degree and Ph.D. in Electrical Engineering from the University of California, Riverside, in 2017 and 2019, respectively, and her Bachelor's degree in Electrical and Electronic Engineering from Bangladesh University of Engineering and Technology (BUET) in 2013. Her broad research interests include computer vision and machine learning, with a focus on activity recognition and prediction, video captioning, frame reconstruction, and object detection. Tahmida wants her research to have a real impact on people's lives. In her free time, she loves to write and travel.