Session: How to build transparent AI to enable more equitable products
Artificial intelligence and deep learning technologies have the potential for large-scale, positive change and are already revolutionizing entire industries. However, they aren't without controversy. In a recent survey by FICO and Corinium of 100 AI-focused leaders from across the globe, almost 70% could not explain how specific AI model decisions or predictions are made, and only 35% said their organization made an effort to use AI in a way that was transparent and accountable.
Ethics and AI have become a central conversation in the tech industry, driven by the lack of understanding of data models, what information they are trained on, and the risk of bias. In this talk, Dipanwita explains the role of transparency in building equitable AI products, and, pulling from her public policy background, how to develop a framework for identifying and overcoming inherent biases in data sets to ensure that AI is driving more equitable products.
- The inherent challenges in utilizing artificial intelligence in a way that is transparent and accountable.
- Why you need to pay more attention to your data models and how you are using and training AI and deep learning systems.
- How to develop a framework for identifying and overcoming inherent biases in data sets to ensure that your AI is driving more equitable products.
Dipanwita Das is an award-winning technology entrepreneur and AI innovator. Prior to founding Sorcero, a language intelligence company for the STEM sector, she was the founder and CEO of 42 Strategies, managing digital transformation projects for Richard Branson's Virgin Unite, Al Gore's Climate Reality Project, and Bloomberg Philanthropies. An Atlas Corps Fellow and later Board Member, she designed the Global Leadership Lab, training leaders from 60+ countries.