Session: Algorithmic Greenlining: The Case for Race-Aware Algorithms
As algorithms are increasingly used to make decisions at all levels of society, definitions and explorations of algorithmic bias have deepened over the past decade. Algorithms and artificial intelligence are used to make automated decisions with major consequences in employment, credit and lending, housing, healthcare, and more.
Despite popular belief, algorithms can create and perpetuate long-standing biases and harms against groups of people, including those with protected class status such as race and gender. The impact of algorithms on protected classes needs to be assessed to prevent the perpetuation of discrimination. However, when we incorporate race as a factor in analyzing or responding to these biases, we are treating people differently on the basis of a protected class, a civil rights violation known as disparate treatment.
To respond effectively to these challenges, we need to incorporate a race-aware framework into how we analyze and respond to data. This workshop will explore the precedents set in developing a race-aware framework and how algorithmic auditing and review can be applied to these frameworks. It will draw on the knowledge of participants from a wide variety of fields that intersect with technology, including education, health, and more. We will map out different opinions through structured activities using Jamboard, word clouds, and other interactive tools. These activities will include 1) a prompt-and-respond activity in which participants name key challenges involving race in reviewing algorithms, and 2) an activity in which participants identify new technologies being developed and match them with corresponding industry precedents (for instance, pairing an education algorithm that predicts grades with the ways the education system has historically discriminated against BIPOC students).
- Identify how the civil rights precedents of disparate impact and disparate treatment affect algorithmic bias and regulation.
- Understand the need for algorithms that are race-aware in the evaluation step of development (assessing how well an algorithm performs on certain subsets of data).
- Synthesize precedents around bias identification and mitigation in other sectors and policy areas, including education, healthcare, housing, and employment, and consider how they can be applied to algorithms in those areas.
- Connect individuals to organizations doing ongoing work in AI regulation and documentation.
Christine Phan (she/her) is a Vietnamese American from University Place, Washington. She is a Technology Equity Fellow at the Greenlining Institute, where she addresses how the digital divide and algorithmic bias impact communities of color. She works on building algorithmic accountability and on Greenlining’s Town Link program, which partners with Oakland community organizations to provide digital literacy programs and address gaps in broadband affordability.
Previously, Christine has worked on dis/misinformation, Census outreach, coalition building, and civic engagement in Asian American spaces. She is passionate about envisioning what community safety and resilience look like for refugee and immigrant communities.