Algorithmic Greenlining: The Case for Race-Aware Algorithms, by Christine Phan


Video Transcription

OK, amazing. It is 9:30, so I'm going to slowly get started. I see Chris in the chat; since I'm from New York, I also go by Chris, and I am calling in from Oakland, California. So let me hop on and start presenting these slides. Oh my gosh, we have people from everywhere. I'm excited, thank you, it is really lovely. Can everyone let me hop on to presenter mode? OK, all right, making sure that folks can mostly see the presentation. You can just give me a thumbs up in the chat if that's the case. OK, amazing. All right, I am seeing at least a clapping hand, so I'm going to take that as a sign that everything's functional. With that said, thank you so much for coming today. If you're in the right session, this is The Case for Race-Aware Algorithms. I'm Christine Phan. I am an Economic Equity Fellow at the Greenlining Institute in Oakland. If you're not familiar with what that is, don't worry, that's what the introduction is for. So thank you so much for joining today, and I'm really excited to talk to all of you about this. To start off: what is the Greenlining Institute? What do we do? The Greenlining Institute's mission is to envision a nation where communities of color thrive and race is never a barrier to economic opportunity.

We approach this mission in a number of ways, because that mission shows up in so many different places: mainly in both climate and economic equity. This involves addressing inequities in energy and transportation, health care, technology, and banking, to name just a few of the things that we do. When it comes to technology, because I know that's what we're here for today, our work in technology equity focuses on two major items. One is closing the digital divide through internet access and affordability: really asking who has access to the internet and to these digital tools, and what that implies about the access to opportunities and resources they have. The second is developing policies that ensure AI doesn't redline communities of color and works to close the racial wealth gap. So that's our mission, that's the work that we do. Now that I've talked a little bit about why I'm here, I am super curious about why you're here, why you're here at WomenTech, why you're here at this workshop. So with that in mind, I would love for you to put in the chat, just to kind of start our day off, your name.

Pronouns, if you'd like to share them. And then: why are you here today? Or alternatively, what do you think of when you think of algorithms that are race aware? I'll give a little bit of time for the chat, and I know that's a lot to type, so again, feel free to just say hi and share whatever you're comfortable with. Amazing. I will keep moving us along, but these are some questions, especially the last one, to keep reflecting on as I talk through the work that we do and what this concept might mean. To start off, because this is very much about history: we all have to know our history to know why we're doing this work, especially at the Greenlining Institute. The Greenlining Institute's work was built as a response to the history of redlining, which was the practice of legally denying services to communities of color. The explicit practice was made illegal with the Civil Rights Act, but the impact of some implicit practices still continues today. Historically, banks would deny loans to areas with high concentrations of communities of color.

So you see in this background a redlining map of Oakland, where areas that had high concentrations of communities of color were simply blocked out: don't offer loans here. Again, this practice was made illegal, but if you got a loan back in, say, the twenties or thirties, those benefits trickle down to today in terms of what access to wealth families have. So we really have to be conscious of those practices and make sure we don't reinforce that historic inequity. And why does that matter? Because the legacy of redlining lives on now. As algorithms and digital services become more and more important to our lives, we also have to recognize how technology can reinforce these existing inequalities and these systems of discrimination, which very much build on the legacy of redlining.

That's what we're going to be jumping into. So with that in mind, feel free to show your hands or make notes if you're familiar with algorithmic bias or these terms. I know there's a lot of different language being used around this, but in short, an algorithm, at its very base, is a set of rules that performs a task.

You might also have heard the term automated decision system, or ADS. AI is a type of algorithm in that subset: it uses statistical analysis and existing data to make decisions. In this context and in this framework, the algorithmic bias that the Greenlining Institute focuses on and is most concerned about is when these decisions create unfair outcomes that unjustifiably, and that's the key underlying word that I'm going to keep returning to, privilege certain groups over others.

And these decisions are important because algorithms are only becoming more and more common. They show up in a lot of major life opportunities: bank loans, employment, access to health care, education, and public benefits. You can see the whole list here; I'm really just naming a few. They're gatekeepers to economic opportunity. So now that we've talked a little about bias and why it's so important, how does this bias happen? I know some of us in the room might be AI developers, so we all know there are a lot of different ways this can show up. It can be implicit or even explicit bias from the developers, because humans are the ones creating these systems: someone making assumptions about what a person's class or income level or gender or race, or other data points like these, might say about that individual. Those are the implicit and explicit biases.

It can be valuing some data points over others. For example, if there were an algorithm that helps decide promotions at a company, maybe the algorithm could value aggressiveness over collaboration as a measure of leadership, which could disproportionately impact women. (There is a small code sketch after this section that makes the weighting idea concrete.)

Collaboration is a really important part of leadership, but women tend to be more commonly socialized as kids to collaborate rather than to be aggressive. So how does that show up there? Bias can also come from the data itself; the data could contain subjective human judgments. For example, let's say, as part of that promotion evaluation, there's a data set that contains scores from performance reviews by your manager. Then we can sit back and think: what biases does my manager have? What biases do managers in general have, given that they control access to the promotion pipeline? We can just keep thinking from there about where that bias shows up. Or, and this is kind of a subset of the data conversation, the data itself, the training data or the evaluation data, could be unrepresentative. For example, let's say we're developing a hiring AI and the training data set of hired candidates mostly contains white men.

What does that say about how we're training the algorithm and what we're teaching it? So, in short, pulling data from these existing sources can be helpful, but it can also reproduce bias and the historical patterns of systemic disadvantage present in our societies.
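To make the weighting point above concrete, here is a minimal, hypothetical sketch; it is not any real promotion system. The trait names, group offsets, and weights are all invented for illustration: two equally capable groups whose expressed traits differ because of socialization get very different promotion rates depending on how the score weights those traits.

```python
# Hypothetical sketch of "valuing some data points over others".
# All names and numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two equally capable groups whose *expressed* traits differ because of
# socialization: group B scores higher on collaboration, group A on assertiveness.
group = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B
assertiveness = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.1)
collaboration = rng.normal(loc=np.where(group == 0, 0.4, 0.6), scale=0.1)

def promotion_rate(w_assert, w_collab, top_fraction=0.2):
    """Share of each group promoted when the top `top_fraction` of a
    weighted score is selected."""
    score = w_assert * assertiveness + w_collab * collaboration
    cutoff = np.quantile(score, 1 - top_fraction)
    promoted = score >= cutoff
    return promoted[group == 0].mean(), promoted[group == 1].mean()

# Weighting assertiveness heavily promotes far more of group A ...
print("assertiveness-heavy weights:", promotion_rate(0.8, 0.2))
# ... while a balanced weighting treats the two groups roughly equally.
print("balanced weights:          ", promotion_rate(0.5, 0.5))
```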

These systemic disadvantages: I know we have some international folks, so I would encourage you to think about how they might show up, similarly or differently, depending on where you're from. I'm calling in from the United States, but here that could include redlining, over-policing, gender expectations and standards, and a lot more. I know this is a big slide, so I'm also going to go through a couple of examples that did happen, where many of these sources of bias showed up in the different contexts I've talked about. The first one I'm going to go through, and these are drawn from news articles: there was a Dutch tax fraud algorithm that would automatically rate ethnic minorities as higher risk for fraud. This is where the impact comes in; it's not just an algorithm making a mistake, it's people's lives being affected. That rating would lead to fraud accusations, which would lead to investigations, which would lead to loss of benefits. And how did that happen here?

The bias came directly from the humans who designed the algorithm, because they encoded their own biases into it: the idea that ethnic minorities are more likely to commit fraud. That is just plain, clear racism; they thought that, they encoded it in the algorithm, and we all know it isn't true. And although there was a human manual review process, someone who would look at who was flagged, the humans reviewing the output were not the developers, and even looking at the scores they had no idea how the risk scores were being calculated. They had no idea that the algorithm internally flagged these people simply because they were ethnic minorities. I really want to highlight, and I'd encourage you to look up these articles afterward, that this example shows the need to, one, know how the AI makes its decisions, not just treat it as a black box; you need to understand the decision-making process. And, two, it shows the limits of human review. You can always have someone come back and check the algorithm, but if you don't have that transparency and you don't understand how it came to a decision, how helpful is that? What are the limitations there?
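As a rough illustration of that transparency point, here is a hypothetical sketch, assuming the risk score were a simple weighted sum: a reviewer could be shown each feature's contribution instead of just the final number. The feature names, weights, and threshold are invented and are not from the Dutch system.

```python
# Hypothetical sketch: exposing per-feature contributions of a linear risk
# score so a human reviewer can see *why* a case was flagged.
from dataclasses import dataclass

@dataclass
class LinearRiskModel:
    weights: dict[str, float]
    threshold: float

    def explain(self, features: dict[str, float]) -> None:
        contributions = {name: self.weights.get(name, 0.0) * value
                         for name, value in features.items()}
        score = sum(contributions.values())
        print(f"score={score:.2f} flagged={score >= self.threshold}")
        for name, contrib in sorted(contributions.items(),
                                    key=lambda kv: -abs(kv[1])):
            print(f"  {name:>20}: {contrib:+.2f}")

model = LinearRiskModel(
    weights={"late_filings": 0.9, "income_volatility": 0.4, "nationality_flag": 2.5},
    threshold=2.0,
)
# A reviewer seeing this breakdown would notice immediately that the
# 'nationality_flag' feature, not the person's behavior, drives the flag.
model.explain({"late_filings": 1.0, "income_volatility": 0.5, "nationality_flag": 1.0})
```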

I mentioned hiring examples a couple of times, and if you're familiar with some of these, I'd encourage you to share your thoughts. This one is an example of where training data can replicate bias. Amazon was developing an algorithm that would evaluate job candidates' resumes. The algorithm was trained on decades' worth of resumes and hiring data at Amazon, and without any explicit prompting, it trained itself to downgrade women's activities. So if your resume listed an activity that was coded as gendered in some way, such as women's soccer, it would downgrade it. And in the case of two specific all-women's colleges, people who went there were evaluated as less qualified candidates. That's pretty egregious; it has no basis in job performance or in evaluating a job candidate. And the data this algorithm was trained on, and this goes to how that happened, was decades' worth of resumes and hiring data in STEM roles. As we know, there is an overrepresentation of men in STEM roles; I think that's part of the reason at least some of us are here. This revealed the unrepresentative data patterns within Amazon and the subjective human judgments behind them.

From the past decades of hiring, those are the subjective human judgments that brought on more male STEM candidates, and the algorithm then reinforced those historical patterns, the status quo, which resulted in disadvantages for some folks and advantages for others. At the source of this: what does using a data set that already has biases and historical patterns baked into it say about how algorithms are reinforcing the status quo today?

Sometimes how an algorithm is used, and specifically how we define what the algorithm should aim for, can define bias. This is an interesting one, because when an algorithm is asked, how do you define a good job candidate, or how do you define a good manager, that definition can shape bias. That's the example I talked about with aggressiveness and collaboration. In this example, an algorithm sold by Optum was used to identify how to treat and prioritize patients. The algorithm dramatically underestimated the needs of Black patients and gave healthier white patients the same rankings. Hospitals would use this algorithm to identify patients who needed additional care, and they would assign staff based on it.

So it seems like it's doing what it's supposed to do on the surface. However, the reason this bias showed up was that the algorithm predicted how much patients were projected to spend at the hospital as its definition of how sick a patient was. If you don't look at it too closely, I guess that makes sense: if you're sicker, maybe you have more medical expenses. But the algorithm underestimated Black patients because its developers failed to account for historic inequities in health care, including who has access to health care in the first place; historical distrust of medical institutions, so maybe under-reporting pain or not trusting hospitals as much, because of historical experimentation and medical neglect; and wealth inequality, simply who has more resources to spend at the hospital. So in this example, what I really want to stress is how an algorithm has a target variable that it's trying to optimize for.
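Here is a toy sketch of that target-variable choice, with invented numbers rather than the Optum study's data: the same simulated patients are ranked once by predicted spending and once by underlying need, and the share of the lower-spending group who get flagged for extra care changes dramatically.

```python
# Hypothetical sketch of the target-variable point: ranking by *cost* versus
# ranking by *health need* produces very different priority lists when one
# group spends less for the same level of illness. Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = group with full access to care
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # underlying health need

# Suppose the second group spends ~30% less for the same level of need
# (less access, less trust, fewer resources), plus some noise.
spend = need * np.where(group == 0, 1.0, 0.7) * rng.lognormal(0, 0.2, size=n)

def share_of_group1(priority_score, top_fraction=0.1):
    """Share of the flagged (top-ranked) patients who belong to group 1."""
    cutoff = np.quantile(priority_score, 1 - top_fraction)
    flagged = priority_score >= cutoff
    return (group[flagged] == 1).mean()

print("share of group 1 flagged when ranking by cost:", share_of_group1(spend))
print("share of group 1 flagged when ranking by need:", share_of_group1(need))
```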

But how you choose to use the algorithm, or how you choose to define that target, can create a huge impact in terms of how it affects different communities. The next one is where I really want to stress that bias also shows up when algorithms ask questions that reproduce historical patterns. I said this a little bit before, but this is a really notable example. Here, a risk assessment algorithm, one predicting the likelihood of someone committing a crime again, rated an 18-year-old Black girl with some misdemeanors as a higher risk of committing a future crime than a 41-year-old white man who had previously been convicted of armed robbery multiple times. This algorithm was more likely to falsely label Black defendants as future criminals and more likely to mislabel white defendants as low risk, not future criminals. So the error rate skewed in different directions for different communities. And how did this happen?

The scores here were derived from questions that were either answered by the defendants, the people subject to the algorithm, or pulled from criminal records. And it doesn't include race; that's the thing, this algorithm never asked about someone's race. But it did include questions like, and I'm just going to highlight two here: one, was one of your parents ever sent to jail or prison? Or two, were you ever suspended or expelled from school? These questions replicate historical patterns of bias and racism, including the school-to-prison pipeline. Black students are suspended or expelled from school at higher rates, because of ongoing and historic racism in education institutions, and the simple fact is that Black communities are over-policed compared to white communities, so these families are more likely to have a parent who has been sent to jail or prison.

And so the questions that are being asked are just reinforcing these historical patterns and then saying, well, because of that, you're more likely to commit a crime. I know I went through a ton of examples, so I just want to pause and process some of them quickly; I'll see if I can scrub back and do a little summary. We had the Dutch childcare benefits example; the Amazon hiring example, which was about training data and the bias there; the Optum example, which was about how we optimize target variables, how we optimize for an output; and the risk assessment example, which was about reinforcing historical patterns of bias and racism. I know I just dumped a lot of information there, so I'm going to pause briefly and let us process that. As a side note, I don't know if our technical moderator is here, but I don't think I can see the chat right now, so just checking in. OK, never mind, it's been resolved. I'm seeing a lot of notes here. Amazing. I was also seeing a lot of resources.

Since we're all learning about this as well, if you have resources you want to share, definitely feel free to drop them in the chat for other folks; there's a lot of knowledge in this room. OK. So I've talked a lot about algorithmic bias, and now I'm going to go into what we all came here for, or what I hope you all came here for, which is the case for race-aware algorithms. A lot of these examples, not all of them but a lot of them, didn't explicitly factor in race. So what is the case for race-aware algorithms? Coming back to that initial check-in question: how do you feel when you hear "race-aware algorithms"? What does that make you think of? Just check in with yourself there, now that we're at a bit of a midpoint. I think the first thing we have to talk about is where we are now. Right now, most algorithms and companies do not typically collect race or ethnicity data, or other protected attributes like gender or disability.

That's in part due to the anti-discrimination laws we have, at least in the United States. If there's a different standard in your country, that's super interesting and we'd love to hear more; I know we've been looking at some of the EU standards as well. But we can already see from these examples that this doesn't prevent bias by any means. The presence of proxy data, like zip codes or last names, plus the implications I already talked about with the hiring resumes, makes this race blindness pretty ineffective. But there are some proposed regulations, and this is pretty important to talk about. There's an important civil rights framework that explains why race-aware algorithms could matter, and why there's also a challenge with them. In this legal framework, discrimination shows up in two ways. The first is disparate treatment: when a decision maker has discriminatory intent or uses protected attributes like race, gender, or disability as the basis for a decision. For example, that's making a hiring decision because someone is Black or white. Anti-discrimination laws, at least in the United States, focus a little bit more on disparate treatment.

And with algorithmic bias, companies can say things like, we didn't make this decision based on race, because our algorithm doesn't even take that into consideration; it's never been a factor. Where things get a little more interesting is the argument about disparate impact.

Disparate impact is basically when a facially neutral policy or decision, or a facially neutral algorithm, has a disproportionately adverse impact on a protected class of individuals. So, for instance, hiring employees based on height may have a disparate impact on women, because on average they tend to be shorter. What's interesting is that disparate impact can be legal if a business has a legitimate business interest that justifies the discriminatory practice.

Apologies, that's a lot of syllables. And this is where, for example, let's say you're hiring for a warehouse job and you're hiring based on height because you need someone to lift boxes from tall shelves. That's a legitimate business interest. But say instead it's a job where you're sitting at your laptop for most of the day, and they are still hiring based on height. Is that a legitimate business interest? Does height affect the performance of the job? The answer there is no. I'm seeing a question in the chat: how do most of these cases appear or even become public, since most companies keep their data sets and AI practices secret? Thanks so much, Mackenzie; that's actually an amazing segue to the next slide. I would say a lot of them come from hard-working journalists or researchers who are doing this work independently. They might be doing case-by-case testing, or they might have a report come in saying, hey, there's a weird discrepancy here, let's check it out. But that's a really good segue to bias testing. What I really want you to take away from the last slide, on disparate impact in particular, is that if you can't use race at all, you can't find disparate impact.

Race-aware testing can give insight into how representative a data set is. So, as we talked about, the training data bias, the evaluation data bias, and how accurate an algorithm is, meaning how it performs after it's been developed. You can see here some tables; apologies, this is the highest resolution I could get. The first two, the teal and blue ones on my screen, are the training and evaluation data for an algorithm, and you can see that the bar for white people in this data set is much higher than for any community of color. What does that say about what the data set is comprised of? And then, on the far right, the purple table is about accuracy, how well the algorithm performs, disaggregated by both race and gender, broadly speaking: white male, white female, and so on. Just seeing how the algorithm performs across groups gives a lot of insight. But to Mackenzie's point specifically, this is where things get interesting, because this information can help develop reports known as algorithmic impact assessments.
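As a rough sketch of what that kind of disaggregated report might look like (placeholder data only, not the tables on the slide), the snippet below computes group representation in a data set and accuracy broken out by race and gender.

```python
# Minimal sketch of disaggregated reporting: representation and per-group
# accuracy instead of a single overall number. Random placeholder data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5_000
df = pd.DataFrame({
    "race": rng.choice(["white", "Black", "Latino", "Asian"], size=n,
                       p=[0.6, 0.15, 0.15, 0.1]),
    "gender": rng.choice(["female", "male"], size=n),
    "label": rng.integers(0, 2, size=n),
})
# Stand-in for a model's predictions; a real audit would use the actual model.
df["prediction"] = rng.integers(0, 2, size=n)

# 1) Representation: what share of the data each group makes up.
print(df["race"].value_counts(normalize=True).rename("share of data"))

# 2) Accuracy disaggregated by race and gender.
accuracy = (df.assign(correct=df["label"] == df["prediction"])
              .groupby(["race", "gender"])["correct"]
              .mean()
              .rename("accuracy"))
print(accuracy)
```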

Some regulators, the folks who are holding these companies accountable, are pushing for companies and developers to create these assessments and provide them to the public and to regulators. They can include a lot of questions about how the algorithm was developed, what variables it factors in, all the things we just noted can contribute bias. This is really to ensure compliance with anti-discrimination and privacy laws, and it addresses a lot of other concerns people have around AI. It pushes accountability onto these businesses, and it pushes them to justify disparate impact when it shows up. Going back almost to the first few slides and that word "unjustifiable": having this data available to regulators would give insight into that process. And to clarify, this is a standard that is being pushed for and developed now, but it's not yet in practice, which is why, unfortunately, as members of the public we can't just look at an algorithm that is affecting our lives and ask, how does this impact me?

But that's something we really want to emphasize: it is really important to have that accountability for companies. What's also notable about having this information is that it will help regulators create standards and guidelines on disparate impact. Right now there's no consensus on what a fair algorithm or a fair decision is in a statistical sense; fair is a very nebulous concept. But regulators do have a rule of thumb. For example, in the employment context we have the four-fifths rule, which states that if the selection rate for a certain group is less than 80% of the selection rate of the group with the highest selection rate, there's a potential adverse impact on that group.
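Here is a minimal sketch of that four-fifths rule of thumb: compare each group's selection rate to the highest group's rate and flag any ratio below 0.8. The applicant and hire counts are made up for illustration.

```python
# Minimal sketch of the four-fifths (80%) rule of thumb described above.
def four_fifths_check(selected: dict[str, int], applicants: dict[str, int],
                      threshold: float = 0.8) -> dict[str, dict]:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {
        g: {"selection_rate": round(rate, 3),
            "impact_ratio": round(rate / best, 3),
            "potential_adverse_impact": (rate / best) < threshold}
        for g, rate in rates.items()
    }

# Made-up counts: group_a is selected at 60%, group_b at 30%.
result = four_fifths_check(
    selected={"group_a": 60, "group_b": 30},
    applicants={"group_a": 100, "group_b": 100},
)
for group, stats in result.items():
    print(group, stats)
# group_b's impact ratio is 0.5, below 0.8, so it gets flagged.
```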

For better or worse, that's become the standard, or at least the rule of thumb, for fairness in employment algorithms. But with all the new quantitative information that could come out of algorithmic impact assessments and bias testing, we could decide what fair decisions look like across different sectors and different contexts. So employment fairness might look different than health care fairness.

But we can't even do that without race-aware algorithms and that data to begin with. And I really want to stress that these tradeoffs are not new. People have been making these tradeoffs, decisions, and rationales for a long time. But now, with an algorithm, and importantly with transparency about how the algorithm turns out, you can quantify these tradeoffs and you can set these standards. For instance, if you remember the disparate impact argument, businesses have to be able to justify it; they have to show their work and justify their reasoning. So in this instance they would have to explain why they chose algorithm A over algorithm B from a business perspective, and regulators can really see those numbers and hold them accountable. You can't just say, oh, we wanted more productivity, so we chose this algorithm.

You have to say, look: what, quantifiably, made 7% better than 2%? Why was 10% better than 1% here? And I really want to say, I'm not going to dive too deep into it, but like any proposed system, this isn't a cure-all. Race-aware algorithms do have limitations and major concerns. Privacy is a really key one; check in with yourself. Protected classes like race or gender are sensitive information. From a consumer perspective, how would you feel if public or private institutions had that information, if you knew a hiring algorithm used your race or gender, or a bank loan factored in your race or gender? I want to leave some time for discussion, so just briefly on the example on the left: one third of Black patients, which is more than 700 people in that study, would have been placed into a more severe category of kidney disease if their kidney function had been estimated with the same formula used for white patients.

The algorithm used a formula that boosted the kidney function scores of Black patients by about 16%, and that ended up deprioritizing Black patients for kidney transplants. So it ignores the idea that race is a social construct. I'm not going to dive deep into this one because I've already gone in depth on a lot of bias examples, but on the NFL side, a group of Black retired NFL players sued the league, claiming that it used an algorithm that assumes white people have higher cognitive function to decide who gets compensation for brain injuries. So again: what are the limitations and concerns about using race-aware AI? Keep that in mind. But I do want to end with how race-aware algorithms, outside of bias testing, can promote fairness. This one is a really interesting example, and there's a lot of potential for where race-aware algorithms can go. A study reviewed an initially race-blind algorithm that would predict which students were more likely to graduate.

The algorithm's goal was to predict whether a student's college GPA would be higher than 2.75 and whether they would graduate on time. So this is: we have information from high school students and we're deciding what makes for a good college student. It factored in a lot of input data, including high school grades, high school test scores, credits, extracurricular hours, and standardized tests like the SAT and ACT. If you went to college in the United States, you might recognize some of these; I apologize for bringing you back to your college application days. On the surface, just like with the earlier examples, I guess this makes sense; these are familiar factors. But these factors can have different impacts on different groups of students. For example, who has more resources to take prep classes for the SAT, or who has the time to take on more extracurriculars? With that in mind, the relationship between these factors and the predictors of college success differs between white students, Black students, and other communities of color. And that's where a race-aware algorithm really shifted things.

In this study, the race-blind version of the algorithm admitted only 7% Black students into this hypothetical college. But the race-aware version, without lowering the admission standard, that GPA of 2.75 and the predicted likelihood of graduating on time, was able to admit 12 percentage points more Black students than the race-blind algorithm, for a total of 19%. It did that by recognizing that these factors don't predict college success for Black students at the same rate, or with the same accuracy, that they do for white students. So that's something to keep in mind: race-aware algorithms can acknowledge these inequities rather than reinforce these biases. It's interesting stuff. I've shared a lot of examples throughout this presentation, and I really just want to emphasize that there's a lot of potential here; bias testing is really critical, and so is having that information available for regulators and the public to understand what's going on.
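Here is a toy sketch of that race-blind versus race-aware comparison; it is not the study's model or data, and the groups, score gaps, and thresholds are invented. The idea it shows: when the same observed score under-measures one group's readiness, a model that can see group membership corrects for that and admits more of that group at the same predicted-success bar.

```python
# Toy sketch of race-blind vs. race-aware prediction under the same standard.
# Synthetic data; not the study's model, data, or results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
group = rng.integers(0, 2, size=n)                 # 0 = majority, 1 = minority
readiness = rng.normal(0, 1, size=n)               # latent college readiness

# Same readiness, but group 1's observed test scores sit lower on average
# (e.g., less access to test prep), so the score under-measures their readiness.
score = readiness - 0.8 * group + rng.normal(0, 0.5, size=n)
success = (readiness + rng.normal(0, 0.8, size=n) > 0).astype(int)  # "GPA >= 2.75"

blind = LogisticRegression().fit(score.reshape(-1, 1), success)
aware = LogisticRegression().fit(np.column_stack([score, group]), success)

def admit_rate(model, X, threshold=0.5):
    """Admit everyone whose predicted success probability clears the bar."""
    admitted = model.predict_proba(X)[:, 1] >= threshold
    return admitted[group == 1].mean(), admitted.mean()

print("race-blind (minority rate, overall):", admit_rate(blind, score.reshape(-1, 1)))
print("race-aware (minority rate, overall):", admit_rate(aware, np.column_stack([score, group])))
```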

There's a lot of potential here. I know we have about five minutes left, and I really wanted to leave that time for a closing discussion and reflection, or just to open it up to questions. I'd love to know, coming back to the beginning: how do you feel about race-aware algorithms at this point, given all of this information? And also, if race-aware algorithms are going to be used, what additional stakeholders and safeguards should be part of the design and development of those algorithms, in particular the race-aware components, and how should they be involved? AI policy is still being developed, and the concept of incorporating race awareness into AI policy and regulation is even newer, so I am super curious about your thoughts, impressions, and questions. If you want me to scrub back so you can take another look at a slide, just say so. Thank you so much for coming and listening to this workshop; I really appreciate it. I'm seeing a comment saying this is an important talk about inequality, but that we need to be careful it doesn't cause unintended negative consequences. Yes, it's definitely not a cure-all.

And there are places where we have to be really deliberate and really careful. Yeah, absolutely. I'm seeing "let's connect." Absolutely; my presentation and slides are up on the WomenTech platform, but I can also drop my contact information in here if anyone is interested in reaching out and hearing a little bit more. I hope it's OK that I'm reading these out for those who maybe don't have the chat open. "I think there could be a big argument for using algorithms for some business topics; quantitative analysis might not be the fit for all situations." Yeah, absolutely. There are places where it's really a more holistic process. Going back to the education example, reading essays is not the same as a score one might assign an essay. We're talking about quantitative analysis in the algorithmic context, so that's what we're focusing on, but it is definitely not a cure-all, and definitely not something we should apply to everything. "The human element, team building, for example: sure, it could be measured, but not all historical data can predict future actions, so perhaps businesses should limit their dependence on AIs for certain decisions." Yeah, again, we talked about historical bias; there are so many limitations to using historical patterns.

I don't think that the last 20 years of hiring decisions should dictate the next 20 years of hiring decisions, for instance. Let me see: "unstructured data will also provide insights that you would not normally see in quantitative data only; it's a combination that is needed to correctly build an algorithm that reduces bias." Yeah, I'm super curious about that. I'm very familiar with how the way we classify data and tag things can introduce bias, so I'm curious how that could reduce bias. "I went to a great presentation about Indigenous governance and neuroscience in Canada; they had great insights into their own governing bodies and how they interface with federal government entities." I think that's actually a really good point, Mary. Let me collect my thoughts: we haven't even talked about the last question, which is what additional stakeholders and safeguards should be part of this design. Who is giving feedback? Who has the autonomy to say, we don't want this dictating this part of our lives?

And going back to the point about bias testing: I talked mostly about regulators, but should this information about bias be made available to the public, so people can decide whether they want these algorithms in their communities? So, how are communities integrated, and how much autonomy do they have in making these decisions? And back to Eileen, again, I'm just reading these out for folks who maybe don't have the chat open, which is totally fine: "using NLP and categorizing phrases and survey data, for instance." That's a really good point. Especially with how folks talk, for example in immigrant communities, NLP, if you aren't careful, can structure data in ways that invalidate some responses. So making sure we know what insights are there, and also what can show up when we classify, clean up, and post-process. I see: "do you believe citizens' commissions might be an opportunity to educate and provide solutions?" I'm not sure exactly what you mean by citizens' commissions, but I am always up for educating more and sourcing potential solutions.

I'm not sure about the moderation, but I think we're just about at time. I'm seeing a lot of other comments; thank you so much for chiming in. Again, these are reflection questions that we're all thinking about and learning from. If you're looking for more resources to learn about this, I'm happy to connect, happy to send you some of the articles these examples were lifted from and some of the research papers we pulled our knowledge from. I'm very appreciative of all of you coming to join and listen in on this.

So, yeah, thank you, thank you so much again. I'm seeing a lot of knowledge in the chat as well, so if you want to drop your LinkedIn, your email, whatever works for you, to connect with other folks on here, that would be amazing. Thanks so much again. I'm going to close out and check in with our moderator, but I'll drop my information one more time in the chat for anyone who wants to say hi.