Narjes Boufaden at WomenTech Global Awards 2020

Automatic Summary




Narjes Boufaden on AI and Bias: A New Perspective

Introduction

Narjes Boufaden, founder and CEO of Keatext, firmly believes that artificial intelligence is transforming every facet of life, from banking to ecommerce. But she questions whether these advancements come with ethical, diversity, and inclusion challenges. In her view, AI could help organizations foster harmonious relationships with the people they serve. This is why she founded Keatext, a disruptive platform that empowers brands to leverage customer feedback for a superior customer experience.

The Impact of AI & the Question of Bias

Narjes weighs the pros and cons of AI in daily life. She loves how AI helps her beat traffic on Waze or find a new song to enjoy on Spotify. But as a woman entrepreneur with an immigrant Muslim Arab background, she recognizes how AI can introduce unconscious biases into our society.

With the goal of stimulating a discussion on AI diversity and AI bias, she offers a fresh perspective on how we can use AI to counteract bias and build a better society.

The Conundrum of Bias in AI

She explains that AI can result in biased decisions, mirroring human biases. She raises questions like:

  • Is the unfairness stemming from humans or machines?
  • How can we use AI's benefits but prevent the reproduction of bias?
  • Can we train AI to see the world as we want it to be, rather than as it is?

Addressing Human Bias in AI

Narjes takes a deep dive into these important questions and proposes innovative solutions. Tapping into her experience and expertise in the sector, she stresses that the responsibility for bias in AI lies with us, and that improving diversity in the AI community can offer new perspectives on bias.

The Role of Diversity and Fairness in AI

Organizations such as Microsoft and the nonprofit AI4ALL are already taking strides to increase diversity in the AI industry. She believes that diversity among those who create AI models helps prevent bias: more diverse teams bring multiple perspectives and approaches to mitigating bias.

Transforming AI Bias into Positive Impact

Narjes also champions the idea of using inherent AI bias to positively impact society. For instance, by deliberately skewing the data used to build AI models, the models could cancel out their own biases and reach a fairer outcome. One such approach, termed "counterfactual fairness", tests an AI model's predictions against a hypothetical world in which sensitive, bias-prone attributes are changed.
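To make the idea concrete, here is a minimal sketch of a counterfactual fairness check in Python. Everything in it (the toy model, the feature names, the `predict` interface) is a hypothetical stand-in rather than any real system: the pattern is simply to flip the sensitive attribute while holding everything else fixed, then compare the predictions.

```python
# Minimal sketch of a counterfactual fairness check (hypothetical model and data).
def counterfactual_check(predict, candidate, sensitive_key="gender",
                         swap={"female": "male", "male": "female"}):
    """Return True if the prediction is unchanged when the sensitive attribute flips."""
    original = predict(candidate)
    counterfactual = dict(candidate)                                 # identical candidate...
    counterfactual[sensitive_key] = swap[candidate[sensitive_key]]   # ...except gender
    return predict(counterfactual) == original

# Toy stand-in for a trained screening model (assumed for illustration only).
def predict(resume):
    return "interview" if resume["years_experience"] >= 5 else "reject"

resume = {"gender": "female", "years_experience": 7}
print(counterfactual_check(predict, resume))   # True: this toy model ignores gender
```

A model that fails this check is basing its decision, directly or indirectly, on the sensitive attribute.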

She concludes by posing more thought-provoking questions that challenge traditional notions of diversity and bias.

Conclusion

Narjes's perspective on AI and bias invites us to see not only the risks but also the opportunities. While technical solutions are part of the answer, she believes the human element in AI development is crucial. Only through diverse teams striving for fairness and employing innovative strategies can we ensure AI becomes a tool for positive transformation rather than a force that reinforces established biases.



Video Transcription

And the next speaker that I'm going to announce is Narjes Boufaden, founder and CEO at Keatext. Artificial intelligence is transforming every walk of life, from banking to telecommunications, social media to ecommerce. But as with any technological advancement, each development comes with questions and challenges about ethics, diversity, and inclusion. Since AI mimics human cognition, do these technologies reproduce the human biases observed in society? Narjes has a vision of a world where AI helps organizations and companies foster harmonious relationships with the people they serve. She founded Keatext to offer a disruptive platform that helps brands leverage customer feedback to create a powerful customer experience.

So, being a woman entrepreneur with an immigrant Muslim Arab background in the AI tech space, I had my share of anecdotes and cliches that come with the combination of biases attached to all these attributes. So today, to contribute to one of the most important and impactful discussions in AI, diversity and AI bias, I will try to bring a different perspective on how AI can help mitigate bias and contribute to building a better society. Artificial intelligence is already changing society at a faster pace than we realize. This is not always as visible as robots or self-driving cars. In fact, most of the time it's algorithms embedded in the technology we use every day. It's impacting our lives, habits, and behavior steadily as it sneaks its way into critical industries such as transportation, entertainment, retail, financial services, or health care. Most of the time, we enjoy the benefits of AI, though. Think of applications like Waze, which helps us navigate traffic to get to our appointments faster. How many times has Waze helped me make it to my meeting on time because it suggested roads I never thought of to avoid heavy traffic? Think of Spotify, which learns our music and listening habits to create personalized recommendations and explore new songs that we would probably enjoy.

It can even suggest music with the right beat for my morning jogging. Or think of Amazon's recommendation engine, which shows products I might like using my clicks on the site and comparing them with other customers' selections. Using AI, it was possible to streamline the browsing and checkout experience, helping me place orders faster and move on with my busy day. So AI can assist us in achieving our goals faster, easier, and more effectively. AI technology is an incredible achievement because it can mimic human cognition and decisions at scale. But the truth is, how and why did we end up surrounded by AI scandals, such as women tossed aside by Amazon's recruiting tool, minorities denied mortgages by financial institutions, or chatbots being taught hate speech by their users? In this talk, I will try to answer three important questions. Is unfairness the result of human or machine? How do we get the most out of AI without reproducing bias? And can we make AI learn the world as we wish it were, instead of the world as it is today? So let's start with the first question: is unfairness the result of human or machine?

Over the last two decades, we have witnessed scandals showing that AI can be biased in the same way that the human brain is or can be. We learned from these scandals that AI models can not only learn to reproduce the same biases that exist in our world; in fact, they deploy them at scale. The implications can be devastating, especially in sensitive areas such as HR and hiring, criminal justice, or health care. If we are letting our own human biases into the artificial intelligence we create, can we really blame the AI? These algorithms have the power to make choices on their own, but at the end of the day, they are human productions. As I talk about the many ways in which bias can be introduced into AI, you will see that it's not so easy to draw boundaries between the roles of human and machine in creating bias. First, I'll start by explaining how we develop AI models and the stages at which bias can be introduced. I'll be focusing on one particular scandal, surrounding Amazon's recruiting software, to provide some examples.

In 2014, Amazon began developing an AI to optimize the hiring process by automatically sorting through resumes and ranking candidates. Nearly a year after its launch, Amazon realized that the AI was tossing out resumes from women candidates. Amazon shut it down in 2015, but the case still reveals today how easy it is to introduce bias into AI. So let's have a look at the first stage of the development of an AI model. The first place where bias can be introduced when developing AI models is at the data collection stage. Data is the input that is fed to the AI; it's the information it uses to make a decision based on a combination of criteria. Why is data so important in potentially creating bias? Because it is the representation of the world that we as humans provide to the AI model, and the AI model will make up criteria to make decisions based on that representation. So let's take the example of Amazon again, which was dismissing women candidates. The AI model was trained on resumes that Amazon had received over the past years. And since the majority of these resumes were from men, the representation of the world submitted to the AI model, from which it learned its decision criteria, was mainly made up of men.

So the AI model learned to recognize good male candidates but couldn't recognize female candidates, because its representation of the world didn't account for women. In other words, the data it used was skewed to favor men. Now, bias can also be introduced at the data preparation stage, which involves selecting attributes for the algorithm to consider or ignore in the decision-making or training stage. Again, in the case of Amazon's recruiting tool, engineers taught the AI thousands of terms that appeared on the resumes. The terms that held the most significance for the algorithm were verbs such as "executed" and "captured", which were more common on men's resumes. So in addition to the skewed data, the attributes provided to the algorithm weren't ones well represented in women's resumes. This means that the selection process was creating another layer of bias, partly because of the skewed data fed to the AI model, but also because the attributes, or features, used to prepare the data for training were selected through a process that by definition was prone to introducing bias.
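To make these two stages concrete, here is a minimal sketch in Python of the kind of audit a team could run before training: first check whether the collected data is representative, then flag selected terms that are heavily skewed toward one group. The resumes, terms, and thresholds are invented for illustration; this is not Amazon's actual pipeline.

```python
from collections import Counter

# Hypothetical labeled training resumes (invented for illustration).
resumes = [
    {"gender": "male",   "terms": {"executed", "captured", "led"}},
    {"gender": "male",   "terms": {"executed", "managed"}},
    {"gender": "male",   "terms": {"captured", "built"}},
    {"gender": "female", "terms": {"organized", "led"}},
]

# 1) Data-collection audit: is the sample representative of all candidates?
print(Counter(r["gender"] for r in resumes))   # Counter({'male': 3, 'female': 1})

# 2) Data-preparation audit: flag terms appearing (almost) only in one group.
def skewed_terms(resumes, threshold=0.8, min_count=2):
    by_term = {}
    for r in resumes:
        for term in r["terms"]:
            by_term.setdefault(term, Counter())[r["gender"]] += 1
    return {t: dict(c) for t, c in by_term.items()
            if sum(c.values()) >= min_count
            and max(c.values()) / sum(c.values()) >= threshold}

print(skewed_terms(resumes))  # {'executed': {'male': 2}, 'captured': {'male': 2}}
```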

Then there is the third stage, which is perhaps the most complex to improve, as it is purely the result of mathematical and statistical algorithms. At the training stage, it's hard to explain and audit the logic of a model's decisions regarding fairness. Most machine learning algorithms are black boxes, which means that we do not know the inner mechanics they use to make predictions. The algorithm selects and uses the attributes that will optimize the model and reduce errors. But a machine has no social awareness of the attributes it selects, which means it cannot have any regard for fairness. So again, in the Amazon case, the discovery that the recruiting tool was dismissing women and basing decisions on gender was only one piece of the problem. Not knowing how the bias was introduced made it even harder to correct, and that's why Amazon ended up shutting the tool down. And finally, the last stage where bias can be introduced is the validation stage. This is where humans are involved in assessing the outcomes of an AI for fairness. For instance, Amazon's tool was not used to evaluate candidates on its own; as we explained a bit earlier, it was simply providing recommendations, which were assessed for fairness to determine the success of the AI model.
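As one concrete way to picture that validation step, here is a minimal sketch of an outcome audit in Python: compare the model's positive-recommendation rate across groups. The outcomes are invented, and the metric shown, a demographic-parity gap, is one common choice among several, not necessarily what Amazon's reviewers used.

```python
# Sketch of a validation-stage fairness audit (hypothetical outcomes).
# Each pair is (group label, model recommendation).
outcomes = [
    ("male", "interview"), ("male", "interview"), ("male", "reject"),
    ("female", "reject"), ("female", "reject"), ("female", "interview"),
]

def selection_rates(outcomes, positive="interview"):
    """Share of positive recommendations per group."""
    totals, positives = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (decision == positive)
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, "demographic-parity gap:", round(gap, 2))
# male ~0.67, female ~0.33, gap ~0.33: the model favors one group
```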

Without any validation of the results, if there weren't any people involved in that process, the discrimination would have been replicated at scale. So it is at the validation stage that outcomes can be analyzed and fairness assessed, and this is the place where people can have a strong weight in deciding whether the model is fair or not. We've been talking a lot about fairness, but trying to define what is a fair prediction or decision by an AI model only brings up more questions, such as: who decides what is fair? How do we measure it? And at what point do we say this AI is fair enough? To explain the complexities around these questions, let's look at another concrete example. In 2013, researchers at the University of Washington discovered that women were not shown as often as men when searching job titles like CEO on Google Images, even though 27 percent of CEOs were women. At that point, only 11 percent of the image results showed women. In this case, the question would be: what is the fair percentage for Google Images to show? Is it 27 percent, the actual share of women CEOs back then? Or should it be 50 percent, on the grounds that this would be the fair answer even if the real world at that time didn't reflect that statistic?
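The choice between those targets maps directly onto an engineering knob. Below is a minimal, hypothetical sketch of fairness-aware re-ranking in Python: whatever target share is chosen, 27 percent or 50 percent, the ranker enforces it in the top results. The data, the greedy strategy, and the function names are illustrative assumptions, not how Google Images actually works.

```python
# Sketch of fairness-aware re-ranking toward a chosen target share (hypothetical).
def rerank(results, target_share, group="female", k=10):
    """Greedily pick items so the running share of `group` tracks target_share."""
    grouped = [r for r in results if r["gender"] == group]
    others = [r for r in results if r["gender"] != group]
    out = []
    while len(out) < k and (grouped or others):
        share = sum(r["gender"] == group for r in out) / max(len(out), 1)
        pick_group = (share < target_share and grouped) or not others
        out.append((grouped if pick_group else others).pop(0))
    return out

# Skewed input: 10 'female' images out of 30, mirroring an unbalanced index.
results = [{"rank": i, "gender": "female" if i % 3 == 0 else "male"}
           for i in range(30)]
top = rerank(results, target_share=0.5)
print(sum(r["gender"] == "female" for r in top) / len(top))  # 0.5 despite the skew
```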

At the end of the day, we are responsible for the presence of bias in AI. But with so many possibilities for the introduction of bias, and the complexity of defining fair outcomes for an AI, the path forward is not at all clear. At the same time, we want to move forward with AI technology because we know that the benefits can be huge. So how do we get the most out of AI without reproducing bias? Perhaps technical solutions are not enough, and we need to focus on the human element in AI: building a more diverse community of AI professionals who will change the status quo and approach the issue of bias with new perspectives. As we mentioned in the Amazon recruiting example, the lack of diversity and perspectives had a devastating impact at each stage of the AI development process. In these technical challenges, the human element cannot be overstated. That's why many groups at companies such as Microsoft, and nonprofits like AI4ALL, are already working to increase diversity in the AI field.

And even in other spheres, the idea is that people from diverse backgrounds are better equipped to recognize, understand, and engage with bias, and ultimately to build an AI that achieves fairness. If Amazon had worked with a more diverse team of engineers in building its recruiting tool, we can imagine a different scenario, one that doesn't end in a scandal. Such a team could have foreseen that building the AI model from a majority of men's resumes would become a problem further down the road. So far, we have answered our first two questions: who is responsible for unfairness, and how can we get the most out of AI without reproducing bias? Now let's get to our last question: can we make AI learn the world as we wish it were, instead of the world as it is today? To answer that question, we need to start by rethinking our approach to the issue of bias in AI. Can we accept that bias is, in some ways, inevitable or very difficult to overcome? If we know that bias exists in AI, instead of trying to remove it, perhaps we can find a way to use it to impact society in a positive way.

For instance, we could skew the data used to build AI models so that these models ultimately cancel out their own bias and achieve a fair result. An example of this kind of approach that is already being developed is counterfactual fairness. It tests the model's predictions against a counterfactual world where sensitive attributes that are prone to bias, like gender for instance, are changed. A fair model would make the same decisions even with these criteria changed. So again, let's apply the counterfactual framework to Amazon's recruiting tool.

For example, we would reverse the gender attributes in the data used to train the model, so that it becomes insensitive to gender and gives women an equal chance. Now, if we take this idea a step further to make AI better represent the world as we wish it were, not simply the world as it is today, how would that look? Well, this isn't something new either. Keeping with the hiring example: in order to give women a fair chance at representation in C-suite positions, many companies and institutions are putting in place explicit rules to maintain a 50-50 representation of women and men. By creating and enforcing demand, we are opening up a world of possibilities and providing opportunities for more women to have an equal chance to contribute. Moreover, beyond skewing the data used to build AI models, we could envision a whole different way of developing AI models: each one built and optimized to serve different social considerations, and all of them together working to eliminate or compensate for biases. In such a case, no single AI would ever make a decision alone; instead, it would be part of a committee weighing individual decisions and combining results to better capture specifics, preventing the bias of one model from impacting the entire decision-making process.
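To illustrate the gender-reversal idea, here is a minimal sketch of counterfactual data augmentation in Python. The records and labels are invented: the pattern is simply that every training example gets a copy with the gender attribute swapped, so gender can no longer correlate with the outcome label.

```python
# Sketch of counterfactual data augmentation (hypothetical records and labels).
SWAP = {"female": "male", "male": "female"}

def augment(training_data):
    """For each (record, label), also emit a gender-flipped copy with the same label."""
    augmented = []
    for record, label in training_data:
        augmented.append((record, label))
        flipped = dict(record, gender=SWAP[record["gender"]])
        augmented.append((flipped, label))   # same label, opposite gender
    return augmented

data = [({"gender": "male", "years_experience": 7}, "interview")]
print(augment(data))   # both genders now carry the 'interview' label equally
```

And the committee idea could be sketched along the same lines, with a majority vote combining models built around different criteria, so no single model's bias drives the final decision. The three toy models and their thresholds are assumptions for illustration only.

```python
from collections import Counter

def committee_decision(models, candidate):
    """Majority vote across models; no single model decides alone."""
    votes = Counter(model(candidate) for model in models)
    return votes.most_common(1)[0][0]

# Toy models optimized around different criteria (assumed for illustration).
models = [
    lambda c: "interview" if c["years_experience"] >= 5 else "reject",
    lambda c: "interview" if c["skills_match"] >= 0.7 else "reject",
    lambda c: "interview" if c["years_experience"] + 10 * c["skills_match"] >= 10
              else "reject",
]
print(committee_decision(models, {"years_experience": 6, "skills_match": 0.8}))
# -> 'interview' (all three toy models agree here)
```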

So altogether, we've seen how both humans and machines can introduce and contribute to unfairness. AI can improve our lives at scale, but it is a two-sided coin, because it can reproduce bias too, and in an even more damaging way. Even though it's our responsibility to solve the problem of bias in AI, the very idea of ensuring a fair outcome brings into question much more than technical solutions. Knowing all this, could we think differently about the bias that AI can introduce? Is it possible to use bias to positively impact society? Could creating AI models biased to generate positive outcomes for minorities be a temporary solution to accelerate positive changes in our society, restore balance, and compensate for a whole history of unfairness? And most importantly, is our society prepared to take that leap and rethink the types of diversity we embrace? So, thank you.

Thank you, Narjes. Those were excellent questions, which gave us food for reflection and are good to leave our audience with. Our time is up, but we would be glad to have you speak at our event again. Thanks a lot for this really interesting presentation; I really loved its insights. Thank you very much. Stay with us for the networking if you have some time, and see you. Thank you.