Silvia Heredia Minthorne de Kläner
Vice President, People and Organization
Understanding Responsible AI: Balancing Innovation with Ethics
Why the Growing Passion for AI Ethics?
The world is buzzing about Artificial Intelligence (AI) right now—it's featured prominently in newspapers, magazines, and our email inboxes. But amidst this excitement lies a crucial topic: the intersection of AI and ethics. My passion for this subject ignited at a CognizantX conference in London about ten years ago, where the dialogue surrounding technology ethics was at the forefront.
What struck me the most was the **diversity** of the attendees. We had a balanced representation of genders, various age groups, and different cultural backgrounds—something rarely seen at tech conferences. This conversation around ethics, particularly concerning robotics and AI, remains relevant and pressing today.
Defining Responsible AI
But what does **responsible AI** mean? It represents a blend of three core principles:
- Ethical: Ensures that technologies align with societal values.
- Transparent: Promotes openness in how AI systems work.
- Accountable: Holds developers and organizations responsible for AI outcomes.
Given the global diversity and differing value systems, aligning these principles becomes a formidable challenge.
The Impact of AI on Enterprises
AI holds the potential to revolutionize industries by driving innovation and efficiency. However, it can also perpetuate biases and erode trust through lack of transparency. Here’s why understanding responsible AI matters, especially in enterprise settings:
1. **Innovation vs. Bias:** AI can introduce biases if developed by homogeneous teams or if trained on unrepresentative data.
2. **Localizing AI:** As AI applications are developed, localization becomes critical. AI that works in one culture may fail in another due to varied contexts and values.
3. **Data Dependency:** The future of AI lies in the data that drives these systems, making data management a significant concern.
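The bias risk in point 1 usually starts with skewed training data. As a minimal sketch, assuming records are simple dicts with a demographic field (the field name, sample data, and threshold below are invented for illustration), a first crude check is simply to measure each group's share of the dataset:

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.15):
    """Return {group: (share, under_represented?)} for one attribute.
    A crude first check for unrepresentative training data --
    not a substitute for a real fairness audit."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: (n / total, n / total < threshold)
            for group, n in counts.items()}

# Hypothetical sample: a CV dataset heavily skewed toward one group.
cvs = [{"gender": "male"}] * 85 + [{"gender": "female"}] * 15
print(representation_report(cvs, "gender", threshold=0.3))
# → {'male': (0.85, False), 'female': (0.15, True)}
```

A real fairness audit goes far beyond head-counting, but even a check this simple makes data skew visible before a model is trained on it.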
How to Innovate Ethically in AI
To innovate ethically with AI, organizations must prioritize a few essential practices:
1. **Build Diverse Teams:** A diverse team contributes varied perspectives that can help identify and mitigate biases in AI systems.
2. **Emphasize Critical Thinking:** Encourage a culture where questioning and discussing AI outcomes is not only acceptable but expected.
3. **Select the Right Use Cases:** Choosing the appropriate applications for AI is critical. For instance, while autonomous vehicles attract much attention, industrial applications, like mining or warehouse robots, may provide greater social benefits and safety.
Common Pitfalls: Learning from the Past
Several past initiatives highlight the repercussions of neglecting responsible AI practices:
- McDonald's AI Drive-Thru: Scrapped due to poor performance with diverse accents.
- Microsoft Tay: The AI chatbot was taken offline within 24 hours due to biased learning from users' interactions.
- Amazon's AI Recruitment Tool: Scrapped for perpetuating gender bias after using ten years of male-centric data.
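The recruitment-tool case can be made concrete with a toy model. The corpus, CVs, and scoring rule below are entirely invented for illustration; the point is that a naive scorer built only from frequencies in a skewed history gives zero weight to anything that history never contained:

```python
from collections import Counter

# Invented mini-corpus standing in for years of historical CVs
# from a male-dominated applicant pool -- illustration only.
historical_cvs = [
    "software engineer java leadership chess club",
    "engineer c++ captain football team",
    "developer python chess club leadership",
]
vocab = Counter(w for cv in historical_cvs for w in cv.split())

def score(cv):
    """Naive 'model': average historical frequency of the CV's words.
    Words absent from the skewed history contribute zero."""
    words = cv.split()
    return sum(vocab[w] for w in words) / len(words)

print(score("captain chess club"))         # ~1.67
print(score("captain womens chess club"))  # 1.25 -- lower, purely because
                                           # "womens" never appeared in the history
```

Swapping "womens" for any other word absent from the historical corpus produces the same penalty: the bias lives in the data, not in the scoring code.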
These cases emphasize that it is **not just about technology but data and human insight.** Ethical considerations must guide all stages of AI development.
The Future of Enterprise AI
As AI transforms enterprise technology, new business models and structures will emerge. We may see AI-driven **augmented workflows** where human oversight is paired with AI agents or teams.
However, challenges such as data security and ethical dilemmas will intensify. Transparency in AI operations and decision-making processes must be prioritized.
Key Takeaways for Implementing Responsible AI
To cultivate a responsible AI ecosystem, consider these steps:
1. **Start with Data:** Look at customer, employee, product, and partner data comprehensively.
2. **Engage Stakeholders:** Collaborate with all relevant stakeholders to enhance AI's effectiveness.
3. **Promote Critical Thinking:** Encourage discussions that challenge assumptions about AI implementations.
4. **Treat AI with Respect:** Recognize the impact of AI and avoid rushing its integration without proper assessment.
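The "start with data" step can be sketched as a plain inventory mapping each data domain to the people, agents, and robots that consume it. All domain and role names below are hypothetical, chosen only to mirror the customer/employee/product/partner framing above:

```python
# Illustrative data inventory for the "start with data" step.
# Domains and consumers are invented examples, not a standard.
inventory = {
    "customer_360": {"sales_rep", "support_agent", "recommendation_ai"},
    "employee_360": {"hr_partner", "scheduling_ai"},
    "product_360": {"product_manager", "maintenance_robot"},
    "partner_360": {"procurement", "compliance_ai"},
}

def consumers_of(domain):
    """Who -- person, agent, or robot -- needs this data to do their job?"""
    return sorted(inventory.get(domain, set()))

def domains_for(consumer):
    """Start from the data, not the process: which domains feed this role?"""
    return sorted(d for d, users in inventory.items() if consumer in users)

print(consumers_of("customer_360"))  # ['recommendation_ai', 'sales_rep', 'support_agent']
print(domains_for("compliance_ai"))  # ['partner_360']
```

Starting from an inventory like this, rather than from a business process, makes visible who, human or AI, actually depends on each piece of data.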
Conclusion
As we stand at the crossroads of AI innovation, the opportunity to create ethical, accountable, and transparent AI systems is within our grasp. By focusing on responsible AI, we not only protect societal values but also harness the true potential of technology for good.
Thank you for engaging with the conversation on responsible AI. Let's continue to ask the vital question—**why?**—as we navigate this rapidly evolving landscape together.
Video Transcription
Why am I so passionate about this topic? Everybody's talking about AI right now. It is literally everywhere: we open every newspaper, every magazine, every email, and AI is in there. I got interested in AI and ethics about ten years ago, when I attended a CognizantX conference in London, and I was absolutely taken by a number of things. First of all, the conversation around ethics and technology, and data in particular, was absolutely at the forefront. The thing that really caught my attention was the diversity of the attendees. Among the attendees, we had full demographic representation. We had fifty-fifty men and women, various different ages, various different backgrounds. It was the best tech conference representation I have seen. Two other things also stuck in my mind. The first focused on robotics.
There were a lot of conversations around ethics and drones and swarms, and look at where we are ten years later. So that definitely stuck in my mind. But what also stuck in my mind was that people have been thinking about AI and ethics for a very, very long time. It might have burst into our sphere in the past couple of years, but I take comfort in the fact that a lot of people have been putting a lot of effort into talking about ethics, AI, data, and technology, and how they come together, now that we're at this point.
Okay. So let's start with the presentation. I've only got one slide. I like to hold the agenda up so that people can see what we're going to be talking about. So if you flip on to the next one. Right. So why responsible AI? What is responsible AI? Balancing innovation and ethics, and the impact on enterprises. I'm a very practical person. There's a lot of theory around AI still, and there will continue to be for a number of years, so I'd like to give some practical actions that I believe will work. So, responsible AI. How do you even define what responsible AI is? It comes down to three words that come together: ethical, transparent, and accountable.
It means ensuring that all of our technologies are designed and used in a way that aligns with societal values and ethical principles. Now, aligning societal values is a challenge given the differences we all have if we start breaking humanity down into the various tribes that we have. Do you adhere to the top global values, such as do not kill, stick with those, and leave political, religious, and specific cultural values and beliefs out of the mix? We know we can't. The content that AI uses, especially in the public domain and the consumer domain, is any content it can get its proverbial hands on. It just can't help it. So why does this matter? Why does this matter in the enterprise sector as well? AI has massive potential to drive innovation and efficiency. Not only that, AI has a huge potential to drive parity, or division.
So we do need responsible practices, because there could be some serious unintended consequences. For instance, everybody's heard about biased algorithms: they can perpetuate discrimination, and a lack of transparency can erode trust. We've seen this in multiple cases. Now, taking it into something real: take an enterprise application that is absolutely loaded with AI capabilities, designed and developed from a single geographical location, or from a single industry perspective, or by a very specific group of individuals who are very similar to each other. Try to take this to another region or industry and the challenge is just multiplied, because for enterprise applications from now on, it's not just about the technology. It's about the data. You add on top of that the array of different AI legislations that are currently coming to bear, and suddenly you have got this mix of data, systems, legislation, and processes combined with our global world. It's no different from what we've experienced before.
So, you know, think about some of the tech that we currently build and how we look at things. Right now, we often get told that if a system is implemented by a single function, finance or IT, it will automatically have an IT or a finance bias built into it. So how do you start localizing AI? What does that mean? How do we deal with that? We already know that we can do legislative localizations, for example for Brazil, picking a country there. But how do you then do the AI localization piece? Here, it is absolutely key that you have a diverse and representative team. It's more critical now than it's ever been before to actually build lasting, well-thought-through, creative solutions.
And that is because it's more about the data than it is about the technology. We cannot put all the emphasis, though, on the people building the solutions. I think that would be quite an unfair statement. I think there's also going to be an increasing emphasis on us, as individual consumers but also as enterprise consumers, accessing the solutions that are coming out. We need to emphasize critical thinking, and we need to increase the value of the question why. You know, now is not the time to be silent in the meeting room when we're looking at these solutions. Now is definitely the time to be heard, and the question why needs to be raised on a pedestal, and anybody asking that question should also be elevated.
Now, balancing innovation and ethics. It's a little bit the same conversation that we've had about balancing innovation and compliance. There's always the "this is what you can do and this is what you can't do." We know that AI is going to transform industries. It's going to transform industries by automating processes and enhancing decision making, but it's also, most importantly, going to be creating new opportunities. For example, we talk about AI and predictive maintenance in manufacturing. Everybody focuses on it and says it can significantly reduce downtime and cost. But take it to a different level: predictive maintenance in manufacturing reduces the number of emergency shutdowns or emergency situations that you have. Emergency situations are highly stressful, and they often put people and assets at risk.
So being able to calm the situation down by using AI and preventative maintenance, it's all about people and how we protect ourselves. That is effectively what we're talking about here. It is a new cycle of an old challenge that we all have, and I've alluded to that a couple of times already. Now, how do we balance innovation and cost control? I mentioned balancing innovation and compliance. We all know that if we don't get this right now, we are going to be creating a huge amount of technical debt, because the debt that we'll be creating is not just technical. It is data debt. And data debt tends, as we all know, to stick around beyond the system having gone. So let's talk about a couple of good examples of data debt.
And these are really good examples because, actually, they were exceptionally well-intended initiatives, started around, you know, five to eight years ago. We had the McDonald's AI drive-through initiative, and that was all about automating the process of ordering. However, the background noise and the variety of accents in the way people speak, it just couldn't understand them. So it was scrapped, literally only last year. I certainly remember, and I'm sure all of you do as well, Microsoft Tay. Do you remember when Tay was launched in 2016? There was huge fanfare, and it was going to be the AI taught by humans. Well, it needed to be brought down barely twenty-four hours later, because the individuals who started to teach Tay manipulated it into creating responses that were offensive and racist.
So there's the whole piece around it not being about the technology: it is about the data, but it is also about us as humans. And that's why holding everybody in check and having that balance in place is the ethics bit that we really need to focus on. Final example: the Amazon AI recruitment tool, an initiative that really drew global attention. These are the types of tools we use every day, because it now makes complete and utter common sense to have AI look at CVs. Now, the challenge here was that when Amazon rolled it out, I think it was 2016 or something like that, and it was scrapped in 2018, they used ten years' worth of data to train the model. Ten years. That sounds really, really good.
But it had a gender bias in it, because it was trained on male CVs. As a result, they had to take it offline, because it was not representative of what Amazon wanted the future to be. And again, it is a data piece, not a technology piece. So, how to innovate ethically? There are two things, and I'm going to mention data again because it is the absolutely key piece. Data is now the royalty in what we are doing with AI, not technology, not IT. That's the key bit. The second is to pick the right use cases. Now, I mentioned use cases based on some of the COGX conversations, and how I was taken by the AI and robotics.
Now, robotics and AI is hugely, hugely important, but it's key to also focus on the right things. So let's talk about autonomous vehicles. I want to drive my car. I actually really, really enjoy driving my car. It is something that I take great pleasure in, and I know a lot of people take great pleasure in. But autonomous vehicles for individual commuters, for example, is something that is hitting the headlines. There is a quiet revolution taking place underneath, though, which is about industrial AI and manufacturing. Mining vehicles, warehouse vehicles and automated warehouses, driverless trains: this is the quiet revolution.
There is nothing better than thinking about how we can use driverless technology in a place where we can preserve life. For example, in a mine, which is a very dangerous environment; it doesn't matter whether it's underground or an open mine. Similarly, warehouse vehicles. You know, being able to do that picking piece without people wandering around in between forklift trucks, as well as driving them, just makes the whole setup a much safer environment for humans to be in. Whereas autonomous cars: what is the true value and business case? As I said, I quite like driving my car. That is one of my enjoyments. At the same time, are we ready to take the biggest step in AI ever and let a vehicle potentially make a life-and-death choice on our behalf? So the question is, how do we innovate? What use cases do we pick? And the data is the food of AI.
Now, what is the impact we're going to see in enterprise technology? We all know the words: AI can improve efficiency, it can give better customer experience. There are actually loads of people who prefer talking to a bot rather than talking to another human being. But the biggest thing is we are going to see new business models. Enterprise tech focuses on AI in processes and decision making, bringing in things like AI agents as supervisors. Imagine it's your managerial job, and you come into work, but, actually, your team is not human. Your team is agents. How do you manage them? What is the interaction? How deeply do you need to understand the work that those AI agents are doing? How do you add value to their work?
All of these things are starting to mix up and change the way that we're going to work, but also the way we interact. We already know about call center technologies. That is a brilliant, brilliant way of automating and improving your customer experience. That all exists. Or take the industrial AI I mentioned: the mining, warehousing, preventative maintenance. Again, really, really important. We've also got another area that is really mature: risk assessment in insurance, banking, and corporate compliance. AI decision making is rife in that space. So we already know that the impact on office-based work is going to be significant, and this is where the enterprise applications come into play. Trades are going to be impacted too. However, I expect that trades are going to be impacted with AI augmentation rather than automation of the work that you currently do.
We are also moving into new business models. We currently have traditional industries or traditional companies, and then you've got the digitally native organizations. Moving forward, we're going to have the traditional companies, digitally native organizations, and AI-native organizations and companies. And as we bring AI-native companies into our ecosystem, the question of why and the critical thinking are going to become even more important; we can't just take it on assumption that this is right. We're also going to be driving changes in how companies are structured, the way we work, and how we design our work and living spaces. We're going to be accommodating humans, pets, robots, and AI. So there are a lot of complications that are going to be moving in there. It is in its infancy.
And for those of us driving and participating in these activities, we are going to have to be critical and take a broader view, continuously asking the question why. Okay. There are going to be challenges around data security. We heard about the security piece in a previous presentation, and it is going to be absolutely there. The ethical dilemmas are going to be there, and there is going to be a massive need for transparency, something we're going to have to think about up front rather than as an afterthought. So, finishing off with some practical actions. I could talk to you about implementing ethical guidelines or promoting transparency and mitigating bias. I'm sure all of you are doing a lot of work in your enterprises, your companies, your organizations to look at these and go, we have got new policies, we are doing this. But so what? What are the practical steps that we should be thinking about? We need to start with the data.
Think customer 360, employee 360, product 360, partner 360. Who needs what information, whether person, agent, or robot, to complete their job successfully? Don't start from the business process. Start from the data. Engage the stakeholders. All of those who interact with customers, even those you don't think interact with customers, might actually interact with customers, or at least with that piece of data. Engage with them. Promote the critical thinking, and remember that AI is just another tech wave. Treat it with respect for the potential it has, but also don't just wave it into your organization or into your life. Treating it as special could create the biggest technical and data debt we have ever seen. So, finishing off: find the appropriate value and use cases. Focus on the data.
Be prepared to be excited, and explore the wave of AI-native products and services. And even better, develop your own. Thank you very much for listening.
Thank you so much for your thought-provoking and timely session on responsible AI, Helena. It was such a pleasure to have you with us. I greatly enjoyed your keynote, and we appreciate you being part of the Chief in Tech Summit.
Thank you, Anna. It was a pleasure.
Thank you so much. Have a great day. Looking forward to having you at our future events. Bye.