Responsible AI: Balancing Innovation, Ethics, and Impact in Enterprise Technology by Helena Nimmo
CIOReviews
The Importance of Responsible AI: Balancing Innovation and Ethics
Artificial Intelligence (AI) has become a buzzword that fills the pages of newspapers, magazines, and emails. With its growing influence on various sectors, the conversation surrounding AI and ethics is more critical than ever. Having delved into this topic over the past decade, I have witnessed how discussions about ethical technology and data are reshaping our understanding of AI.
Why Focus on Responsible AI?
Responsible AI is about balancing innovation with ethics while considering its impact on enterprise operations. To understand responsible AI, it’s important to define three core principles: ethical, transparent, and accountable.
- Ethical: Align technologies with societal values and ethical principles.
- Transparent: Ensure clarity in AI functioning to foster trust.
- Accountable: Hold developers responsible for the choices made by AI systems.
Although the urgency around this topic seems recent, the groundwork of responsible AI has been laid over many years, characterized by ongoing discussions about the ethical use of technology.
Why Responsible Practices Matter
The potential of AI lies in its ability to drive innovation and efficiency. However, it also poses risks, such as:
- Perpetuating discrimination through biased algorithms.
- Eroding consumer trust through a lack of transparency.
As AI capabilities expand into various enterprise applications, it is crucial to consider the geographical and cultural context of these technologies. If an AI system is developed by a homogenous group, transferring it to a different region might amplify inherent biases. Therefore, engaging diverse teams in AI development is essential.
The Role of Diversity in AI Development
Diversity in development teams enhances the potential for innovative and effective solutions. To foster a well-rounded approach:
- Encourage participation from various demographics.
- Incorporate critical thinking into decision-making.
- Raise questions and foster open discussions during the development process.
Moreover, balancing innovation with ethical considerations resembles the age-old challenge of balancing compliance with business goals. By prioritizing the right use cases, companies can minimize risks associated with AI deployment.
Examples of Unintended Consequences
Several well-meaning AI initiatives have faltered due to a lack of foresight, emphasizing the need for critical evaluation:
- The McDonald's AI drive-through struggled with background noise and varied accents, leading to its eventual discontinuation.
- Microsoft’s Tay was taken offline within 24 hours after it began generating offensive content due to manipulation by users.
- Amazon's AI recruitment tool was scrapped after it demonstrated significant gender bias.
These cases illustrate that ethical considerations must guide the collection and use of data, not merely technology itself.
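The Amazon case shows the mechanism at work: a model trained on skewed historical data simply reproduces that skew. The following is a minimal sketch with invented CV data, not Amazon's actual system, showing how frequency-based scoring penalizes terms that were rare in an imbalanced training set:

```python
# Minimal sketch (hypothetical data) of how a screening model trained on
# imbalanced historical CVs reproduces that imbalance in its scores.
from collections import Counter

# Hypothetical historical hires: drawn overwhelmingly from one group,
# so group-correlated terms barely appear in the "successful" vocabulary.
historical_hires = [
    "captain chess club engineering",
    "engineering chess club lead",
    "engineering lead captain",
    "womens chess club engineering",   # the only CV with this group-correlated term
]

# "Train" by counting how often each term appears in past hires.
term_weights = Counter(w for cv in historical_hires for w in cv.split())

def score(cv: str) -> int:
    """Score a CV by summing the historical frequency of its terms."""
    return sum(term_weights[w] for w in cv.split())

# Two comparably qualified candidates: the one whose CV contains a term
# that was rare in the skewed training data receives the lower score.
print(score("engineering chess club captain"))
print(score("engineering womens chess club"))
```

Nothing in the scoring logic mentions gender; the bias arrives entirely through the data, which is exactly the point the article makes.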
Innovating Ethically: Focus on Data
As we move forward, data should be regarded as the backbone of AI, making informed choices in application essential. Here are practical actions to consider:
- Identify the Right Use Cases: Hone in on areas where AI can make a substantial impact, such as:
- Predictive maintenance in manufacturing to enhance safety and efficiency.
- AI applications in risk assessment across banking and insurance sectors.
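The predictive-maintenance idea can be sketched in a few lines. This is an illustrative toy with invented sensor readings and an assumed vibration threshold, not a production system: flag a machine for service when a rolling average of readings drifts above a safe limit, before an emergency shutdown becomes necessary.

```python
# Illustrative sketch (invented readings) of threshold-based predictive
# maintenance: schedule service once the rolling mean of recent vibration
# readings exceeds a safe limit, ahead of an outright failure.
SAFE_LIMIT = 5.0   # hypothetical vibration threshold
WINDOW = 3         # number of recent readings to average

def needs_maintenance(readings, window=WINDOW, limit=SAFE_LIMIT):
    """Return True once the rolling mean of `window` readings exceeds `limit`."""
    for i in range(window, len(readings) + 1):
        if sum(readings[i - window:i]) / window > limit:
            return True
    return False

# Gradually rising vibration: flagged before an emergency situation.
print(needs_maintenance([3.1, 3.3, 4.8, 5.6, 6.2]))
# Stable readings: no intervention scheduled.
print(needs_maintenance([3.1, 3.0, 3.2, 3.1]))
```

Real systems use far richer models, but the safety argument is the same: intervening on a trend is calmer and cheaper than reacting to a failure.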
The conversation must also shift towards the evolving workplace where AI agents replace traditional roles. Understanding how to manage these AI agents will redefine workplace dynamics and workflows.
The Future of Enterprise Technology
As AI continues to evolve, businesses will need to adapt by:
- Creating new business models that intertwine traditional and AI-native organizations.
- Emphasizing transparency and accountability in AI implementations.
- Continuously questioning the ethical implications of AI technologies.
With these changes come challenges, including data security and ethical dilemmas. Transparency should no longer be an afterthought but a foundation during development.
Final Thoughts
To navigate the complexities of AI responsibly, companies should embrace:
- Diverse stakeholder engagement to better understand the data needs of various roles.
- A mindset that treats AI as another technology wave: respected for its potential, but not so special that it escapes scrutiny.
Video Transcription
Why am I so passionate about this topic? Everybody's talking about AI right now. We open every newspaper, every magazine, every email, and AI is in there. I got interested in AI and ethics about ten years ago, when I attended a CogX conference in London, and I was absolutely taken by a number of things. First of all, the conversation around ethics and technology, and data in particular, was absolutely at the forefront. The thing that really caught my attention was the diversity of the attendees. Among the attendees we had broad demographic representation: fifty-fifty men and women, various different ages, various different backgrounds. It was the best tech conference representation I have seen. Two other things also stuck in my mind, and the first focused on robotics.
There were a lot of conversations around ethics and drones and swarms, and look at where we are ten years later. So that definitely stuck in my mind. But what also stuck in my mind was that people have been thinking about AI and ethics for a very, very long time. It might have burst into our sphere only in the past couple of years, but I take comfort in the fact that a lot of people have put a lot of effort into talking about how ethics, AI, data, and technology come together, now that we're at this point.
Okay. So let's start with the presentation. I've only got one slide; I like to hold the agenda up so that people can see what we're going to be talking about. So if you flip on to the next one. Right. So why responsible AI? What is responsible AI? Balancing innovation and ethics, and the impact on enterprises. I'm a very practical person. There's a lot of theory around AI still, and there will continue to be for a number of years, so I'd like to give some practical actions that I believe will work. So responsible AI. How do you even define what responsible AI is? There are three words that come together: ethical, transparent, and accountable.
Ethical means ensuring that all of our technologies are designed and used in a way that aligns with societal values and ethical principles. Now, aligning with societal values is a challenging cause, given the differences we all have if we start breaking humanity down into its various tribes. Do you adhere to top global values, such as do not kill, stick with those, and leave political, religious, and culture-specific values and beliefs out of the mix? We know we can't. The content that AI uses, especially in the public and consumer domains, is any content it can get its proverbial hands on. It just can't help it. So why does this matter, in the enterprise sector as well? AI has massive potential to drive innovation and efficiency. Not only that, AI has a huge potential to drive either parity or division.
So we do need responsible practices, because there could be some serious unintended consequences. For instance, everybody's heard about biased algorithms: they can perpetuate discrimination, and a lack of transparency can erode trust. We've seen this in multiple cases. Now let's make it concrete. Take an enterprise application that is absolutely loaded with AI capabilities, designed and developed from a single geographical location, or from a single industry perspective, or by a very specific group of individuals who are very similar to each other. Try to take it to another region or industry and the problem is just multiplied, because for enterprise applications from now on, it's not just about the technology; it's about the data. Add on top of that the array of different AI legislations currently coming to bear, and suddenly you have data, systems, legislation, and processes combined with our global world. It's no different from what we've experienced before.
If you think about some of the tech we currently run, we often get told that if a system is implemented by a single function, finance or IT, it will automatically have a finance or IT bias built into it. So how do you start localizing AI? How do we deal with that? We already know we can do legislative localizations, for example for Brazil, to pick a country. But how do you then do the AI localization piece? Here it is absolutely key that you have a diverse and representative team. It's more critical now than it has ever been before for building lasting, well-thought-through, creative solutions.
And that is because it's more about the data than it is about the technology. We cannot put all the emphasis, though, on the people building the solutions; I think that would be quite an unfair statement. There's also going to be an increasing emphasis on us, as individual consumers but also as enterprise consumers, accessing the solutions that are coming out. We need to emphasize critical thinking, and we need to increase the value of the question why. Now is not the time to be silent in the meeting room when we're looking at these solutions. Now is the time to be heard. The question why needs to be raised onto a pedestal, and anybody asking it should be elevated too.
Now, balancing innovation and ethics. It's a little bit the same conversation we've had about balancing innovation and compliance: there's always this is what you can do and this is what you can't do. We know that AI is going to transform industries, by automating processes and enhancing decision making, but also, most importantly, by creating new opportunities. For example, we talk about AI and predictive maintenance in manufacturing. Everybody focuses on how it can significantly reduce downtime and cost. But take it to a different level: predictive maintenance reduces the number of emergency shutdowns or emergency situations that you have. Emergency situations are highly stressful, and they often put people and assets at risk.
So being able to calm the situation down by using AI and preventative maintenance, it's all about people and how we protect ourselves. Effectively, what we're talking about here is a new cycle of an old challenge, and I've alluded to that a couple of times already. How do we balance innovation and cost control? I mentioned balancing innovation and compliance. We all know that if we don't get this right now, we are going to be creating a huge amount of technical debt, and the debt we'll be creating is not just technical. It is data debt. And data debt, as we all know, tends to stick around long after the system has gone. So let's talk about a couple of good examples of the data debt.
And these are really good examples, because they were exceptionally well-intended initiatives from around five to eight years ago. We had the McDonald's AI drive-through initiative, which was all about automating the process of ordering. However, with the background noise and the accents individuals have in the way they speak, it just couldn't understand them, so it was scrapped, literally only last year. I certainly remember, and I'm sure all of you do as well, Microsoft Tay. Do you remember when Tay was launched in 2016? There was a huge fanfare, and it was going to be the AI taught by humans. Well, it needed to be taken down nearly twenty-four hours later, because the individuals who started to teach Tay manipulated it into producing responses that were offensive and racist.
So there's the whole piece around it not being about the technology: it is about the data, but it is also about us as humans. And that's why holding everybody in check and having that balance in place is the ethics bit we really need to focus on. Final example: the Amazon AI recruitment tool, the initiative that really drew global attention. These are the types of tools we use every day, because it now makes complete common sense to have AI look at CVs. The challenge was that when Amazon rolled it out, I think it was 2016 or something like that, and it was scrapped in 2018, they used ten years' worth of data to train the model. Ten years. That sounds really, really good.
But that data had a gender bias in it: the model was trained on predominantly male CVs. As a result, they had to take it offline, because it was not representative of what Amazon wanted the future to be. And again, it is a data piece, not a technology piece. So how do we innovate ethically? There are two things, and I'm going to mention data again because it is the absolutely key piece. Data is now the royalty in what we are doing with AI, not technology, not IT. That's the key bit. The first is to pick the right use cases. I mentioned use cases earlier when talking about CogX and how taken I was by the AI and robotics.
Now robotics and AI is hugely, hugely important, but it's key to also focus on the right things. Let's talk about autonomous vehicles. I want to drive my car. I actually really, really enjoy driving my car. It is something I take great pleasure in, and I know a lot of people do too. Autonomous vehicles for individual commuters, for example, are hitting the headlines. But there is a quiet revolution taking place underneath, which is about industrial AI and manufacturing. Mining vehicles, warehouse vehicles and automated warehouses, driverless trains: this is the quiet revolution.
There is nothing better than thinking about how we can use driverless technology in a place where we can preserve life. For example, in a mine, which is a very dangerous environment, and it doesn't matter whether it's underground or an open mine. Similarly with warehouse vehicles: being able to do the picking without people wandering around between forklift trucks, as well as driving them, just makes the whole setup a much safer environment for humans to be in. Whereas with autonomous cars, what is the true value and business case? As I said, I quite like driving my car; it is one of my enjoyments. At the same time, are we ready to take the biggest step in AI ever and let a vehicle potentially make a life-and-death choice on our behalf? So the question is, how do we innovate? What use cases do we pick? And data is the food of AI.
Now, what is the impact we're going to see in enterprise technology? We all know the words: AI can improve efficiency, and it can give a better customer experience. There are actually loads of people who prefer talking to a bot over talking to another human being. But the biggest thing is that we are going to see new business models. Enterprise tech focuses on AI in processes and decision making, bringing in things like AI agents as supervisors. Imagine it's your managerial job, and you come into work, but your team is not human. Your team is agents. How do you manage them? What is the interaction? How deeply do you need to understand the work those AI agents are doing? How do you add value to their work?
All of these things are starting to mix up and change the way we're going to work, but also the way we interact. We already know about call center technologies; that is a brilliant way of automating and improving your customer experience, and it all exists today. Or take the industrial AI I mentioned: the mining, warehousing, preventative maintenance. Again, really important. We've also got another area that is really mature: risk assessment in insurance, banking, and corporate compliance. AI decision making is rife in that space. So we already know that the impact on office-based work is going to be significant, and this is where the enterprise applications come into play. Trades are going to be impacted too. However, I expect trades to be impacted through AI augmentation rather than automation of the work they currently do.
We are also moving into the new business models. We currently have traditional companies, and then you've got the digitally native organizations. Moving forward, we're going to have traditional companies, digitally native organizations, and AI-native organizations and companies. And as we bring AI-native companies into our ecosystem, the question of why and the critical thinking are going to become even more important, rather than just assuming that what they produce is right. We're also going to be driving changes in how companies are structured, the way we work, and how we design our work and living spaces. We're going to be accommodating humans, pets, robots, and AI. So there are a lot of complications moving in there. It is in its infancy.
And for those of us driving and participating in these activities, we are going to have to be critical and take a broader view, continuously asking the question why. Okay. There are going to be challenges around data security; we heard about the security piece in a previous presentation, and it is going to be absolutely there. The ethical dilemmas are going to be there, and there is going to be a massive need for transparency, something we're going to have to think about up front rather than as an afterthought. So, finishing off with some practical actions. I could talk to you about implementing ethical guidelines, promoting transparency, and mitigating bias. I'm sure all of you are doing a lot of work in your enterprises, your companies, your organizations to look at these and say, we have new policies, we are doing this. But so what? What are the practical steps we should be thinking about? We need to start with the data.
Think customer 360, employee 360, product 360, partner 360. Who needs what information, whether person, agent, or robot, to complete their job successfully? Don't start from the business process; start from the data. Engage the stakeholders. All of those who interact with customers, and even those you don't think interact with customers but might, or at least interact with a particular piece of data. Engage with them. Promote critical thinking, and remember that AI is just another tech wave. Treat it with the respect its potential deserves, but don't just wave it into your organization or into your life. Treating it as special could create the biggest technical and data debt we have ever seen. So, finishing off: find the appropriate value and use cases. Focus on the data.
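The "who needs what information" exercise above can be sketched as a simple data-first access map. All of the role and data-domain names below are invented for illustration; the point is that the map starts from the data domains, not from a business process:

```python
# Hypothetical sketch of a data-first access map: start from the data
# domains (customer 360, product 360, ...) and record which roles --
# human, AI agent, or robot -- need each one to do their job.
ACCESS_MAP = {
    "customer_360": {"support_rep", "sales_agent_ai"},
    "employee_360": {"hr_partner"},
    "product_360":  {"support_rep", "warehouse_robot"},
    "partner_360":  {"sales_agent_ai"},
}

def data_needed_by(role: str) -> set[str]:
    """Return every data domain a given role needs access to."""
    return {domain for domain, roles in ACCESS_MAP.items() if role in roles}

print(sorted(data_needed_by("support_rep")))     # customer and product data
print(sorted(data_needed_by("warehouse_robot"))) # product data only
```

Inverting the map per role makes the stakeholder conversation concrete: each row is a person, agent, or robot whose data needs you can question and verify.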
Be prepared to be excited and explore the wave of AI native products and services. And be even better, develop your own. Thank you very much for listening.
Thank you so much for your thought-provoking and timely session on responsible AI, Helena. It was such a pleasure to have you with us. I greatly enjoyed your keynote, and we appreciate you being part of the Chief in Tech Summit.
Thank you, Anna. It was a pleasure.
Thank you so much. Have a great day. Looking forward to having you at our future events. Bye.