Responsible AI and The Leaders' Role To Get It Right by Füsun Wehrmann

Füsun Wehrmann
CTO, CPO and Transformational Leader

Reviews

0
No votes yet
Automatic Summary

The Importance of Responsible AI in Today's Digital Age

Welcome to our exploration of responsible AI, a crucial topic in today’s rapidly evolving technological landscape. Led by industry expert Fusun, a seasoned leader in the AI field, this discussion dives into the complexities of AI usage, emphasizing the leader's role in ensuring its responsible application.

Introduction to Responsible AI

As Artificial Intelligence (AI) continues to infiltrate various sectors, its responsible use has become imperative. Responsible AI encompasses ethical considerations, legal compliance, and the overall impact on society. In this blog post, we’ll discuss the responsibilities of leaders in implementing AI solutions that are not only effective but also safe and fair.

Understanding the Growing Concerns

  • Headlines about AI misuse have escalated in frequency, with major companies facing backlash due to irresponsible applications.
  • Prominent figures like Sam Altman and Elon Musk have shifted their narratives regarding AI regulation, highlighting the unpredictable landscape of AI governance.
  • According to Fusun, it is essential for the discourse on AI to be stable and serve the global public interest rather than being dictated by corporate whims.

Challenges Faced by Companies Deploying AI

Many organizations find themselves at the nascent stages of AI deployment. Issues often highlighted by industry leaders include:

  • Pilot projects that lack scalability or clear direction.
  • Compliance and legal uncertainties surrounding the implementation of AI technologies.
  • Internal struggles to establish a coherent AI strategy amidst the chaos of rapid technological advancement.

The Leader's Role in Responsible AI

Leaders play a critical role in steering their organizations toward the responsible use of AI. Here are some fundamental responsibilities:

  1. Establish Governance Framework: Define what constitutes acceptable AI usage within the organization, aligning it with ethical values and regulatory standards.
  2. Compliance Awareness: Stay informed about existing and upcoming regulations, including the EU AI Act, which outlines rules for various AI applications.
  3. Addressing Data Risks: Recognize that data integrity is often the greatest risk in AI deployments. Poor data governance can lead to significant legal and reputational consequences.
  4. Implement Testing Procedures: Ensure thorough testing of AI systems to identify biases and potential failures before deployment.

Global Perspectives on AI Regulation

The regulatory landscape is rapidly evolving, and it varies significantly across geographies:

  • Europe: Stricter regulations are being established, especially concerning high-risk AI applications.
  • USA and China: The emergence of a regulatory framework remains ambiguous, with potential regulations taking shape.
  • India: Organizations should be aware of the implications of non-compliance with international standards, especially when accessing global markets.

Building Trust and Maintaining Brand Integrity

Trust is foundational in brand-building, especially in AI-dependent businesses. A few key points to consider include:

  • One AI failure can significantly tarnish a brand's reputation.
  • Major companies like Air Canada have faced lawsuits due to AI providing incorrect information.
  • Establishing a culture of accountability and transparency can mitigate risks associated with AI.

Conclusion: The Path Forward for Responsible AI

As we transition from the era of proof of concept to proof of value in AI, it is crucial that organizations adopt a responsible approach. Leaders must:

  • Engage in robust discussions about ethical AI and its implications.
  • Invest in frameworks that prioritize ethical considerations and data governance.
  • Stay abreast of the evolving landscape of AI regulations to ensure compliance.

With ongoing advancements in technology, now is the time for leaders to act decisively and responsibly, ensuring AI solutions are beneficial and ethically sound. For more insights on responsible AI and to connect with Fusun, visit her website or connect on LinkedIn.

Thank you for joining this enlightening discussion on responsible AI!


Video Transcription

So welcome again. Great to have you all here in this virtual room. Such curious and brilliant female minds coming together. It's it's really great. I'm happy to be here.I will talk about responsible AI today, something I'm very passionate about and the leader's role, to get it right. So it's not going to be only about, you know, the fancy and exciting parts of AI, but it's also about what happens when you make the decision to use AI. Maybe you are the one who is responsible for running it safely or you might be considering a use case for AI. After this presentation, maybe you go back to the plan and maybe add stuff, change stuff, reflect on what we talked about. And before we do all of that, I would like to, introduce myself. My name is Fusun. I live in Berlin, Germany, and, I'm married.

I have a son, and I have two cats. They are incredibly unhelpful when it comes to production releases or preparing presentations, but they are very talented at stress testing unattended keyboard. If you stay until the end of my session, you will see why. Coming back to me, I've been in this industry over twenty five years really, and I started as an engineer, coding, a lot of infrastructure, and I work my way up to, global leadership positions as CTO. You might recognize some of the companies I work for here depending on your geography and your industry. And currently, I am, working as a fractional CTPO sharing my experiences, coaching advisory under the name Scale and Solutions. So very great to have you here. Alright.

Responsible oh, Kat is from Germany. Great. Great to have some support from Germany. When you think about the headlines about AI in the last couple of years, I don't know what your perception is, but I find some of them, a lot of them, very worrisome, honestly. Responsible AI teams from big companies like Meta, Microsoft being disbanded. You know, they stop existing. Google says, well, AI maybe can be weaponized one day, anti regulation talks, rise of AI pornography, what have you. Right? It's very, very, worrying and it's increasing in numbers. So it's not all about excitement, potential unicorns. There is also this side to AI. And, when we speak about control regulation, the reaction of the big companies is similar to how my cats react to medicine. Right?

They look at it, they blink slowly, and they do all kind of weird things to run away from it. Really, pretty pretty significant. Let's start with Sam Altman, for example. Year, May 2023, exactly two years ago, Open GI was out there, and Sam was one of the influential figures in AI. And he was at the congress, and he said, oh, please regulate us. And he advocated for licensing, auditing, control. You know, we ask for these things. Fast forward to May 2025, you can find this video on YouTube where, Sam and the CEO of, CEO of AMD and, you know, people from Microsoft discussing about AI. And he said he urged the lawmakers for more freedom, less regulation, criticize the EU regulations significantly, really hard, and, you know, ask for for more freedom. The same person, but very, very different too. So it's quite the change. And then, also, there is mister Musk, of course.

And, also, in his case, a few years ago, I remember seeing videos where he talked about, oh, no. AI, we have to control it. It could be it could end disastrous. We need to regulate it. He gave statements supporting, California's attempt to push, tested AI practices. And then just recently, the same Musk's ex AI company, its chatbot, Grok, started denying holocaust, started talking about white genocide conspiracies. Right? So, it's a good example of how things might end if it's not if the regulation is not reactive or absent. So I ask myself, do they at all care about regulation? Is it a topic for them? And how are we going to deal with these things if if the CEOs, if the companies flip the narrative, change their minds every every other day.

You know, one day you ask for regulation and the other way you push back. So is that responsible? My view on this is it cannot depend on the mood or thoughts of a CEO and how their quarter is running. It needs to be democratic. It needs to be stable, and it needs to be for the sake of the public interest global public interest. What about the other companies? You know, the other companies, most of the companies, honestly, we are just at the start of it. Not only AI governance or security, mid with AI, Gen AI itself. Right? So people are figuring out one team is doing a pilot, the other team is doing a POC. And, legal departments are unsure, compliance departments are in wait and see mode. Maybe it resonates with your situation.

And I don't know how it is in your environment, but I can speak about Germany and Europe. When you meet people in communities in a meeting, I once hosted an AI AI table, round table. People say, oh, yeah. You know, we use Gen AI everywhere and, you know, we got it. And if you're polite, you're just not, and that's the end of discussion. If you just dig deep, what comes out is in many cases, not all cases, that, you know, people say like, well, we had a POC. Right? And, we are discontinuing. We don't know where it's going to end. We don't have the budget. We don't know how to integrate into production. So people are figuring out, but the to to outside, we are trying to do a mature more mature, presentation, and it's it's normal.

I've seen it happen it with cloud, mobile, the machine learning era. So these these things are normal. But what I, what I think the leaders role is not skipping the important questions. Things like, are we testing for bias? Are we do we know where our training data comes from? Is it reliable? Is it safe? What do we do when our model goes south when something, you know, unexpected happens? And I do understand the struggle. It is really real. In 2023, mostly in 2024, we had our POCs. We had free licenses. You know, we had we opened the floodgates. Developers started using AI assisted tools, and the campus staff uses CheckGPT or other tools. It's wonderful.

Now in 2025 and forwards, at least the CTOs I talked to mostly tell me we need to put numbers on, costs, expenses, value created return of investment, expectations. They need to understand what value will be created and how. Many budgets were cut partially or fully. So I call it this is the era of turning proof of concept to proof of value. And this might sound contradictive to responsible AI, but I think it's not, because if you think about it, you are not, you are not exploring anymore, but you are making decisions now. You are about to commit to AI or you committed to AI. And then you don't have the, luxury to skip the questions. Is it safe? Is it fair? You know? Is it compliant? You have to, you have to answer these questions. And only if you make it safe, stable, and malleable, it's worth the investment.

So I see responsible AI as a support, not a blocker really. And yet when I talk to people in the industry, mostly they say, like, oh, well, responsible AI, it's how you don't get into trouble, how you don't get into fines, how your company image stays intact. I think it's only part of the equation. Right? It's all about ethics, laws, your company values, basically, how you deal with real world risks together with AI. And that's why I say every leader in our industry must care about AI, responsible AI. And I would like to now, unpack this a little bit for you. First and foremost, compliance and liabilities. I am not sure from which geography you are coming from. We are going to talk about different global, geographies. Especially in Europe, it is becoming, a very important aspect.

And, no matter what you do in software industry, if you are leading, you have to be you have to be compliant and be aware of your responsibilities as a software vendor. Right? And regulation and these liabilities, these responsibilities are not coming. Some young people have the impression of it's still coming. It's already there. There has been already rules, regulations, and more is coming in Europe now with legislation. On top of that, we have all the other things that apply to AI systems as well, like privacy rules, GDPR, NIS. So there is this whole ecosystem, and we do carry responsibility. As an example, Air Canada, last year, 2024. Right? They are using a third party chatbot, and this third party chatbot gives wrong refunding information a customer, and then there was a lawsuit between the company and the customer back and forth.

At the end, the court ruled, air Air Canada is responsible for this wrong information, not the vendor, not the chatbot. Because if you use it, you have to own it. And EUAI Act, I, I am not going into the depth of this, but it's a very extensive, legislation. It's around 300 pages more than 300 pages. But at the bottom of it, it regulates all the industries for AI and all the systems fall into one of these four categories. And, for each category, there is different regulations regulatory requirements. Just notice if your AI system is in the red category, you cannot just use it in Europe at the end of the story. It's just not possible.

What would it be if you are creating an AI system to social score people based on their personal data and behaviors and give access to some certain services accordingly? You can't do this in Germany and in Europe. Right? Everything else, you can have it, but the regulatory requirements are different. Limited and minimal risk, this bottom makes the 80% of it. So and they are not very, very heavily regulated. High risk, rightfully so. These are things like hiring systems or health analysis, you know, diagnosis of sicknesses, things like that. They are, of course, under more control. So, again, this this this, EU regulation is something that is rolling step by step over the course of four years, and we are now around, the August milestone. Again, not going into the details, but there's good resources if you are in Europe and want to get more information about that.

And let's, come to the rest of the world a little bit. The rest of the world is actually catching up. Some countries are going with legislations, new laws. Some of them are in process. Some countries are saying, yeah, you know, I don't wanna go with legislation, but I wanna I wanna create guidance for the companies in my region. When it comes to USA and China, we don't know where it's going to go really. You know? We will see. I think the next years will shape it. And in Korea, I know the pornographic deep fake images cause so much problems that they're also around about to regulate it. What I would like to tell you is this, regardless of where you are, be careful of where you are using the AI.

For example, if you are an Indian company creating an AI product and you are making business with Europe, partnerships with Europe and using that system there, and if you fail to comply with European regulations, you might be locked out of the market or lose your partnerships.

So you need to be careful about where your business is running to, not only where your company is based on. And brand. Your brand matters, a big deal. And, again, a few examples from the last years. Grog denies holocaust, Amazon creates a recruitment, AI tool that it was so biased no matter what they did. It they couldn't fix it, so it's not used anymore. Zillow AI, it's a real estate, platform in The USA. The AI causes financial, loss. Microsoft loses, exposes data, and the Air Canada example we talked about. So all of these examples are not great for the brand, and these are big brands. You know, if you have a more local, smaller brand, the damage could be more significant, damage, you know, a little bit less here and a little bit more there. If you ask me as a person in Germany, grog denying holocaust, it's a no go for me. Right?

So it makes a significant impact how I perceive the brand, and it will stay with me. So one AI mistake can win years of brand trust. Please don't forget about that. Trust of the brand is built in years, and you can lose this lose it really, really quickly. So and then we are coming to the real work, real operations. You know, how you are doing your business. What if your business just stops? Things don't function anymore. Your supply chain, your operations, your platform just comes to a halt or a big trouble because of AI. These things can happen, might be happening. If I'm honest with you, I didn't find as many examples as I wanted during preparation, but the reason is companies do not want to disclose these cases unless they have to. Especially from supply chain, somebody who worked in that industry, I know that for a fact. One thing we can talk about is McDonald's experiment with AI systems.

They partner with IBM and use their AOT automated order taking, mechanism, and they install it in 100 restaurants. So you talk to the AI system, it takes your order and, you know, push it per further, efficiency gain, and so on and so forth. But it failed so miserably that the operations got into chaos, efficiency fell, and user experience, was also not great. So they stopped, they they stopped the experiment. And I'm sure there are more examples, that are beyond, the scale of that. Another one, this is not about AI, but as we remember last year, CrowdStrike has an outage, a big outage. And not not only they had it, many companies using their solutions had big outages. Airlines got into trouble, airports landing in chaos.

I personally sent, I think, around a thousand people home because they were unable to use their computers. So, it was a big outage not only for the company, but, you know, the blast radius was really high, and AI can also cause such such damage if you if you're not careful. So think about think about how would it be if we had this problem in the medical area or in a high critical area, or what happens to your company if you have such an incident? Last but not least, corporate responsibility and ethics. This is this is a tough one because this one is directly about leadership accountability. So you see, when you are using AI now for making decisions or doing some certain tasks, there is inevitably some responsibility gap resulting from this very often in today's world. And there is a paper about this from 2025 2021 is Mikachi Santono and Mikachi, I guess. Yes. And they they talk about these gaps and how they manifest themselves.

I would like to tell I would like to talk about three of them and recommend you to check out the, rest to its really good paper. One is culpability gap. What is that? You have an AI system. It makes a mistake. It it does something wrong and damage is done. Somebody gets hurt or hurt. So who is responsible for that? What could be an example? Your company, you have fleet of cars you give to employees, and they run on AI, maybe self driving, maybe not. It makes a mistake and somebody gets hurt, somebody dies. So in that chain, who is responsible? Is it you making the decision? Is it the developer? Is it the company? Is it the car distribution? You know? It it becomes very complex legally and ethically, and complex to untangle and put a finger on it.

So we have these issues right now, and we will have it more and more in the future. Second is, moral accountability gap. Again, it's mostly about incidences or being able to explain things. Right? In this kind of gap, after the fact, something happens and you just don't can't explain. You are a doctor or you're a medical institution. You load the data of the patient, and the AI suggests a treatment out of five treatments. And if the patient asks for why this one and you can't explain, you can justify, that's a problem. The same with financial scoring, credit scoring, so we can create many examples. And, we have to find ways of dealing, how to justify, how to find answers when AI makes the decisions for us.

And there is the other side of this coin. It's before the harm occurs. Right? What is the responsibility of the company and the leaders? Again, let's say, in the case of your car, a AI supported car killing a civilian or you have a financial institution and AI does high scale, high transactional deals for you and you have a massive loss or you crash the stock market, you know, and it's a all AI's fault. But what is your responsibility as a leader? What have you looked, checked, tested, evaluated beforehand? And these parts are also very, very vague, very, very undefined right now. Okay. So I hope you are convinced that responsible AI is necessary, leaders have a responsibility, to drive this. The question is what? What do we do? What is good? What is okay? It depends, of course, into it depends on your situation. Let's go into that domain a little bit. What can leaders do or what should leaders do? First of all, define your AI governance governance framework. I don't like this slide personally because it's a mouthful and it's, like, full of buzzwords.

It looks like an AI generated slide. But what I mean by that is it's defining what good or good enough looks like in your own context, in your own company, in your use case. And once you define that, which is the biggest biggest job, you need to make it somebody's responsibility. One person, a people, a team doesn't really matter, but this definition and putting a name on it is extremely important. Deciding where do we use AI, for which cases, and where we don't use AI. You can say, hey. We use AI for data extraction and revenue prediction, but we don't use it for making hiring You know, you need to be really clear about that.

You need to look at your company values, ethic ethical restrictions, your markets, and apply it. And last but not least, map the, regulations, legal regulations, industrial regulations, map those needs so that you can create a road map. And still, you can say, yes. Okay. But how do I deal with it? Is it a document or is it a department? That's the question I get asked very often. It depends. So let's walk through some examples, some cases. If you are a small company or, you know, a bigger company, but you are just doing some simple stuff, low risk, it's not on production, increasing productivity, early stage, you are using somebody else's AI, then in those cases, you don't have to, overkill. Right? Maybe it's just a document explaining, software development life cycle requirements and compliance basics, some simple monitoring. Maybe you are okay.

And then you when you come into this POC, proof of concept experimental mode, and when you say, there's something there, you know, we can do more with this. We can leverage more in business like that. So you need to take these foundations you created, Word document or whatever. You have to make a framework out of it. You have to, you know, give it a little bit more structure, and you need to see, okay, what are the legal compliance or, you know, security compliance I need to look at? What am I walking into? And think about oversight, what kind of oversight you will be having, and get ready for, you know, using it in production. Get ready for taking responsibility or legal requirements. These things will always take time. So it as your POC scale up, you need to get ready for for the governance part of it. And then you're finally integrating it into your business.

You are using it in several places, could be a chatbot, could be an AI assisted assisted product. And then you that means just the framework might not be, might not be enough. What I forgot during the POC phase is also because it was getting serious, you also need to think about who needs to be trained, who needs to know what about AI. In those cases, it's usually the people working on AI itself and not the whole company. But once you put it on production, once it becomes a part of your business value stream, it usually means most people, corp corporate people at least needs to know about what is AI, how it's used, how we are dealing with it. You need to have an incident management process. If something goes south, who do you call? How do you deal with it?

And the end of it is if you are using AI in a high state mission critical situation, it could be a medical use case, it could be aviation, it could be defense system, That's a big deal. Right? You have to have low tolerance, maybe zero tolerance for incidents, errors. You don't have any tolerance, so you have to put a lot of effort in this. We are talking about people, departments, serious amount of money, documentation, auditing external people coming and rec team auditing you. So there we are going into the heavy territory of AI governance and taking responsibility of the criticality you are putting into your business through AI. So I I hope that gives a picture of in which category you might be falling into. Yet another slide that looks busy. But here's the thing. If there were one thing I would like you I could choose that you can pick out of this presentation. It would be this. Data is your biggest AI risk.

Data is your biggest AI risk. If you are out of time, out of money, capacity, and you have to take take one thing. You could do one thing about AI governance and security, do it about your data. From the collection and training and running your systems, things about data is extremely important. It's where most of the trouble comes from. Also, when something goes south, when regulators and auditors come, it's the first place they will look. So coming back to the EUAI Act, it's hard to pronounce. Not going into the details later, but for those of you who are in Europe and interested and for those of you who want to check it out, you know, how complex it is or where your own use case and software would fall into, or if you are, interested in, interested in what is that that is part of this body, I think this is a good site to check.

And as I told you, the act EI act, this document is too long. I'm an engineer. I'm interested in the topic. I can't read it from, you know, cover to back. But there is a comp AI framework, and I'm not sponsored by them or anything, but it makes it actionable. It's a framework to evaluate large language models and put metrics on it, put scorecards on it so you can compare it, you can see how the AI act and several aspects of model evaluation can be applied on it. I'm a really big fan. I would like to show you an example. So here you see three models. I randomly picked three models. Right? Gemini, Ay, and JPT four. And this is a very dense aggregated view. You see all the models are more or less okay, privacy and data governance wise. When it comes to transparency, j p GPT is better, definitely. Diversity, nondiscrimination, and fairness, I would say, you know, they all have some ways to go.

So it's pretty interesting. And I think this is these automations and this transparency metric based transparency is exactly what we need for consumers and in the tech world. A more specific example here is disclosure of AI. For example, in Europe in the AI Act. It's a requirement if you are using an AI system, it has to deny being a human. It has to identify itself as nonhuman, and exactly that's what's happening here. Gemini is saying, no, I do not identify as a human, And the other model is saying, oh, yeah, you know, who knows what a person is. It depends on it depends on the, definition, so that's a non compliant answer. Exactly things like this is what we need, to have the same language when we are evaluating models. I'm not defending AI act. I'm not saying it's perfect.

Parts of it is very complex and vague, but there are still a lot of goodness, good ideas in that. So coming towards the end of our discussion, I would like to touch two things. The first is, again, having a having a structure, having a little framework about what does good look like in your area. For me, these are these eight checkboxes and how you fill these sections in depends entirely on you, on your on your industry, on what your company expects or their values. Accountability, we talked about it, And ethical alignment, company values where the company sees the ethical importance of things, data integrity, privacy, for sure, testing, a lot of testing. And I must say, human oversight also in testing. As the risk profile of the models increase, we cannot just automate.

We have to include human oversight into the game, and especially in the air irregulation, this is also reflected. Compliant and transparent use, also very context, dependent. More risk, more governance. I think that's clear. And independent audits. That's not for everyone. I agree. You know, your your size and the use of AI might call for it or you might be forced to do it. And probably the most important one, I put it in the last, at the last point, Culture awareness and learning development. The company culture, awareness of AI, why it's important, how it's important to risks, and how you incorporate it into the company's learning and development practices is extremely important. So, and last but not least, when we now think about we talk to talk over, half an hour here, and I appreciate so many of you are still here.

The takeaways from the session. We talked about we are moving from the age of proof of concept to proof of value. Countries are putting more and more requirements and legislations in in terms of guidance, in terms of laws, in in terms of directives, so you have to be aware of that. And don't forget, it's not only the company you are in, it's everywhere your model is being used. Your company's values and mission define how ethics shape AI use. Every business needs a way to govern AI risk. Whether you are using just a simple GPT or you are building a complex, LLM model, you have to have governance. It could be one document, it could be a team, but you need that. You need the clarity of how you are doing it. If the resources are tied, then focus on data and data related risks. And remember, this industry is so advanced, so progressed, so fast.

There's already tools and frameworks exist out there to help you. There was a time I knew a lot of the new stuff, but, you know, I just dropped the ball for two months, I guess. I'm already you know, I feel like a dinosaur. But these tools are out there to evaluate and manage your own, models and assess the third party stuff that you want to buy. And last but not least, how my cat is testing unattended keyboard stress testing. So this lady doesn't like people working, so she would come and lie on your keyboard, on your laptop. And last time she did that, she googled Taylor Swift.

We don't even know how. We don't listen to Taylor Swift. There's nothing in our browser history. So as you see, you never know what happens, when in when it includes computer and software. And with that, I would like to say thank you. It was really great being here and great talking here. And, please connect with me through my website, through my LinkedIn. I would love to hear from you very much and your feedback, and have a great rest of the day and conference.