Auditing AI Ethics by Arpna Aggarwal

Arpna Aggarwal
Senior IT Auditor

Automatic Summary

Understanding AI Ethics Auditing: A Path to Responsible Innovation

Welcome to our deep dive into the world of AI ethics auditing! In a time when artificial intelligence (AI) is profoundly influencing various aspects of our lives, it is imperative to address the ethical implications surrounding its development and deployment. This article encapsulates the nuances of auditing AI ethics, ensuring that AI technologies align with human values and ethical standards.

The Urgency of AI Ethics

AI has transitioned from a futuristic concept to a powerful influence in our everyday lives, affecting healthcare, finance, criminal justice, and more. However, with this power comes significant responsibility. The consequences of unethical AI practices can be alarming, as demonstrated by the case of AstraZeneca's healthcare platform, which inadvertently excluded minority groups from critical research trials. Such algorithmic bias can exacerbate existing disparities and underscores the pressing need for robust AI ethics.

Principles of AI Ethics

As we navigate the complexities of AI, it is essential to adhere to the following core principles of AI ethics:

  • Fairness: AI systems should be developed to avoid bias and discrimination, ensuring equitable outcomes.
  • Transparency: The operations of AI systems must be understandable to stakeholders, fostering accountability.
  • Accountability: Clear lines of responsibility must be established for AI actions and outcomes.
  • Privacy: AI systems should protect personal data and uphold individual rights.
  • Benefit: The utilization of AI should maximize good while minimizing harm.
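The fairness principle above can be made measurable. As a minimal, hypothetical sketch (the data, group labels, and choice of metric are illustrative assumptions, not from this article), a demographic parity check compares favorable-outcome rates across groups:

```python
# Hypothetical sketch of a demographic parity check.
# Data, group labels, and the 1 = "favorable outcome" convention are illustrative.

def demographic_parity_difference(predictions, groups):
    """Largest gap in favorable-outcome rates between any two groups."""
    totals = {}
    for pred, group in zip(predictions, groups):
        n, pos = totals.get(group, (0, 0))
        totals[group] = (n + 1, pos + pred)
    rates = {g: pos / n for g, (n, pos) in totals.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "A" receives favorable outcomes 75% of the time, group "B" 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap near zero suggests parity; a large gap, as here, is a finding an auditor would investigate further.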

Why Auditing AI Ethics is Crucial

Auditing AI ethics is essential in today's rapidly evolving technological landscape for several reasons:

  1. Regulatory Compliance: With increasing scrutiny from regulatory bodies like the EU AI Act, organizations need to ensure compliance to avoid legal liabilities.
  2. Public Trust: As public concern over AI bias and misuse rises, auditing can help maintain trust while protecting vulnerable communities.
  3. Operational Efficiency: Regular audits can identify inefficiencies and enhance systems' performance, aligning practices with core organizational values.

Challenges in Auditing AI Ethics

Auditors face unique challenges in the realm of AI ethics, including:

  • Complexity of AI Systems: Many AI models, particularly deep learning algorithms, operate as "black boxes," complicating verification of ethical compliance.
  • Lack of Standardized Frameworks: The field of AI ethics auditing is still emerging, necessitating adaptation of methodologies for various audits.
  • Data Quality Issues: Poor data quality can lead to biased outcomes, emphasizing the importance of thorough checks.
  • Privacy and Security Concerns: Safeguarding personal information during audits is critical to avoid unintended harm.
  • Integration Challenges: Merging AI auditing processes with existing systems can be technically complex.
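The data quality challenge above is one place where a simple automated check helps. Below is a hypothetical sketch (the "group" field, reference shares, and tolerance are illustrative assumptions) that compares each group's share of a training set against a reference population:

```python
# Hypothetical sketch of a training-set representation check.
# The "group" field, reference shares, and tolerance are illustrative assumptions.

from collections import Counter

def representation_findings(records, group_key, reference_shares, tolerance=0.10):
    """Return (group, actual_share, expected_share) for groups whose share of
    the data deviates from the reference population by more than the tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    findings = []
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            findings.append((group, round(actual, 2), expected))
    return findings

# A dataset that is 90% group "A" against a 60/40 reference population:
records = [{"group": "A"}] * 9 + [{"group": "B"}]
print(representation_findings(records, "group", {"A": 0.6, "B": 0.4}))
# [('A', 0.9, 0.6), ('B', 0.1, 0.4)]
```

Each flagged tuple is a lead for the auditor: an under-represented group in training data is a common root cause of biased outcomes.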

Key Steps for Effective AI Ethics Auditing

A structured approach is key to effective AI ethics auditing. Here are some essential steps to consider:

  1. Define Objectives and Scope: Align the audit with organizational goals and regulatory requirements.
  2. Establish an Auditing Framework: Guide the auditing process with appropriate frameworks.
  3. Conduct Risk Assessments: Identify and mitigate biases, data quality issues, and security gaps.
  4. Evaluate Data and Model Performance: Analyze the datasets and assess the algorithm's decisions.
  5. Perform Continuous Monitoring: Implement real-time tracking to ensure compliance and effectiveness.
  6. Communicate Findings: Share results with stakeholders and develop an action plan to address high-risk issues.
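Step 5, continuous monitoring, can be as simple as tracking whether a model's live behavior drifts from its audited baseline. This is a minimal hypothetical sketch; the baseline rate and drift threshold are illustrative assumptions, not from this article:

```python
# Hypothetical sketch of a continuous-monitoring drift check (step 5).
# The baseline rate and drift threshold are illustrative assumptions.

def drift_alert(baseline_rate, recent_predictions, max_drift=0.10):
    """Flag when the live favorable-outcome rate moves more than max_drift
    away from the rate observed during the audit."""
    if not recent_predictions:
        return False  # nothing to compare yet
    recent_rate = sum(recent_predictions) / len(recent_predictions)
    return abs(recent_rate - baseline_rate) > max_drift

# Audited baseline: 50% favorable. A recent window at 70% should raise a flag.
print(drift_alert(0.50, [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]))  # True
print(drift_alert(0.50, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))  # False
```

In practice this would run on a schedule over recent production decisions, feeding alerts into the action plan of step 6.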

Frameworks and Tools for AI Ethics Audits

Several frameworks and tools are becoming instrumental in AI auditing:

  • IIA Auditing Framework: Emphasizes governance and risk management.
  • COSO ERM Framework: Focuses on risk assessment and stakeholder collaboration.
  • NIST AI Risk Management Framework: Guides mapping, measuring, and managing AI risks.
  • IBM AI Fairness 360: A tool to detect, report, and mitigate biases in datasets and models.
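Toolkits such as AI Fairness 360 automate checks like the "four-fifths rule" for disparate impact. Below is a minimal, dependency-free sketch of that idea; the data and group names are illustrative assumptions, and this is not the toolkit's actual API:

```python
# Hypothetical sketch of a disparate impact ("four-fifths rule") check,
# the kind of test that toolkits like AI Fairness 360 automate.
# Data and group names are illustrative; this is not the toolkit's API.

def disparate_impact_ratio(predictions, groups, protected, reference):
    """Favorable-outcome rate of the protected group divided by that of the
    reference group. Values below roughly 0.8 suggest adverse impact."""
    def rate(target):
        outcomes = [p for p, g in zip(predictions, groups) if g == target]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["B", "B", "B", "B", "A", "A", "A", "A"]
ratio = disparate_impact_ratio(preds, groups, protected="B", reference="A")
print(f"{ratio:.2f}")  # 0.33, well below the 0.8 rule of thumb
```

A real audit would compute this over production decision logs and pair the number with the proxy-variable and data-quality reviews described earlier.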

Video Transcription

Hello, everybody. I have dropped my LinkedIn in the chat so you can connect with me in case I'm unable to address your questions in this session, as I would like to cover a lot of ground in auditing. I'm very grateful to be presenting at the WomenTech conference. Since yesterday, I have attended very insightful sessions on AI ethics, which set the foundation of my presentation. In today's world, AI is no longer a futuristic fantasy. It's shaping our present. But with great power comes great responsibility, and the ethical implications of AI are massive. As a senior IT auditor specializing in data governance and compliance, I've seen firsthand how critical it is to ensure these systems are built and used ethically. We need to move beyond asking "Can AI do this?" to "Should AI do this?"

I'm here today to discuss how we can audit AI ethics, minimize risks, and ensure a future where AI benefits all of humanity. Overview. We have seen the alarming consequences of AI systems gone wrong. Take the international case of AstraZeneca. Their AI-driven healthcare platform, intended to personalize drug treatments, was found to systematically exclude minority groups from critical research trials. This is not just a technical glitch. It's a stark example of algorithmic bias with the potential to worsen existing health disparities on a global scale. These failures underscore the urgent need for AI ethics. AI ethics is a crucial field that establishes principles and guidelines to ensure the responsible development and deployment of artificial intelligence. It seeks to align technologies with human values, protect fundamental rights, and ensure systems do not cause harm.

As AI systems become increasingly integrated into our daily lives, influencing decisions in healthcare, finance, criminal justice, and beyond, the importance of AI ethics cannot be overstated. Without a strong ethical framework, we risk creating AI that is not only flawed but deeply unjust. The purpose of this presentation is to provide an overview of auditing AI ethics and its growing importance. I want to address a very fundamental question: how can we ensure AI systems are developed and used ethically? In exploring that question, I aim to provide you with the knowledge and tools to navigate this complex landscape and contribute toward a responsible future. Before talking about myself, I want to start with an inspiring quote: auditing AI ethics is not just about finding faults.

It's about building trust, ensuring fairness, and lighting the path toward responsible innovation. With over a decade in IT audit and credentials such as CISA, CFE, and data privacy and technology certifications, I've built expertise in navigating complex regulatory landscapes like SOX, HIPAA, the NIST frameworks, GDPR, and various state regulations. Now I'm applying that experience to AI ethics, ensuring these critical systems in healthcare are developed and deployed responsibly, with minimal risk, upholding the highest ethical standards. Let's talk a little about what AI ethics means to an IT auditor. I see AI ethics as a rule book that provides the principles and guidelines organizations should follow in the development of AI technologies. These guidelines are designed to ensure that systems are aligned with human values, respect individual rights, and cause no harm.

This responsibility does not fall only on senior leadership. It is a collaborative effort of individuals within the organization and, externally, society as well. Some key principles that should underpin all AI initiatives, and that should also be incorporated into audits as part of the objectives, are listed here. I will start with fairness: AI systems should be developed and used in ways that avoid bias and discrimination and ensure equitable outcomes for individuals and broader groups. Transparency: this is a very key factor. The workings of AI systems should be explainable and understandable to stakeholders, which fosters trust and accountability. That leads to our next point, accountability: clear lines of responsibility. We see in a lot of organizations and a lot of audits that roles and responsibilities are unclear.

Hence, there is an urgent need for clear lines of responsibility to be established for the actions and decisions of AI systems, ensuring mechanisms are in place to address any harm or negative consequences. As we all know, privacy is the number one concern, so AI systems must be designed in ways that protect data privacy and uphold individuals' rights to control their personal information. Benefit: AI systems should achieve maximum good with minimum harm, ensuring that the benefits outweigh any potential risks. The potential consequences of unethical AI are far reaching and deeply concerning. Imagine AI systems that perpetuate societal biases, leading to discriminatory outcomes in areas such as hiring, lending, or, for that matter, criminal justice systems as well.

Consider the erosion of privacy as AI-powered surveillance technologies become more pervasive. To give you a taste of some examples of AI ethics failures, one would be bias in hiring algorithms. We saw the case of Amazon's AI recruiting tool, which was found to prefer male candidates over female ones. The system was trained on historical hiring data, which reflected existing gender biases within the company. Another is privacy violations. The Cambridge Analytica scandal revealed how AI can be used to manipulate individuals and undermine democratic processes. The risk here was that data from millions of Facebook users was harvested without their consent and used to target them with political advertising.

It is therefore imperative that IT auditors play a central role in promoting responsibility. Now let's talk about the need for auditing AI ethics. Why audit AI ethics? AI auditing is essential in today's environment given the rapid technological advancement across all sectors. Regulatory scrutiny is increasing as well, such as the EU AI Act, which demands that organizations rigorously assess the ethical and operational integrity of their AI systems. These regulations require providers to ensure compliance, maintain technical documentation, implement risk management processes, and undergo conformity assessments before placing high-risk AI systems on the market. If we fail to meet these requirements, it not only creates significant legal liabilities, but we also lose trust and access to the market. The second reason is public concern. We are all concerned about this: AI bias, discrimination, and misuse are on the rise.

Either people don't understand or don't know the clear definition of ethics, or they manipulate the word ethics and use it for their own purposes. Auditing helps identify and mitigate issues like unfair treatment, privacy violations, and misinformation, which is crucial for maintaining public trust and protecting vulnerable groups. A thorough audit demonstrates a commitment not only to fairness but also to transparency and accountability, reinforcing an organization's reputation and aligning its AI practices with core values and ethical standards. We will notice that a lot of organizations have already started aligning their core values and ethical standards, trying to strengthen them as the emerging technology moves forward. Additionally, AI ethics audits offer operational benefits by identifying inefficiencies and risks within the systems, thereby enhancing their performance and reliability.

Regular periodic audits also help organizations stay ahead of evolving regulations, which keep changing as we speak; regulators keep making enhancements to the regulations in the market. Audits reduce the risk of penalties and ensure continuous compliance. It becomes very important to keep up with emerging technologies and regulations and to stay informed. Now I want to touch on the challenges that we auditors face while auditing AI ethics. Auditing AI systems presents a unique and evolving set of challenges that distinguish it from traditional IT, operational, and financial audits. These checks should be on par with our current audits: integrated into the planning phase under objectives, which we'll touch on a little later, but they should also be tested independently from the other regular audits.

One of the challenges we face is the complexity and black box problem: many AI models, especially those based on deep learning and generative AI, operate as black boxes, meaning their internal logic and decision-making processes are opaque. Even their creators lack transparency into them. Auditors have a very hard time tracing back how outputs were generated, which complicates verifying accuracy and compliance with ethical guidelines. If you do a cookie-cutter audit, it gets very hard to trace back to the sources. Next is the lack of standardized auditing frameworks. The field of auditing AI ethics is so raw and emerging that there is no universally accepted framework or set of best practices for evaluating AI systems. As we see, frameworks are being developed by different countries and international boards as well, but they are at early stages and still trying to mature.

Auditors must navigate a patchwork of evolving standards, often adapting or developing their own methodologies for every new audit engagement. You can always customize your frameworks to meet the needs and objectives of the engagement. Then there is data quality and bias. This is very, very critical. The fairness of an AI system depends on the quality of its training data. If the data is poor, represents unrepresentative populations, or embeds biases, this can lead to flawed outcomes. Detecting and mitigating these risks at an early stage through audit helps overall, because you will need to evaluate the data analysis techniques being used in conjunction with the data quality.

Again, I can't emphasize enough how fast the technology is evolving, how compliance is evolving, and how the systems are evolving as they try to keep up with new AI innovations. That is another challenge on its own, for auditors as well as for regulatory guidelines. As mentioned, data privacy and security concerns cannot be emphasized enough. If, during the audit, personal or confidential information is mistakenly exposed, that can cause more harm. The most painful aspect we have seen so far is integrating with existing systems. Integrating AI auditing processes with existing risk management or compliance systems can be technically challenging within organizations, in part because legacy IT infrastructure is not kept updated.

That poses a very big challenge when you come in to perform audits. Also, relying solely on black box access or black box methods, which look only at inputs versus outputs, limits the depth of evaluation during audits. What we suggest is more rigorous audits: thinking outside the box with critical thinking, which includes starting from source code testing, then the documentation that is produced, and also the internal evaluation reports, to fully assess risks and system behaviors.

How does the system behave during all of that? I want to touch on the next topic: the key steps in AI ethics auditing. Auditing AI ethics requires a very structured approach to evaluate the fundamental concepts of fairness, transparency, and accountability, while keeping in mind that we have to address regulatory and organizational risks. I wouldn't say this is a rigid guide we must follow; I always customize. But these are the underlying key steps we need to incorporate in our audits. First, we start with defining objectives and scope: align the audit with organizational goals and regulatory requirements. This includes ensuring the system avoids bias, protects privacy, and aligns with stated ethical principles. It also includes reviewing the entire lifecycle, which I consider the AI audit lifecycle: from design to development, deployment, and monitoring.

And, again, do not forget integration with the existing IT systems: how does the AI software sit with the systems it will run in? Then we establish the auditing framework. I will walk through a couple of frameworks we can use to guide the auditing process. That leads to governance and strategy: the role of leadership and who is held accountable. Is there a board? Do we need an AI ethics board? An AI ethics board needs to be composed of people from different groups within the organization. That's when you get a broader picture and learn from different sources.

This is one of my favorite steps to evaluate in the process, which I call risk assessment. We have to identify and mitigate biases and also identify any security gaps, not only in the processes, what is currently being done and what we need to achieve, but especially with people as well. What are their mindsets? Holding interviews helps a lot, and this is all done in the planning phase. Sometimes we see a gap in documentation and technical records, and sometimes it gets very hard to pull data sources. Right? We don't have process flow diagrams. There are missed opportunities; a lot of the time the documents do not fully give us the information we require: model training methods, the data that helps us understand how a decision was reached, and the algorithm reviews.

So, evaluate data and model performance: check the integrity of training datasets for representativeness, accuracy, and bias. Again, data quality. What data are they using? Where is the data coming from? So, back to the source of the data, especially the training data, and how the algorithm arrives at the result. Perform an assessment of AI outputs that affect end users, playing devil's advocate with hypotheticals like healthcare denials or hiring decisions: what we need to get to, and how we start the steps. Make sure there are continuous monitoring controls to validate real-time performance tracking and model retraining protocols, meaning: what monitoring controls are in place to keep up with the training data they're using in real time, and what tools are they using? Lastly, we need to communicate the findings with stakeholders and draft an action plan that not only prioritizes high-risk issues but also builds an understanding of the other issues and how long it would take to fix them.

Now let's touch a little on the frameworks and tools that are out in the market as best practices and have had some success. These are frameworks and tools that the industry has already incorporated and that are constantly moving toward maturity. Both the frameworks and the tools help ensure AI is fair, transparent, and compliant with regulations. Starting with the IIA auditing framework: it emphasizes strategy, governance, ethics, risk management, and transparency. The COSO ERM framework, which we already incorporate in our audits, has been enhanced to incorporate AI attributes; it focuses on risk assessment, performance monitoring, and stakeholder collaboration, which again hits the target of accountability. The GAO accountability framework centers entirely on AI governance, data quality, performance, and the continuous monitoring controls in place.

NIST also enhanced its risk management framework to incorporate AI, guiding us on mapping, measuring, and managing AI risks. The COBIT framework, which we already have in place, is slowly evolving to incorporate AI components as well; the overarching purpose of COBIT is a comprehensive approach to IT governance and management, and now AI systems too. ISO 42001 is evolving at speed: an international standard for establishing and improving AI management systems, covering not only ethics but also accountability throughout the whole AI lifecycle. That is a very widely used one as well. There is also the ICO AI auditing framework, a UK-based framework that centers more on data protection, fairness, transparency, and individual rights in AI systems.

Now moving on to the tools. There are a couple of tools, with their use cases, currently being used in our audits. IBM AI Fairness 360 is widely used for fairness and compliance audits; it helps detect, report, and mitigate biases in datasets and models. Microsoft Fairlearn provides an interactive dashboard for comparing models and navigating fairness and performance trade-offs. The Google What-If Tool is an interactive visualization tool for exploring model behavior and testing hypothetical scenarios to meet the objective of fairness, without any source code intervention. Aequitas is a flexible bias audit toolkit, like an out-of-the-box toolkit, for evaluating fairness; it also generates helpful reports that support ethical and regulatory compliance and evaluate how the algorithm's decision making reached its result. And AI Explainability 360.

This is an open source library with a broad set of algorithms for interpreting and explaining model decisions. Finally, Facets is more of a visualization tool for exploring datasets and identifying potential data quality issues. Now I want to talk about three case studies that show how AI ethics audits could have prevented significant ethical failures. Starting with the first one, biased risk prediction: the Optum algorithm. A 2019 study exposed that Optum's AI-driven risk prediction tool, widely used to identify patients for extra care, systematically disadvantaged patients of color, who were underrepresented in that study. The algorithm used healthcare spending as a proxy for medical need.

Since that group historically had less access to care, their spending was lower, causing the AI to underestimate their health risks. As a result, fewer patients of color were flagged for necessary interventions, reinforcing existing healthcare gaps. Now we can ask: how could an audit have helped? A thorough ethics audit could have examined the choice of proxy variables, tested for disparate impact across demographic groups, and flagged the biases before deployment of the system. This could have prompted the developers to use more equitable criteria, because we would have assessed the criteria before deployment, ensuring fair access to care and a broader dataset population covering all patients. These examples are mostly related to healthcare because it is a very sensitive domain, and that's where we see a lot of ethical concerns. The second case study is a commercial AI-driven dermatology tool that was found to be 40% less accurate for patients with darker skin tones due to a lack of diversity in the training data.

Again, this comes back to the quality of the training data. It could have been prevented by an audit, which would have required demographic analysis of the training set and performance testing across different skin types. By identifying this gap before deployment, the audit could have prompted the inclusion of more representative data, reducing misdiagnosis rates. Again, an AI ethics audit could have prevented it before deployment. The third case is a lack of oversight in AI-powered clinical decision support, in electronic health record systems. We've noticed that the large language models being used can carry stereotypes. Without clear oversight, these biases can go undetected, impacting the quality of care.

How can an audit help here? By establishing transparency standards, regular bias testing, and stakeholder review. It seems we come full circle back to accountability. Right? We needed to see more reviews from clinicians and patients before deployment. Again, these are some of the AI cases where audits could have helped. In conclusion, I want to sum up. Auditing AI ethics is not simply a regulatory requirement; it's a fundamental necessity for ensuring the responsible and sustainable development of AI. As AI systems become more deeply integrated into our lives, the potential impact on individuals, organizations, and society overall grows exponentially. It's going to change and keep growing, and we will face more risks. So what is more important?

Without robust ethical frameworks and auditing practices, we risk creating a future where AI amplifies existing biases and erodes privacy. Let's echo three key takeaways from the presentation. AI ethics is crucial for responsible AI development: it provides the principles and guidelines necessary to ensure systems are aligned with human values. Auditing AI ethics is essential for accountability and trust. And lastly, the essence of auditing AI ethics is not just checking a box; it's about shaping a future of technology that is both innovative and responsible. I suggest, and I really urge, since the journey into AI ethics and its auditing implications is continuously growing, and so are the risks, that everybody keep up with the emerging technologies, keep up with the risks, and keep informed, so that we can be the preventive control that makes sure AI benefits organizations, people, and society in the future.

I think I am over my time, so that is the end of the presentation. Thank you so much to everybody for joining.