AI Governance: Balancing Innovation and Responsibility in the Age of Dual-Use Technology by Memoona Anwar
Memoona Anwar
Understanding AI Governance: Balancing Innovation and Responsibility
In today's rapidly evolving technology landscape, artificial intelligence (AI) stands out as a transformative force. However, with great power comes great responsibility, particularly regarding AI governance. This article, inspired by insights from Memoona Anwar, Chief Compliance and Innovation Officer at TerraZoo, delves into the dual nature of AI technology, exploring both its potential benefits and risks. It also emphasizes the need for effective governance structures to ensure the responsible use of AI.
The Dual Nature of AI
AI is a powerful tool that reflects the intentions of its creators and users. Its capabilities range from groundbreaking advancements to concerning misuses. Some positive applications of AI include:
- Disease Detection: AI can flag disease risk before symptoms appear, aiding early diagnosis.
- Natural Disaster Prediction: AI helps foresee natural disasters, potentially saving lives and property.
- Financial Fraud Detection: AI systems can identify and prevent fraudulent activities.
Conversely, AI also poses significant challenges:
- Spreading Misinformation: AI is being exploited to disseminate fake news.
- Manipulating Elections: Misuse of AI technology can impact democratic processes.
- Algorithmic Bias: Unfair profiling and inaccuracies can lead to wrongful accusations and discrimination.
The Importance of AI Governance
As we witness the rapid growth of AI, the statement that we are "building AI faster than we are governing it" resonates deeply. Many AI systems are deployed without proper auditing for bias or guidelines for ethical usage. Here are some alarming statistics:
- Over 40% of companies lack a formal AI use policy.
- Most AI systems are not audited for bias before being launched.
- There is no global standard for AI safety or testing against potential harmful impacts.
Unchecked AI can lead to damaging consequences. For instance, cases of voice cloning used for scams and wrongful arrests due to biased facial recognition highlight urgent governance needs.
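The wrongful arrests mentioned above stem from a measurable problem: mis-identification rates that differ across demographic groups. A minimal sketch of the kind of audit that surfaces this, using entirely hypothetical match decisions, might compare false-positive rates per group:

```python
# Illustrative bias audit: compare false-positive (mis-identification)
# rates of a matching system across two groups. All data is hypothetical.

def false_positive_rate(y_true, y_pred):
    """Share of true non-matches (label 0) wrongly flagged as matches (1)."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(1 for _, p in negatives if p == 1) / len(negatives)

# Hypothetical ground truth and system decisions for two groups.
group_a = {"true": [0, 0, 0, 0, 1], "pred": [0, 0, 0, 0, 1]}
group_b = {"true": [0, 0, 0, 0, 1], "pred": [1, 1, 0, 0, 1]}

fpr_a = false_positive_rate(group_a["true"], group_a["pred"])
fpr_b = false_positive_rate(group_b["true"], group_b["pred"])
print(f"FPR group A: {fpr_a:.2f}, FPR group B: {fpr_b:.2f}")
# -> FPR group A: 0.00, FPR group B: 0.50
```

A large gap between the two rates, as in this toy data, is exactly the kind of disparity an audit should catch before a system is deployed in a high-stakes setting.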
Developing an Ethical AI Framework
Despite the challenges, we have the potential to construct ethical AI frameworks grounded in key principles:
- Accountability: Developers and organizations should take responsibility for their AI systems.
- Fairness and Bias Detection: Systems should be tested for fairness to eliminate bias.
- Explainability: Transparent systems allow users to understand how AI reaches its decisions.
- Security: Implement rigorous security checks to safeguard against malicious use.
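The fairness principle above can be made concrete with a simple metric. As an illustrative sketch (the data, the 0.2 threshold, and the loan-approval scenario are all hypothetical), one common check is the demographic parity difference, the gap in positive-prediction rates between groups:

```python
# Illustrative fairness test: demographic parity difference.
# All predictions, group labels, and the threshold are hypothetical.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate per group."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + (1 if pred == 1 else 0))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
# A hypothetical policy: flag the model for review if the gap exceeds 0.2.
if gap > 0.2:
    print("FLAG: fairness threshold exceeded; escalate for review.")
```

In practice the right metric (demographic parity, equalized odds, calibration) and the acceptable threshold depend on the application and must be set by the governance process itself, not hard-coded by developers.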
Standards bodies such as NIST and ISO are developing frameworks and standards, including the NIST AI Risk Management Framework and ISO/IEC 42001, that embody these ethical principles and strengthen AI governance on a global scale.
What Can We Do?
As individuals—developers, policymakers, and tech leaders—we must take active roles in shaping the future of AI. Here are actionable steps we can all take:
- Raise Awareness: Speak out against unethical AI practices when we encounter them.
- Push for Security Testing: Implement security screenings as foundational governance practices.
- Demand Ethical Reviews: Ensure all AI projects undergo thorough ethical assessments before deployment.
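The steps above can be operationalized as a release gate: a deployment is blocked until every required review has been completed. A minimal sketch, in which the review names and the gate logic are assumptions rather than any standard's prescribed checklist, might look like:

```python
# A minimal sketch of a pre-deployment governance gate. The review names
# are hypothetical; a real gate would map to your organization's policy
# (e.g. an EU AI Act risk tier or ISO/IEC 42001 controls).

REQUIRED_REVIEWS = {"ethical_review", "bias_audit", "security_testing"}

def release_gate(completed_reviews):
    """Return (approved, missing_reviews) for a proposed AI deployment."""
    missing = sorted(REQUIRED_REVIEWS - set(completed_reviews))
    return (len(missing) == 0, missing)

# A deployment that skipped its ethical review is blocked.
approved, missing = release_gate({"bias_audit", "security_testing"})
print(approved, missing)  # -> False ['ethical_review']
```

Encoding the policy in the deployment pipeline makes the reviews a baseline requirement rather than an optional extra, which is the point of treating governance as a ground rule.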
Conclusion: Shaping the Future Responsibly
The future of AI holds immense potential, but it is imperative that we approach its development and deployment responsibly. AI is not just technology; it is deeply intertwined with social, ethical, and human dimensions. By fostering a culture of ethical AI governance, we can ensure that advancements in this field benefit everyone, mitigating the risks while maximizing positive impacts.
As we move forward, let’s commit to writing the next chapter of AI with foresight, fairness, and mindfulness toward all its stakeholders. Together, we can harness the power of AI for the greater good. Thank you for engaging in this critical conversation.
Video Transcription
Hello, everyone. My name is Memoona Anwar, and I'm the Chief Compliance and Innovation Officer at TerraZoo, and a very passionate advocate for ethical and secure AI. So today, I'm going to talk about AI governance, specifically from the perspective of balancing innovation and responsibility in this age of dual-use technology, particularly talking about AI and how we can build governance structures around it, make positive use of this technology, and avoid some of its negative impacts.
When I think about AI, I don't see it as good or bad. It's a mirror. It reflects the intentions of the people who created it, the intentions of the people who build the models and use them. I'm sure all of us have seen the good and the bad in AI. We have seen AI do amazing things. It can detect diseases before they actually occur. It can predict natural disasters. It can also detect financial fraud. But at the same time, it's also being used to spread fake news. It's being used to manipulate elections. It's being used to profile people unfairly. So, yes, AI is a powerful tool, but like any other technology, there is a flip side to it. There is another side of the story as well.
You can use it to build good solutions, but fraudsters are using the same technology to scam us. Now, AI is what we call a dual-use technology. It's similar to nuclear energy: it can light up cities, or it can level them. On one side, we have breakthroughs that feel like science fiction: AI helping doctors predict diseases, predicting bushfires, tailoring education programs to individual needs, even detecting cancer cells in the human body before symptoms actually appear. That's the real impact. But flip the coin and it's a different story. The same algorithms can profile people unfairly. The same algorithms can be used for destructive purposes as well. On one hand, it's predicted that AI can contribute $15.7 trillion to the global economy by 2030.
But at the same time, it can impact 40% of global jobs, and that impact can be negative or positive. This raises questions: What is our strategy for sustaining this technology in the long term? What is our governance structure? What governance initiatives have we taken so far? Every AI advance that we see in today's era casts a shadow. And our job, yours and mine, is to build the governance structure around it, because governance is what amplifies the light of this new technology and keeps the dark in check. I'm sure many of you will be aware of the scenarios and use cases that I'm going to discuss on this slide. These are some real-world examples of where AI went wrong.
In Hong Kong, scammers targeted an employee who ended up transferring $35 million to the scammers' account, thinking that the person on the video call was his boss asking him to do it. With voice cloning, one company wired money after hearing what sounded like their CEO. And biased facial recognition once identified someone wrongly, and that person ended up being arrested. Bias is something that feels very personal, because I have been a victim of algorithmic bias many times myself, being a woman of color. During many conferences, the AI system would not recognize me. And while it may seem funny, sometimes it feels very personal as well. So AI goes wrong not because it's a powerful tool, but because it's left unchecked. There are no governance structures around it. So where is the problem?
The problem is that we are building AI faster than we are governing it. Many research companies have conducted surveys on this; one set of survey results you can see on your screen. Most AI systems are not audited for bias before they go live. Over 40% of companies have no formal AI use policy. And there is no global standard for AI safety or for testing against the harmful impacts of AI attacks. And here's something, like I said, very personal: when the system fails to identify me because of the color of my skin, it's really disturbing, because if you're not considered in the training data, you're not considered at all.
So bias, unfairness, lack of explainability, lack of governance: all of these are very important points that developers should keep in mind when they are developing AI models. Researchers, developers, and companies have been doing a lot of work on AI for some years now, and I think we have reached a stable state where we actually know what the pillars of an ethical AI framework are. We talk about accountability, trustability, fairness and bias detection, safety, security, and explainability. We know what AI ethics is and what AI governance is. The problem is that we do not have a standardized regulation or standard that is globally relevant. But the good news is that regulators and standards bodies, like NIST and ISO, are taking all these principles of ethical AI into account and crafting the laws and regulations that will help companies build AI governance structures and make AI available to everyone in a positive way.
So, what can we do? First of all, we need to understand that governance doesn't mean slowing innovation. It means shaping it responsibly. Some laws and standards that are globally relevant, such as the EU AI Act and ISO/IEC 42001, can be applied wherever the business context makes them applicable. But what can we do as individuals, as developers, as policymakers, as tech leaders? We have to make sure that whenever we see something happening unethically, we raise our voice against it. We introduce security testing and security screening for AI models, in whatever capacity we have, so that bias is ruled out not as a bonus but as a ground rule, a baseline, a minimum viable governance practice. So how can we shape the future of this technology? See, AI is a technology like nothing we have seen before. It's not only some technology that we can employ at work, at home, or at our universities; it's very personal.
It's social, it's ethical, and it's human. So it's very important that we build it very wisely, so that it can help mankind, because if this technology goes wrong, it will be destructive. So in your role as a developer, policymaker, founder, or researcher: ask for ethical reviews, push for security testing, and make sure it's bias-free. Let's make sure that the next chapter of AI is written with foresight, with fairness, and with all of us in mind. Thank you.