Cybersecurity and AI - Opportunities, Risks and Impacts by Dasha Davies

Automatic Summary

The Dual Nature of AI in Cybersecurity: Exploring the Good, the Bad, and the Ugly

In the rapidly evolving world of technology, few topics generate as much discussion and concern as the intersection of artificial intelligence (AI) and cybersecurity. With almost 30 years of experience in the cybersecurity field, Dasha Davies provides a unique perspective on how AI is both a boon and a potential threat to our digital safety. In this article, we explore her insights and analyze the implications of integrating AI into cybersecurity.

Understanding the Impact of AI on Cybersecurity

As AI continues to permeate various sectors, its influence on cybersecurity becomes undeniable. Davies underscores the need to understand both the benefits and the risks involved. Here, we examine the key points raised:

  • Enhanced Security Measures: AI provides tools that significantly improve the speed and accuracy of cybersecurity defenses. Security teams can respond to threats more efficiently, allowing for quicker remediation.
  • Automated Analysis: AI can automate the analysis of large volumes of security data, helping teams separate genuine threats from noise (a toy sketch of this idea follows this list).
  • Job Transformation: The introduction of AI into cybersecurity is transforming job roles, with an increasing demand for AI specialists and cybersecurity experts.
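
As a toy illustration of the automated-analysis point above, the sketch below uses an off-the-shelf anomaly detector to flag an unusual login event. The features, data, and thresholds are illustrative assumptions, not anything from the talk.

```python
# Minimal sketch: unsupervised anomaly detection over login events.
# The features (hour of login, MB downloaded) and all data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = np.column_stack([rng.normal(9, 2, 500),     # daytime logins
                          rng.normal(50, 10, 500)])  # modest downloads
suspicious = np.array([[3.0, 900.0]])                # 3 a.m., huge download

model = IsolationForest(random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 means "anomaly": worth an analyst's look
```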

The Flip Side: Risks and Challenges

Despite the advantages, AI also introduces several serious risks that must be addressed:

  • AI-Powered Attacks: Cybercriminals are leveraging AI to create sophisticated malware and automate attacks faster than traditional defenses can respond.
  • Deepfake Technology: The rise of deepfake technology poses significant threats to trust and integrity, leading to potential fraud and misinformation.
  • Social Engineering Risks: AI tools are being used to enhance social engineering attacks, making scams more convincing and harder to detect.
  • Self-Learning Bots: The emergence of self-learning malware and botnets, which can adapt to evade security measures, highlights the evolving threat landscape.
  • Data Quality Issues: AI is only as good as the data it learns from. Poor data quality can lead to inaccurate conclusions and decisions, placing organizations at risk.

Setting Up Effective Protections

To navigate the complexities surrounding AI and cybersecurity, organizations must implement robust protection measures:

  • Data Integrity Monitoring: Regularly monitor data, models, and AI systems for unauthorized changes or manipulation (a minimal file-integrity sketch follows this list).
  • Employee Training: Equip employees with knowledge on how to utilize AI responsibly and securely, particularly when handling sensitive information.
  • Transparency in AI Systems: Ensure transparency regarding the datasets and algorithms utilized in AI applications, to mitigate bias and ethical concerns.
  • Incident Response Plans: Develop and regularly test incident response strategies for AI-related threats to ensure quick recovery from potential attacks.
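
To make the data-integrity bullet concrete, here is a minimal file-integrity-monitoring sketch, assuming the AI assets (model weights, training data) live on disk; the paths and filenames are placeholders.

```python
# Minimal sketch of file integrity monitoring for AI assets: record a
# SHA-256 baseline once, then re-check for unauthorized changes.
import hashlib
import json
from pathlib import Path

ASSETS = [Path("models/classifier.bin"), Path("data/training_set.csv")]  # placeholders
BASELINE = Path("integrity_baseline.json")

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline() -> None:
    BASELINE.write_text(json.dumps({str(p): digest(p) for p in ASSETS}))

def check() -> list[str]:
    baseline = json.loads(BASELINE.read_text())
    return [p for p, h in baseline.items() if digest(Path(p)) != h]

# record_baseline()  # run once after a trusted deployment
# print(check())     # later: any path returned here was modified
```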

Looking Ahead: The Need for Collaboration and Awareness

Davies emphasizes the importance of collaboration among IT professionals, organizations, and government entities to address the evolving threats posed by AI in cybersecurity. This collaborative effort requires:

  • Awareness of New Threats: Be vigilant and informed about emerging AI threats and vulnerabilities.
  • Continuous Adaptation: Adapt cybersecurity practices to the changing environment as AI technology continues to advance.
  • Leveraging AI for Good: Use AI responsibly and ethically to enhance security measures while being mindful of the potential for misuse.

The Bottom Line

AI undoubtedly brings transformative potential to the cybersecurity sector, offering improved tools for monitoring and defending against threats. However, it also presents significant risks that cannot be overlooked. By fostering a culture of awareness and collaboration, the cybersecurity community can harness the benefits of AI while mitigating its dangers.

If you're interested in learning more about the nuances of AI and cybersecurity, consider connecting with Dasha Davies on LinkedIn or check out her book for deeper insights. Together, we can contribute to making the digital world a safer place.

Conclusion

As we embrace the advancements of AI, it is crucial to balance innovation with security. By understanding both the good and the bad aspects of AI in cybersecurity, we can create a safer digital environment for everyone.


Video Transcription

Thank you so much for joining, and for your interest in AI and cybersecurity. Let me introduce myself first, tell you how I got here, and explain why I want to take a slightly different approach to AI and cybersecurity than what you have probably heard earlier today. I'm Dasha Davies. I would say I'm a geek at heart. I have been in the cybersecurity and risk industry for almost thirty years now, since before it was called cyber, and I have been working in the AI space for about the last five years. Now that AI is everywhere, I took a look at how AI will play into cybersecurity and how it is going to impact us, especially on the IT side and on the business side.

As we get started, let me ask you a question, and feel free to answer in the chat: where do you see the difference here? As a human being, I see two pandas. But to an AI, one is a panda at an 80% confidence level, and the other, at 95% confidence, is a car. So what makes the difference? Just a little bit of noise in the picture, changes that are not visible to us as humans at all. That should give us something to think about. We have autonomous cars driving around, we have drones, we have all sorts of things happening without humans really being involved.
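
What she is describing matches what researchers call an adversarial example. Below is a minimal sketch of the fast gradient sign method that produces this kind of invisible noise; the untrained toy network and random image are stand-ins, not her actual demo.

```python
# A minimal FGSM sketch: nudge each pixel in the direction that most
# increases the classifier's loss. The untrained toy CNN and the random
# "image" are stand-ins; a real attack targets a real model and photo.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder photo
label = torch.tensor([0])                                # its correct class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01  # small enough that humans see no difference
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
print(model(adversarial).softmax(dim=1))  # confidences shift anyway
```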

Now think about autonomous driving cars. You see a stop sign, but what does the car really see if the data is not protected properly, gets manipulated, or is simply of poor quality? That is where I would like to take this conversation. AI is great; it gives us a real opportunity in this field. A lot of companies are planning to use it, many have already implemented it and plan to implement it even more, and AI and automation job postings keep increasing quarter after quarter. We are already seeing a major shift from how businesses operated without AI, and we will see more and more AI implementation as new technologies and new software come out.

On the security side, AI gives us tremendous benefits. It helps with protection: we do not have to do as much manual review or analysis, and everything becomes quicker and better. Security teams can respond much faster than they otherwise could. The analysis of data is quick, it saves time, and it makes life a lot easier for an analyst to respond to something, or even to decide whether something is really worth pursuing, whether it is relevant, or whether it can be ignored. All of that is great. But, and this is where I bring a different perspective, there is also the bad and the ugly.

There is another side to this coin.

Alright, so the bad and the ugly. I want to talk about the other side of the coin, the risks and challenges we are going to see. We are all here in technology, so at some point or another we have all come across cybersecurity, and most likely we will a lot more, sooner rather than later. AI is already being used in so many different areas. We have malware being created by AI on the fly, a lot quicker than our analysts can handle it. It is also very difficult to assess how much risk it currently poses, or may pose, to our infrastructure and our data, and the sheer speed is a problem in itself. We are already seeing the attacks.

The good thing is that we also have AI on the other side, in security itself. In the SIEM, for example, you can automate the analysis of data, and the correlation gives us a lot; a minimal sketch of that kind of rule follows below. At the same time, we have the red team and the blue team, and we have already seen today that red AI and blue AI are playing against each other. We are starting to have this cold war of AIs, one trying to infiltrate our network and access our data, the other defending it. It is a very interesting situation. There is a lot of automation, and we are still very far from knowing how far AI itself will go, and how long it will take before it starts learning by itself, without human interaction.
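
A minimal sketch of the kind of correlation rule a SIEM automates, assuming a simplified event format: flag any source IP with too many failed logins inside a short window.

```python
# Toy SIEM-style correlation: alert on N failed logins per IP per window.
# The event format, window, and threshold are illustrative assumptions.
from collections import defaultdict

WINDOW_SECONDS = 60
THRESHOLD = 5

def correlate(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples."""
    failures = defaultdict(list)
    alerts = []
    for ts, ip, outcome in sorted(events):
        if outcome != "failure":
            continue
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW_SECONDS]
        failures[ip].append(ts)
        if len(failures[ip]) >= THRESHOLD:
            alerts.append((ip, ts))
    return alerts

events = [(i, "203.0.113.7", "failure") for i in range(6)]
print(correlate(events))  # the brute-force pattern surfaces as alerts
```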

So what do we see? We only have twenty minutes, but there are several areas that are already very concerning. One of them applies to all of us, no matter what industry you are in and whether it is business or personal: we rely more and more on our machines, on ChatGPT, on Gemini, you name it, to do the work for us and, unfortunately, also to do the thinking for us. A lot of the time we take whatever ChatGPT or any other AI gives us as correct without really double-checking. It makes our lives easier, so of course we are going to use it, but it has its negative sides as well. The other bad thing is that China is expected to overtake the US in AI. Yes, we have great teams.

We have great research, but look at how much effort China is putting into developing AI, improving AI, and implementing AI into everyday life. And of course, from a cybersecurity perspective, China is a major attacker of the US, and really of a lot of countries. It is scary. We are seeing smart botnets, self-learning botnets, zombies that act by themselves without us. We also have bots and malware that can pretty much neutralize our endpoint security, our data protection, everything we have in our infrastructure. That is frightening, because we put a lot of effort into making sure we are secure. Deepfakes are another thing. AI can create genuinely convincing deepfake data at this point.

We have seen it already. I think the very first case was probably two years ago, and you have probably all heard of it: business email compromise, where somebody calls in or sends an email requesting that the finance team pay an invoice that is legitimate, but to a different bank account.

And of course the person wants to do it, but they double-check and validate. There is actually a case where the finance person assumed it was a hoax, so they requested a call with multiple people from the company. The CEO himself suggested: I will set up a Zoom call, you can talk to me directly, I will be on video, and we will invite a couple of other people from our company, so you can see that this is legit and that the money really needs to be wired to a different account. All good. Unfortunately, everyone else on that call was an AI-generated fake; the only real person was the finance employee actually making the transaction. Realistic fake video and audio is really dangerous, and social engineering is being abused at this point with all sorts of AI.

So we all have to be a little more careful than we were just a couple of years ago. Malware creation I have pretty much talked about already: it is truly polymorphic malware that changes on the fly, and it can even detect which security tools you have in your infrastructure or on your endpoints and adjust itself to avoid detection. It is getting smarter and smarter. Then there is the poisoning of machine learning engines. One of the questions here is about the information itself: where does the information the AI gives us come from, both the analysis and the source data the answers are based on? Is it legit? Is it biased? Has somebody fed in manipulated information that produces false results? There is also hallucination, where the model simply makes things up as it goes, because sometimes it does not want to admit that it does not have the information, or it just does not do enough research. A toy illustration of training-data poisoning follows below.
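
As a hedged, toy illustration of the poisoning risk, the sketch below flips a fraction of training labels and shows the model degrade; the dataset and the attack are entirely synthetic.

```python
# Toy training-data poisoning: flipping 20% of labels degrades the model,
# mirroring the "poisoned machine learning engine" risk described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 5, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]  # attacker flips 20% of labels

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)
print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```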

So we have all sorts of things happening here, and unfortunately we also have different attackers, and the attack surface has changed. We have militaries: cyber warfare is big everywhere in the world. Yes, we still have conventional weapons and war as before, but now we are also in cyberspace. Governments like to control people, take down other governments, and spread misinformation. Corporations as well: their goal is to make as much profit as possible, so they can put out misinformation, and they can use AI to get information they should not be getting. We have hackers, and especially now, you do not even have to be a hacker: you can use AI to become one without much knowledge. You have psychopaths, criminals, even cults.

AI gives us the tools to attack and gives us the answers where previously we had to do the research ourselves. Now we just have to ask the question, and we get the answers on what to do and how to do it. This is the scary part, because now we have AI as a service. Just like we had ransomware as a service, with AI as a service literally anyone can be a bad actor. So who we have to protect ourselves against has become a much bigger problem than before. Twenty years ago we did not even have that many script kiddies, because if somebody wanted to get into a network that was reasonably protected, it took real effort, knowledge, and expertise. Today, not so much: AI does all, or most of, the heavy lifting for you, and if it does not, you can just purchase the capability as a service.

So why am I saying all of this? Don't get me wrong, AI is great, but there are things that we, especially on the IT side, but also simply as humans, need to understand as we start using it: how are we actually protecting it? I have seen a lot of companies, instead of using a third-party cloud-based AI service, implementing AI in their own on-prem or cloud environment as customized AI. Now, how is that being protected? Has anyone actually put the same controls around the data, the AI, the source code, the databases, and the processing that we put around our servers? Is somebody actually monitoring the logs to see whether somebody makes changes or data is being manipulated? Is there file integrity monitoring in place?

Are we actually putting all of these controls in place around the AI itself? And if we use it as a service from somebody else, are they doing the same? You are probably aware of what happened when ChatGPT first came out: everybody was eager to use it, and employees started copying sensitive information into it, asking for answers, help, and analysis. Well, guess what: the data is all there now. The question is whether we are actually training our employees, and our kids, on how to use AI in a secure way. Are we giving guidance on which tools may be used, and which data may go into a cloud-based AI, so that personal information, corporate data, PII, and intellectual property stay protected? If not, it is a huge risk, and I have seen that this is not as mature as it probably should be. One simple guardrail is sketched below.
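
One hedged sketch of such a guardrail: scrub obvious PII from text before it is sent to a cloud AI service. The patterns are illustrative and far from exhaustive.

```python
# Minimal DLP-style redaction before text leaves the organization.
# These regexes are toy examples; real deployments need far more coverage.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name}]", text)
    return text

prompt = "Summarize: Jane (jane.doe@corp.com, SSN 123-45-6789) reported..."
print(redact(prompt))  # safer version to forward to an external model
```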

How about the algorithms, the implementations, the research and development, the data that is actually being used? Do we have transparency there, and is it protected? If we use an AI, do we have transparency and visibility into who created it and what kinds of datasets were used? That leads to another question: when we use an AI service, does it actually work with the data our business needs, and was the model adjusted to our business need? Because if the data is not accurate enough, or the algorithm is not what we need, the result will not necessarily be accurate enough to get us where we need to go as a business or as professionals. The same applies when something goes wrong. Think about cybersecurity and AI making decisions that block traffic: do we have controls and procedures in place for when the AI suddenly takes over and starts blocking all our network communication because it thinks we are being attacked?

Do we have that pull-the-plug capability, something that lets us intervene and say: no, we need to override this, it is a false positive? As long as we have that, great; a sketch of that pattern follows below. Also, do we have the capability to detect whether the AI itself has been tampered with? It could be an insider threat, it could be somebody external, it could be an accident. But we rely on AI significantly, and that reliance will only grow, so we need to be able to trust it. And from an ethical perspective, how are we actually using the data, and what are we feeding it? If the data is not correct, the AI can very easily give wrong answers simply because it did not have the right information to process, and that can cause real ethical problems, or even bias against, say, women, minorities, or a geographical region, in decisions about who gets educational funding, who gets library books, who gets to go to college.
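
A rough sketch of that pull-the-plug pattern, under assumed names and thresholds: auto-block only high-confidence verdicts, queue the rest for an analyst, and honor a global override flag.

```python
# Hypothetical human-in-the-loop override for an AI-driven traffic blocker.
# Every name and threshold here is an assumption, not a known product API.
from dataclasses import dataclass

AI_ENFORCEMENT_ENABLED = True  # the kill switch an analyst can flip off
AUTO_BLOCK_THRESHOLD = 0.95    # below this, a human decides

@dataclass
class Verdict:
    source_ip: str
    confidence: float          # model's belief this traffic is malicious

review_queue: list[Verdict] = []

def handle(verdict: Verdict) -> str:
    if not AI_ENFORCEMENT_ENABLED:
        review_queue.append(verdict)   # AI paused: everything goes to humans
        return "queued (enforcement disabled)"
    if verdict.confidence >= AUTO_BLOCK_THRESHOLD:
        return f"blocked {verdict.source_ip}"
    review_queue.append(verdict)       # uncertain: human in the loop
    return "queued for analyst review"

print(handle(Verdict("203.0.113.7", 0.99)))
print(handle(Verdict("198.51.100.4", 0.60)))
```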

We have to question all of this. As IT professionals, considering where we are, I think we have to take the next step: understand the risk, embrace the technology, and make sure we use the right AIs in the right processes. AI is pervasive, it is great, and it is here to stay, and it has its sides: the good, the bad, and the ugly. With that, I know we are running out of time, but I appreciate you being here. Look me up on LinkedIn, connect with me, and let's collaborate. And if you want to know more about AI, I have a book out there; either ping me or look it up on Amazon.

If you have Prime, you can get it for free. It is a fascinating topic, and I definitely want to collaborate and see where we can take all of this to make it safe, secure, and AI-proof. Thank you so much, everyone, and I will probably see you in the next session. Thank you.