Security Governance Model for Cloud Virtual Machines Using Flexible AI
Samvedna Jha
Senior Technical Staff Member
Suneetha Vanjivakam
Advisory Software Engineer
The Significance of Security Governance in Cloud Virtual Machines
In today's fast-paced digital landscape, security is not just an IT concern; it has evolved into a crucial business imperative. As organizations undergo digital transformation, the need for a robust security governance framework becomes increasingly vital. In this article, we’ll explore the security governance model for cloud virtual machines (VMs) and understand why it is essential to empower it now, particularly as artificial intelligence (AI) continues to gain traction across various sectors.
Why is Security Governance Important?
Effective security governance for cloud VMs can be encapsulated in four main pillars:
- Data Protection: Ensuring the protection of sensitive data, including customer information.
- Compliance Management: Meeting various compliance requirements to mitigate risks.
- Cybersecurity Risk Reduction: Implementing risk management strategies to minimize cybersecurity threats.
- Flexible AI Utilization: Leveraging AI for advanced threat detection and proactive security measures.
This flexible AI model not only promotes efficient threat detection without human intervention but also adapts to the specific workload of each VM, enhancing security and resilience.
Governance Models to Consider
Several established frameworks guide organizations on how to implement security governance effectively:
- ISO 27001: Provides a systematic approach to managing sensitive information and ensuring its secure storage across various environments.
- NIST Framework: Offers guidelines for managing cybersecurity risks, organized around five core functions: Identify, Protect, Detect, Respond, and Recover.
These frameworks can be enhanced to meet the unique challenges posed by AI and cloud computing environments.
Implementing Flexible AI in Governance
One of the proposals is to enable or disable AI based on specific use cases, ensuring it is applied only when necessary. This flexibility not only saves resources but also optimizes performance. By categorizing VMs into three primary groups—critical, medium, and low risk—organizations can tailor their security measures:
- Mission-Critical Workloads: Deploy AI for end-to-end security and immediate incident response.
- Medium Workloads: Utilize basic AI features for essential security needs.
- Low-Risk VMs: Apply minimal security measures, focusing on protecting the underlying hardware.
Each VM must have its own governance model, allowing the VM owner to determine the necessity and intensity of AI intervention.
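To make the idea concrete, here is a minimal, hypothetical sketch in Python. The names (`GovernancePolicy`, `RiskTier`, `AiIntensity`, the sample VM IDs) are illustrative assumptions, not part of any product API; the sketch only shows how a per-VM policy might map a workload's risk tier to the intensity of AI-driven security applied to it, and how the VM owner's toggle could switch AI off to save resources.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Coarse workload categories described in the article."""
    MISSION_CRITICAL = "mission_critical"
    MEDIUM = "medium"
    LOW = "low"


class AiIntensity(Enum):
    """How much AI-driven security is applied to a VM."""
    FULL = "full"        # end-to-end detection plus immediate mitigation
    BASIC = "basic"      # essential detection only
    MINIMAL = "minimal"  # protect underlying hardware, no per-VM AI


@dataclass
class GovernancePolicy:
    """Per-VM security governance policy, owned and tuned by the VM owner."""
    vm_id: str
    risk_tier: RiskTier
    ai_enabled: bool = True

    def effective_intensity(self) -> AiIntensity:
        """Map the VM's risk tier to an AI intensity, honoring the owner's toggle."""
        if not self.ai_enabled:
            return AiIntensity.MINIMAL
        if self.risk_tier is RiskTier.MISSION_CRITICAL:
            return AiIntensity.FULL
        if self.risk_tier is RiskTier.MEDIUM:
            return AiIntensity.BASIC
        return AiIntensity.MINIMAL


if __name__ == "__main__":
    # A low-risk VM's owner can switch AI off entirely to save resources,
    # while a mission-critical VM keeps full AI-driven detection and response.
    policies = [
        GovernancePolicy("vm-payments-01", RiskTier.MISSION_CRITICAL),
        GovernancePolicy("vm-staging-07", RiskTier.MEDIUM),
        GovernancePolicy("vm-docs-12", RiskTier.LOW, ai_enabled=False),
    ]
    for policy in policies:
        print(policy.vm_id, "->", policy.effective_intensity().value)
```

In practice such a policy would feed whatever scheduler or security agent enforces monitoring on the hypervisor, but the key design choice it illustrates is the one the article argues for: the decision is made per VM, by its owner, rather than globally.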
The Benefits of Flexible AI
As highlighted by experts, flexible AI provides significant advantages:
- Faster Innovation: Organizations can deploy resources efficiently without overwhelming their systems.
- Increased Resilience: A targeted AI approach maintains optimal performance by flexibly allocating resources.
- Ethical Oversight: Customizable governance supports audit capabilities and regulatory compliance.
- Energy Efficiency: Flexible AI usage minimizes power consumption, contributing to sustainability efforts.
Conclusion
With AI's increasing influence across sectors, it’s critical to implement a security governance model that is flexible and responsive. By optimizing AI deployment per VM, organizations can save costs, enhance security, and meet ethical standards, all while maintaining the efficiency of their cloud environments. As we continue this journey into the digital age, ensuring the integrity of our security frameworks is paramount for the future of sustainable business practices.
We hope this overview sheds light on the importance of security governance and the innovative role of flexible AI in cloud VMs. For further discussions, feel free to connect with us on LinkedIn.
Video Transcription
Good morning and good afternoon, everyone. Thank you for joining us today. In an era where digital transformation is driving every aspect of our organizations and products, security is no longer just an IT concern or a checkbox item; it is a business imperative. Whether we like it or not, security governance is the framework that ensures we meet all the requirements, have the right strategies in place, and have a way to handle risk. Today we are going to explore the security governance model for cloud virtual machines and why it is so important to empower it now, at the stage when AI is picking up in every area, and how we can bring innovation and build trust across the digital subsystems that are going to use AI.
With this, let us move to the next slide. Every security presentation needs to start with some motivation, not only to address the security aspect itself, but also to ensure we are handling the customers' requirements and that everything is in place. We have divided the importance of security governance for VMs into four pillars. The first is data protection: ensuring that sensitive data, whatever its nature, including client and customer data, is protected. The second pillar is meeting compliance requirements. The third is reducing cybersecurity risk as part of risk management. And finally, the fourth pillar is bringing in flexible AI.
The idea behind flexible AI in cloud security is to provide threat detection without human intervention, with clear and fair detection. It should also adapt its security posture to the load on each VM and to the workload actually running on it. The third point is that when these VMs run into trouble, they should behave as self-healing systems, so that they not only go down but also have the ability to come back up. Finally, we should have security policies per VM, so that based on the type of workload a VM is running, the policy is flexible enough to make use of this flexible AI. To give a quick overview of what we mean by flexible AI: it takes input from the workload type the VM is running and the kind of data it holds, and based on that, decisions are made so that AI is not applied blindly, but according to the personalized requirements of that VM.
That is the main flexibility we are introducing here. With this, I will move to the next slide. Here we will talk about the governance models we mentioned. You may be aware of some of the well-known, widely adopted governance models already in the industry. For example, ISO 27001: its purpose is to provide a systematic approach to managing sensitive information and ensuring it remains secure in every environment. It covers physical controls, logical controls, and data center concerns, but it has to be enhanced to handle AI requirements as well, and that is where flexible AI comes in. The next example I would like to cite is the NIST framework.
NIST provides a very well-known, widely accepted framework; most products use it to meet their risk requirements. Its approach is to manage cybersecurity risk, and it provides guidelines organized around core functions: identify, protect, detect, respond, and recover. Based on these examples, what we are proposing in our session today is a model powered by flexible AI, specifically targeting virtualization platforms, where each server and each resource is virtualized so that multiple virtual machines run on the same hardware resource.
What we actually propose is the ability to enable and disable AI. You do not need AI for every case; you should have a mechanism to enable it when it is required and disable it when it is not. That type of governance model should be in place. The next point is to decide, for each VM, whether it requires an AI-driven security governance model at all. For example, for publicly available data you do not want to spend resources and tools applying AI, or any heavy governance model, there; you can instead use those resources for something more critical with respect to the mission or the workload that VM is running.
The final point is that you do not have to use this AI all the time: use it when it is needed. That also helps you save energy and resources in the cloud itself. Always remember that none of these AI features, all the so-called cool features, come for free. They are very resource-hungry applications and tools, and my colleague Suneetha will talk about this in more detail in the next slides. That is why it is so important to have a mechanism to avoid using them when they are not needed. At a very high level, we have divided our VMs into three categories.
First, the VMs running mission-critical workloads, where we would like AI to handle end-to-end security completely, so that in case of an attack or any security incident you are not only informed about it, but a mitigation is also applied immediately.
I can cite one example: if an insider attack attempts some malicious activity to destroy your VM or make it misbehave, there should be an AI mechanism that triggers and provides enough security to stop the harm or damage at a level from which you can recover.
The next category we want to highlight is medium workloads. This is the most commonly deployed type of workload in our organizations, and here basic AI is enough; we do not need anything very advanced, complicated, or resource-exhaustive. And finally there are the very simple ones: these can be as simple as tutorials that need no fee or membership to access, publicly available information that any user can get. For these, you just need to protect the underlying hardware and resources and keep security compliance in place, and for that we still need a governance model.
With this, let me move to the next slide, and I will request Suneetha to share some of her insights.
Thank you, Samvedna. Hi all, this is Suneetha, advisory software engineer working with IBM India Systems Development Labs. Continuing from what my STSM Samvedna has conveyed in this session, let me start with why AI should be flexible for every cloud VM at runtime. When it comes to AI models, data is their core; only with data can we bring out more insights and make more purposeful decisions. But when we collect that data, we should do so consciously, and the collection should be flexible per VM rather than configured at a global scope. This part of the session is about the significance we can bring out when AI is applied per VM.
One principle that can then be protected properly, and which is at the core of security, is data isolation. Each VM's data should be protected because each VM carries its own workloads, and the data isolation principle can be governed properly when the security governance model is applied per VM. Another principle is resource sharing. Cloud VMs share a lot of resources, but that should not come at the cost of security: when the security governance model is configured per VM, resource sharing happens on purpose.

When we bring the VM owner into the picture, the owner is given the liberty to decide whether AI is really needed, whether it should be paused, or whether it can be disabled for a time frame, depending on the workload running in the cloud. The overall purpose is that we should use AI, bring out interesting insights from it, and deliver every bit of value we can to our end customers through our applications, but AI should be flexible enough to save power consumption and energy costs. That is the significance of making AI flexible by configuring it per VM. Next, we will see the expected outcomes when it is really flexible. Here is a small glimpse of the data.
This data comes from the International Data Corporation. The report's statistics show how power consumption and energy costs are rising year by year, with surging demand for electricity when we consume AI without making it flexible or using it consciously in our applications.
When AI is really flexible, we can bring this down and make it more sustainable for future generations. Already, the large language models and small language models brought to market by the AI leaders are finding ways to save energy and power, for example by moving toward agentic AI: small agents that bring down data center costs. When agents work on small, specific purposes, there is no need to spend more power or energy at a global level for the whole product. With agents, this can go down to the granularity of a single VM, and even further, to AI used for one particular purpose within that VM. That is the expected outcome of flexible AI.
That is the evidence we can see from the power consumption and energy data. Samvedna, can we go to the conclusion?

Here we conclude. We know AI is gaining a lot of momentum, but it should be flexible and conscious enough not just to solve problems but also not to break any virtualization boundaries we build into our cloud VM products. The security governance model we propose, powered by AI and applied per VM, should be flexible enough to configure or deconfigure at any moment, and should not be enforced across all VMs. Its outcomes should be evaluated periodically so that it is not always running, saving data center and energy costs. That is the conclusion of this session. Samvedna, back to you.
Thanks, Suneetha. The main thing we would like to highlight, as Suneetha did as well, is why to bring this flexible AI solution to cloud VM security governance. Consider a few of the well-known benefits as talking points. First, faster innovation: why keep these AI tools running all the time when they consume so much of your VMs' energy and power? You are paying for it at the end of the day. It is not always needed, so enable it whenever it is required, whenever a deployment requires it.
Also, some VMs only run experimentation or staging deployments before moving to market; for such cases too, this gives us a very cost-efficient solution. The next point is resilience. When you have this type of model, where the governance decides on a need basis when AI is used, it keeps the performance of these environments at an optimal level, because resources are fully utilized and there is no waste to be found.
The last important point is ethical oversight. Many of you may have been involved, as a security analyst or investigator, in the forensics of malicious activity on VMs. For such requirements, this flexible implementation makes audits easy, helps you meet regulatory requirements, and lets you adjust to your organization's values. Beyond security, the carbon footprint has become an important consideration that many companies are trying to address. This is one small step forward: run AI only when needed, and you can always calculate and show how much carbon footprint has been saved.
For such things, it can be helpful. Now let me cite a couple of examples where flexible AI is already showing good results. The first is IBM Watson Health. As its name suggests, it is related to the healthcare industry, and its AI subsystems adapt to new medical research and patient data. This AI is not based on some hypothetical theory; it works with historical data and adapts for the next time. It keeps learning, much like a human doctor: when we go to a doctor, they ask us for a lot of history, and only then do they assess the present situation.
In the same way, it has to learn, so it adopts flexible AI in its own way. Another platform, for example, is Azure Machine Learning, Microsoft's cloud ML platform, which supports deploying custom models trained on the data you previously supplied, rather than handing you a blank foundation model on which to run your algorithms and tools. People are seeing a lot of advantages in this flexible AI, and it is high time that, instead of reacting later, we bring such principles into security now, so that these governance models can help us in many different forums. All of us here, given our work backgrounds, will appreciate why this matters.
Not only does this meet security requirements, it also ensures we do not add extra burden, because more and more we are moving to the cloud, where a VM is a shared resource. It has to meet all the underlying principles of virtualization while still giving you a platform that is scalable, reusable, and, most importantly, secure, providing peace of mind to others and to the organization. I hope you found these details useful. We will be very happy to help, guide, or continue the discussion offline as well; we are both available on LinkedIn. And thanks to the Women in Technology forum, which gave us this opportunity to come and share our session today.
Thanks a lot, and thank you very much for your time and for joining this session.
Thank you.