CIO Panel: How to Integrate AI for Scalable Innovation by Laura Kohl
Unlocking the Future: How CIOs are Revolutionizing Business with AI Agents
Welcome to an insightful discussion on the transformative impact of Artificial Intelligence (AI) and AI agents in today's corporate landscape. As organizations worldwide grapple with the implications of AI, three distinguished CIOs share their experiences and insights on deploying generative AI technologies at scale. Patricia Grant from Tenable, Meera Rajwell from Palo Alto Networks, and Laura Cole from Morningstar Financial delve into their AI journeys, showcasing varied use cases and governance strategies.
Understanding AI Adoption: Driving Forces Behind Change
The adoption of AI technologies within organizations stems from both bottom-up enthusiasm and top-down directives. According to Meera Rajwell, the global CIO of Palo Alto Networks, a dual approach drives AI success:
- Bottom-Up Curiosity: Employees, fascinated by AI capabilities like ChatGPT, actively seek innovative use cases that enhance productivity.
- Top-Down Leadership: Boards and executives emphasize the need for AI to improve operational efficiency and address business challenges.
Meera highlights the necessity of having both senior management and technical teams on the same page to capitalize on AI's potential.
Building a Collaborative Environment for AI Development
Creating a culture of innovation is crucial for successful AI adoption. Patricia Grant emphasizes the importance of involving employees in the exploration of AI technology. At Tenable, an innovation team allows employees to test and deploy new tools quickly, fostering a sense of ownership and creativity.
Moreover, establishing a governance framework is essential to manage innovation. Laura Cole mentions the importance of decentralized governance where product teams bear responsibility for compliance and security, ensuring AI tools' efficacy without stifling innovation.
AI Governance: Striking a Balance Between Innovation and Control
Effective AI governance requires finding the right balance between fostering innovation and ensuring compliance. The panel discussed various tactics to maintain oversight:
- Creating an AI Council: Establishing a cross-functional AI council to oversee AI tools, ensuring alignment with business objectives.
- Emphasizing Visibility: Utilizing dashboards to monitor AI tool usage and effectiveness, allowing for informed decision-making.
- Defining Metrics: Focusing on productivity and efficiency metrics to measure the impact of AI adoption.
As organizations venture into uncharted territory with AI, the need for security presents another layer of complexity, with each organization's policies being unique based on their specific challenges.
Preparing for the Future: Essential Skills for Aspiring CIOs
In light of AI's rapid evolution, aspiring CIOs must focus on developing a diverse skill set. Key skills identified by Patricia and Laura include:
- AI Fluency: A thorough understanding of AI technologies and their applications is essential.
- Change Management: Effectively leading teams through transitions and overcoming resistance to AI adoption.
- Cross-Functional Collaboration: Building relationships across departments fosters a culture of shared knowledge and collective growth.
- Compliance Awareness: Understanding data privacy and compliance issues is pivotal in today's regulatory landscape.
Conclusion: Embracing Change in the Age of AI
The insights shared by the CIOs underscore a significant truth: organizations that proactively embrace AI and establish robust governance frameworks will reap substantial benefits. As AI continues to evolve, staying ahead of the curve through continuous learning and knowledgeable decision-making will be vital for thriving in this new digital era.
Stay tuned for more discussions as we navigate the exciting world of AI and its implications for the future of business!
Video Transcription
Welcome, everyone. We are very excited to have this conversation today. As we know, AI, and especially AI agents, are all the rage right now. We're all having conversations about what this means for our business, what this means for our technology, and what this means for our careers as well. And we'll cover all of this with three amazing global CIOs who are using AI and working with AI agents right now in their organizations at scale. So I would love to welcome these three amazing CIOs and have them all introduce themselves. As you introduce yourself, please share your name, your organization, and a few words on what generative AI and AI agent use cases you have already deployed in your organization. Patricia, we'll start with you.
Sure. Thank you. Thanks for having me. I'm super excited about today's panel. Hi, everyone. My name is Patricia Grant. I'm the CIO at Tenable. And it seems like there's a lot of AI things that we're doing. When we think about all the different agents: we're obviously doing Gemini, we've got GitHub Copilot, we've got Clari, Einstein, ServiceNow, Call AI, Amazon Polly, just to name a few. The list is longer, but those are some of the top ones that we've deployed so far.
Amazing. Alright. Meera, how about you?
Hi. My name is Meera Rajwell. I'm the global CIO for Palo Alto Networks. At Palo Alto Networks, the way we approach AI is in three layers. You have agents coming in as what we call embedded AI — if you're talking about ServiceNow or Salesforce, where the vendor is coming up with its own AI agents. But today they're not truly full-on agents; they work within their own premises. Then there's something like Gemini, which I call a more pervasive AI that's available for multiple use cases. We are, in fact, enabling people to write their own Gems and use NotebookLM to share their notebooks with others as well.
So there's a level of democratized AI, then there is scaled AI, which IT is taking full on. And then there's a layer of AI which we have deployed at scale as well. For example, we have built our own employee Copilot, which uses the power of LLMs and automation using Palo Alto's technologies, XO and whatnot. It automates close to 55% of my IT operations completely, and we are probably on track to achieve 75%. This is an agent that's available to every employee in the company, and we have been in production for more than nine months with that kind of an agent.
Wonderful. That'll be really interesting to dive into as well today. And Laura.
Hi, everybody. Thank you for the invitation. I'm Laura Cole. I'm chief information officer at Morningstar Financial. We are the leading provider of independent investment research across the world. We've really embraced AI and GenAI from the very beginning. We also have a lot of different platforms — ServiceNow, Workday, Salesforce — so we're leveraging a lot of AI capabilities within some of those tools, but also in products that we build for our clients: how you would gather data, do research on our own information. We do have some private clouds that leverage a lot of that. So we have both the internal side, and then we also have the client-facing investment opportunities and data. It's kind of a mix for us.
That's great. It's incredible to hear from three CIOs who are so far down the generative AI journey when a lot of other organizations are just getting started. So I think we'll start with a question around how this came about in your organization. What actually drove the adoption of this technology to such a scaled level? A lot of people are just starting with one use case. In your organizations, it seems much more holistic, much more broadly expansive. And one of the things that I've heard is that this is coming a lot from investors and from boards. And, Meera, maybe because you have to jump off a little bit early to talk to your investors, we can start with you on this question.
How have you seen the drive to adopt generative AI technology? Has it come top-down, with boards and investors asking about it, or has it been driven more bottom-up, by your teams asking for it?
Yeah. I think, Tatiana, I would say it's a two-way street. There is a curiosity and learning that is a little bit bottom-up. Even in 2022 when ChatGPT came out — of course, it was not as good as it is today — people were mesmerized by how much it could talk. So there's a level of curiosity. Immediately, you have people saying, well, I can think about this use case and that use case, at that kind of lower level. At that time, it's all about what I call the technology advocates or the early adopters — the ones jumping on that wave saying, oh, there's a great benefit out of this technology.
Not necessarily looking at what the outcome is, how far or how deep the outcome is going to be. At the same time, you also looked at it from the top down, and that's a little bit more outcome-driven: okay, if you're doing this, what is possible? There are four buckets we think about. One, is it going to accelerate? Where you need speed, AI can be fantastic. Two, where you have a high concentration of repetitive work, it can be done in a way that not only takes work away and gives productivity, but also higher quality and consistency. Three, where you have subject matter expert contention — there are certain types of skills which are very hard to get.
And those types of skills are also very specialized skills, and it's much easier — I wouldn't say it's a slam dunk — to take a specific use case and train the AI to be the specialist. That's a use case. And four, we looked at general productivity across the board. In our case, we jokingly say the CEO is the AI officer for the company. We actually had kind of a top-down push, but then we really quickly picked up on it. Palo Alto being a cybersecurity company, we put AI into three buckets. The first bucket: hey, adversaries are going to use AI. How is Palo Alto going to ensure our customers have products that can protect their AI?
So we started our quote-unquote innovation in our product line to protect AI. Then we said, you know, we have to combat AI with AI. That means our products have to use AI effectively, so that when there's an attacker launching an AI-based attack, we have a way to respond to that in our product. So that's one piece of the pillar. And then the next piece is how we use AI beyond security. We started looking at it, and we picked, I would say, four major use cases, and every one of them had what we call a price attached to it. The price could be that you could do it in a shorter period of time compared to before, or that sizable money is going to come out of it. So in that, we looked at IT operations as one — IT, finance, and HR, all three.
So myself, the CFO, and the CHRO sat down. We said, hey, the company generates about 400,000 tickets, and we want AI handling 90% of them. How are we going to go about it? Very quickly, we realized GenAI alone can't do all of it, because a lot of this is not just about information generation or summarization. It requires some kind of action. So you have to pair AI with some level of automation; you have to pair AI with other workflows. That's where agentic comes in — we started our poor man's version of agentic almost a year and a half back. And we launched our agents about nine months back, and it's used by everybody in the company.
We don't have any more phone lines, chats, or ServiceNow portals or anything. The agent is the only place people can come and ask for help. The day you walk in and get a job, you'll be told Panda is the name of the agent. We say Panda is your companion: ask anything. If it can't help, we have people in the back that can come and serve. We also introduced a similar concept for our sales organization for very specific sales use cases. And then we also use it very aggressively in our vibe coding as well. And we host our own models, because we are very particular about our IP.
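The pattern Meera describes — pairing an LLM with automation so routine tickets are handled end-to-end and the rest escalate to humans — can be sketched roughly like this. The intent names, workflows, and the keyword matcher standing in for an LLM classifier are all illustrative, not Palo Alto Networks' actual system:

```python
# Sketch of an agentic triage loop: classify the ticket, run a known
# automated workflow if one exists, otherwise escalate to a human.
# Categories and workflows here are hypothetical examples.

AUTOMATED_WORKFLOWS = {
    "password_reset": lambda t: f"reset link sent to {t['user']}",
    "vpn_access":     lambda t: f"VPN profile provisioned for {t['user']}",
}

def classify(ticket):
    """Stand-in for an LLM intent classifier: simple keyword matching."""
    text = ticket["text"].lower()
    if "password" in text:
        return "password_reset"
    if "vpn" in text:
        return "vpn_access"
    return "other"

def handle(ticket):
    """Automate when a workflow matches; otherwise route to a human."""
    workflow = AUTOMATED_WORKFLOWS.get(classify(ticket))
    if workflow:
        return ("automated", workflow(ticket))
    return ("escalated", "routed to human support queue")

print(handle({"user": "alice", "text": "I forgot my password"}))
# ('automated', 'reset link sent to alice')
```

The point of the sketch is the split Meera draws: generation and summarization alone don't close a ticket; the action side needs an automation layer paired with the model.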
Great. Thank you. And, Patricia and Laura, have either of you seen a strong bottom-up drive that has actually changed the organization's priorities around generative AI? I think one of you has talked about the bottom-up enthusiasm. Can you talk a little bit more about that?
Sure. You know, most definitely, the bottom-up. I think anyone who's technical in today's world — the energy is there, the passion's there. They actually want to get out there, play, and tinker, because all the employees out there today want to be more productive. They want to feel that sense of accomplishment in getting things done. And I think you have to create that environment. We have an innovation team in which we allow them to bring new tools in, try them out quickly, get them deployed, and get feedback, so the employees can respond and figure out: is this something we're going to keep? Or no, it didn't do what we thought it was going to do — okay, let's move on to the next. So, definitely, you need to encourage that environment inside of your organizations today to help drive that innovation and push it up.
And then, also, as you're pushing that up, you're educating senior management on what is possible. It's just a really fun time to be in IT. Because as the organizations are pushing these up, the other parts of the organization that hear about what was done start thinking, well, what if we did this? Can we do that as well? And so it really starts sparking the creativity in every person inside of the company about what is possible.
Well, a little follow-up question there. If you have all these teams and all these people tinkering and creating their own things, how do you, like, corral that? How do you even oversee what's happening? And then how do you make smart build versus buy decisions? Because some of the things that people will wanna build, you're actually probably better off buying. So how how do you corral all of that? And then how do you make the next step toward build versus buy decisions?
Yeah. I can definitely start on that one. Again, we separate it into two buckets. We have the products that we sell to our clients — we're very decentralized from that perspective. So what we've done is establish some governance, guardrails, and training. But for the most part, the business teams are responsible for ensuring that they are compliant with all that. We don't want to stifle any new work. We want to make sure that we can keep going and driving new capabilities within our platforms and tools. But, again, we want to make sure they're not putting us as an organization at risk. So we do have some of those guidelines put in place. And, honestly, with AI — or GenAI — more at the forefront now, I'd say we develop more of our own code for product-facing work, while leveraging some of the capabilities of third-party tools and using AI there.
Who's to say in the next few years what that looks like? If you can start building things faster, do you need all of these systems together? I don't know. But I think it's a balance of the governance — checking and making sure that, from a security side, we're being safe.
Yeah. I would actually add one other thing to that, Tatiana, because Patricia really said it very well. You have to have the enthusiasm. So I think for IT, it's all about visibility at this point in time. It's almost impossible to keep up — even with just guardrails, it's almost impossible to keep up. There's so much investment from VCs and everybody; every day there's a new company springing up saying they're the next unicorn bringing something. Like Laura and Patricia mentioned, we don't want to stifle people. So we were one of the very early companies telling people you can use ChatGPT. But we started watching.
In fact, in Palo Alto, that gave us the use case to develop our product, AI Access Security, which means you can see every AI usage happening in the company. I actually have policies on ChatGPT telling people you can use it, but if somebody tries to cut and paste code into that window, we stop it. If somebody tries to upload a document that has the PII classifications we've defined, it will stop it. But I don't stop them from using it. There are certain models where we're seeing high vulnerabilities; we don't allow people to use those models. Initially, we did issue a statement telling people, don't do this, these are the good things to do. But it's becoming impossible unless you systematically enforce this.
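The policy Meera describes — allow the tool, but block code or classified PII from being pasted into it — is essentially a data-loss-prevention check on outbound text. A minimal sketch of that idea follows; the patterns are illustrative placeholders, and a real product like the one she mentions inspects traffic inline with far richer classifiers:

```python
import re

# Minimal DLP-style gate on text headed to an external AI tool.
# Patterns below are toy examples, not a production rule set.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped number
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card-like number
]
CODE_HINTS = [
    re.compile(r"\bdef \w+\(|\bclass \w+[:(]|#include\s*<"),  # source-code markers
]

def allow_paste(text):
    """Return False if the text looks like PII or source code."""
    for pat in PII_PATTERNS + CODE_HINTS:
        if pat.search(text):
            return False
    return True

print(allow_paste("summarize this meeting for me"))  # True
print(allow_paste("my SSN is 123-45-6789"))          # False
```

The design choice mirrors the candy analogy that follows: the check lets ordinary use through and only blocks the specific risky actions, rather than banning the tool outright.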
It's like telling the kids when you're telling employees, don't eat candies, but I'm going to leave the candies all over the place at home.
Right.
Right. It's going to happen.
So does that mean that you actually purchased a ChatGPT license for every employee? Or how did you get that visibility?
There is a free version. Anyone can use it.
Yeah. Right. Okay.
We use Gemini as the enterprise version; ChatGPT Enterprise is not something we have. But we have people who just go and check on ChatGPT things they want to do, because some people prefer it, and ChatGPT does certain things better than Gemini; Gemini does certain things better than ChatGPT. They're all getting their groove. For example, Claude is getting very good at software. It's all like a rat race — at any point in time, you don't know who's going to be the leader. One month somebody is leading, the next month the next person is leading. It's evolving too fast.
Yep. It is. Tatiana, I wanted to comment on something I think you were maybe alluding to in that question: the enthusiasm of AI coming inside of companies most definitely has to be contained. Any intake process that your company has is now going to be tested. That production assembly line is going to get tested quite quickly, because the teams do want to innovate fast. And now you have to introduce new guardrails with security, with infosec, with legal, with compliance, to make sure all those things are in there. So if the processes that companies have today are not flowing smoothly, they're going to be in trouble. It's kind of like the Lucille Ball production line putting the chocolate in.
It's coming faster and faster and faster. And so the some of the enthusiasm will test companies' internal processes quite quickly, and you need to get in front of those.
Yeah. Awesome. So let's dive in. All three of you have mentioned governance now: governance of AI, putting in guardrails, understanding what people are doing, having visibility over how employees are both accessing off-the-shelf models and potentially building their own AI agents. People across the organization, even in different functions, have technical capabilities. They might be building their own AI agents, creating their own workflows, connecting to company databases. How do we put in governance frameworks to really manage and control this? Is it important to control it? Meera, you were talking about having this visibility, so maybe we'll start with you. Tactically, can you take us through what conversations are being had around visibility and governance? Is there a center of excellence being developed?
Like, how are those things actually happening? And, as IT professionals or technology professionals thinking about making an impact in our own companies — what are the things we should be thinking about that our CIOs might be really concerned about? And how do we become the best partners for our CIOs as we help drive
the adoption of AI technology? Yeah. So one thing — we made a conscious decision, Tatiana, not to build a parallel arm called AI governance. We felt there's an investment governance already in the company: how we make investments, how we make decisions from experimentation to proof of concept to production to running the business. There is a governance. And the same when it comes to security and privacy — we also have a governance in place. Our reason for not making AI governance a separate arm was that sometimes, when you make a separate thing, some of the steps disconnect when you try to replicate. So what we did was bring these teams in and say: you are now responsible. AI is a new technology, so you cannot just apply the same thing.
Figure out what we need to update in our current governance around AI. That's why — I still remember, December 2022 — we issued our quote-unquote educational AI policy, which is nothing but a document, because it was very early. ChatGPT had been released, and we found people immediately jumping on it.
And we needed to tell people: with AI, once you throw something in, you can't rein it back. It goes in and it's going to sit there forever, especially when it's learning and when they have disclosures saying that if you do something, they take the rights to learn from it. Then you can't unlearn it. Even OpenAI doesn't know how to unlearn it, or Gemini, because it's so complicated. So we had to educate people very quickly: hey, you're allowed to go and check it out, but don't put in company confidential documents, don't put in this — it just gave a set of guidance. The thing we realized as a security provider is that's an opportunity for us to go and fill the gap in the market, because we are not the only people.
Everybody is going to have this problem. So we released our product, and we became the customer of our own product: having complete visibility of AI usage in the company and defining security policies based on either categories of AI applications, a specific AI application, or even a specific set of users you don't want doing certain activities. That's the next thing we introduced. And I think, as Patricia mentioned, when it comes to AI there are so many things — you need to move through the decisions quickly. So in my organization, I specifically brought in a focused AI person to move through this chain, because I already had the architecture team, which moves through my innovation chain pretty quickly, but it was not fast enough. Things are moving much faster. We needed to get to a place of what we allow and what we don't allow. So we improved that process as well.
I would say that's a place we are continuing to pay more attention to. But our specific decision was not to go and build a parallel structure, but to incorporate AI as a vector — recognizing it is different — into the current processes, and to enhance those processes.
How about you, Patricia? How are you thinking about AI agent governance — having visibility on what different functional teams are doing, what AI agents they're buying, what agents they're building? Do you have a central dashboard?
So we do have an AI council. When this came out, the intake and the demand coming into the company was quite high. Once you go in, take a look at those, and start prioritizing and looking at what's doing what — we actually created, I love heat maps, don't ask me why, but I like heat maps — a capability heat map for what those AI tools do, how many are doing it, whether it's chat, the different types of use cases that you would have out there. And we started categorizing all of the tools we have based on the capability they provide. And by having that cross-functional AI governance committee — it's the same problem we have in IT with every business unit wanting to buy different applications coming in the door all the time.
It's like, no, we already have five of those. You don't need that. How do we start standardizing? So you have the same app rationalization that CIOs are all familiar with. The same thing is going to happen with AI. So you want to make sure you're getting visibility, understanding the capabilities of all the tools, and then trying to find the right fit for the company and standardizing on it, because that AI sprawl is happening in many companies out there today. And so: having that centralized dashboard, taking a look at it based on capability, and — same as application lifecycle management — measuring what the value of that AI tool is, what the productivity is, what the usage is, what the adoption is. We rolled out Google Gemini a few weeks ago, and I just got a Slack this morning from one of my teammates that said, hey, this is the current adoption. So we rolled it out. We trained.
Because a lot of it is — for us in IT, we understand this prompting, but we have to do that education as well. So: having that cross-functional group, sharing the capabilities of those tools, and then making sure you're finding the metrics that measure the value of all these tools to
make sure you're making the right investments.
So you said metrics. What are those metrics? What kind of metrics do you have on that dashboard? What are you interested in?
So we take a look at productivity specifically. I ask my teams to capture — just give me the hours. In a former life I did a lot of value management work. So you go through and take a look at what that productivity savings is — and trust me, it's a hard muscle; most people don't really dive deep into value management. Base it on the number of hours that you've automated, then take a standard labor rate. Because if you don't, you're going to have everybody calculating based on a different labor rate.
And so we use that to show the productivity. It's soft savings, but it's basically headcount avoidance, those types of things. Also, we look at the adoption. Like I mentioned: are they using it? Are they not using it? Maybe you didn't train them enough on it and they backed off. And then an interesting one is, in your worlds, if you can take a look at that run-versus-grow metric: how much time is your FTE spending on running the business versus growing the business? And, Meera, you talked about pushing a lot of the tickets to the agents — I think you said 90% or something. Getting it up to that point and being able to measure where your FTEs — the assets that are costing you the most inside of the company — are spending their time.
Are they in the day-to-day drudgery, playing whack-a-mole? Or are they actually doing up-leveled, more high-skilled, and high-valued work?
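Patricia's two metrics — productivity savings as hours automated times one standard labor rate, and the run-versus-grow split of FTE time — are simple enough to sketch. The rate and hours below are made-up numbers for illustration, not figures from Tenable:

```python
# Sketch of the metrics Patricia describes. Using one standard labor
# rate keeps every team's savings calculation comparable.
STANDARD_LABOR_RATE = 75.0  # dollars per hour; an assumed blended rate

def productivity_savings(hours_automated, rate=STANDARD_LABOR_RATE):
    """Soft savings: hours automated x standard labor rate."""
    return hours_automated * rate

def run_vs_grow(run_hours, grow_hours):
    """Share of FTE time spent running vs growing the business."""
    total = run_hours + grow_hours
    return run_hours / total, grow_hours / total

print(productivity_savings(1200))  # 90000.0
print(run_vs_grow(600, 400))       # (0.6, 0.4)
```

As she notes, these are soft savings (headcount avoidance), so the value of the sketch is consistency: one rate, one formula, across every team's dashboard entry.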
Yeah. What other metrics are you looking at, Meera and Laura, in terms of determining value?
One other thing is velocity as well. I think Patricia covered the productivity metrics very nicely. For me, for example — we've had vibe coding for a while now. So one of the things I started measuring is what the story point velocity is going to be. If AI is assisting my engineers, what's the velocity? We had some hits. Adoption was actually a very easy metric, but velocity was getting a little tricky — especially in the brownfield, it's very different than greenfield. However, I'm right now running another pilot where we literally stepped back and are relooking at our entire software development life cycle: instead of a PRD coming in and the product managers sitting down and understanding how to solution it, etcetera.
We are turning it a little upside down, saying: okay, let's have chat sessions that get recorded and fed to an LLM — or our own grounded and trained version — that takes the PRD and this training and spits out all the user stories. Then the technical engineer gives guidance around data flows and system flows, and it takes that and spits out the testing. We are probably going to have a real velocity document, because AI generates more detailed documents than a human, and there's consistency. And I'm taking that, and when I'm using it for vibe coding, people can use it for prompting very cleanly instead of a human typing the prompt. So we are seeing significant — I mean, I'm in the early stages; I don't want to celebrate success yet.
For me, at least — this is the quarter we're really going to have, I think, seven programs running through that, and then we'll push it hard for everything that happens in IT, or even everything that happens in the company.
We're just taking it. So for me, it absolutely is velocity, because in any company I've been in — and I can tell you as a CIO — every time, IT is the bottleneck. We don't have enough resources; we have a backlog. If I can accelerate my velocity, that's more valuable to me than just a few more resources, I would say.
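The story-point velocity metric Meera describes can be sketched as a before/after comparison of average points completed per sprint. The sprint numbers below are invented for illustration, not Palo Alto Networks data:

```python
# Sketch of a velocity comparison: average story points per sprint
# before and after AI-assisted development. Figures are illustrative.
def velocity(points_per_sprint):
    return sum(points_per_sprint) / len(points_per_sprint)

before = [34, 31, 36, 33]  # sprints before AI assist (made up)
after  = [41, 45, 43, 47]  # sprints after AI assist (made up)

lift = velocity(after) / velocity(before) - 1
print(f"velocity lift: {lift:.0%}")
```

As Meera cautions, this gets tricky in practice — brownfield work behaves differently from greenfield, so like-for-like sprints matter more than the arithmetic.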
Yep. Absolutely. And maybe, transitioning from this metrics conversation — Laura, maybe you can pick up on metrics as well — we've talked a lot about cross-functional teams and the need for the CIO to also be working cross-functionally in order to have this visibility, in order to really drive both AI agent adoption and generative AI adoption, but also the effectiveness of this technology across the organization.
And I'd like to ask all of you, but maybe start with Laura: do you have a team that's really responsible for this? Do you have a center of excellence or a group of people who get together? Or is it really you, or someone in your department, driving a lot of these issues around governance and adoption? What are the metrics that everybody needs to be thinking about as they're using this technology? How is that actually coming together in your organization?
Yeah. Sure. I can start on that. As I mentioned, we're pretty decentralized. We started our whole process by getting a few people together and understanding the risk, compliance, and security side. Now that we've established that, we pretty much rely on our product teams, with the guidance — and, of course, ensuring that we've got the inventory and we check what people are doing to make sure everything's fine. That governing body we've established is probably not going away, but I think more of the onus goes onto the development teams. Our view is that it's everybody's responsibility, so you get the right training, you make sure that any policies get updated — and, being in a global organization, as I'm sure some of you are, you've got to make sure you're capturing any new policy changes.
But really, we're decentralized in that process. And how we measure depends as well. For my team, it's more of a centralized approach, and as Meera said, we actually do story points in certain groups to measure. As we do our business reviews from a centralized perspective, we're looking at efficiency metrics and scalability — again, how do we show the productivity? That's really interesting in a decentralized model, because sometimes it's counterintuitive. As we adopt new tools like Copilot or others, we have to look at what they're adding, whether it's efficiencies, which are sometimes hard to measure, or productivity: how much faster we can deploy code, how much faster we can do things.
Some of that we actually have to show back, because, just as Ben mentioned, with a central team you always get a bit of "okay, everything is expensive" — so you've got to show back some of that great work from those central functions.
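The show-back idea Laura describes — surfacing central AI tooling costs back to the teams that consume them — can be pictured as simple proportional allocation. The sketch below is illustrative only: the bill amount, team names, and seat counts are hypothetical, not figures from any panelist's organization.

```python
# Hypothetical show-back: allocate a central AI tooling bill to teams
# in proportion to their licensed seats. All figures are made up.

MONTHLY_BILL = 12_000.00  # central Copilot-style subscription cost (assumed)

seats_by_team = {
    "engineering": 180,
    "finance": 40,
    "legal": 20,
}

total_seats = sum(seats_by_team.values())

# Each team's share of the bill, proportional to its seat count
showback = {
    team: round(MONTHLY_BILL * seats / total_seats, 2)
    for team, seats in seats_by_team.items()
}

for team, charge in sorted(showback.items()):
    print(f"{team:>12}: ${charge:,.2f}")
```

In practice the allocation key might be API calls or active users rather than seats, but the principle — making the central spend visible per consuming team — is the same.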
And, Patricia, how about you?
Yeah. So for us it's the same as the council we have — I would call it the same thing. The council is also the center of excellence. We meet on a regular cadence. One of our learnings is that there was so much excitement that we're now breaking the meeting apart, because everybody wanted to be in it — there was a FOMO factor going on. Everybody thought they were missing out if they weren't part of the center of excellence. So we're going to split it out, with a core team of decision makers at the higher executive level.
That core team asks: what tools do we want, what problems are we trying to solve? And we separate that from the center of excellence, where we do training and education. Other groups actually come in and showcase how they're using the tools in their environments — and I will tell you, they are super cool. I love watching and seeing what the other teams are doing inside their organizations with these tools. So I do think that center of excellence is really key, and we're all learning this together. These groups are evolving on a quite regular basis as to what's next. So we had one group; it's now two groups.
Interesting. I also want to bring in a comment that someone posted here — Vidya — around how AI products are progressing from prototype to production, and tie it into this conversation about cross-functional teams and governance. Because what I've heard is that sometimes that center of excellence has legal and compliance members on it, and when those folks are closely aligned and talking to the technologists and product people, that can either speed up or, in some cases, slow down the deployment of AI products. Do any of you have stories around that? Yeah, Meera.
Yeah. I'm just going to make a statement and then, unfortunately, drop off. I think it's use-case based. You don't want to put up too much red tape from the get-go. To me, you keep your privacy and legal teams involved up front — in our case, with any new technology we're adopting, even when we run a test, we make sure they're engaged from the start. If the data isn't transferring and the tool isn't connecting to our network, they can do their work in parallel while we're testing. But if it's going to be a longer-term connection, then they need to be involved earlier. So a bit of this, to me, is education, because the legislation isn't there yet, and legal is worried about what's going to come out of it. And when you talk about AI, every country and every nation-state has its own protections coming, for lack of a better term. Right?
So I think there's a real reason why they're concerned, but you need to go on a use-case basis. If you're doing something that doesn't involve super sensitive data, go quickly. If it is super sensitive data, don't — and that's okay. Because once you throw the data in, it's like pouring water into a lake and then saying you want to retrieve it back. You can't retrieve it. It's already mixed.
Right. And, Patricia — thank you, Meera.
Thank you. It's been great — thank you for having me.
Yes. Thank you. And, Patricia, I saw that you had a bit of a response there as well. In this group that you're part of, how are those relationships with legal and compliance?
I'll give you the opposite perspective from Meera, who unfortunately had to drop. What I would say is that if you try to go fast in the beginning, everything still has to go through contract review — hopefully it's going through contract review. So you're either going to get slowed down in the beginning and learn along the way how to speed things up, or you're always going to get backlogged at the end without knowing why. And I'm more of a fan of: let's understand legal. What are you worried about? What are you concerned about? When do you need to know it? When I talk about that intake process, you've got to figure out how to maximize that throughput quickly.
And if you know in the beginning what legal is looking for in the contract, what compliance is looking for, what security is looking for — if you invest the time, which might seem like it's slowing things down — I'd rather invest that time up front knowing how we all protect ourselves. Because the last thing you want is to get to the point of "let's just buy this, we need to go," and somebody comes back and says, "Time out. No. We're not doing that." As everybody learns together what's important and what matters to the company, that process gets streamlined over time and keeps moving faster. But my goal, working for cybersecurity companies, is that we need to protect the company at all times. So speed is great, but I also have to put that lens on it as well.
Yeah. Absolutely. And, Laura, I'm going to dive a little deeper into this question. There have been a lot of comments about security and security risks, and obviously, as CIOs, we know one of the greatest threat vectors is employees — data leaks through employees. One of the things we've been hearing — I think Jensen Huang actually said this a few months ago — is that IT may become the next HR, because there needs to be an HR for AI agents. Not just because of the security risks involved, but because of how hard agents are to manage and govern, and because they're not deterministic software. This isn't the type of software we grew up with, where you write it once and it runs the same way every time. It's pretty unreliable technology sometimes, though you can do a lot to make it more reliable. We talked a lot about governance.
We talked a lot about controls. But when we think about this HR department for AI agents, do you think that's right? Should HR be leading this move to really understand how AI agents are performing? Should it be IT? A combination? Or should it be the line of business? How are CIOs thinking about this complicated terrain, where the line of business is ultimately responsible for the outcomes of the AI agents?
Mhmm.
They kind of have to be managed like humans, and yet they're technology. How are you thinking about that, Laura?
Yeah. For sure. It is kind of interesting. I don't know that it's really HR. The way we've handled it is: we have our policies, we have the training. Again, we're decentralized, so it's the responsibility of the product teams. We do have the security tools to ensure data isn't being lost — there's prevention there. But in a situation where somebody did something blatant, or maybe it was an accident, we still have to have the policies to ensure that, hopefully, data doesn't get lost or breached — and you still have to have that accountability, I guess I would say.
So does it fall into HR's hands if the training was taken and something still happened? It's the individual's responsibility to know; it's also our responsibility to train. So it's a mix. I don't know that HR necessarily owns it, but the way we work, it's just like anything else: if something were to happen, you pull HR into the conversation. I don't know that it's necessarily my role alone to govern and enforce. I think that's a group decision.
Patricia, any thoughts? An HR for AI agents — is that IT? Is that HR?
I love this question. I'm going to give a futuristic, digital flair to my response. I'm calling these digital agents — these digital agents we're deploying inside our company, whether it's an RPA bot, GenAI, or whatever we have out there, they're the same thing. Put them in the world of HR — and I don't know what Jensen was thinking; he never asked me directly, so maybe someday he will. Think about the life cycle of an employee: I get hired, I do my job for a period of time, I get performance reviews, I'm either performing or not performing, and at some point I retire from that role and move on to something else.
Now put your digital hats on and apply the same thing to these agents. Within IT, we may build and deploy — so think of IT as the HR recruiter, so to speak. Bear with me. IT is actually "hiring," or building, those bots, then deploying those virtual agents and assistants out into the company's other departments — the same way IT hires us, but we go work in IT, HR, finance, or legal. And once those bots are out there — we talked a little bit ago about how we manage them and how they perform — IT might be the builder, but once we push them out, we need to make sure we understand: are they performing? Are they doing what we thought they were going to do? Are they productive?
Did we completely eliminate an application, something new came in and replaced it, and now we need to retire that bot? The thing in my mind — and this is just in Patricia's mind — is that I almost see something like a CMDB for tracking these digital assets, these digital FTEs, inside the company. Because if we all win the lottery and leave, these things are still running in the background. People can relate to this with contractors: if somebody is still working inside the company and nobody even knows they're there, bad things can happen. So maybe that's the concept Jensen was talking about. I look at it the same way we own the application repository sitting inside our CMDB: tracking the AI technology that's deployed and running on our behalf, rather than sitting back and saying, hey —
look at that, it's just happening on its own. I think we do need to monitor and manage, because those bots aren't free — UiPath, for example, charges per bot — so you want to make sure each one is actually providing value for you. I do see the future in digital FTEs, and I've actually taken my org chart — I'm probably driving my team crazy with this new mindset — and I'm now capturing inside my organization how many bots we've built and deployed, and where the virtual agents are running. Because guess what? Each one of those is going to be productive.
They're going to be doing work on behalf of an FTE, a person, and I can capture and understand that. So I believe that in the future I'm creating a blended org chart, with my digital FTEs sitting alongside my human FTEs. I think we'll have a blended environment.
Fabulous. And we are at time. One last lightning-round question, if I may, for the two of you: what skills might people need who aren't at your level yet but really want to be in the CIO seat a few years from now, keeping this AI trend in mind? What skills should they be building right now that they maybe haven't built yet? Laura?
Yeah. I think just the thirst for learning. Things are constantly changing at such a rapid pace — you have to keep learning, to really know which problems to go after, the right problems to solve and measure. And I do think communication skills are key: collaborating constantly with your peers and stakeholders and being part of those conversations so you can bring tech to the forefront.
Awesome. And Patricia?
Yeah. In addition to what she said, I would say AI fluency — there's so much content out there. And again, as she reiterated, we need to keep learning. So by all means, keep learning, keep doing, keep growing. But also, if you aspire to be a future CIO, it's no different than today: you have to be able to manage teams cross-functionally, and you need to be able to influence others, so think about those. And also change management. Just as when we talked about digital transformation fifteen years ago, in this world of AI we need change management to help people who are worried: is this going to take my job away? What's going to happen to me?
And then also spend the time understanding — as I think Meera was saying — all the compliance and data-privacy requirements that are coming. So take the time, invest in your skills, and understand data compliance and those types of things. I think those are really going to be key things we need in our tool belts as the CIOs of the future.
Thank you. Excellent. Well, thank you all so much. This has been incredible. We're getting some great feedback from the audience. For