Scale Enterprise AI with Trust by Aravinda Gollapudi

Aravinda Gollapudi
Head of Platform and SVP of Technology, Sage Intacct

Automatic Summary

Scaling Enterprise AI with Trust: A Comprehensive Guide

In the rapidly evolving landscape of artificial intelligence (AI), the importance of trust cannot be overstated, especially for enterprise software. Today, we delve into the core principles of scaling enterprise AI while maintaining trust. My experience, spanning over 26 years in technology, particularly at Sage Group, has provided me with insights that I'm eager to share.

Why Trust in AI Matters

Many of you may already be familiar with large language models (LLMs) and their potential. However, issues such as misinformation, hallucinations, bias, and lack of transparency demonstrate the challenges that come with AI deployment. These issues can lead to significant repercussions for businesses, as seen in Google's loss of approximately $100 billion in market cap due to a single LLM error in February 2023.

As engineers, we strive to provide effective solutions, but LLMs can sometimes offer likely answers instead of correct ones, creating challenges that organizations must navigate.

The New Wave of Agentic AI

  • Agentic AI: This refers to the ability of AI agents to perform tasks alongside humans.
  • Challenges: The powerful nature of agentic AI comes with vulnerabilities such as prompt injections, data breaches, and malicious use.
  • Adoption: Many enterprises are embracing AI, seeing its application in everything from customer service chatbots to self-driving cars.
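These vulnerability classes, prompt injection in particular, can be illustrated with a first-pass input screen placed in front of an agent. The patterns and function below are hypothetical, and pattern matching alone is not a sufficient defense; the sketch only shows where such a check sits in a pipeline:

```python
import re

# Hypothetical first-pass screen for obvious prompt-injection attempts
# before user text reaches an agent. Real defenses layer many controls;
# this only illustrates the checkpoint.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"you are now",
]

def screen_input(user_text: str) -> bool:
    """Return True if the text looks safe enough to forward to the agent."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

ok = screen_input("Summarize last month's invoices")            # passes
flagged = screen_input("Ignore previous instructions and ...")  # blocked
```

A screen like this would sit alongside authorization checks and tool-use restrictions, not replace them.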

Building Trust in Enterprise AI

To ensure that enterprises can effectively navigate the challenges posed by AI, it's vital to instill trust in their solutions. The following are key considerations for fostering trust:

The Four Pillars of Trust

Enterprise AI should be built on four key pillars:

  1. Transparency: Ensure clarity about how AI models arrive at decisions.
  2. Robustness: Develop models with security in mind, anticipating possible attack vectors early on.
  3. Fairness: Address inherent biases and ensure diverse stakeholder input is integral to the AI development process.
  4. Compliance: Adhere to regulations, maintaining robust data management and security practices.
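The transparency pillar can be made concrete with a decision audit trail: every model call writes an entry a customer or auditor can inspect. The sketch below is illustrative; the `DecisionRecord` fields and model name are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry describing how an AI decision was reached."""
    model_id: str       # which model (and version) produced the output
    input_summary: str  # redacted summary of the prompt/features used
    output: str         # the decision or recommendation returned
    confidence: float   # model-reported confidence, if available
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(log: list, record: DecisionRecord) -> dict:
    """Append the record to an audit log and return it as a plain dict."""
    entry = asdict(record)
    log.append(entry)
    return entry

# Usage: log a hypothetical invoice-classification decision.
audit_log = []
record_decision(audit_log, DecisionRecord(
    model_id="invoice-classifier-v3",
    input_summary="invoice #1042, vendor + line items",
    output="category=utilities",
    confidence=0.92,
))
```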

The Role of Governance in AI

A strong governance framework is critical for overseeing AI investments. Here are some strategies to consider:

  • Establish a Central Governance Team: A dedicated team can provide guidelines, monitor AI activities, and ensure compliance.
  • Focus on Security: Integrate security considerations throughout the AI development lifecycle.
  • Implement Documentation Processes: Maintain thorough records to support audits and compliance requirements.
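A central governance team's guidelines can also be enforced mechanically, for example with an approved-model check at the point of use. The model names and policy fields below are hypothetical; this is a sketch of the idea, not a complete policy engine:

```python
# Hypothetical registry maintained by the central governance team.
APPROVED_MODELS = {
    "gpt-4o": {"data_residency": "us", "pii_allowed": False},
    "claude-sonnet": {"data_residency": "eu", "pii_allowed": False},
}

def check_model_use(model_name: str, sends_pii: bool) -> tuple:
    """Return (allowed, reason) for a proposed model call."""
    policy = APPROVED_MODELS.get(model_name)
    if policy is None:
        return False, f"{model_name} is not on the approved list (possible shadow AI)"
    if sends_pii and not policy["pii_allowed"]:
        return False, f"{model_name} is not approved for PII"
    return True, "ok"

allowed, reason = check_model_use("gpt-4o", sends_pii=False)
blocked, why = check_model_use("homegrown-llm", sends_pii=False)
```

Routing every model call through a check like this both blocks unapproved ("shadow AI") usage and produces the paper trail auditors ask for.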

Measuring Trust in AI

It's essential to measure trust to validate your investments in AI:

  • Feedback Mechanisms: Collaborate with your UX team to gather user feedback.
  • Surveys and Sentiment Analysis: Conduct surveys to assess user sentiment regarding trust.
  • Performance Dashboards: Use scorecards and dashboards to track AI performance regularly and drive ongoing improvements.
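One simple way to turn survey results into a trackable number, assuming a hypothetical 1-to-5 Likert question such as "Do you trust the AI's suggestions?":

```python
def trust_score(responses: list, scale_max: int = 5) -> float:
    """Average Likert responses (1..scale_max) normalized to 0-100."""
    if not responses:
        raise ValueError("no survey responses")
    avg = sum(responses) / len(responses)
    return round((avg - 1) / (scale_max - 1) * 100, 1)

# Usage: one quarter's responses to the trust question.
q1 = trust_score([5, 4, 4, 3, 5])  # normalized score out of 100
```

Tracking the same normalized score quarter over quarter makes the trend, not the absolute number, the signal to watch.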

Implementing AI Effectively

For successful deployment of AI solutions, enterprises should consider:

  1. Crawl, Walk, Run Approach: Start small with test cases, gradually expanding AI capabilities.
  2. Engage Users: Ensure customers and internal teams are on board with the journey toward AI adoption.
  3. Continuous Monitoring: Regularly check model performance, keeping an eye on model drift and making adjustments as necessary.
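Model drift in step 3 can be watched with a standard statistic such as the Population Stability Index (PSI), computed over binned input distributions. The bin values below are illustrative:

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned distributions
    (each a list of bin proportions summing to ~1). PSI > 0.2 is a
    common rule of thumb for significant drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) for empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # input distribution at launch
today = [0.10, 0.20, 0.30, 0.40]     # distribution observed this week
drift = psi(baseline, today)
needs_review = drift > 0.2
```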

Final Thoughts

Trust in AI is not just a technical necessity but a cultural imperative within an organization. It requires commitment from leadership and a focus on ethical practices.

As you contemplate scaling your AI initiatives, embed trust into your organizational framework. By positioning trust as a strategic differentiator, you can enhance your value proposition in a competitive market.

In the fast-paced realm of AI, establishing a reputable, trustworthy AI offering can give your enterprise a durable competitive edge.


Video Transcription

So my topic today to share with you all is about scaling enterprise AI with trust. I'm very passionate about this topic, and hopefully this will be a useful session for you all. With that, who am I? I know Shelly gave you an introduction for me, but what she didn't cover is that I have always been a technologist at heart. I started my career as an engineer, twenty-six-plus years in the industry, and I am at a company called Sage Group, where I lead technology and am the business unit leader for Platform. So with that, I do have a wealth of experience to bring to you today, across the multiple companies that Shelly already listed, going into specifics around travel, personal finance, accounting, payments, mortgage, quite a few areas where I have been around in this world of the software industry.

So with that, I just want to dive right into the topic, but first let's ground ourselves. Why are we talking about this topic? All of you are probably familiar with large language models and AI and what's happening. But you also know that LLMs themselves can struggle. Things like misinformation, hallucination, outdated information, bias, and a lack of transparency. So hopefully you are following the news on what's going on in this world. And in addition to that, they make factual errors, which is essentially a challenge. Because as engineers, we know how to solve problems. You probably heard the prior speaker talk about being a solution provider as an engineer.

But LLMs are fantastic in the sense that they do help you with answering the most likely answer, but not necessarily the correct answer. And you probably also know the repercussions. If you have been following the news, you probably heard about Google's loss of close to $100 billion in market cap in February 2023, all because of just one error made by an LLM. And then there's the challenge of bias. LLMs are trained by people, ultimately. Right? And so they do reflect the biases that exist. So, as the prior speaker already discussed when talking about the importance of AI, it's important for us to recognize that those biases also show up in the models being produced.

And now we're in the new world of agentic AI. This is the new wave. Right? It's the new wave of figuring out how agents can perform tasks side by side along with humans. But it's coming with a ton of challenges. These agents are extremely powerful; however, they come with a lot of vulnerabilities. So if you are already in this field of AI and agentic systems, you have probably already faced these challenges. Things like prompt injections. You are probably already using ChatGPT across the board, everywhere, or Gemini or Claude. But when you take the next step of asking an agent to do a job for you, even if you haven't faced prompt injections yourself yet, you have probably heard about them.

Things like authorization and control hijacking, data breaches, dynamic execution of things you didn't intend, memory manipulation (because these AI toolsets have a ton of memory behind the scenes), and unintended consequences due to unforeseen interactions.

Malicious use, which could come from a flow that was not meant to be, or from an actor with malicious intent. Sometimes unintended actions. Tools that are used can be manipulated, and the whole attack surface can expand further. Now, I'm not going to go into each one of them, but essentially this wave of agentic AI has broken Moore's Law. Companies are adopting it across the board. Users, all of us, are trying to adopt it. If you have a phone, you are probably already using AI. If you have an electric car with self-driving options, you are using it. It's everywhere in your day-to-day life. The pace at which progress is happening right now is tremendously high. Now, what happens to companies that are dealing with enterprise software?

That is my topic for today. How do you help your customers get the trust they need in the face of this fast-paced execution and the challenges I listed earlier around LLMs? How do you ensure you're building trust, but in a scalable fashion, while continuing to invest in your infrastructure? That's the topic for today. So with that, let's dive right in. Let's talk about enterprise AI. Right? When you're a large enterprise offering software that allows the companies you serve to run their business, we are talking about not just a chatbot where you ask a question to get a clarification or find something. You're talking about enhanced decision making. How do they get more value from the software you're providing to them? We are also providing automation to them.

Things like reducing their cost to do certain work by removing the manual work they are probably doing right now. Things like baking intelligence in: insights and recommendations that improve the experience. If you have not yet experienced this I would be surprised, but most of you have probably talked to an AI agent when you picked up the phone and called customer service. Fraud detection is another area where a lot of companies are investing, and also supply chain automation. In this world of geopolitical tension, that is another area where I'm personally seeing increased investment from enterprises. Now, what are some companies doing in this world to drive return on investment leveraging AI? I can tell you about my own company.

In my own team, I'm actually using it to help customers save money on closing their books. What I mean by books are the accounting books. Netflix uses generative AI to improve their recommendation system, which is extremely important for them because it drives viewer activity. Amazon is figuring out how to use AI not just in recruiting; it is also using it for logistics and supply chain automation. And PepsiCo has saved thousands of hours improving their whole value chain, in terms of their supply chain. They are also leveraging agents, using them for some specialized knowledge and skill sets. The point of bringing this up is to say that companies are already using it. It's not like they're not using it.

The important part that needs to be thought through, for those of you in senior roles in the audience today, is to figure out what your challenges are and how you overcome them. With that, what are some of the challenges? If you're dealing with a legacy system, or your product has been out there for a while, you probably know that you are sitting on a lot of data. Data that may be siloed, of very poor quality, or that has challenges with compliance. You're also dealing with embracing AI inside your company. You probably have to convince a lot of people who think traditionally, who default to building features and capabilities inside the product instead of thinking out of the box about how they can leverage agentic AI in particular. You are probably dealing with costs. You are probably dealing with model proliferation, because it's a race right now.

And as teams progress quickly to build these AI models, you probably have a proliferation within your own company. You also have situations with shadow AI. You probably have certain teams that are unmonitored; they may be using models they should be cautioned against using. You may not have enough tooling, or you may have tooling that is fragmented. You may be trying to leverage AI and move fast while dealing with regulation. You probably also have the usual push and pull: a sense of urgency, while ensuring you're offering a highly secure solution to your own customer base. Now, when we talk about trust, trust is not a nice-to-have in the world of enterprise, especially if you're providing critical software your customers depend on to run their business.

It is definitely not a nice-to-have. So I believe there are four pillars of bringing trust into your AI investments through your products: transparency, robustness, fairness, and compliance. I'm not going to go into each one of them, because there is a lot you can do, but I'll touch upon one thing: transparency. What I mean by transparency is that, just like you would have an audit trail in an enterprise product, where you record who did what and when, you should also invest in ensuring there is transparency in how your AI model arrived at a decision or took an action.

You will probably benefit if you can surface that, make it part of your value proposition, and expose it to your own customers. And within your own company, you will probably benefit by investing in secure AI development. Security should be non-negotiable. Right? Robust AI models. Thinking of attack vectors ahead of time. Building a pipeline that ensures the CI/CD and other elements you already have in your SDLC are integrated into your AI development as well. Making sure you go back and look at your own value system, and use that lens when you're thinking about security and compliance and baking trust into AI. I highly recommend, if you're building enterprise software, having an independent governance team that guides your other teams within the company on the guardrails and guidelines, and provides a bit of oversight on some of the AI activity you're embarking on.

And so here is my framework. Right? Make sure you think about having a governance team. Don't devalue or downplay security or privacy. Make sure you're thinking about AI in a responsible fashion, and make sure all the models are monitored. Now, for enterprise customers specifically: when you're dealing with compliance and need to produce documentation on how you went about implementing capabilities for your customers, you will benefit from having a central governance team. Because they not only guide you on how to go about it, but also help you capture and produce the necessary documentation so your auditor can understand what happened. On security and privacy, you will be able to ensure that your solution set continues to be trusted by your customers.

You have to take care of not just security, but compliance. Things like: how long do you retain data? What data are you using? And where does that data reside? Right? In terms of responsible AI, I think it's very important to think about diverse stakeholders as you build capabilities, because of the biases that exist in most AI investments. It helps to have a diverse team involved in your own investments in AI. But more importantly, as you're embracing agentic AI, especially when you're dealing with enterprise customers, try a crawl, walk, run approach, because it helps you learn a little and gets your customers and stakeholders on the journey with you.

Most importantly, invest in models, which is very easy lately, but make sure you monitor them. Know when drift is happening, because models do drift. And if you don't have enough instrumentation to monitor, you will not even realize the impact until it's too late. Now, I have no idea why my slide didn't build out, so I will say it in words instead. A potential option for you, as you're thinking about enterprise customers and agentic rollout and investments, would be a three-phase approach. Try some of the ways I've seen companies do it: start with a chat interface, expand it to do some automation, and then go into agentic. It might allow you to bring not only your internal teams on the bus with you in this journey, but also your customers.
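The three-phase rollout described above can be sketched as a per-tenant capability gate. The phase names and tenant table here are hypothetical:

```python
from enum import IntEnum

class RolloutPhase(IntEnum):
    CHAT = 1        # crawl: read-only chat/Q&A interface
    AUTOMATION = 2  # walk: suggested actions, human approves
    AGENTIC = 3     # run: agents act, with monitoring

# Hypothetical per-tenant rollout table; unknown tenants start at CHAT.
TENANT_PHASE = {
    "acme-corp": RolloutPhase.AGENTIC,
    "new-customer": RolloutPhase.CHAT,
}

def capability_allowed(tenant: str, required: RolloutPhase) -> bool:
    """A tenant may use a capability only once their phase has reached it."""
    return TENANT_PHASE.get(tenant, RolloutPhase.CHAT) >= required

early = capability_allowed("new-customer", RolloutPhase.AUTOMATION)  # not yet
mature = capability_allowed("acme-corp", RolloutPhase.AGENTIC)       # allowed
```

Gating per tenant lets you walk each customer through the journey at their own pace rather than flipping everyone to agentic at once.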

You get a chance to test with them. Let them appreciate it, find the value, understand what is involved in adopting AI within their own companies, and make sure you're giving them the time to absorb and leverage it. Lastly, I want to make sure we also talk about measuring trust. It's one thing to say "I'm going to invest in trust," but if you don't measure it, it doesn't exist. So measuring requires you to work with your UX team to gather feedback, and to use the other tools inside your company to figure out how trusted it is. Right? Not just adoption: surveys and other outreach mechanisms give you a sentiment analysis of how good your investment is. Keep an eye on it. Don't ignore this.

Make sure you have scorecards and dashboards. For every AI investment, make sure you know what to measure, and measure it early and measure it often. And don't measure it and forget about it. Please take a look at it so you get a chance to understand, tune, and tweak, and catch any issues before they get out of hand. Make sure you also think about key performance indicators tied to a specific business outcome you're offering through your product. Right? If you're seeing a lot of churn, you may want to take a look and ask: is there something I did recently through my AI investments that I need to take a hard look at again? Once again, model accuracy. I cannot emphasize enough the importance of model accuracy and robustness evaluation, and ensuring that you continuously monitor.
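Continuous accuracy monitoring can be as simple as a rolling window over recent outcomes with an alert threshold. The window size and threshold below are illustrative:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker that flags when a model's
    recent accuracy falls below a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_attention(self) -> bool:
        return self.accuracy() < self.threshold

# Usage: 7 correct and 3 incorrect recent predictions dips below 80%.
mon = AccuracyMonitor(window=10, threshold=0.8)
for correct in [True] * 7 + [False] * 3:
    mon.record(correct)
```

Wiring `needs_attention()` into an alert is the "measure it often, don't forget about it" part: the check runs on every prediction, not once a quarter.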

That is another means of measuring trust. So, overall, that is my talk on this topic of the importance of trust in enterprise software. I'm going to leave you with final thoughts. Trust requires a commitment not just from the product development team that's building solutions; it comes from your company culture. It comes from the top. It comes from a sense of responsibility and awareness of ethics throughout the organization that's building the software to serve those customers. And under that pillar, scaling is not only technical. When people think about scaling, they think about infrastructure, performance, and large volume, but that's not all of scaling.

In my mind, it also includes organizational and ethical scaling when it comes to agentic AI investments. Don't delay investing in trust. Start with trust early and make sure it's baked into multiple layers, not just the product organization. Do it across your portfolio and the entire process involved in delivering a product offering to a customer. Think about how you can use trust as a strategic differentiator too, because it's an important component in this fast-moving pace of AI that is extremely hard to keep up with. You probably see a lot of MCP models out there left, right, and center. But if you can show the world why yours is trustworthy, and customers can believe you, you will have a value proposition that's going to beat your competitors by a big amount.

So with that, I'm actually going to leave you guys.