AI & Innovation for the Next Era of Medicine

Automatic Summary

Driving Innovation in Healthcare: Insights from GE Healthcare Experts

Introduction

In the rapidly evolving landscape of healthcare technology, the integration of artificial intelligence (AI) has become a significant focal point. Two prominent leaders from GE Healthcare, Karen Tam and Adriana Fernanda Jimenez Avalos, recently shared their insights on how AI is transforming the industry during a webinar hosted by Women Tech Network. Their experiences shed light on the importance of aligning technology with human-centric practices to improve patient care and operational efficiencies.

Meet the Innovators: Karen Tam and Adriana Jimenez Avalos

Karen and Adriana bring a wealth of experience to their roles at GE Healthcare:

  • Karen Tam: Leads strategy and growth ventures in North America. With extensive experience consulting for health systems globally, Karen emphasizes the necessity of integrating technology within existing healthcare frameworks.
  • Adriana Jimenez Avalos: Currently serves as the AI solutions leader. With a background in biomedical engineering and a master's in AI, Adriana focuses on enhancing internal processes through advanced technology solutions.

The People, Process, and Technology Equation

Karen highlighted a vital lesson learned from leading major strategy transformations: effective integration of people, processes, and technology is essential for successful AI implementation. The challenges often lie not in technology itself but in aligning these three components.

  • People: Resistance to change is common, particularly with new AI tools that may create fear of job loss and mistrust.
  • Processes: Redesigning workflows and clarifying accountability are crucial to ensuring that AI does not simply exacerbate existing burdens.
  • Technology: Adoption must be framed positively, emphasizing benefits like improved patient care instead of merely cost-cutting measures.

Bringing Your Whole Self to Work

Adriana elaborated on the significance of authenticity in driving innovation. By integrating her multifaceted identity—engineer, artist, mother—Adriana fosters an environment where creativity and responsibility coexist. This holistic approach is essential for building trust in AI technology.

  • Transparency: It is vital for both engineers and users to clearly understand AI’s capabilities and limitations.
  • Human-Centered AI: Adriana emphasized that AI must prioritize human interaction and empathy in healthcare delivery.

Navigating AI in Healthcare: From Pilot to Scale

Both experts discussed the challenges organizations face in scaling AI from pilot programs to enterprise-wide initiatives. Many organizations are stuck in pilot phases due to misalignment between expectations and the technological infrastructure needed for rapid integration.

  • Structured Frameworks: Establishing clear goals and success metrics from the outset aids in evaluating the effectiveness of AI pilots.
  • Negotiating Internal Processes: Streamlining legal and compliance protocols is crucial to promote faster adoption of successful AI solutions.
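The framework point above — defining success metrics and non-negotiable "red lines" before a pilot starts — can be made concrete with a small sketch. This is a hypothetical illustration only; the class, function, and metric names are assumptions, not a GE Healthcare framework:

```python
# Illustrative pilot scorecard (hypothetical names and thresholds, not an
# actual GE Healthcare tool): each pilot is judged against targets agreed
# upfront, and any breach of a red line stops scaling regardless of ROI.

from dataclasses import dataclass, field

@dataclass
class Pilot:
    name: str
    metrics: dict                       # measured outcomes, e.g. {"hours_saved": 120}
    red_line_breaches: list = field(default_factory=list)  # non-negotiables crossed

def evaluate(pilot, targets):
    """A pilot scales only if every target is met and no red line is crossed."""
    if pilot.red_line_breaches:         # non-negotiables are checked first
        return "stop"
    met = all(pilot.metrics.get(k, 0) >= v for k, v in targets.items())
    return "scale" if met else "iterate"

targets = {"hours_saved": 100, "clinician_satisfaction": 4.0}
p = Pilot("triage-assist", {"hours_saved": 120, "clinician_satisfaction": 4.2})
print(evaluate(p, targets))  # scale
```

The point of the sketch is the ordering: red lines are evaluated before any benefit metric, which is what distinguishes a portfolio approach from simply ranking pilots by ROI.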

The Role of Leadership in Responsible AI Adoption

A critical aspect of AI implementation in healthcare is the leadership mindset shift. Karen emphasized the need for leaders to adopt a framework that prioritizes:

  • AI Literacy: Understanding AI’s capabilities enables leaders to make informed decisions and foster trust within their teams.
  • Human-Centered Mindset: Acknowledging and mitigating the risks associated with AI ensures patient safety and promotes a culture of accountability.
  • Systematic Thinking: Leaders must transition from a technology-first approach to one that considers human factors and organizational structure.

Conclusion: A Human-Centric Future for AI in Healthcare

The conversation between Karen and Adriana underscores the transformative potential of AI in healthcare when implemented thoughtfully and responsibly. By focusing on human-centered practices, fostering transparency, and aligning organizational structures, healthcare organizations can leverage AI to improve patient care and outcomes.

As the industry continues to evolve, embracing a holistic approach that combines technology with empathy will be crucial to closing the gaps in healthcare delivery.

  • Final Thoughts: AI should enhance, not replace, human judgment in healthcare.
  • Empowering Workforce: Continuous education and training are vital as technology advances.

Stay tuned for more insightful discussions from industry leaders as we explore the future of technology in healthcare!


Video Transcription

I would like to start with Karen. Karen, if you don't mind, please introduce yourself, and then we will have Adriana introduce herself as well.

Sounds great. Well, hello, everyone. It's my pleasure to be here with the Women Tech Network. My name is Karen Tam, and I lead strategy and growth ventures for North America at GE Healthcare, where my day job is to work with senior executives to drive growth at the intersection of health care delivery, technology, and innovation. My perspective is shaped by a career working directly on the provider side. When I say provider side, I mean physicians, health systems, et cetera. My career has spanned health systems in and outside of the US, and I've worked with and advised large integrated delivery networks, community hospitals, and physician groups in the US, as well as ministries of health and nongovernmental organizations in countries like Bangladesh, Rwanda, and Turkey.

I see that range of experiences as giving me a very grounded view of how differently health systems work across the world, from highly resourced environments like the US to those navigating very significant workforce and infrastructure constraints. Prior to this role, I spent a decade as a strategy and management consultant advising C-suite leaders on enterprise strategy, capital allocation, and, most importantly, growth transformations. Much of that work centered on helping organizations make very difficult trade-offs, especially when market pressures, financial realities, and patient care priorities are not aligned or are sometimes in conflict. And so today, my focus is really how we build and apply a solution portfolio, including AI and digital solutions, to solve some of health care's toughest challenges across areas like oncology, cardiology, and neurology. These are global challenges across the board, and that means building scalable, sustainable models that truly change both how care is delivered and how it is experienced at the patient level. I'm also deeply committed to leadership development, so I find this a great opportunity to share with you all, particularly how we ensure more women are in positions not just to contribute, but to really shape strategies and drive decisions at the highest levels.

Looking forward to our discussion today, Sapna.

Thank you, Karen. And now I would like Adriana to share a few words about herself.

Yes. Thank you so much, Sapna. Good day, everyone. I'm very excited to be here. Thank you so much, Women Tech Network, for inviting us to this webinar. It's an amazing topic, and thank you for the opportunity to represent our company, GE Healthcare. My name is Adriana Fernanda Jimenez Avalos. My current role is AI solutions leader. What does that mean? My day-to-day job is to identify areas of improvement in our internal processes and provide advanced technology solutions such as AI and others. I didn't start here. My journey started in Monterrey, Nuevo León, Mexico, a beautiful city surrounded by mountains. That's my hometown; born and raised there. I decided to study biomedical engineering because I have a passion for helping people in the health industry. I started with GE Healthcare about a decade ago and have held different roles in supply chain and quality assurance, mostly working on continuous improvement processes, new product introductions, transfers, and transformative projects.

Four years ago, because of my passion for helping people, and knowing that AI was increasing exponentially in our day to day, I decided that I needed to go back to school and complete my master of science in AI to fully understand this technology and then bring it to our employees, making sure that everyone understands it and knows how to use it. I'm very excited to be here with the innovation panel. AI is here to support our employees, and we'll talk more about that. Thank you. Back to you, Sapna.

Wonderful, Adriana. Well, thank you both so much for the lovely introductions. My first question is for Karen. Karen, you have led major strategy transformations across diverse health systems. What have you learned about aligning people, process, and technology during major changes?

Yeah. I love how you frame the question as people, process, technology first. And that's exactly what I've learned: in these large-scale transformations, especially with AI, it's actually not a technology problem. It's fundamentally an alignment problem across people, process, and operating models. Right? And to me, AI is unique in that it doesn't just introduce new capabilities when you introduce an AI tool. It introduces a new narrative into the organization, one that often carries fear. What that fear looks like is job loss, loss of control, and distrust of outputs. So adoption really isn't just an operational challenge to me, but much more a human one. So I'll give you some stories. Right?

Recently, I spoke with a nursing leader at a large academic medical center who was just asked to be a champion introducing a new AI tool to the entire clinical staff of, you know, 5,000 people across the organization. And candidly, her reaction wasn't excitement. It was skepticism and distrust, and that distrust came from her seeing it, from administration, as a way to reduce headcount and to ask an already stretched workforce to do even more. Right? And that fear is real. If we don't directly address the reality of how these changes are being experienced on the ground, there's no amount of new, good, perfect technology or innovation that will succeed. Right?

On the other side of this is the process and operating model. If you ask people to adopt these new technologies, but you don't redesign the workflows, clarify accountability, or align the incentives to actually enable that adoption, you are effectively asking them to do more, not different.

And that's where I feel like a lot of these transformations truly break down. When I work with organizations, the ones that do it well do it with intentionality. Right? They rigorously build early trust. In the context of AI, this may mean increasing AI literacy across the organization, up and down and across. Right? And meticulously ensuring that the change is actually reducing the burden like it promises and improving how work gets done, not just layering in more and more. Because we often find our clients just kind of slap things on without the proper change leadership, leaving behind processes that still support the old solutions rather than the new ones. So I'll wrap with this last point. Right?

Communication, and the framing of it, to me really makes or breaks how organizations lead through this transformation and succeed on the other end. For most organizations, like the story with this nursing leader, it's not untrue that there's a cost savings angle to it. Right? And they have to be upfront about that. However, I do think that organizations able to wrap in a more holistic and even positive view of the benefits, examples of which could be hours saved or patient or physician satisfaction, will have a better change journey than those just trying to meet the ROI mandate from a CFO investment perspective. And so to me, when you get both of those sides right, the human and the system, that's where transformation, especially in the AI and innovation context, can truly scale and stick, Sapna.

Wonderful. I like how you framed bringing the people along. I think bringing the skeptics along only strengthens how we design new processes, rather than just taking a process and automating it. And that will probably help people believe that this is here to help us deliver faster care and better ways to innovate and bring solutions to the market. So thank you so much, Karen, for that. My next question goes to Adriana. Adriana, I loved your introduction and the way you talked about yourself as a whole self. How do you bring your whole self to work as an engineer, artist, and mother? And how does this multidimensional identity shape the way you build and deploy AI, or create solutions for people, patients, and customers? Any practical examples?

Thank you so much, Sapna. Thank you for the question. I also agree with what Karen has mentioned about human interaction; it's quite impactful. Through my life, I have held different roles, but being an engineer, a mother, a wife, and an artist has definitely brought a lot of learnings and experiences over the last years. And this has shaped the way that I think about responsibility, creativity, and impact all at the same time. I think this goes alongside the topic of authenticity, the concept of what that is. That's how I bring my full self into my day to day, not only to work, but to family reunions, to any event in my life; I bring my full self.

And I really like how the author and researcher Brené Brown describes this, because it's also about vulnerability. It's knowing when we do not know something and being honest about it, but also staying open and curious. This is how I build AI: grounded in transparency, humility, and continuous learning, because I'm passionate about continuous learning, too. I don't think I separate what I value from the technology. So the same transparency, the same curiosity, and the same humility that I bring as a person, I embed into the AI solutions that I design with our team. I think in order for AI to be successful, there are two main things. Of course, AI needs to be accurate: the models, the precision, the accuracy.

But the other aspect is trust. Do our employees and our customers trust the AI? And this trust comes when we can provide visibility and explainability into what the AI model does for us. Setting clear boundaries: what the limitations are, what the AI does, what it does not do, and being honest upfront about the uncertainty the model might bring. All of these dimensions ground me in knowing that AI must be human centered. AI is a tool that was designed by humans to help humans. So that needs to be the center of it. Thank you.

Wonderful. Trust is such a powerful word in this, and I think it all starts with having clean data and clean models that can help tune how AI can work for us, especially in health care. Wonderful. Thanks, Adriana. Let's go back to Karen. Karen, what are we hearing from clients about implementing clinical AI? How is it moving from concept to pilot to scale? I'm sure some clients are further along in their journey, and some are still hesitant. And what's GE doing about it?

Yeah. That's a good question, and your instinct is right. When we talk to clients, they're moving away from the past question of, like, why AI? Most organizations have gotten past that phase. Most have gotten the AI strategy part, or the need for one, into their executive mandates and priorities at the highest level. And, you know, this is a generalization; of course, there are going to be resource constraints, or a need that's driving some of this. But for the majority of large, well-resourced organizations we're working with, the real challenge now is bringing that vision or need into the pilot phase and scaling it across a growing set of opportunities. That's where we're seeing the trend move.

Right? And I would say where many of these organizations get stuck is actually at the pilot phase. The reason is there are a ton of pilots: in the excitement, most of these organizations started by encouraging them, and now they're moving towards, oh, well, maybe there are too many of them. In my mind, when we talk to them, we say not all pilots are created equal. What's often missing is a portfolio approach. So when we work with them, it's about being very clear upfront on what success looks like. Most people will say measurement is one part of that. I would add one more, which is: what is the red line you will not cross? And I think, Sapna, in your video we started touching on that concept of what the nonnegotiables are.

Most organizations actually find themselves in a very difficult position naming the nonnegotiables throughout their pilots and consistently measuring what that red line they will not cross looks like. And we're seeing that play out in some studies, not just in health care; I'll bring it slightly outside health care and apply it to other industries. There was an MIT study done in 2025, covering all major industries in the hype of large language models, which is slightly different from clinical AI, but overall I think the concept applies. Right? There was $40 billion invested in these gen AI tools across these industries, yet only 5 percent of enterprise pilots actually delivered measurable P&L impact. Think about those numbers: $40 billion, 5 percent. So it's actually not a lack of innovation. It's much more about operationalization. Right? And part of that, when we study it more and talk to clients, is really about stakeholder alignment.

And back to clinical AI: scaling it within health care settings is about aligning the financial, clinical, and operational priorities all into one. But each of these groups, think about it, is evaluating success very, very differently. And the most difficult conversation, like I said, is what will we not do versus what will we do. Right? So organizations are building that muscle. And by the way, this translates to much more than AI. It's a general organizational problem, for those of you who have encountered it, of saying no to many good things so that the best things can surface to the top. Right? The other barrier I would say we're seeing is the internal infrastructure and decision making. Let's go back to the concept of moving from pilot to scale.

Many of these organizations are set up to evaluate large traditional vendors or solutions, but not rapidly evolving AI solutions. What do I mean by that? I'll give you an example of how this plays out. We're working with a cancer collaborative in the South that has run an incubator of several AI solution startups over the past three years. One of them became highly successful: it gained strong clinical results and full executive support. I'm talking about the CEO, CFO, all those at the system level, not just the facility level, wanting it to be scaled across all facilities and all cancer patient populations. But it took eighteen months to move it from the pilot to an enterprise contract. Why?

Because the legal, compliance, and security processes really weren't designed to scale that type of solution. They were meant for large companies like us. Right? And by then, guess what? The electronic medical record had changed. Patients literally had to wait through those eighteen months, and things were just back to business as usual, as before the pilot. So that was wasted time for a clinical AI technology that could have moved from pilot to scale. And that speaks to an infrastructure that wasn't able to absorb it right away and scale it, because it wasn't built for fast-paced innovation cycles. Right? The organizations that do this well actually do both: a portfolio approach to prioritization, and modernizing the infrastructure so that successful pilots can actually move to scale.

So, ultimately, I think the decision point, and the action to take, is that the organizations that will win aren't the ones with the most pilots. Right? They're the ones that build the sustainable capability to consistently move from pilot to scale: every successful pilot that meets the criteria on what we will and will not do moves from pilot to scale, so that, ultimately, in our case, patients get the best and the ultimate benefit.

Very well said, Karen. I think infrastructure is such an important element of scaling because the ideas are there, the problems are there, the solutions are also there, and creative minds are coming together to solve them. But if the infrastructure doesn't support it, it's going to be a very tough decision at the top levels to implement a scalable solution.

Absolutely.

So now that we have an external view, I want to bring us back to our internal view of GE Healthcare. And this question is for Adriana. You have been a community of practice leader in leading the way we adopt and scale AI. What barriers have you seen in the organization when we are trying to scale AI from pilots to projects to enterprise adoption? What is your experience there, Adriana?

Thank you so much, Sapna. Karen also mentioned this before: some companies and people might think that the major risk is the technology itself, which is truly valid. But I think there are two other components to this, which are readiness to adopt and trust. Some organizations are focusing a lot on model risk, like hallucinations, and that's a real problem. But I think we need to start asking the questions: are our employees ready to adopt the technology? Do they fully understand it? Do they know which tool to use for their specific use cases, for their day-to-day routines and tasks? And do they know when to challenge the AI and ask: this sounds like it's off; is it correct? Let's double-check it.

Those are the questions that will make sure we are ready to adopt AI, because if we don't ask them, then the major risk becomes user error. The second part goes with something Karen was explaining about scaling all of these pilots. Something I have seen, for some of the pilots we're working on in supply chain right now, is that we're laying the foundation of what processes we need to follow. In GE Healthcare, we have an AI governance framework that we follow for every AI pilot we do, internally and externally, making sure that we clear all of the different audiences and committees, such as cybersecurity, legal, marketing, and more. And then the second part is AI literacy.

So we have training for all of our employees to ensure that they are up to date on what this technology is, how they can start using it day to day, and how to prompt effectively so it gives the answers they're looking for. These two parts are key for making sure we are ready to scale up. I would like to add four other things we're doing in our supply chain pilots. We're documenting everything, following CRISP-ML, which is a cross-industry standard process for machine learning. But when we move from pilot to production, we need to make sure that we have clear use cases. Not everything needs to be AI, or needs an AI solution. Sometimes we need to take a step back and ask: do we have a standard process in place?

Are we following a process? Can we start with that? Then we can automate and use advanced technology like AI. The second one is that once we move there, a strong monitoring phase is key. And we're not only talking about the return on investment, but also about how the model is behaving. Is the prediction what we were expecting? What is going on on a day-to-day basis? Maybe something changed, and we might need to retrain the model or address any issues that arise. The third one is transparency about the limitations. In the pilots we're running with supply chain, we have been including all of the users since the very beginning, making sure they understand what the machine learning model does and does not do, the limitations, where we are taking the data from, what the expected output is, and where they can visually see the prediction and the AI result.

And the fourth one is to have a clear path of escalation for any issues that might happen. We need to have that clear because, again, in that way we reinforce the trust between the users and the people who are developing the technology. And especially in health care, we have to bring everyone along the journey with a healthy level of skepticism and always keep patient safety at the center.
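The monitoring and escalation steps described here can be sketched in code. This is a minimal, illustrative example only; the function names, thresholds, and the use of mean absolute error are assumptions for the sketch, not GE Healthcare's actual tooling:

```python
# Illustrative production-monitoring check (hypothetical names and
# thresholds): compare the model's recent error against the baseline error
# observed during the pilot, and escalate when drift exceeds a set factor.

def mean_abs_error(predictions, actuals):
    """Average absolute gap between predicted and actual values."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(predictions)

def check_drift(recent_preds, recent_actuals, baseline_mae, tolerance=1.5):
    """Flag for escalation when recent error exceeds the pilot baseline."""
    mae = mean_abs_error(recent_preds, recent_actuals)
    if mae > baseline_mae * tolerance:
        return "escalate: consider retraining"  # clear path of escalation
    return "ok"

# Example: promise-date predictions (days) versus actual delivery days.
print(check_drift([10, 12, 9], [11, 12, 10], baseline_mae=0.5))
```

The design choice mirrors the transcript: monitoring is not just ROI tracking but a routine check on model behavior, with a predefined trigger for who gets alerted and when retraining is considered.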

Thank you, Adriana. We are going to keep you on for the next question as well. I liked how you talked about scaling. But as a solutions engineer, where do you see the greatest immediate impact of AI in how your teams are working? And is there anything you can share as a nugget for the audience?

Yes. Absolutely. The greatest impact right now is in productivity and the way we work in our day to day. As you know, AI can process millions of data points, recognize patterns, and give us insights far faster than a human can. What that means for our employees is that we can focus on things that only humans can do, such as decision making and problem solving. We can start focusing on the bottlenecks we see in our internal processes. It also speaks to the efficiency with which we do this: we get faster idea generation, greater adaptability, and more resilience to uncertainty. The more we use these tools, the more productive we are in our day to day, and it also builds the resilience to adapt faster when a new technology arrives in another four months.

All internal improvements matter, and even if it's the way we automate sending emails, that's a win. One example we have been working on for our company is the factory promise date, which will give us a better prediction of when we can get our systems to the customer. I think it's the way we optimize our own processes that translates into improving our internal quality, speed, and consistency, and everything in the end translates into better outcomes for our customers and patients.

Wonderful. Thank you, Adriana. So we've talked a lot about technology, process, trust, and how we move from point solutions to scale. With the next question I want to go a little deeper with Karen on the mindset and the changes in behavior that you have observed, or would like to see, leaders adopt as we innovate responsibly and effectively, especially in a world where AI is moving so quickly: you blink and something else changes. So what critical mindsets or behaviors must leaders adopt in today's world?

Yeah. It's an interesting question. I'll take experiences from the other hat I wear, which I didn't share in my intro: working with executive leaders on building net-new leadership behaviors so that they're ready for this change. Right? We already talked about how quickly AI moves. The leaders I've worked with who are most effective are the ones who recognize this as a change not in technology, but in capability. This is a capability shift for them, and it naturally should lead to a new set of leadership behaviors. So there are four I'll offer to you, and Adriana already touched on some of them, so we'll dig deeper with my answer.

I believe it really does start with viewing, and then elevating, AI literacy as a core leadership capability. This is something we hear very often from our clients. In a recent conversation, I was at a CMO meeting: these are chief medical officers from 40 different large health systems who convene on a quarterly basis. And one of them asked GE directly: what can you do to help us embed an AI literacy curriculum into our leadership development program?

You know, in many corporate settings, and health care is beginning to get into these, they have leadership institutes of their own, and that was the question: hey, GE, how do you help us do that? We have an organization, in partnership with a number of universities, called Hello AI. And like the name suggests, it's all about getting the literacy up, and we should dig into what that looks like as a leadership competency. When an organization starts learning and embedding that as a competency for leaders, I would define it as the ability to interpret outcomes. Again, Adriana already mentioned this: question the results and make informed decisions. These tools are supposed to be supporting us.

And the most important part, I feel, is that literacy is not one-size-fits-all. Right? Take the example of a nurse: they may need to spot when an AI recommendation doesn't make sense. For a doctor, it may mean being able to explain to the patient what the AI did or did not do, because there's a trust factor between the patient and the physician, and then between the physician and the technology itself. And for a senior leader, this will look different as well. It may mean knowing when to ask a vendor to share an explainability framework. Right? Such as when you work with GE, you should be able to ask us: how do you explain your models? The second of the four I would offer is adopting a more human-centered mindset. This seems simple. Right?

So I'll frame it in the context of the risks. I think AI does introduce very real human risks, and by that I mean not only trust, but interpretation and, finally, misuse. The reality is that many clinicians are already using AI tools today. Right? This is one of the innovations rippling through the retail sector: we all get access to ChatGPT, Copilot, Gemini, anything you can name. Even when you do a Google search today, it automatically generates an AI answer for you. So these are very familiar tools, to clinicians as well, but that's where the risk comes in, because they're not getting structured training. In terms of patient safety, and in terms of health systems and the provider setting, there's also a legal exposure component as you encounter uneven adoption across these teams.

Right? So that's the second one: understanding that risk as part of the human-centered mindset. To me, responsible leadership in this case is really about addressing that: investing in education, being transparent about the limitations of these tools, and finally creating an environment where people feel comfortable questioning the technology itself, not just using it.

The third, I would say, is shifting from a technology-first mindset to a systems mindset as a leadership competency. We already touched on a bit of that, and Adriana used the phrase responsible AI too. To me, that isn't just about model validation and compliance checklists. Right? Thinking back to the human element, it's actually about aligning on decision rights, accountability, and workflows. In this context, it's about the people more than the policies. Policies are definite, right? But if we don't remember that policies influence people's behavior, we haven't made the shift into the systems mindset. And finally, I'll offer this last one: how we view AI itself, as the tool that augments human judgment rather than replacing it. Adriana actually said it very, very well: designed by humans to help humans. I think that's how we need to position and frame it.

And at the end of the day, responsible AI to me, when we talk to clients, isn't about slowing down innovation. All this governance, the policies and the workflows, is not about slowing innovation; it's actually about building leadership capability. To your question, Sapna, what's needed, and what we need to see more of and continue to encourage, is that these AI-literate organizations treat AI as a partner, not as an oracle.

That's how we kind of frame that.

Wow. Those were really powerful insights from both of you. AI literacy is such an important focus area for most organizations, as not everybody is coding. And it's not about coding. It's understanding how AI works, where it can help, and where it can fail. It's clinicians and technologists knowing what questions to ask, how to recognize biases, apply judgment, and build with empathy. I think there is so much to offer from an AI literacy perspective. So I really want to thank both of you for your comments, your insights, and the experience you shared with the team. I now want to turn it back to Laurie from Women Tech Network. Hi Laurie, the floor is yours.

Wow. That was a great, great conversation, with so many key points in there. You know, I was having a conversation earlier today, and we were talking about AI and how, literally, every week something new is happening, something is changing. It's frightening at some points, but it's also exciting to see what we can do with it and how we can improve, from your vantage point, patients' lives. So, awesome. We have a couple of questions from the audience, so I'm going to get ready to ask these. Here is one from Smitri, and she says: how do you navigate the challenges of regulation and data privacy when using AI in the health care space? Who would like to take that one? Any thoughts?

I can start with what people are trying to do.

That's right.

So I don't think it's one-size-fits-all. Let's start there. Right? In the context of health systems, I think the navigation part is a stakeholder engagement conversation, with legal and compliance brought in early. I think often that's an afterthought, but in this case it's so important because there's a discoverability component to it. Right? So when we look at the landscape, cybersecurity is one aspect of it. But what I'm talking about now is that as the collection of data increases, there are recordings: whether they're saved, how long they're saved. All of those things are decision points where internal capabilities are either nonexistent at worst or, at best, maybe minimally developed and strengthening. Right?

So that's the stage most organizations we work with are at. And I don't think there's anybody, and I'm happy to be proven wrong, who has cracked the nut on this one yet. So I would say it's a journey for the entire industry, to be honest. Right? And I think we, as a vendor and a partner to our client organizations, bring those strengths. We have an internal framework that we've laid out; our security officer and our AI use officer really lay it out, and she speaks to it in the form of building a skyscraper. There are floors to it: some components are foundational, and then there are components that let us scale quickly, like we talked about. Right?

And those are the kinds of frameworks we can bring to clients to have that discussion. Again, no one size fits all, but I think it's an organizational decision, and that partnership is what makes the magic happen.

Absolutely. Well said, Karen. I think it's the partnership from the beginning. Because an organization that wants to go on this journey, or is already on this journey, will quickly find out that a strong data strategy is the foundation for using AI and making it work for you. Along with bringing your legal and data privacy partners along on the journey, you create the responsible AI framework that has the components of explainability, safety, trust, and human in the loop, which is a very, very big concept in the industry today. I think those are all the elements organizations have to inspect to see how to bring everybody along on this journey.

Yeah, Sapna, I would agree with you, and all of you have mentioned this in the conversation: it's the trust component too. So the data privacy piece, the collection of data, how it's used, where it's stored, and how it's accessed. Really great points on that. Here's another question: how is AI going to help close the gender gap in health care, or do you think it can?

I can speak a little bit about that from the design perspective for machine learning. It can help if it is designed intentionally. When we train a machine learning model, we need to ensure that the data we're inputting is not biased. This means we need to make sure it's high quality. If we're talking about gender, we need to make sure female, male, etcetera are represented. The higher the quality of the data, and the more intention we put into the solution, the better the outcomes it's going to bring. For the gender gap itself, I'm thinking this is related to the health division, but at least for the internal machine learning models we have been developing, we are also facing some of that bias.

And the way we reduce it is by training the model with more high-quality, representative data.
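As a small, hypothetical illustration of the point above (not GE Healthcare's actual tooling; the field name and threshold are assumptions), a first sanity check before training is simply measuring how the groups are represented in the dataset:

```python
from collections import Counter

def check_balance(records, field="sex", tolerance=0.15):
    """Report each group's share of the training set and flag the set
    as imbalanced when the largest and smallest shares differ by more
    than `tolerance`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    imbalanced = max(shares.values()) - min(shares.values()) > tolerance
    return shares, imbalanced

# Toy records; a real pipeline would draw these from the training export.
records = [{"sex": "F"}] * 70 + [{"sex": "M"}] * 30
shares, imbalanced = check_balance(records)
print(shares)      # {'F': 0.7, 'M': 0.3}
print(imbalanced)  # True: the 0.4 gap exceeds the 0.15 tolerance
```

A flagged imbalance like this is what the speakers mean by "training with more quality data": you would then collect more records for the under-represented group (or reweight) before fitting the model.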

Agreed. I think that's a really insightful point, because humans are training the models. Right? And so as we train them, we have to be mindful of the biases and the blind spots, because as the technology continues to evolve, you just keep amplifying those. So that's a really great point, Adriana. We have a few more questions. Here's one, and I think everybody can probably relate to it: with AI rapidly evolving, what new skills or mindsets do you believe the health care workforce must develop?

I mean, you started touching on that. Right? The AI literacy, the systems thinking, the ability to think about AI as a partner. I would say Adriana is absolutely right: garbage in, garbage out. What we train the model on amplifies that dataset and the assumptions we make with it. And I would say it's actually analogous to clinical trials that pharmaceutical companies run: when certain adverse reactions happen, it traces back to the selection of patients. So I do think that for the health care workforce, it's understanding the limitations and, as you adopt, asking on the other side: what am I missing?

Pressure-test that. And actually, there are studies coming out now about not overplaying AI's role in the longevity of the workforce, which I think is important to note too. What I mean by that, and this is in the context of succession planning, is, I think there was actually a CNBC segment on this too, that yes, people are very excited right now that it can do mundane tasks, even outside of clinical work. We're talking about all AI: large language models and the other AI tools available to us. The promise is that it accelerates human beings, and therefore we can reach different productivity levels, etcetera.

But what's missing is that as you take out that bottom rung of analysts who are learning the trade, the industry, and real-world applications and outcomes, we're going to suffer in the future from a middle-management succession gap: people with no experience actually pressure-testing, which is the point Adriana and I both made about questioning the AI's output itself.

That's what I see as the health care workforce's next journey: how do you get the retiring nurse's and retiring doctor's knowledge into the younger workforce while, in parallel, managing AI adoption? So that they're learning the real-world skills, the critical thinking, and the clinical expertise that can spot the mistake or wrong recommendation that can be very, very costly in health care.

Yes, thank you for that. I think that's so critical when you think about how you approach it and what your mindset is around it. Last question, and I think it's a great one: what should the legacy of AI actually be in medicine? Better outcomes, more human-centered care, or something else?

I would say both.

Yeah.

I mean, the whole purpose of using technology is to provide better care and solutions. And the human-centric nature of it, making sure that we don't lose empathy, curiosity, and judgment, is so important. At the end of it, it is a human treating a human, with powerful solutions that can help with speed, accuracy, and precision. And our focus should always be human-centric care in how we solve the problem.

Yeah. Do either of you have anything to add to that? Any other thoughts?

Yes, I think just to add, Sapna, it's important to note that we are creating the solutions now. What are we going to do with all that time we used to invest in trying to draw insights and predictions from all these millions of data points? Now that's going to be freed up, and the human can focus on that human-centered attention. But something important to see here is: how do we get better at generating ideas from that data?

Yeah.

So we're going to have more insights now, insights we didn't have before. It was impossible for humans to work through all those millions of data points; now we're going to have all of this information. What do you do with it? As humans, we need to get better at idea generation. From this information, what should I put into action? How can I better provide health care for my customers, for my patients? How can I use it for the benefit of humans?

Karen, any thoughts?

Yeah, and I was thinking it could be something else. I actually think the outcomes align to the problem statement. AI isn't one big bucket. Right? There are different tools out there that address different things, especially in health care. It started with administrative tasks; an example is ambient listening, which helps physicians focus more on the patient because they don't have to stay up at night, or work between appointments, taking notes. There are other automations like prioritization, and you've heard in the mass media how payers are using this as well. So that's the administrative bucket, and that particular pain point is solved by particular types of AI solutions. I think it's important to tease that out. So besides outcomes, and Adriana basically said both, right, I think there are actually layers to the outcomes you want to be delivering.

And maybe some of those are precisely financial or productivity-related. Right? So I think we need to think in those layers, so that what you deliver is a nuanced set of outcomes, like I said at the beginning with the KPIs. There are going to be benefits that aren't captured by hard data alone, and there are satisfaction points and others, as long as you can tease out what your problem statement is so that you put the AI against it.

I think that's so important: identifying the problems you're trying to solve and then measuring against those outcomes. That's incredibly powerful. And, Adriana, I also liked what you had to say, because we hear this time and time again about AI: it is going to make us more efficient, it's going to be able to do things in a much more expedient way, giving us time to be more innovative and creative. And I can see how that can be very powerful in health care. So, I want to thank all of you for being here today. This was a great, engaging conversation. I learned a lot, and there were a lot of great comments and questions in the chat.

So, thank you, audience, for being here, participating, and engaging. GE Healthcare, thank you for being such a valued partner. We love working with you, and I look forward to working with you more in the future. And everybody, I'd just like to remind you that next week we have our Women in Tech Day celebration. We will be celebrating with a webinar and a panel discussion on April 2, so please join us. We look forward to having you participate, and we'll have some exciting announcements on that day as well. With that, everybody: thank you, Karen, Adriana, Sapna. Have a great day, and we'll see you soon.

Thank you so much.

Have a great day.