AI with Purpose: Advancing Healthcare Through Responsible Innovation by Bruna Godoy

Bruna Godoy
Senior Responsible AI Counsel

Automatic Summary

Advancing Healthcare through Responsible Innovation at GE Healthcare

In the rapidly evolving landscape of healthcare technology, GE Healthcare is at the forefront of integrating artificial intelligence (AI) responsibly and ethically. With over 125 years in the medical device industry, GE Healthcare has consistently been a technology pioneer, now embarking on a new journey to enhance patient care through advanced AI solutions.

GE Healthcare: A Legacy of Trust and Innovation

Since our inception in 1896, we have built lasting relationships with health care providers, patients, and stakeholders. In 2014, we began integrating AI into our devices and software solutions, positioning ourselves as leaders in FDA-cleared medical devices with more than 80 innovations to date. Our commitment to advancing technology continues as we aim to:

  • Improve image reconstruction and interpretation.
  • Automate workflow and segmentation processes.
  • Enhance real-time capacity management.
  • Support virtual care solutions.

Our comprehensive strategy seeks to unlock AI and cloud computing across the entire patient care journey—from early screening to diagnosis and treatment—all while maintaining high-quality, personalized care.

The Responsible AI Program

As a technology-centered company, GE Healthcare recognizes the importance of incorporating responsible AI practices into our operations. The global regulatory landscape for AI is continuously evolving, prompting us to implement a robust Responsible AI program:

  • The EU AI Act is one of the most comprehensive pieces of legislation affecting us, requiring strict compliance, especially for high-risk medical devices.
  • In the U.S., new state laws, such as the Colorado AI Act, are emerging, presenting unique challenges and opportunities.
  • Other countries, including Brazil, are also drafting AI regulations that align with global standards.

Our Responsible AI program aligns with these regulations and aims to build stakeholder trust through:

  • Transparency and explainability of AI systems.
  • Accountability throughout the AI lifecycle.
  • Adherence to ethical standards and fairness.
  • Data governance and risk management.

Key Principles of Responsible AI

To create a robust Responsible AI program, we have developed seven guiding principles:

  1. Interpretability: Ensuring AI outputs can be explained to users.
  2. Privacy enhancement: Complying with GDPR, HIPAA, and relevant privacy laws.
  3. Fairness: Promoting equity and access to care.
  4. Safety: Ensuring the protection of human life, health, and the environment.
  5. Validity and reliability: Building trust through consistent and accurate results.
  6. Accountability: Establishing governance structures for AI deployment.
  7. Security: Ensuring AI systems are resilient against unexpected events.
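As a loose illustration of how the seven principles could drive a per-system review checklist (this is a hypothetical sketch, not GE Healthcare's actual tooling; all names and structures below are assumptions):

```python
from dataclasses import dataclass, field

# The seven guiding principles listed above
PRINCIPLES = (
    "interpretability", "privacy enhancement", "fairness", "safety",
    "validity and reliability", "accountability", "security",
)

@dataclass
class AIAssessment:
    """Tracks which of the seven principles a given AI system has been reviewed against."""
    system_name: str
    reviewed: dict = field(default_factory=lambda: {p: False for p in PRINCIPLES})

    def mark_reviewed(self, principle: str) -> None:
        if principle not in self.reviewed:
            raise ValueError(f"unknown principle: {principle}")
        self.reviewed[principle] = True

    def outstanding(self) -> list:
        # Principles not yet signed off for this system
        return [p for p, done in self.reviewed.items() if not done]

assessment = AIAssessment("example imaging tool")
assessment.mark_reviewed("safety")
print(len(assessment.outstanding()))  # 6 principles still to review
```

A structure like this makes it easy to report, per system, which dimensions of the program still need sign-off before deployment.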

Building a Cross-Functional Responsible AI Council

To implement our Responsible AI program effectively, we have established a diverse council that includes skills from various sectors—legal, IT, data science, and clinical expertise. This council is responsible for:

  • Overseeing the development and deployment of AI technologies.
  • Maintaining compliance with evolving regulations.
  • Communicating transparently with both internal and external stakeholders.

Our structured workstreams focus on critical areas such as governance, explainability, transparency, privacy, safety, and fairness. The collaboration among these diverse teams ensures a holistic approach to responsible AI.

Real-World Applications: Enhancing Patient Care

A practical example of our Responsible AI application is the real-time ejection fraction tool, which improves left ventricular function assessments. This tool provides immediate feedback to clinicians, allowing for:

  • Green results: Highly reliable, aligned with expert calculations.
  • Yellow results: Interpret with caution.
  • Red results: Insufficient reliability, prompting further investigation.

This approach maximizes clinician-patient interactions, leading to more reliable diagnoses and ultimately enhancing patient care quality.
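The traffic-light scheme can be sketched as a simple confidence-threshold mapping. The function name and the threshold values below are illustrative assumptions, not the tool's actual cut-offs:

```python
def reliability_band(confidence: float) -> str:
    """Map a model confidence score in [0, 1] to a traffic-light label.

    Thresholds are illustrative placeholders, not the real tool's values.
    """
    if confidence >= 0.90:
        return "green"   # highly reliable, aligned with expert calculations
    if confidence >= 0.70:
        return "yellow"  # interpret with caution
    return "red"         # insufficient reliability, investigate further

print(reliability_band(0.95))  # green
```

Surfacing the band rather than the raw score lets clinicians decide at a glance whether to trust the automated measurement or repeat the assessment.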

Conclusion: The Future of Healthcare with Responsible AI

As we continue to build and implement our Responsible AI program at GE Healthcare, we remain committed to making a significant impact on healthcare innovation. By prioritizing ethical practices, complying with regulations, and enhancing stakeholder trust, we aim to shape the future of healthcare responsibly.


Video Transcription

So today, I'm here to talk a little bit about how we at GE Healthcare are advancing health care through responsible innovation. As Anna said, I am a lawyer by background. I'm from Brazil, in Latin America, and I completed my LLM in IP and IT law in Germany back in 2018. I hold some certifications from the IAPP, which was previously known as the International Association of Privacy Professionals. I have just switched roles to Senior Counsel, Responsible AI at GE Healthcare, where I'm building a responsible AI program. So I would like to discuss with you a little bit how we at GE Healthcare have technology embedded into our DNA, as we always have.

So let me just confirm my screen here. Okay. Since 1896, we have been known by health care providers for our medical devices, our patient care solutions, and also our pharmaceutical services. We have always been a technology company, and we plan to keep leading the way and shaping innovation in our everyday practices. Over those past 125 years, we have built a relationship of trust with our customers and with our patients. And since 2014, we have been on our AI journey, adding AI into our devices and building software to complement them. Currently, we lead the list of FDA-cleared medical devices, with more than 80 devices cleared.

And we have been integrating AI into our devices to help with image reconstruction, interpretation, acquisition guidance, automated segmentation, and workflow automation. We are currently on the path of moving beyond those solitary devices and single episodes of care to really help health care providers solve broader problems. We already have products that support real-time capacity management in clinics and hospitals, real-time remote scan assistance, workflow balancing, and virtual care solutions, and we want to push these boundaries even further to really help providers deliver high-quality, personalized care.

I just want to make a disclaimer here that I might present some products that are still under development, that might never become products, and that have not been approved by the FDA or any regulatory agency. I want to showcase where we invest in our research and development and where we're heading. So just to give you an overview of our strategy: we want to unlock the potential of AI and cloud computing, not only in our devices, to make them smarter and to improve speed and guidance using foundational models, but also across the patient care journey.

So we want to go from early screening to diagnosis to treatment, turning data into actionable insights to enhance the decision making of clinicians and health care providers. We also want to keep using AI across the enterprise to drive operational efficiency; for instance, we have a command center managing patient workflows and making them more effective using predictive analysis. So as you can see, we are very much a technology company. Currently, in our devices, we have solutions such as AIR Recon DL that speed up scans by 50%, solutions that improve detection of lesions using AI, and solutions that use AI to do organ segmentation, which also reduces time for clinicians, so that the humans in the loop, the health care providers, can spend their time where it is really needed: on patient interaction.

Some of the use cases we are exploring are AI for diagnostic image interpretation, optimizing patient and capacity management, clinical decision support, improving provider efficiency, and also IoT for remote patient monitoring. We have more than 25 projects for 2025 where we're looking to leverage foundational models. As you can see, technology is at the core of our business. And where does responsible AI come into play across all of it? I would like to jump into a little bit of how the regulatory landscape is being shaped around the world, because this is one of the main factors driving our responsible AI program and its implementation. If we look at the global landscape of AI standards and regulations (this slide is courtesy of Baker McKenzie), you can see we have AI regulations and standards being drafted all around the world. I would like to highlight some of them for you.

In the US, we have many state laws arising, such as the Colorado AI Act. In the European Union, we have, I guess, the most comprehensive AI legislation, which has now been enacted and has already entered into force. In Brazil, for instance, where I live, we have a proposed draft bill which is very much based on the EU AI Act, adopting risk classification for the different use cases. So as you can see, we have a constantly changing, evolving global landscape, which is imposing new obligations on both providers and deployers of AI. I would like also to touch base a little bit with you on the EU AI Act because, as I told you, this is the most comprehensive legislation, and it's a legislation about fundamental rights and also about product safety. What this legislation does is classify different use cases, not the type of technology; it doesn't say machine learning or generative AI or LLMs.

It does not classify the technology itself, but rather what it is being used for. So it has different classification levels according to the use case: some AI use cases are prohibited, some of them are classified into the high-risk bucket, and some of them only have transparency requirements for disclosure. This law imposes penalties, and it has an enforcement authority, which is the AI Office. It's particularly relevant to us at GE Healthcare, not only because we are a global company operating in Europe, but because medical devices fall into the high-risk category. And if you have a high-risk AI system according to the EU AI Act, then you have a whole new set of obligations around conformity assessments, although this was already part of the medical device regulation we have in Europe. We also have to implement risk management and quality management systems, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity.
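This use-case-based (rather than technology-based) classification can be sketched as a lookup from use case to risk tier. The tier names mirror the Act's categories, but the example mapping and all names below are a hypothetical illustration, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY_ONLY = "transparency obligations"
    MINIMAL = "minimal risk"

# Hypothetical mapping of use cases to tiers; the Act classifies the use,
# not the underlying technology (machine learning, generative AI, LLMs, ...).
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "AI as a safety component of a medical device": RiskTier.HIGH_RISK,
    "chatbot interacting with end users": RiskTier.TRANSPARENCY_ONLY,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default to minimal risk when the use case is not in a named category
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The same model, deployed for two different use cases, can therefore land in two different tiers with entirely different compliance obligations.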

So there's a whole set of new obligations coming, and I want to talk a little bit about the timeline of the EU AI Act. It entered into force in August of last year, 2024, and it's taking a phased approach towards enforcement, so the articles are entering into force in this phased approach. The key date for us at GE Healthcare, as a medical device company, as a med tech company, is really August 2027, which is when the obligations for high-risk AI systems that are products required to undergo third-party conformity assessment, such as medical devices, enter into effect.

So now I want to jump into where the magic of responsible AI really happens. So far, you've had a little bit of background on how we at GE Healthcare are a technology company, how the global legislations are changing and evolving, and how trust was always the essence of our relationship with customers, patients, and health care providers. Our responsible AI program, which we're currently building and implementing, builds upon some practices we already had, because, as you've seen, we have been integrating AI into our products since 2014. So it's not something new for us, but for sure we're making this program much more robust right now, not only to comply with the legislations, but also to enhance trust. So for the ones that are not familiar with the term, what is responsible AI?

Responsible AI really refers to the practice of developing and deploying AI systems in a way that is ethical, transparent, accountable, and aligned with societal values and laws. AI, as you might be aware, is a socio-technical system which interacts with society, with people, with companies, with the environment. So we really need to proactively address the risks of AI and ensure that its usage benefits all stakeholders involved while also mitigating unintended harms. It's not about isolated practices; it's rather about achieving a harmonious balance between ethical considerations, transparent decision making, clear accountability structures, and alignment with broader societal values. And this program gives and enhances value in several ways: as I said, not only to comply with the evolving global regulations that we have, but also to uphold fundamental rights, promoting fairness and mitigating bias to meet those ethical obligations, and to enhance stakeholder trust because of the transparency, explainability, and accountability components that we have across the whole AI life cycle.

It's also important for providing sustainable innovation, because by managing AI risks and treating them as projects, we secure the long-term viability of all of those technology innovations while giving them legal certainty; that is how we provide this sustainable innovation.

So at GE Healthcare, how are we tackling this? We have started designing and building the responsible AI program, and one of the first steps was really to define the scope of the program: not only when we are developing AI systems, but also when we are deploying AI internally. So we defined the scope of the program and then went on to look at which principles we would be adopting. We did an extensive benchmark of the market, of the standards and regulations, and now our program itself is mostly constructed on top of the NIST AI Risk Management Framework, which offers a set of controls to be adopted.

And we have cross-checked the NIST principles against the principles mentioned in the recitals of the EU AI Act, and we have come up with this list of seven different responsible AI principles, which I will go through very quickly. We aim for our AI systems to be interpretable, or, if they are not interpretable, for their outputs to be explainable to the users interacting with the AI; to be privacy-enhanced, respecting privacy legislations like the GDPR, the LGPD in Brazil, HIPAA in the US, and the CCPA, because the privacy principles still apply. We aim to develop and use AI systems in a way that encourages fairness and increases access to care, for those systems to be safe for human life, health, property, and the environment, and for those systems to be valid and reliable, so we can trust that the outputs are consistent and accurate.

We hold ourselves accountable through governance, and we encourage transparency by sharing information at all times whenever we're using or deploying those systems. And we aim to be secure and resilient, so we can leverage our capabilities to deploy AI that withstands unexpected adverse events.

So we have adopted those seven principles, and from them we're building the program: creating policies, procedures, and standards to really create this risk management system across the AI life cycle, always assessing throughout those seven different dimensions. What does our governance structure look like? We have a responsible AI council with diverse skills, because this is an extremely cross-functional field. It's not only legal; it's not only IT; it's not only data. It's a cross-functional team that has come together, and we have a council that is really overseeing this implementation and approving the policies, with some things escalated to the council at times. As you can see, we have representatives from data technology; from communications, to develop internal and external communication plans and look at the change management that this program requires; and business domain experts, to provide the use cases not only for our products but also for how we are internally leveraging and expanding our digital capabilities as an enterprise.

We have representatives from the core of our tech team, the data science and AI team, as well as from data privacy, culture and belonging, cyber and data security, quality and regulatory compliance, and risk management. So it's a very diverse skill set. As for how we are structured: last year we started a few work streams, and this year we have started a different set of work streams. All of them continue to run, and we have developed a three-year work plan to really enhance the responsible AI program. In 2024, we had the governance work stream kicking in; the explainability work stream; the mapping, where we mapped all the requirements from the FDA, from the EU AI Act, and from other applicable legislations; the communications work stream; the resiliency and security framework; validity and reliability; transparency and accountability; privacy; safety; quality and regulatory; and the fairness framework.

So you will see that we have different leaders and owners for those work streams, and this is what makes the responsible AI program a big challenge to manage, because, again, it's very cross-functional work, and each of these work streams has different members spread across the organization and across different business units. It has been an amazing and incredible journey with my peers at GE Healthcare to build and implement this responsible AI program. We are still executing the plan; there's a lot of work to do, and a lot that has been done already. It's been an incredible and exciting journey to have work that makes an impact on people's lives and really enables the future of health care. So to close my talk here, I want to give you a quick overview of how this applies in practice.

And this is one example: our real-time ejection fraction tool, which is really streamlining left ventricular function assessment using one of our point-of-care ultrasound devices. You can see here the results being shown: if it's green, it's highly reliable and aligned with expert calculations; if yellow, it should be interpreted with caution; and if red, there is insufficient reliability. This gives the clinician, at the point of contact with the patient, more time to develop this interaction, along with the possibility to interpret the results and rely on them with trust. So that would be the session that I have prepared for you. I know I'm almost at the top of the hour, so I would like to thank again Women in Tech.