AI in Healthcare: Automating the Patient Chart

Automatic Summary

Automating Patient Charts Using AI: A Step Towards Streamlining Clinical Documentation

Welcome to our blog on how Artificial Intelligence (AI) is revolutionising the healthcare sector by automating patient chart documentation. This blog post is based on a presentation given by industry expert Mythika Poddar, co-founder and CTO of Abstractive Health, a health tech startup dedicated to building an intelligent AI assistant for physicians.

The Problem: Overload of Patient Data

In modern healthcare practices, one of the biggest challenges facing physicians is the overload of patient data. With the advent and rise of electronic health records (EHRs), practitioners often find themselves grappling with voluminous patient information, leading to burnout and fatigue. This has created a need for solutions that help streamline clinical documentation.

The Solution: AI in Healthcare

In the face of this data dilemma, Abstractive Health teamed up with Weill Cornell Medicine in November 2020 to develop a solution that would revolutionise health record management.

The proposed solution? Utilizing natural language processing, a branch of AI, to extract the most critical information from vast patient charts. By leveraging transformer models like GPT, Llama, and Med-PaLM, Abstractive Health aims to tackle the daunting task of summarizing extensive patient charts into a few concise, salient sentences.
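To make the idea concrete, here is a minimal sketch of abstractive summarization with an off-the-shelf transformer. The model name and the sample note are generic stand-ins for illustration, not the clinical models described in this post.

```python
# Minimal sketch of abstractive summarization with a transformer model.
# "facebook/bart-large-cnn" is a general-purpose stand-in, not a clinical
# model; the note text is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

note = (
    "72-year-old male admitted with shortness of breath and bilateral leg "
    "swelling. Echocardiogram showed reduced ejection fraction. Started on "
    "IV furosemide with improvement in symptoms. Discharged on oral "
    "diuretics with cardiology follow-up in two weeks."
)

summary = summarizer(note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```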

The Process: Fine-tuning Models and Annotating Records

In the endeavor to maximize results and minimize factual errors, Abstractive Health fine-tuned open-source models like Llama 2 for specific clinical tasks. The team accessed 100,000 patient records from Weill Cornell and worked in conjunction with physicians to annotate these records.

  • Data Pruning: Distilling the source notes down to the most important and relevant content was a crucial step. This helped prevent the model from generating false or irrelevant information, a phenomenon known as hallucination.
  • Model Constraints: To prevent the model from making unnecessary medical inferences or diagnoses, it was constrained from including medical terminology not present in the source notes (a minimal sketch of this kind of check follows the list).
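As one hedged illustration of the constraint idea, a post-generation check could flag any medical term that appears in the generated summary but never appears in the source notes. The term list and helper below are hypothetical and not Abstractive Health's implementation.

```python
# Hypothetical post-generation check: flag medical terms in a model's summary
# that never appear in the source notes. The term list is illustrative only.
MEDICAL_TERMS = {"pneumonia", "sepsis", "hypertension", "atrial fibrillation"}

def unsupported_terms(summary: str, source_notes: str) -> set[str]:
    """Return medical terms used in the summary but absent from the source."""
    summary_lower = summary.lower()
    source_lower = source_notes.lower()
    return {
        term for term in MEDICAL_TERMS
        if term in summary_lower and term not in source_lower
    }

flags = unsupported_terms(
    summary="Patient treated for sepsis and hypertension.",
    source_notes="Admitted with hypertension; started on lisinopril.",
)
print(flags)  # {'sepsis'} -> would be rejected or sent back for regeneration
```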

The Challenges: Summarizing Entire Patient Charts

The task of summarizing an entire patient chart, usually upwards of 300,000 words, is not straightforward. As a solution to this challenge, Abstractive Health conducted research that broke the summarization task down into smaller, templated sections that were easier to manage.
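A rough sketch of that decomposition, assuming one summarization model per templated section; the section names and model checkpoints below are placeholders rather than the actual system.

```python
# Sketch of the "divide into templated sections" idea: summarize each section
# of a discharge summary with its own model, then assemble the result.
# The per-section model names are placeholders, not real checkpoints.
from transformers import pipeline

SECTION_MODELS = {
    "history_of_present_illness": "your-org/hpi-summarizer",        # hypothetical
    "hospital_course": "your-org/hospital-course-summarizer",       # hypothetical
    "clinical_follow_ups": "your-org/follow-up-summarizer",         # hypothetical
}

def summarize_chart(notes_by_section: dict[str, str]) -> str:
    """Summarize each templated section independently and concatenate."""
    parts = []
    for section, model_name in SECTION_MODELS.items():
        text = notes_by_section.get(section, "")
        if not text:
            continue
        summarizer = pipeline("summarization", model=model_name)
        summary = summarizer(text, max_length=120, min_length=20)[0]["summary_text"]
        parts.append(f"{section.replace('_', ' ').title()}:\n{summary}")
    return "\n\n".join(parts)
```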

The Outcomes: Verdict from Physicians and Creation of Abstractive Health

The AI-generated summaries were evaluated and found to be comparable to those produced by physicians. As a result of this research and testing, Abstractive Health built an EHR-integrated product designed to streamline clinical documentation.

In Conclusion

In a nutshell, AI has enormous potential to revolutionise healthcare practices by automating and streamlining clinical documentation. The work done by Abstractive Health is a testament to this. If you are interested in learning more about their work, visit their website or get in touch with them for any queries.

Tags:

#healthtech, #ai, #artificialintelligence, #healthcaretech, #EHR, #abstractivehealth, #dataoverload, #clinicaldocumentation, #clinicalsolutions


Video Transcription

Hi, my name is Mythika Poddar, and I'll be presenting on AI in healthcare: automating the patient chart. So just a quick overview on myself. Like I said, my name is Mythika Poddar. I have a background in computer science and machine learning. I graduated from Lehigh University in 2018, and I also went and got my master's from Cornell in 2022, a master's of computer science. While getting my master's, I focused primarily on machine learning and natural language processing, and I did my master's-level research on biases in text generation from large language models. I also previously worked at IBM and Verizon. I'm currently the co-founder and CTO of Abstractive Health. Abstractive Health is a health tech startup building a physician's AI assistant to streamline clinical documentation. And I currently live in New York City. So I just wanna set the scene here for everyone: why exactly am I talking about automating the patient chart?

So there's currently a huge problem in healthcare today: there is too much data. One patient chart has more data than any one person can quickly comprehend. It takes a while for a physician to read through and understand everything. And the reason for this is electronic health records, or EHRs. Ever since they were introduced and health systems transitioned to them, it's led to increased burnout and fatigue among physicians due to the amount of documentation tasks, data entry tasks, and just the amount of data review that is required. So there's been increased effort to help doctors out with this data problem, streamline these tasks, and try to make it easier for them to understand who their patients are and get the documentation that they need. So back in November of 2020, the Abstractive Health team partnered with Weill Cornell Medicine to help solve this problem. At the time, we were still grad students at Cornell, but we've maintained this partnership with Weill Cornell Medicine since then as we've built Abstractive Health.

So the solution that we proposed to this problem was that we would try to condense the hundreds of pages of notes in a patient chart down to a few key sentences using natural language processing. We intended to extract the most salient information from a patient chart to make it easier for a physician, with the help of transformers. By now, you've all heard of ChatGPT. There have been many more models out on the market since then: there are open-source models like Llama 2 and, very recently, Llama 3, and there are also medically focused models such as Med-PaLM 2. Transformers have proven to be very efficient at textual tasks like text summarization, and we were focused on using them in a healthcare setting.

So we focused primarily on open-source models like Llama 2, both for security reasons and because we wanted to fine-tune these models for a very specific clinical task. Just a quick overview of why we went with the fine-tuning approach. We also could have gone with a zero- or few-shot prompting approach. For those of you who aren't aware, zero- or few-shot prompting means engineering a prompt for the model, potentially with some examples. It doesn't require a large training dataset, which is why it's become increasingly popular and has shown good results with models like GPT-4.
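For context, a few-shot prompt for this task might look something like the following; the examples and wording are invented purely to illustrate the technique.

```python
# Rough illustration of few-shot prompting for note summarization: a prompt
# with worked examples, no fine-tuning required. The examples are invented.
EXAMPLES = [
    ("Admitted with chest pain. Troponin negative. Stress test normal.",
     "Chest pain ruled out for cardiac cause; discharged after normal stress test."),
    ("Fell at home, right hip fracture on X-ray. Underwent hip pinning.",
     "Right hip fracture after a fall, treated surgically with hip pinning."),
]

def build_few_shot_prompt(note: str) -> str:
    parts = ["Summarize the clinical note in one sentence.\n"]
    for source, summary in EXAMPLES:
        parts.append(f"Note: {source}\nSummary: {summary}\n")
    parts.append(f"Note: {note}\nSummary:")
    return "\n".join(parts)

print(build_few_shot_prompt("Admitted with cellulitis, improved on IV antibiotics."))
```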

And there has been research showing that for very short notes, this works well for getting summarization results from a model. However, for long context windows, like the one we would face with a medical chart, fine-tuning is still the best approach.

Fine-tuning a model is still the only way to comprehensively summarize a long context window, and it results in better coherence and fewer factuality errors. So that's exactly what we did: we fine-tuned our models on real clinical data. Through our partnership with Weill Cornell Medicine, we had access to 100,000 patient records from Weill Cornell, and then we worked with physicians to annotate those records. This was arguably the most important task. We spent a long time on it, multiple iterations, to really make sure that our training dataset was curated perfectly, because this is also one of the biggest ways to reduce hallucination in models. So, a bit about the training dataset we were using.
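The talk doesn't spell out the training setup, but one common way to fine-tune an open model such as Llama 2 on (notes, physician summary) pairs is with LoRA adapters, roughly as sketched below; the dataset path, prompt format, and hyperparameters are placeholders.

```python
# One possible fine-tuning setup: LoRA adapters on a Llama 2 base model,
# trained on (source notes -> physician summary) pairs. Paths, prompt format,
# and hyperparameters are placeholders, not the setup described in the talk.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with small trainable LoRA adapters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Expect a JSONL file of {"notes": ..., "summary": ...} records (hypothetical).
dataset = load_dataset("json", data_files="annotated_records.jsonl")["train"]

def to_features(example):
    text = f"Notes:\n{example['notes']}\n\nSummary:\n{example['summary']}"
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="summarizer-lora", num_train_epochs=2,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```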

We were using real physician summaries of patients. And in real life, when physicians are summarizing their patients, they will sometimes include external information in their summaries that was never written down in their notes, such as information they have from a patient conversation or from external sources. For the purposes of training a model, we couldn't have that external information in those summaries, because it would lead the model to hallucinate later down the line. So things like that, really pruning the information down to the most salient and important content from the source notes so that we had the perfect labels for our model, were very, very important for us. And then lastly, we also worked on constraining our models in terms of medical inference.
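As a rough illustration of that pruning step, one could drop reference-summary sentences whose content words never appear in the source notes, keeping the training labels grounded. The word-overlap heuristic below is an assumption for illustration, not the team's actual procedure.

```python
# Illustrative label-pruning pass: drop reference-summary sentences whose
# content words never appear in the source notes, so training labels stay
# grounded. The word-overlap heuristic is an assumption, not the team's method.
import re

def grounded_sentences(reference_summary: str, source_notes: str,
                       min_overlap: float = 0.5) -> list[str]:
    source_words = set(re.findall(r"[a-z]+", source_notes.lower()))
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", reference_summary.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap >= min_overlap:          # sentence is supported by the notes
            kept.append(sentence)
    return kept

clean_label = " ".join(grounded_sentences(
    reference_summary="Started on apixaban. Family reports prior stroke at home.",
    source_notes="Atrial fibrillation noted; started on apixaban for anticoagulation.",
))
print(clean_label)  # keeps the grounded first sentence, drops the ungrounded one
```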

So we were not trying to do any type of predictive diagnosis in these summaries, and we constrained our models, again, to prevent any hallucination of medical words, stopping them from including medical words that were not present in the original source notes. So even with fine-tuning, automatically summarizing an entire medical chart is something that's very hard to do. You can't simply do it by fine-tuning your model to summarize notes. Let me get into a little bit more about why. If you look at one clinical note in a chart, that's about 500 words, and a doctor-patient conversation is about 1,500 words. You could fine-tune a model just to summarize one clinical note or one doctor-patient conversation. But with the current word limits that models have, you could not input the entirety of a patient chart. A whole patient chart is upwards of 300,000 words.

And just fine-tuning a model is not enough. So this is the research that we focused on. This was the initial research we did back in 2022. We worked with the neurology inpatient department at Weill Cornell Medicine and NewYork-Presbyterian to automate their discharge summary notes. And this was the structure that we came up with in order to accurately summarize an entire patient chart: we broke it down into multiple smaller summarization tasks. For a discharge summary, there are templated sections: the history of present illness (HPI), the daily course of treatment, which outlines all of the events that took place for a patient while they were in the hospital, and then the key clinical follow-ups. And for each of these sections, we fine-tuned a foundational model on that specific clinical task and curated our training dataset specifically for that section. And then we also used a BERT model for document classification.

So this was essentially to narrow down the notes to the most salient information, making sure that we only included the notes that we knew we wanted to include in the summary. As a result of this research, we also conducted a clinical evaluation with physicians, where they compared our AI-generated summaries to those produced by a physician, and they rated our summaries as comparable to those produced by a physician. This was follow-up research we conducted with the emergency department, producing an emergency handoff summary. It's very similar to a discharge summary, but, again, we broke it down into its component parts: an HPI summary, an ED course-of-treatment summary, and then any recent results and labs. In this case, we used Llama 2 as our foundational model, fine-tuning it on the specific clinical task, and we used a RoBERTa model, again, for sentence classification to narrow down which notes were the most relevant to the summary.
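To illustrate how an encoder classifier could do that filtering, the sketch below scores note sentences for relevance to a section; the checkpoint name and label scheme are assumptions, standing in for the team's fine-tuned BERT/RoBERTa classifiers.

```python
# Sketch of using an encoder classifier (BERT/RoBERTa style) to pick which
# note sentences are relevant to a given summary section. The checkpoint name
# and label scheme are placeholders, not the team's actual classifier.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="your-org/ed-course-relevance")   # hypothetical

sentences = [
    "Patient arrived via EMS with altered mental status.",
    "Cafeteria menu updated for the holiday weekend.",
    "CT head without contrast showed no acute hemorrhage.",
]

relevant = [
    s for s in sentences
    if classifier(s)[0]["label"] == "RELEVANT"   # label scheme is assumed
]
print(relevant)
```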

So from this research, we then built Abstractive Health, which is our startup. We worked on building an EHR-integrated product that we could sell to physicians in order to help streamline their clinical documentation. So this is an overview of our product. We are able to integrate into any EHR system, whether it's Epic, Cerner, or Athena, and basically what we're able to do is pull in the entire patient chart through FHIR and template out different summaries depending on what the physician would like to see.
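As a generic example of what pulling a chart over FHIR can look like, the snippet below searches a patient's DocumentReference resources; the base URL, token, and patient ID are placeholders, and real EHR integrations (Epic, Cerner, Athena) add their own authorization flows.

```python
# Generic example of pulling a patient's clinical notes over FHIR by searching
# DocumentReference resources. The base URL, token, and patient ID are
# placeholders, not a real endpoint.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"        # placeholder endpoint
headers = {"Authorization": "Bearer <access-token>",
           "Accept": "application/fhir+json"}

resp = requests.get(f"{FHIR_BASE}/DocumentReference",
                    params={"patient": "12345", "_count": 100},
                    headers=headers, timeout=30)
resp.raise_for_status()

bundle = resp.json()
for entry in bundle.get("entry", []):
    doc = entry["resource"]
    # Each DocumentReference points at the note content (inline or by URL).
    print(doc.get("type", {}).get("text"), doc["content"][0]["attachment"].get("url"))
```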

Within the summaries, we can highlight key medical terms. We're also currently working on a new feature for billing nudges. And then we're able to cite back to the original source notes. So as you can see here, on the left side we have the summary that we've produced, and on the right side we have the original reference notes that we pulled from the EHR. For each sentence, we are referencing where that summary sentence came from. And this is actually one of our biggest value props, and it's what physicians really like: for every sentence, you can see exactly where that content came from, which helps with their review and helps validate the sources. So, yeah, that is an overview of automating a patient chart. Thank you for listening.
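One simple way to approximate that citation behaviour is to match each summary sentence to its closest source sentence by TF-IDF cosine similarity; this is an illustration of the idea, not necessarily the production approach described in the talk.

```python
# Attribute each summary sentence to its closest source sentence via TF-IDF
# cosine similarity. Sentences are invented; the method is an illustrative
# stand-in for the product's citation feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_sentences = [
    "Patient admitted with community-acquired pneumonia.",
    "Started on ceftriaxone and azithromycin on day one.",
    "Oxygen requirement resolved by day three.",
]
summary_sentences = [
    "Treated with ceftriaxone and azithromycin for pneumonia.",
    "Weaned off oxygen before discharge.",
]

vectorizer = TfidfVectorizer().fit(source_sentences + summary_sentences)
src_vecs = vectorizer.transform(source_sentences)

for sent in summary_sentences:
    sims = cosine_similarity(vectorizer.transform([sent]), src_vecs)[0]
    best = sims.argmax()
    print(f"{sent!r} -> cited source: {source_sentences[best]!r}")
```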

If you're interested in the work we're doing at Abstractive Health, please visit our site. Appreciate your time. If there are any questions, let me know. I think we have a minute left. Thanks, Laura. Thanks, Donna. Okay, well, thank you for listening, everyone. Enjoy the rest of your conference.