Chapel Hill — UNC Health is ready to be a model for AI in healthcare.
The medical system is at the forefront of trying out new technologies and partnerships to improve health outcomes. In May, it announced a partnership with Epic Systems to be an early adopter of Epic’s new generative AI tools in its electronic health record (EHR) software. The organization is also developing a framework for responsible AI development.
Content edited for length and clarity.
WRAL TechWire: You’re working on a “Responsible AI Framework.” Can you talk about what stage that is in and if there are plans to make it public?
Brent Lamm: We’ve had our data science team in place since about 2016, and we’ve got a dozen or more solutions in production across various use cases that use AI by a commonly agreed-to definition of what AI is. Most of them are machine learning, or narrow AI, not the generative AI that is obviously taking the world by storm. But Rachini and her team have actually been working in the responsible AI space for quite a while. How I would characterize this framework is that it’s not something new. We had a smaller version, and now my team is expanding it to be much more comprehensive given the nature of generative AI.
Rachini Moosavi: Our goal really is to help elevate all of healthcare, and collaborating is a big piece of that. We have what I’ll call something beyond a draft; it’s functional. We’re testing it, and we’ve actually started using it, along with the questionnaire that goes with our framework, with a couple of vendors. So it is functional. We have not yet taken the first questionnaire back to our AI and Automation workgroup, but that’s coming.
WRAL TW: Can you talk more about your “AI and Automation” workgroup and its membership?
RM: Prior to kicking off our AI and Automation Advisory Group, we mainly self-governed our AI development with a more technology-led group. The participants were IT collaborators, including our Chief Medical Informatics Officer and our Chief Nursing Informatics Officer, and some other clinicians that are part of our IT family.
In developing a Responsible AI (RAI) framework, to come up with the four-box model of fairness, accountability, trustworthiness, and transparency, we gathered different stakeholders from across the system to collaborate on what should go into this and determine ‘What do we hold ourselves and our technology partners accountable for?’ That group of stakeholders was project-specific, focused on developing our RAI framework.
We then decided that it was time to kick off the AI and Automation Advisory Group. We really needed to think about the other stakeholders that are part of our health system, how they need to weigh in, and what expertise they can bring to the table. In addition to having a bunch of different clinical disciplines, we also have some administrative disciplines. So, looking at things like care access, operations, and some other spaces. We also are bringing in an ethicist. We have legal, we have privacy, and we have those other disciplines that need to have a seat at the table.
WRAL TW: What kinds of new AI development will the workgroup be reviewing? What’s a use case?
BL: We have had our working prototype of our internal ChatGPT up and running for probably six weeks now. We’ve been very purposeful to have legal and compliance groups be a part of the early prototyping of that solution because we wanted them to be very educated and immersed in it before we open it up to other groups within the company. So this key group is really going to be important. They’re going to have a huge role to play in making sure that we’ve got all this worked out.
WRAL TW: Can you talk more about this internal ChatGPT? Is that for UNC Health staff only?
BL: Yes, for internal use. In partnership with Microsoft, we’ve been provisioned with OpenAI software within our Azure environment, so it stays within UNC Health’s secure environment. Let’s say a physician wants generative AI to help them draft a prior authorization letter. They can’t go to the public ChatGPT, because they’re going to have to put protected health information into that letter, and we obviously don’t want that. This internal ChatGPT gives our teammates the same capabilities as the external ChatGPT, but they can actually prompt it with sensitive data, like protected health information.
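As a rough sketch of the pattern Lamm describes, here is how a prior-authorization draft request might be assembled for a privately hosted model. The function name, patient fields, endpoint, and deployment name are illustrative assumptions, not details of UNC Health’s actual system; the commented-out call uses the Azure-flavored client from the `openai` Python SDK.

```python
# Hypothetical sketch: drafting a prior-authorization letter against a
# private Azure OpenAI deployment. All names and fields here are invented
# for illustration, not UNC Health's actual configuration.

def build_prior_auth_messages(patient_name, medication, diagnosis):
    """Assemble chat messages for a prior-authorization draft.

    Because the model runs inside the organization's own Azure tenant,
    protected health information can appear in the prompt.
    """
    system = ("You are a clinical documentation assistant. Draft a concise "
              "prior-authorization letter for the payer.")
    user = (f"Patient: {patient_name}\n"
            f"Medication requested: {medication}\n"
            f"Diagnosis: {diagnosis}\n"
            "Write the letter for physician review before sending.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

messages = build_prior_auth_messages("Jane Doe", "adalimumab",
                                     "rheumatoid arthritis")

# The call itself would go to the Azure-hosted endpoint, e.g. with the
# `openai` Python SDK (requires credentials, so not executed here):
#
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<tenant>.openai.azure.com",
#                        api_key="...", api_version="2024-02-01")
#   draft = client.chat.completions.create(model="<deployment-name>",
#                                          messages=messages)
```

The key design point is the one Lamm makes: the prompt never leaves the organization’s own cloud tenant, so PHI stays inside the secure boundary.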
WRAL TW: What’s the timeline for an internal UNC Health ChatGPT?
RM: So we’re doing a slow rollout, rolling it out strategically to the teams whose input we need on the capabilities of the tools. We’re also going by use cases. For everything we’ve identified as self-service, where someone could take advantage of this internal ChatGPT-like tool, we are reaching out to those stakeholders and asking them to join a pilot group to test the capabilities. That way we can gather feedback from them while they’re also getting some benefit from using the technology.
BL: And that feedback is going to go into the AAA (AI and Automation Advisory) group, which is going to be looking at it in terms of the Responsible AI framework.
WRAL TW: Has the ability to use generative AI changed any of the data you’re collecting or analyzing?
BL: I’ve spent the last 14 years working in healthcare IT and informatics. And during that time, and for many years before that, there’s been this huge push to move our physicians and nurses and other clinicians to capture discrete data in the electronic health record instead of writing prose in a narrative in a clinical note.
One of the epiphanies I think we’re having right now is how generative AI changes that need. There’s a whole lot of knowledge and opportunity in clinical notes and unstructured prose, and with generative AI we may now have a much better chance of working with that text than we did with approaches that relied on discrete data. So the answer to your question is, I don’t know of anything we’re collecting differently. But we are beginning to revisit use cases where we can go back and potentially use traditional natural-language clinical notes in ways we hadn’t thought of before.
RM: Yeah, let me build on what Brent said. The big conversation that fits perfectly here is “social determinants of health”: the things in everyday life that can impact a person’s health and well-being. In the course of talking to your caregiver during an appointment, you might mention something like, ‘Hey, I live alone,’ or ‘I have transportation issues,’ or ‘I often go hungry,’ or other impactful statements like that. Those are pieces of critical information about our patients’ sociodemographics that traditionally we had to capture by asking specific questions and recording the answers in discrete fields before we could take advantage of them.
Imagine if, with generative AI, everything I’ve mentioned to the different providers I’ve seen over months and years could be extracted from my clinical summaries in a way that paints a picture of who Rachini Moosavi is and what my social determinants of health are, regardless of which caregiver I told. My caregiver would then have the opportunity to think about care plans based on that information, summarized for them in a clear way. That’s transformational.
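As a toy illustration of the shape of this idea (not UNC Health’s approach, which relies on generative AI rather than fixed rules), a simple keyword scan can surface social-determinants-of-health mentions scattered across free-text notes. The category names and phrase lists below are invented for the example:

```python
# Toy illustration only: a rule-based scan for social-determinants-of-health
# mentions in free-text notes. A generative model would handle paraphrase and
# context far better; this just shows the input/output shape of the idea.

SDOH_PATTERNS = {
    "social_isolation": ["live alone", "lives alone"],
    "transportation": ["transportation issue", "no ride to"],
    "food_insecurity": ["go hungry", "food insecure", "skipping meals"],
}

def extract_sdoh(notes):
    """Collect SDOH categories mentioned anywhere across a patient's notes."""
    found = set()
    for note in notes:
        text = note.lower()
        for category, phrases in SDOH_PATTERNS.items():
            if any(phrase in text for phrase in phrases):
                found.add(category)
    return sorted(found)

# Notes written by two different caregivers, months apart.
notes = [
    "Pt reports she lives alone since spouse passed.",
    "Missed follow-up; states transportation issue getting to clinic.",
]
print(extract_sdoh(notes))  # -> ['social_isolation', 'transportation']
```

The payoff Moosavi describes is the aggregation: each statement may have been made to a different provider, but the summary pools them into one picture for whoever sees the patient next.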
WRAL TW: What other feedback are you getting on the Responsible AI Framework?
RM: We work with Gartner on an ongoing basis, and we’re awaiting their feedback on our Responsible AI Framework. In the last couple of weeks we met with the Microsoft experts who developed Microsoft’s AI framework, as well as Nuance (Microsoft’s AI partner for cloud healthcare solutions), which has published a lot in this space as well. So there are continuing conversations and collaborations that let us fine-tune and tweak our model and see if there are gaps: things we might have missed, things we need to augment, or, when working with vendors or evaluating our own products, things we should be thinking about differently.
WRAL TW: Can you talk about training for using these AI tools? Is that something you’re working on as part of getting these tools in the hands of your clinicians – making clear the best ways to use them?
BL: Last year, UNC-Chapel Hill announced its first new school in a very long time, the School of Data Science and Society. We have a long-standing collaboration with the leadership that is helping to set up and launch the school, and one of the things we’re partnering with them on right now is training for our users around generative AI. They’re excited because we can bring the healthcare lens to what they’re doing, and healthcare is a huge part of what they’re thinking about in terms of where their graduates could work.
We don’t have anything specific worked out right now. We’re in early conversations with them about what would be the first course or online self-paced courses that we could offer to our UNC Health teammates.
RM: We always approach training from two completely different angles. There’s application-based training: for any new technology we put out there, we have to provide the how-to, step-by-step instructions on using it. But there’s also broader education about the technology, building a level of comfort around it. I think there’s, appropriately, some fear as well as some excitement going on with AI. We want to make sure we provide context to each group using the technology in a way that’s easily consumable for them and drives them toward the best usage and best experience.
WRAL TW: Can you share a bit about how you plan to evaluate your AI over time? What are your KPIs (key performance indicators) and where are you aiming when you look ahead?
RM: That’s something that we want to co-create with our AI and Automation governance group to ensure that they have the buy-in. We want to make sure that this is representative of the multidisciplinary group that’s helping us to form those concepts. But through these partnerships with Microsoft and other groups, we’re also gathering information on how they’re building KPIs into their Responsible AI frameworks to try and make sure we’re measuring the right things.
Also whenever there’s a new model that comes up, whether it’s something that Epic has delivered, or we purchased it, or we build it ourselves, we always look at a measurement of how effective it is. We do statistical analysis to really be able to provide some level of trust and understanding of how well the model fits the information that it’s trying to share. We’ve been doing that since 2016, and that will continue into our Responsible AI Framework.
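The interview doesn’t name the specific statistics involved, but one common measure of fit for a clinical prediction model is the area under the ROC curve (AUROC): the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch, using that pairwise definition:

```python
# Illustrative fit statistic for a binary prediction model: AUROC via the
# pairwise (Mann-Whitney) definition. Ties between a positive and a negative
# score count as half a win. (AUROC is an illustrative choice here; the
# interview doesn't specify which metrics UNC Health reports.)

def auroc(labels, scores):
    """labels: 0/1 outcomes; scores: model risk scores, same order."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auroc(labels, scores))  # -> 0.75
```

An AUROC of 0.5 means the model ranks cases no better than chance, and 1.0 means it separates them perfectly, which is one way to give clinicians “some level of trust and understanding” of a model before it goes into use.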
We also measure model drift over time because the data can change, the information that’s feeding it can change. You have to continuously monitor the impact of the models that you build. And all of that also gets put into the Responsible AI Framework of course to make sure that the model remains effective.
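Again purely as an illustration (the interview doesn’t say which drift statistic UNC Health uses), one common convention for the continuous monitoring Moosavi describes is the Population Stability Index (PSI), which compares the distribution of model scores at deployment time against the current distribution:

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# baseline sample of model scores and a current sample. A common rule of
# thumb is PSI < 0.1 stable, > 0.25 drifted; thresholds are conventions,
# not anything stated in the interview.

import math

def psi(baseline, current, bins=10):
    """PSI between two samples of scores in [0, 1], using equal-width bins."""
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        # Smooth empty bins slightly so the log ratio is always defined.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.1, 0.2, 0.2, 0.3, 0.5, 0.6]   # scores at go-live
current  = [0.1, 0.2, 0.25, 0.3, 0.5, 0.55]  # similar distribution
drifted  = [0.7, 0.8, 0.85, 0.9, 0.95, 0.9]  # population has shifted
print(psi(baseline, current) < psi(baseline, drifted))  # -> True
```

A scheduled job recomputing a statistic like this over recent scores is one simple way to implement the “continuous monitoring” of deployed models that the framework calls for.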
Tremendous thanks to Brent and Rachini for speaking with us and sharing their work!