Member Spotlight: Lauren Kahre

Views expressed are Lauren's own and do not reflect the opinions or positions of the Coalition for Health AI.

Lauren Kahre is a strategy and policy consultant working on AI policy with the Coalition for Health AI (CHAI). She comes from a public health policy background and got into health tech in 2021, working on telehealth legislation. We sat down with her to talk about her recent work on patient trust and AI transparency.

How did you get into health tech?

I started covering congressional hearings in 2021, when telehealth was a major focus. I was doing coalition work on telehealth access, expanding broadband, and looking at interoperability: updating FHIR standards, things like that.

From those conversations, I saw how technology could help build equitable systems at scale. I became fascinated with how this technology was regulated, what foundational legislation we were using, and how outdated some of it was. Now health systems are deploying advanced digital platforms and AI, and regulation has become a big question mark.

You describe yourself as a tech optimist. What does that mean for your work?

I see the immense benefits that come from this technology. As we're building these foundational models, they can improve care and address so many barriers. I know I'm maybe a little naive in that.

But I also know it can deepen the existing divide if deployed without appropriate safeguards. That's what drives my focus on governance and transparency. I'm not pro-regulatory. I'm pro-safety and pro-trust.

What surprised you most in the patient survey research?

How much of a priority human oversight was to patients. 75% of respondents report using AI in healthcare settings, but only 13% feel very comfortable with it. 51% say AI makes them trust healthcare less. Only 12% say it increases trust.

More than 80% say their trust would increase with clear accountability measures. That means third-party organizations monitoring this, nonprofit oversight, not just the vendors monitoring themselves.

Top patient concerns weren't what people expected, right?

Right. People focused on data commercialization and lack of human oversight more than algorithmic bias. Patients want to know: who's accountable when something goes wrong? Is there a human in the loop?

I think it's going to be a long time before we can take the human out of the loop. Most patients want the assurance that a human being is looking at this, not that everything is running on autopilot.

A lot of organizations say their AI improves patient care. Why focus on asking patients directly?

That's exactly it. A lot of organizations tout "this is going to improve patient care," but I was really proud that we took the time to ask: what does the patient actually want? What makes them feel that AI can be trusted?

We're trying to ensure that patients and everyone within the health ecosystem can adopt this technology in a way that builds trust. We can measure model drift. We can test models with new data sets. But what are the variables we don't even know to test when we're looking at patient health outcomes?
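For readers curious what a drift check like the one she mentions can look like, here is a minimal sketch in Python. It is an illustration, not CHAI's methodology: it compares one input feature's training-era distribution against newer production data with a two-sample Kolmogorov-Smirnov test. The synthetic data, the single feature, and the 0.05 threshold are all assumptions made for the example.

```python
# Hedged illustration of a basic model-drift check (not CHAI methodology).
# Compares the distribution of one input feature at training time against
# newer production data using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Synthetic stand-ins: a real pipeline would load actual feature values.
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # training-era values
production = rng.normal(loc=0.3, scale=1.1, size=5000)  # newer, slightly shifted values

stat, p_value = ks_2samp(reference, production)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Possible drift: KS statistic={stat:.3f}, p={p_value:.4f}")
else:
    print(f"No detectable drift: KS statistic={stat:.3f}, p={p_value:.4f}")
```

A check like this covers the measurable side she describes; her larger point is that testing against patient health outcomes involves variables we may not yet know to test for.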

Where do you see this work going?

I think regulators should try to enhance HIPAA and update it for current technology. When you look at wearables and consumer applications, they're not covered by HIPAA.

I think policy regulators are going to use the foundations we have right now and build upon them. There's been some talk about regulatory sandboxes, which I think is interesting. They still allow for innovation but create more cohesive ways for the private sector to work together.

The transparency paper we published really showed that regulatory entities across the political spectrum want some form of human in the loop. That's bipartisan.

What keeps you optimistic?

I think a lot of healthcare delivery systems are looking at how to do this in a safe, reasonable way. The amount of research being produced right now? I can't keep up with it. People are putting frameworks in place.

And the conversation is happening. We're asking the right questions about transparency, accountability, and human oversight. That matters.

How can people support this work or get involved?

The best way to reach out to me is through LinkedIn. If people want to get involved with CHAI, we have different work groups on our website. There's a policy work group, an engineering work group that's on GitHub, and other ways to participate.


Lauren works with CHAI's Policy Workgroup on health AI transparency and governance. The patient survey she helped frame was conducted by NORC at the University of Chicago and funded by the California Health Care Foundation. Read the full research: the transparency paper and the patient survey.
