Insights by Dr. Andrea Willis, SVP & Chief Medical Officer, BlueCross BlueShield of Tennessee
Healthcare AI is often discussed through a provider lens: hospitals, clinician workflows, documentation, and bedside impact. A recent episode of the Big Unlock Podcast showcased a different perspective when Dr. Andrea Willis, Senior Vice President and Chief Medical Officer at BlueCross BlueShield of Tennessee, brought a “payer-and-population-health” view of what “responsible AI adoption” actually looks like in the real world. As she explained to host Ritu M. Uberoy, “AI doesn’t live in a demo. It lives inside care management, utilization management, pharmacy, quality, equity, member experience, privacy, and governance.”
Since those areas sit at the center of trust in healthcare, Dr. Willis’s definition of “responsible AI” is grounded in practicality. For her, AI must make the system feel more supportive, more understandable, and more transparent, without creating new fear, confusion, or skepticism.
The conversation also opens with an origin story that subtly signals how she approaches healthcare itself. Growing up in Athens, Alabama, a young Andrea heard a mother cat in distress in her grandparents’ shed. She grabbed dishwashing gloves and scissors and went to help. With a little massage, the kitten was delivered successfully.
“That was my first delivery,” she says, and she knew from that moment she would become a doctor.
It’s a memorable story, but it also works as a metaphor for the episode: responsible AI should help reduce pain, reduce fear, and make the system more responsive, without stripping away the human support people need most.
Where Payers Apply AI First: Care Management and Utilization Management
When asked where AI sits today and how organizations move beyond pilots, Dr. Willis points to two areas where payers can drive real, scalable change: care management and utilization management.
In care management, her focus is not “automation for automation’s sake.” It’s the quality of human interaction. She describes an AI-enabled care management experience that compiles what the organization knows about a member so the care manager can stay “fully present” during the conversation. AI can summarize history, capture interaction context, and prompt next steps, reducing the invisible work that usually surrounds member outreach.
In other words, the AI isn’t there to replace the care manager. It’s there to remove the background burden so the care manager can listen, respond, and connect.
That matters because Dr. Willis repeatedly emphasizes a human reality: members are often scared. They want to feel heard. They want to feel like the system understands what’s happening and what happens next. When a care manager has to spend the call searching, toggling screens, and trying to piece together context, the member can feel the distance.
Responsible AI adoption, in her framing, is partly about creating space for humanity. It helps care teams spend more energy on the person and less energy on the process.
In utilization management, she is unusually direct about the purpose of AI. She acknowledges that across the industry, AI is being explored in utilization management workflows. But she draws a clear line: AI is not meant to deny care. The goal is to bring relevant information forward, so approvals happen faster and decisions are clearer.
“We already have some pilots in place for utilization management and are looking at where we need to make tweaks before we scale it out broader, but that is something we’re looking at on the utilization management side of the house. Where we can bring all the information that we have in the system to bear so that we can get to approvals faster.”
Dr. Willis’s position is that responsible AI in utilization management must balance speed and transparency, enabling faster, more accurate decisions by surfacing the right context, while keeping accountability and evidence-based criteria at the center.
She also notes that beyond these outward-facing use cases, her organization is collecting employee ideas broadly to identify other innovation opportunities. That’s an important point for scale: responsible adoption is not only a single “AI project.” It’s an evolving capability built across teams, with shared learning and shared accountability.
Designing for Relevance: Why “All Data” Is Not the Same as “Useful Data”
One of the most practical segments of the episode comes when Dr. Willis talks about learning from limitations early, before an AI-enabled workflow becomes widely used.
She describes the importance of testing in controlled environments prior to broader rollout, and then she names a scaling challenge that shows up quickly in healthcare operations: relevance.
AI can compile a member’s information, but compiling “everything we have” isn’t the same as delivering what’s helpful in the moment. A diagnosis from years ago or medications that were once relevant may not reflect what the person is dealing with now. Without smart parameters, AI output can become cluttered, distracting, and potentially misleading.
Her point is simple: responsible AI needs guardrails that focus on what still matters clinically and operationally.
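To make the idea concrete, here is a minimal sketch of such a relevance guardrail. The record fields, the one-year recency window, and the sample data are all illustrative assumptions, not details from the episode or from any BlueCross BlueShield of Tennessee system:

```python
from datetime import date, timedelta

# Hypothetical member-history records; the fields and sample values are
# illustrative only, not drawn from any real payer system.
history = [
    {"item": "Type 2 diabetes", "kind": "diagnosis",
     "last_seen": date(2025, 6, 1), "active": True},
    {"item": "Amoxicillin (resolved infection)", "kind": "medication",
     "last_seen": date(2019, 3, 14), "active": False},
    {"item": "Lisinopril", "kind": "medication",
     "last_seen": date(2025, 5, 20), "active": True},
]

def is_relevant(record, today=None, window_days=365):
    """Keep items that are still active or recent enough to matter now;
    everything else stays in the record but out of the default summary."""
    today = today or date.today()
    recent = (today - record["last_seen"]) <= timedelta(days=window_days)
    return record["active"] or recent

summary = [r for r in history if is_relevant(r, today=date(2025, 7, 1))]
for r in summary:
    print(f"{r['kind']}: {r['item']}")
# diagnosis: Type 2 diabetes
# medication: Lisinopril
```

The design choice worth noting is that nothing is deleted: older history simply stays out of the default summary unless someone asks for it, which is exactly the “what still matters now” filter Dr. Willis describes.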
“This is a scalability insight hiding in plain sight. Many AI pilots fail not because the model can’t do the task, but because the output is too broad, too noisy, or too unfiltered to be usable at speed. In payer environments where teams manage large populations with long histories, relevance becomes everything,” she explained to Ritu.
Dr. Willis extends this mindset into digital care management more broadly. She notes that digital self-service can be very appealing. She says members do want convenience, but healthcare is complicated, and self-service cannot be the only answer. That’s why she emphasizes guided self-service, a model where members can complete routine tasks digitally, but the system can detect when someone needs more support than self-service can provide.
Guided self-service is a responsible adoption strategy because it avoids a common pitfall: pushing people into digital tools that feel like dead ends. It respects the fact that some needs are simple and some are not, and that the experience should be designed to escalate appropriately when the situation requires more help, as in the sketch below.
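One way to picture that escalation logic is a simple router over interaction signals. The trigger names and routing values here are hypothetical illustrations of the pattern, not anything described in the episode:

```python
# Illustrative routing for guided self-service; the trigger names and the
# routing logic are hypothetical, not details from the episode.
ESCALATION_TRIGGERS = {
    "repeated_failures",   # member retried the same task several times
    "distress_language",   # free-text input suggests fear or confusion
    "complex_request",     # need spans benefits, clinical, and billing
}

def next_step(signals: set[str]) -> str:
    """Let routine tasks flow through self-service, but hand off to a
    human care manager as soon as the interaction shows it needs one."""
    if signals & ESCALATION_TRIGGERS:
        return "connect_to_care_manager"
    return "continue_self_service"

print(next_step({"check_claim_status"}))                       # continue_self_service
print(next_step({"check_claim_status", "distress_language"}))  # connect_to_care_manager
```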
Measuring Success the Payer Way: Outcomes, Closed Gaps, and Real Engagement
Dr. Willis grounds the conversation in success metrics that actually translate into operational value. In care management, success isn’t “AI adoption” as a vanity metric. It’s whether member goals are met.
That could mean resolving an acute need, supporting a chronic care plan, closing a gap in care, or helping someone navigate a safe transition home after hospitalization. It’s practical and member centered. AI should answer: did the person get what they needed, and did the system help them move forward?
She also talks about engagement in ways that feel directly applicable. When members are informed that care management support is available, engagement rises and the downstream outcome is more gaps closed and more needs met.
She adds something many operational leaders recognize as a “quiet success metric”: when teams see what’s possible, they start generating better ideas. Innovation becomes a flywheel. Staff bring forward new use cases, new workflow improvements, and new ways to reduce friction because they can see the system improving.
In payer environments, that matters. Scaling isn’t just a technical process. It’s an organizational learning process. The more people understand the tools, the more they can apply them responsibly. This leads naturally into the broader adoption strategy she describes, which is making AI literacy a shared responsibility rather than a niche expertise.
Transparency and Governance Are the Real “Scale Engines” for Responsible AI
Dr. Andrea Willis makes a point that she feels often gets lost in the excitement around new models: responsible AI at scale is less about flashy capability and more about the operational conditions that make people trust the system.
From a payer perspective, trust is built when decisions can be explained clearly, in plain language, and when the process feels consistent and evidence based. It’s also built when information can flow to and from all involved parties so fewer decisions are made in an “information vacuum,” and fewer stakeholders feel like someone else is acting without the full story.
What stands out in this conversation is her insistence that AI should reduce friction, not create new confusion. In care management, that means AI should help care teams stay present with members by handling background work like summarization and next-step prompting. In utilization management, it means AI should accelerate clarity and approvals by surfacing the right context faster, never functioning as a tool designed to deny, but as a tool designed to move the right decisions forward efficiently and transparently.
And finally, she offers a useful metaphor: mobile banking. People didn’t trust it immediately. They adopted it gradually as it became more helpful, more friendly, and more aligned with their needs. Healthcare isn’t banking, but the adoption lesson is real: people use what they trust, and they trust what they can understand.
The Takeaway
Dr. Andrea Willis’s message is refreshingly practical: responsible AI adoption in healthcare is not about chasing the newest model or launching endless pilots; it’s about building trust through real-world usefulness, relevance, and transparency. From a payer perspective, AI earns the right to scale when it helps care managers stay fully present with members, filters information so teams focus on what matters now, and accelerates approvals by bringing evidence-based context forward rather than creating new friction. In her framing, responsible adoption also depends on the infrastructure most people overlook: clearer explanations in plain language, stronger interoperability so decisions aren’t made with missing information, and cross-functional governance that protects privacy while enabling progress. The organizations that lead won’t be the ones experimenting the most. They’ll be the ones that can standardize, explain, and scale what works because their workflows, transparency practices, and oversight are built for trust at scale.
Sitting at the intersection of clinical accountability and large-scale operational impact, Dr. Willis’s key insights are especially valuable:
- Responsible AI must reduce cognitive burden, not increase it.
- Responsible AI is often the most human use of AI: it helps care managers stay present while the system handles summarization, organization, and next-step prompting.
- Scaling fails when relevance fails. AI must filter out old or non-actionable history so teams focus on what matters now.
- Guided self-service is the practical middle path: empower members digitally, but escalate to human support when needs are complex.
- In utilization management, AI should be used to speed clarity and approvals, not as a mechanism to deny care.
- Transparency in plain language is a trust engine, especially for prior authorization outcomes and denials.
- Responsible scaling requires interoperability, governance, and AI literacy so adoption moves from pilots to repeatable, trusted impact.