
Why 2026 could be the year of AI adoption
Fawad Butt, CEO of Penguin AI, says AI is moving beyond excitement and hype. He talks about the expansion of AI in an interview on Data Book, a podcast from Chief Healthcare Executive®.
Fawad Butt expects healthcare providers to accelerate their use of AI this year.
Butt, the founder and CEO of Penguin AI, says excitement began building around AI in 2024. In 2025, he says, there was more hype and “a little bit of paralysis.”
In a conversation on Data Book, a podcast from Chief Healthcare Executive®, Butt says 2026 will see much wider use of AI in the healthcare industry.
“I think that 2026 in my opinion, from my vantage point, at least, is going to be the year of the adoption,” Butt says.
“What we saw in '25 was that the providers adopted AI a lot quicker than the payers did,” Butt says. “What we're seeing in the second half of '25 is that the payers are trying to catch up. And I'm thinking, what I'm going to see in 2026 is providers are going to accelerate. Payers are going to accelerate.”
“You're going to see this adoption cycle that is quicker than anybody expects, and we see that in our pipeline. We see that in the conversations that we're having with both payers and providers. So I think that's going to happen quicker,” he says.
Butt brings deep experience in the healthcare industry, having held healthcare technology leadership roles at Kaiser Permanente and UnitedHealthcare.
More health systems are building organizational structures to adopt and implement AI, and that’s pushing more organizations to create new leadership roles.
“You're going to see the chief AI officer become a more prevalent role across organizations, both payers and providers, whereas I think maybe 5% or 10% of them have a role like that. I wouldn't be surprised if we end the year next year with 30, 40, 50% of the organizations either hired or hiring (AI officers),” Butt says.
More healthcare organizations are paying attention to the governance of AI technology and how new tools should be used. While Butt says organizations should focus on governance, he also says it shouldn’t bring new AI utilization to a halt.
“Not to say that you don't need governance, it's just that governance shouldn't be overblown,” he says.
Butt suggests that AI governance in healthcare organizations falls along a spectrum. He says organizations should develop frameworks that let them get comfortable with assuming some risk.
“AI governance is a journey, just like data governance was a journey,” Butt says. “So if you're an organization that's trying to come up with the most robust AI governance model before you start deploying AI, you're going to be waiting for five years. And you're going to miss a lot of windows that may not exist as an entity, given some of the efficiencies that are available through AI that your competitors are going to adopt.”
Each organization has to get comfortable with its own governance and risk tolerance, but Butt also says health systems don’t need to wait for a perfect framework to begin experimenting with AI tools.
Butt says healthcare organizations can start with “low-hanging fruit.”
“You want to do the things that are low risk and high reward first, and to me, those tend to be administrative processes and non-clinical uses,” Butt says.
“I would start with things that are low-hanging and low-risk, which to me, are mostly administrative process, prior authorization, risk adjustment, medical coding, you know, HCC identification,” he says. “Those are places where there's no clinical risk, really. And the only thing we can do is make the process a little better, because it sucks right now.”
However, Butt urges much more caution for clinical uses of AI. Doctors should use AI tools, he says, but rely on their own judgment in making decisions that affect patients.
“I do think that AI should not be in a position, at least until we get high-fidelity models custom built for healthcare, where it's recommending the next best action, or where it's recommending the amount of drug or specific drug you should go on,” Butt says.
“It could make recommendations for procedures, but then it should be validated by a human,” he continues. “So it's the AI plus the human construct that I think you’ve got to start with. You ask the AI to do the grunt work, but then you ask the human to validate and confirm that this is the right answer.”
Check out our full conversation in the podcast below for more on AI governance, return on investment, prior authorization, and more. You can subscribe to Data Book wherever you get your podcasts.
