Opinion | December 29, 2025

AI won’t fix hospital safety on its own: What we need from human leaders | Viewpoint

By Patsy McNeil

To create meaningful, measurable improvements in patient safety, health systems must pair responsible AI adoption with strong governance.

We’ve all heard the heroic words from Spider-Man: “With great power comes great responsibility.” And these words ring especially true in the era of AI.

Artificial intelligence holds enormous power within its capabilities. For healthcare executives and decision makers, that means a herculean amount of responsibility lies with us.

In recent years, artificial intelligence has been heralded as the great disruptor of healthcare — a technology poised to solve everything from clinical documentation burdens to diagnostic accuracy.

However, while AI applications hold tremendous promise, there is a misconception quietly spreading across our industry: that AI is the magic solution that will “fix” all kinds of problems, including hospital safety.

This past year we’ve seen artificial intelligence upend the landscape of hospital operations and hospital safety. From virtual nursing assistants at the patient's bedside to rapid brain scans in which AI diagnoses stroke with high precision, much of the work that once demanded time, commitment, and constant human attention is now done by machines.

It is extremely important to note, however, that while AI can be a transformative tool, it cannot replace the cultural, structural, and leadership foundations required for a safety-first culture. Technology only amplifies what already exists — it does not correct what is broken.

To create meaningful, measurable improvements in patient safety, health systems must pair responsible AI adoption with strong governance, a physician culture that puts safety first, and executive accountability that links leadership decisions to patient outcomes.

Building the right AI governance: People before platforms

If AI is to enhance safety rather than complicate it, hospitals must start with strong oversight.

At Adventist HealthCare, this takes shape through an AI governing committee that evaluates new tools not by their novelty, but by their impact on patients and clinical workflows. Our AI governance is intentional about ensuring alignment with the company’s mission, values, and strategic objectives while adhering to ethical, legal, and regulatory standards.

A committee of 20 members, including representatives from physician and nursing leadership, legal, finance, clergy, IT, informatics, and DEI, oversees all artificial intelligence decisions to make sure each program fits our culture, avoids redundancies, and is properly managed with appropriate human oversight.

A well-structured AI governance committee:

  • Eliminates redundancies, ensuring departments don’t implement overlapping technologies that confuse clinicians, inflate costs, or create inconsistent processes.
  • Insists on transparency in how algorithms are designed, validated, and monitored.
  • Guarantees clinical representation, ensuring the physicians and nurses who will use these tools have a say in how and why they’re selected.

Too often, AI enters hospitals or clinical practice areas in a fragmented, decentralized way — piloted by enthusiastic teams without systemwide coordination. Governance ensures that innovations support safety rather than inadvertently undermining it.

Leadership’s role in driving a safety-first physician culture

No AI tool can compensate for a culture that tolerates variability, underreporting, or unsafe shortcuts when it comes to safety protocols and measures. Safety begins with leadership — and it must permeate every clinical team.

Leaders set the tone by:

  • Championing open reporting, making it clear that speaking up about errors or near misses is not only acceptable but expected.
  • Investing in training, ensuring physicians and other clinicians understand both the capabilities and limitations of AI so they can use it wisely, not blindly.
  • Modeling humility, reinforcing that technology supports but never replaces clinical judgment.
  • Recognizing and rewarding staff who proactively make safety their number one priority, shifting the culture from reactivity to proactive risk prevention.

A safety-first culture is not aspirational; it is operational. It appears in how physicians hand off patients, how teams escalate concerns, and how consistently evidence-based practices are followed — particularly when time pressures mount. AI can support these behaviors, but it cannot instill them.

At Adventist HealthCare, we decided to lean into high quality and patient safety because it was best for our patients. Including AI in that approach is the right thing to do, as long as it adds to our existing culture and does not replace any part of its wraparound intentionality.

Executive accountability: The missing link to reliability

Artificial intelligence is giving us untapped power, but we need to acknowledge and own the responsibility that comes with it. Hospital safety improves when senior leaders are directly accountable for clear, measurable outcomes.

Data-driven accountability — not quarterly slogans or abstract initiatives — drives sustainable change. Leadership responsibility demands that we ensure AI-enabled programs are safe, are embedded carefully within our systems, and perform as expected, if not better.

Linking leadership performance to safety looks like:

  • Aligning executive incentives with specific safety metrics, such as reductions in preventable harm, timely event reporting, or adherence to safety bundles.
  • Ensuring transparency through dashboards that track real-time progress.
  • Closing the loop by assigning responsibility for remediation when safety trends slip.

When leaders own the outcomes, improvements stop being optional. The entire organization sees that safety is not a department — it is a shared systemwide responsibility.

AI can help measure and monitor performance, but it is people who must respond, evaluate, and improve based on what the data reveals.

AI as an enabler, not the answer

The goal is not to resist AI but to integrate it thoughtfully. When used responsibly, AI can:

  • Reduce variation in clinical decision-making
  • Predict risk before it becomes harm
  • Allow physicians to spend more time with patients rather than keyboards
  • Strengthen early detection of deterioration
  • Improve cross-team communication and documentation accuracy

But AI succeeds only when paired with disciplined governance, strong physician culture, and leadership accountability.

The future of hospital safety will not be determined by algorithms alone. It will be shaped by how we — as leaders, clinicians, and stewards of our communities — choose to govern, guide, and uphold our commitment to safe care.

AI is a powerful tool. But it is not the cure. The cure is us. 

Dr. Patsy McNeil is executive vice president and chief medical officer of Adventist HealthCare.

