AI's Ethical Concerns Go Beyond Data Security and Quality

A new commentary from the Nuffield Council on Bioethics raises issues of trust, agency, and autonomy.

There will always be a few debates raging over artificial intelligence (AI) in healthcare. While the industry continues to explore the many things the technology can do for medicine, there's plenty of high-level thought about the things it hopefully never does, like making physicians and patients less comfortable, or rendering healthcare more ethically fraught, and potentially more dangerous, than it already is.

The Nuffield Council on Bioethics, a UK-based consortium launched in 1991, has thrown a few pence into that conversation with a new briefing on the ethics of medical AI. The opinion is not the first on the matter (one seems to come out every few months), but it does raise a few concerns that other noteworthy commentaries have bypassed.

>>READ: The Dystopian Concerns of AI for Healthcare

While well-worn worries like opaque, “black box” algorithms, data privacy, and potential inherent biases are still highlighted, the new document goes a bit further in raising the threat that AI technologies might pose to patient and physician psychology.

New technology is often considered a boon for interconnectedness, but the report argues that an increased reliance on AI could lead to an opposite effect for patients. “Concerns have been raised about a loss of human contact and increased social isolation if AI technologies are used to replace staff or family time with patients,” the authors write. The problem inches towards tangible reality every year, as literal robots develop from entertaining novelties into legitimate, patient-facing healthcare employees.

Opaque decision support tools could also breed confusion or a loss of autonomy. If a physician is unable to explain to a patient the root of their diagnosis, it may rob the patient of their ability to make “free, informed decisions about their health,” which the report argues might even be seen as “a form of deception or fraud.”
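To make that “black box” contrast concrete, here is a minimal, purely illustrative Python sketch (the feature names and weights are hypothetical, not drawn from the report): a simple linear risk model can show exactly which factors drove its output, the kind of “root of the diagnosis” readout an opaque system cannot offer.

```python
import numpy as np

# Hypothetical patient features and learned weights -- illustrative only.
FEATURES = ["age", "systolic_bp", "hba1c", "bmi"]
WEIGHTS = np.array([0.03, 0.02, 0.40, 0.05])
BIAS = -8.0

def risk_score(x):
    """Logistic risk: sigmoid of a weighted sum of patient features."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

def explain(x):
    """Per-feature contribution to the score, largest first -- the direct
    readout a black-box model cannot give."""
    return sorted(zip(FEATURES, WEIGHTS * x), key=lambda kv: -abs(kv[1]))

patient = np.array([62.0, 148.0, 9.1, 31.0])
print(f"risk = {risk_score(patient):.2f}")
for name, contribution in explain(patient):
    print(f"  {name}: {contribution:+.2f}")
```

A clinician reading that output can tell the patient which factors mattered most; with a deep network, no comparably faithful summary falls out of the model itself.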

Physicians themselves might feel their autonomy threatened by machines filling roles and providing diagnoses that were once central to the profession. Their traditional ethical obligations towards individual patients might end up clashing with AI decision support systems that are “guided by other priorities or interests, such as cost efficiency or wider public health concerns.”

Complacency (or resignation) as a result of AI reliance remains a concern, and one that healthcare can’t take lightly. As venture capitalist Jonathan Gooden said while explaining his AI investment approach recently, “This is not your Netflix queue. If you’re not 100% right there’s real consequences.” The new report echoes concerns that good computers could lead to bad fact-checkers.

Of course, there’s always the lingering issue of whether AI will displace lower-level healthcare workers (a topic debated to no end), but the Nuffield note flips it: What if better AI systems someday become justification for hospitals to hire less-skilled (and, accordingly, cheaper) physicians? “This could be problematic if the technology fails and staff are not able to recognize errors or carry out necessary tasks without computer guidance,” the piece poses.

Few doubt that the technology will make an indelible impact on patient care. A key challenge, the report concludes, will be allowing it to do so in a way “compatible with the public interest.” But whether ethical learning can or should actually be coded into the machines by design, the authors note, is a debate unto itself.

Related Coverage:

Holding Public Algorithms Accountable

Harvard Law to Explore Legal Complexities of Precision Medicine, AI

Ethical Concerns for Cutting-Edge Neurotechnologies
