
A New Ethical Wrinkle for Medical Algorithms
Unintentional bias and data privacy often steer the conversation. Profiteering, intentional bias, and the possibility of machine dependence don’t.
Much of the debate around machine learning’s role in medicine has centered on capabilities, like whether an algorithm can actually provide clinical recommendations that meet physician standards. Discussions tend to focus on best intentions and failed delivery.
Three Stanford University doctors mostly skirt that debate in a recent New England Journal of Medicine commentary, instead heading into even stickier territory.
Danton S. Char, MD, Nigam H. Shah, PhD, and David Magnus, PhD, do touch on some well-worn concerns, such as unintentional bias and data privacy, but they also press into the profit motive.
“In the US healthcare system, there is perpetual tension between the goals of improving health and generating profit,” the authors write, citing cases in which large tech companies have designed algorithms that benefit themselves.
The academics may well have turned in the piece, published yesterday, before Uber’s
Several individuals who spoke to Healthcare Analytics News™ during the recent HIMSS meeting in Las Vegas, Nevada, said they were excited by the new entrants. But the insiders were under no impression that the companies were making such moves based on altruism as opposed to the enormous economic opportunity that healthcare represents.
One fear the Stanford trio notes is the possibility that systems could be designed to steer clinicians toward more profitable interventions without their knowledge. They emphasize this risk given the number of stakeholders involved: hospitals, tech companies, and even pharmaceutical makers. Physicians, they write, should be educated on how clinical support algorithms are constructed to avoid dependence on ethically questionable black boxes.
There’s an imperative, too, that the understanding of the technology be widespread. Given the increasingly value-minded, team-based nature of American healthcare in the 21st century, it’s rare that a single physician oversees a patient from diagnosis to final outcome. The algorithms could gain immense power as a lone constant in care.
“At its core, clinical medicine has been a compact—the promise of a fiduciary relationship between a patient and a physician,” they write. “As the central relationship in clinical medicine becomes that between a patient and a healthcare system, the meaning of fiduciary obligation has become strained and notions of personal responsibility have been lost.”
As is standard for such commentaries, the authors don’t offer solutions to these quandaries. “Machine-learning systems could be built to reflect the ethical standards that have guided other actors in health care—and could be held to those standards,” they write.