Who Is Responsible When AI Fails?

Healthcare can learn from self-driving cars’ fatal lessons in liability.

In March, Elaine Herzberg was walking her bicycle across a street in Tempe, Arizona, unaware of the Uber self-driving car moving toward her at 38 miles per hour. The car, with a human safety driver behind the wheel but not in control, struck Herzberg. She later died in a hospital, in what was perhaps the first pedestrian fatality involving a self-driving car. Herzberg’s daughter and husband were preparing to file a lawsuit, but 10 days later, Uber settled with the family for an undisclosed sum.

After the accident, Arizona Governor Doug Ducey suspended the testing of Uber self-driving cars in the state, according to a letter he sent to the company. At the time, he said, about 600 vehicles with “automated driving systems” from several companies were cruising Arizona roads. The National Transportation Safety Board (NTSB) is still investigating the incident. A preliminary report was expected to be published in May, but the final examination could take two years to complete, said Christopher O’Neil, a spokesman for the board.

>> LISTEN: Finding Fault When AI Kills

There have been only a handful of fatal accidents involving self-driving cars, so case law guiding liability has not been clearly established.

For healthcare applications of artificial intelligence (AI), not even the nascent case law now emerging around automobiles and self-driving cars exists. Self-driving systems are already being tested regularly, at scale, on public roads; AI systems in healthcare have not yet been given comparable authority over patients’ lives.

In the healthcare sector, what is broadly called AI takes a number of forms, from simple algorithms programmed to create efficiencies to machine-learning systems used to analyze images and make treatment recommendations. But some AI applications go beyond diagnostic tools, equipping robots to deliver treatment. As AI systems integrate with patient care and make more autonomous decisions, liability questions become more important in an environment without much legal guidance.

Healthcare Diagnostics and Tools Are Not “True AI”

“We’re not operating in the world of true AI yet,” said Tracey Freed, JD, a transactional attorney who teaches at Loyola Law School’s Cybersecurity & Data Privacy Law program. “We haven’t reached artificial general intelligence, where machines are making autonomous decisions on their own.”

To anticipate that development requires “thinking of what that world could look like and what liability looks like,” Freed said. “From a legal perspective, [AI could be considered] ‘agents’ of these companies or hospitals. Having these machines be my agent means I take on the liability for any incident that arises because of the machine. That’s the situation we’re more likely in today.”

>> LISTEN: Is AI Real?

An Israeli technology company, Ibex Medical Analytics, designed its Second Read system to check clinical pathology diagnoses. To train the system, the company fed it thousands of images of prostate core needle biopsies, teaching it to distinguish benign tissue from cancerous tissue.

Such tools—machine learning systems that are fed massive amounts of imaging or other data and trained to distinguish between different results—have become increasingly common. The federal National Institute of Standards and Technology has begun to push for the adoption of standards for measurements and medical imaging. These benchmarks could ease the design of machine learning systems and make them more broadly useful.
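As an illustration only, the sketch below shows one common way such an image classifier is trained: fine-tune a pretrained convolutional network on a folder of labeled pathology images. The directory layout, model choice, and hyperparameters here are assumptions for the example, not details of Ibex’s actual system.

```python
# Minimal sketch (assumptions noted above): fine-tune a pretrained CNN to label
# biopsy images as benign or malignant. Requires torch and torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: biopsies/benign/*.png and biopsies/malignant/*.png
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("biopsies", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights="DEFAULT")     # pretrained backbone (torchvision >= 0.13 API)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: benign, malignant

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In a real diagnostic product, a model like this is only one component, sitting behind validation studies, calibrated thresholds, and a pathologist’s final review, which is exactly where the liability questions discussed below arise.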

Determining Who Is at Fault When Things Go Wrong

Healthcare applications of AI occupy a critical position at the nexus of several important concerns for providers and patients, like privacy and human life, according to Lucas Bento, MS, an associate at Quinn Emanuel Urquhart & Sullivan, a large international law firm. Bento advises companies on the development stages of AI applications and technologies.

“The biggest question,” Bento said, “is how to allocate responsibility when the chain of causation is breached by, say, unexpected software error or hacking. Some theories of strict liability may be applicable.”

Strict liability is a legal claim that does not require a particular finding of fault, like negligence or intention. Product liability draws on this principle.

“In other instances, a court may take a deeper dive into what or who actually caused the error,” Bento said. “These are all novel issues that, barring legislative intervention, will be resolved via successive litigations across the country.”

But when such cases settle before reaching court, as Herzberg’s did after the March Uber crash, they set no precedent for how liability for errors or injuries involving AI should be adjudicated.

Other fatal crashes involving AI systems have killed the drivers themselves, unlike Herzberg’s death in Arizona. The NTSB faulted Tesla’s Autopilot driving system for a fatal crash in Florida in 2016. The family of the deceased driver and their lawyer ultimately released a statement clearing Tesla of responsibility, but they refused to comment on any settlement.

>> READ: How AI Is Shaking Up Healthcare, Beyond Diagnostics

On March 23, shortly after Herzberg’s death, Walter Huang died in a fiery crash while driving a Tesla Model X and using its Autopilot system. B. Mark Fong of Minami Tamaki, the law firm representing Huang’s family, claims Tesla’s Autopilot feature is defective and caused the driver’s death. The firm is exploring a wrongful-death lawsuit on grounds that include product liability and defective product design.

Tesla has argued that its Autopilot system is not self-driving and that its user agreement requires the driver’s hands to be on the wheel when the system is engaged.

A. Michael Froomkin, JD, a law professor at the University of Miami and a co-editor of Robot Law, calls the human in the AI system the “moral crumple zone.”

“If there’s no human in the loop, it’s not controversial that whoever designed the tool is liable,” Froomkin said. “The AI [portion of the system] doesn’t impose anything special on that. ...The special bit is where the human is in the loop.”

Allocating responsibility between a human and AI in a system that relies on both parties is an unsettled and controversial issue, he said.

Designers of AI systems can claim they were not the last in the chain of responsibility, that AI is only advisory, and that the human is the decision maker.

Developers might also argue that a doctor or insurance company bears ultimate responsibility for negative outcomes or damages.

“If you have a human in the loop, if the system is advising a person, the person is going to take the fall,” Froomkin said.

As he put it, when linking this question to healthcare applications of AI: “When is it appropriate to blame the doctor using the AI? The easy case is where there’s no person. The harder case is when there’s a person between [the AI and the patient].”

AI doesn’t have legal standing because it does not have sentience. But “if [AI learns] on the job, you have an interesting liability problem: Was it taught improperly? Those are hard questions that are very fact dependent,” Froomkin said.

According to Bento, product liability theories seem well suited to address questions of liability arising from the use of AI products and services. “Some challenges exist as to attribution of liability due to computer code errors,” Bento said.

When AI Shows Bias

Both Bento and Freed raised other issues that come with expanding the use of AI, such as bias, and their implications for liability.

“Bias is another big issue for liability,” Bento said. “Eligibility for products and services is increasingly dictated by algorithms. For example, some consumer finance companies run algorithms to decide who is eligible for a loan or other financial product. Biased outcomes could create litigation exposure.”
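To make that exposure concrete, here is a minimal sketch of one screen an auditor might run on an eligibility algorithm’s output: compare approval rates across groups using the rough “four-fifths” rule. The counts, group labels, and threshold are hypothetical and purely illustrative, not data from any case mentioned in this article.

```python
# Hypothetical approval counts by group; illustrative numbers only.
approvals = {
    "group_a": {"approved": 720, "total": 1000},
    "group_b": {"approved": 450, "total": 1000},
}

rates = {group: v["approved"] / v["total"] for group, v in approvals.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print("approval rates:", rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # the rough "four-fifths" screen borrowed from employment law
    print("Ratio below 0.8 — a disparity worth investigating before it becomes litigation exposure.")
```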

Freed raised the issue of using algorithms in criminal sentencing to predict recidivism. In 2016, the nonprofit news organization ProPublica released a detailed investigation into how machine-driven “risk assessments” used to inform judges in sentencing showed bias against black defendants. One company, Northpointe, which created a widely used assessment called COMPAS, does not disclose the calculation methods used to arrive at final results. (Northpointe disputes ProPublica’s report.)

“Especially in healthcare, we need to know why the decisions are being made the way they are,” Freed said. “We need to know it’s the right cause rather than something that’s more efficient or something that’s the right call but for the wrong reasons.”

AI and machine-learning systems use data sets, which require careful thought and vigilance in their own right, Freed said.

>> READ: Holding Public Algorithms Accountable

Recently developed machine-learning applications built to analyze medical imaging often rely on a relatively small data pool to train the system’s clinical predictions.

“What’s interesting for healthcare [is that] a lot of what we’re doing is based on training data,” Freed said. “It seems [as if] your algorithms are only as good as your training data. A lot of the training data, if you don’t have a ton of resources, are coming from sources that are easily obtainable or free or not great sources of data.”
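Freed’s point that a model is “only as good as your training data” can be shown with a toy experiment: corrupt a fraction of the training labels and watch accuracy on clean held-out data fall. The dataset and model below are synthetic stand-ins, not anything described in this article.

```python
# Toy illustration with scikit-learn: label noise in training data degrades test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for noise in (0.0, 0.1, 0.3):
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise  # flip this fraction of training labels
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```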

Cautious, Informed Optimism for AI’s Future

The risks don’t necessarily outweigh the promise of AI advancement, but healthcare and society at large must use caution alongside technological innovation.

“We need to take an approach where we’re excited but also more thoughtful and not innovating without being cognizant of the consequences,” Freed said. “We must take into consideration how these things can make either poor decisions or decisions that don’t make sense.”

Bento is also optimistic about the benefits of AI systems despite the attendant new and uncertain legal risks.

“New ways of doing things always create risks of unintended consequences that may result in harm,” Bento said. “The beauty of AI, though, is that it doesn’t just provide a new way to do something but, if done right, a better way of doing it.”

Bento also argued that AI’s potential to reduce human error is an important factor that users should not ignore. That potential for error reduction is something that Tesla has strongly argued when its Autopilot system has been linked to fatalities.

After Huang’s fatal crash in Mountain View, California, the company pushed back against any claims that its Autopilot system was defective. In a press release, Tesla touted the ability of the system to drastically reduce the risk of any fatal accident, ultimately making roads safer, and it claimed that the system’s efficacy continues to improve.

The National Highway Traffic Safety Administration, a federal agency separate from the NTSB, “found that even the early version of Tesla Autopilot resulted in 40 percent fewer crashes,” Tesla said.

Tesla also aggressively criticized the NTSB for chasing headlines and inappropriately focusing on “the safest cars in America” while “ignoring the cars that are the least safe.”

Some auto manufacturers working on self-driving technology, including BMW, Mercedes-Benz, and Volvo, have pledged to accept full liability for accidents caused by errors in their self-driving systems. The companies are betting that with more testing and wider implementation, the number of accidents will plummet.

In contrast, Tesla appears focused on zealously defending its reputation in the media.

This may point to a possible way forward for technology companies and healthcare providers to instill confidence in users and patients who are asked to subject themselves to the care of AI systems. If error-reduction claims are true, those who build and use AI systems can take full responsibility for any negative outcomes.

If the Huang family chooses to file a wrongful-death lawsuit, and the matter reaches court, product liability case law surrounding AI may become a little clearer. That seems to be a result Tesla is working hard to prevent. But AI developers and users, as well as the general public, require more clarity and settled law to illuminate how to handle responsibility for any damage resulting from AI systems. As these technologies become inextricably bound into our daily lives, society must come to understand their effects.

Editor’s note: This story has been edited to remove mention of SOPHiA Genetics due to a misunderstanding regarding the interview.
