Holding Public Algorithms Accountable

Artificial intelligence is infiltrating many institutions. Here’s how governments can handle the technology without burning the public’s trust.

In the near future, artificial intelligence (AI) might wield considerably more power than it does today. Imagine, for a moment, algorithms at the US Department of Veterans Affairs (VA) monitoring the deteriorating health of war veterans, searching for signs of heart failure. Meanwhile, other government agencies could use similar technologies to determine who’s eligible to receive Medicaid benefits and who’s illegally selling prescription drugs online.

You won’t need to imagine for much longer. Each of those cases is either happening now or soon will be. The connective tissue among them, however, is not their function or design but who will employ them: public entities, some arm of government.

Each algorithm raises its own set of ethical concerns, though some appear less troubling than others. Still, the fact that governments will use AI to perform controversial tasks has moved anxieties around algorithms from science-fiction nightmares to serious and specific policy debates. Today, the AI Now Institute at New York University released a report (PDF) on the best practices public agencies, such as the VA, can adopt so that citizens are able to hold these algorithms accountable.

The practical framework proposes the establishment of algorithmic impact assessments. Like the environmental impact assessments from which they take their name, these exercises are designed to open up the “black box” of algorithms, nudging governments to evaluate their AI for biases and other baked-in issues, create review processes, and bring the public into the discussion.

“The turn to automated decision-making and predictive systems must not prevent agencies from fulfilling their responsibility to protect basic democratic values, such as fairness, justice, and due process, and to guard against threats like illegal discrimination or deprivation of rights,” the authors of the report wrote.

The authors listed five components that must make up any public algorithmic impact assessment. Here they are, word for word, followed by a brief sketch of how an agency might track them.

  • Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities.
  • Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time.
  • Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired.
  • Agencies should solicit public comments to clarify concerns and answer outstanding questions.
  • Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.
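
To make the framework concrete, here is a minimal, hypothetical sketch of an assessment record with one field per component, written in Python. Every class, field, and function name below is an illustrative assumption; the report does not prescribe any particular data structure or tooling.

```python
# A hypothetical record mirroring the report's five AIA components.
# Names and fields are illustrative assumptions, not part of the proposal.
from dataclasses import dataclass, field


@dataclass
class AlgorithmicImpactAssessment:
    system_name: str                       # the automated decision system under review
    self_assessment: str = ""              # 1: self-assessment of fairness, justice, and bias impacts
    external_review_process: str = ""      # 2: external researcher review process
    public_notice_published: bool = False  # 3: public notice issued before the system is acquired
    public_comments: list[str] = field(default_factory=list)  # 4: solicited public comments
    due_process_mechanism: str = ""        # 5: channel for affected people to challenge the system

    def is_complete(self) -> bool:
        """Return True only if every component of the framework is addressed."""
        return all([
            self.self_assessment,
            self.external_review_process,
            self.public_notice_published,
            self.public_comments,
            self.due_process_mechanism,
        ])


# Usage: an assessment that never solicited public comment fails the check.
aia = AlgorithmicImpactAssessment(
    system_name="heart-failure risk model",
    self_assessment="Evaluated for disparate impact across patient groups.",
    external_review_process="Annual audit by an independent research team.",
    public_notice_published=True,
    due_process_mechanism="Appeals channel for affected veterans.",
)
print(aia.is_complete())  # False: no public comments recorded
```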

If public bodies follow this framework, the AI Now Institute authors wrote, they will strengthen “the public’s right” to understand how these high-tech tools affect their lives, build in-house expertise in these notoriously opaque systems, promote accountability, and empower citizens to debate and even dispute the power of these algorithms.

The policy proposal goes on to dive into the nuts and bolts of the framework and how it might treat matters like public records requests, research access, and funding.

The big idea behind the report, however, is that people have a right to know what sorts of algorithms are mapping out their fates.

“Futurists” no longer get brushed off for raising such concerns: AI is not the stuff of the future; it is real, and its implications carry great weight. (Even when, say, Elon Musk paints a bleak picture of a dystopian future, eliciting some eye rolls, many people take his warnings seriously.) That might be especially true for medicine.

Although the report doesn’t specifically deal with healthcare, it acknowledges that AI draws on medical data. The VA, the Centers for Medicare & Medicaid Services (CMS), and other health-minded agencies, federal and local, have taken to using AI for some functions, though the scope of its spread inside these bodies is unclear. When observers discuss public decision-making algorithms, they often bring up predictive policing and the like, but healthcare AI also makes life-altering decisions.

Healthcare leaders have spilled much ink and filled many a conference session examining the ethics of various automated technologies. Still, neither the industry nor the federal government has pinned down watertight ethical standards for algorithms. Perhaps, then, the marriage of medicine and public service will serve as an early test kitchen for a new set of medical AI standards. After all, few areas are more life-or-death than medicine.
