Biden’s executive order on AI: What it means for healthcare

Among other steps, the Department of Health and Human Services will develop a safety program to address unsafe healthcare practices involving artificial intelligence.

President Biden issued an executive order on artificial intelligence Monday aimed at protecting consumers and national security, and the order includes provisions regarding healthcare.

President Biden has issued an executive order to develop safeguards for the use of AI across critical sectors, including healthcare. (Image credit: ©Zack Frank - stock.adobe.com)

Specifically, the White House says the U.S. Department of Health and Human Services will develop a safety program to deal with unsafe healthcare practices involving the use of AI.

The department will set up a process to take reports of improper practices and “act to remedy” harms related to the use of AI, the White House said in a fact sheet.

The White House says the executive order will also accelerate AI research, including grants for healthcare-related AI projects. The government will also encourage more research through the National AI Research Resource, which is designed to give researchers access to important data.

In addition, the executive order calls for the developers of powerful AI systems to share their safety test results with the government.

Companies developing tools that could pose a risk to national security, economic security or “national public health and safety” will have to notify the government as they develop models and share the results of safety tests before releasing those models to the public, the White House says.

The Coalition for Health AI, a collaborative of academic health systems, tech companies, and other health organizations, praised the Biden administration’s executive order. The Coalition has also developed a blueprint for using AI in healthcare.

In posts on X, formerly known as Twitter, the Coalition said the order supports innovation while setting some guardrails and regulatory expectations. The Coalition said AI holds “immense potential, from advancing clinical research to streamlining healthcare delivery. But, like the Hippocratic oath, our first principle must be to ‘do no harm’ as we embrace these tools.”

The White House also says the executive order will establish an “advanced cybersecurity program” that uses AI tools to detect and repair vulnerabilities in software. Cybersecurity experts have said AI tools could help healthcare organizations and other critical sectors improve security, but they warn that attackers are already using AI-powered tools to hack into systems.


Healthcare leaders have hailed the promise of AI to dramatically change healthcare and improve the screening of patients at risk for cancer and other serious illnesses and conditions.

Kaiser Permanente has been using AI-powered models to analyze which hospital patients may be at higher risk of deteriorating or could require intensive care. Mayo Clinic researchers have been studying the use of artificial intelligence to identify pregnant patients who may be at risk for complications, as well as patients who could have a greater likelihood of suffering a stroke.

However, critics also note that AI tools can prove harmful if they use inaccurate information.

The executive order notes the potential of AI to exacerbate discrimination and bias in healthcare. It directs agencies to provide guidelines to contractors to avoid bias in AI algorithms, and the order calls on the Justice Department to develop practices to investigate civil rights violations related to AI.

Researchers at Stanford School of Medicine tested chatbots and found they provided answers that perpetuated racial bias, according to findings published Oct. 20 in npj Digital Medicine.

In questions on topics such as kidney function and lung capacity, the AI tools had “instances of promoting race-based medicine/racist tropes or repeating unsubstantiated claims around race,” the authors wrote. They urged clinicians to be cautious in the use of AI tools in treatment decisions.

