New IBM Tech Spots AI Bias, Explains Decision Making in Real Time


The company touted its “trust and transparency” software services as a hammer to the black box of artificial intelligence.



When artificial intelligence (AI) thinkers and doers discuss concerns surrounding these technologies, they often focus on bias. Any number of predispositions can be baked into an algorithm, hidden in a data set or introduced during a project’s execution. And biases can distort the insights an AI system generates, rendering them useless or even harmful.

But IBM says it has a solution. The company announced today the release of a software service with “trust and transparency capabilities,” which, it said, “automatically detects bias and explains how AI makes decisions — as the decisions are being made.” IBM considers this technology a “major step” toward hammering the black box, a capability that, if it proves effective and scalable, could chip away at the strong wall of resistance to AI in healthcare and other industries.

>> READ: The Rising Clamor for Explainable AI

The software follows IBM’s earlier work on trust and transparency principles, a philosophical and policy-driven exercise meant to guide the development and implementation of ever-advancing AI technologies.

“It’s time to translate principles into practice,” Beth Smith, IBM’s general manager of Watson AI, said in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

In theory, this software seems a strong safeguard against flawed analytical efforts. In reality, IBM has a lot to prove.

The computing solutions powerhouse has faced sharp criticism in recent years for its own alleged shortcomings in the AI space. Health-tech key opinion leaders have long called into question the effectiveness of Watson’s healthcare capabilities, even accusing Watson for Oncology of bias. This past summer, reports surfaced showing IBM communications suggesting that Watson had recommended “unsafe and incorrect” cancer treatments, a flaw the company says it has since corrected.

Against that backdrop, it will likely take overwhelming evidence and extensive external buy-in for IBM to position itself as the chief algorithm cop.

One step in that direction is the company’s release of the open-source AI Fairness 360 toolkit, an archive of algorithms, code and tutorials that researchers and anyone else may use in their own anti-bias work. (Notably, these tools are designed for working AI models, not just training data.)
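For a sense of what that looks like in practice, here is a minimal sketch of one of the toolkit’s basic checks, the disparate impact metric, using AI Fairness 360’s BinaryLabelDataset and BinaryLabelDatasetMetric classes. The toy data frame, its column names and the group encodings are invented for illustration:

```python
# A minimal sketch of a basic fairness check with AI Fairness 360.
# The data frame, column names and group encodings are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision log: 1 = favorable outcome, 0 = unfavorable.
# "age_group" is the protected attribute (1 = privileged, 0 = unprivileged).
df = pd.DataFrame({
    "age_group": [1, 1, 1, 1, 0, 0, 0, 0],
    "outcome":   [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["age_group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates
# (unprivileged / privileged); values well below 1.0 flag potential bias.
print("Disparate impact:", metric.disparate_impact())  # 0.25 / 0.75 = 0.33
```

A result that far below 1.0 would, under the common “four-fifths rule,” suggest the unprivileged group is receiving favorable outcomes at a disproportionately low rate.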

Still, bias might be the darkest cloud hanging over AI — aside from the cries of those who fear their jobs might be replaced by algorithms — and combating it is integral to the technology’s success.

So, how does IBM plan to identify and remove biases from AI?

First, the new software runs on the IBM Cloud and works with AI systems across a “wide variety” of industries, with compatibility for various machine-learning frameworks, and it can be programmed to fit any organizational use. It flags “unfair outcomes” in real time, explains in “easy-to-understand” terms why the model reached a given decision, recommends data that could mitigate the detected bias, and records performance details in a visual dashboard.
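IBM has not published implementation details in the announcement, so the following is only a toy sketch of the general technique: a monitor that tracks favorable-outcome rates per group over a sliding window of live decisions and pings when the ratio between groups drops below a threshold. The BiasMonitor class, its parameters and the group names are all hypothetical.

```python
# A toy sketch (not IBM's actual API) of real-time bias flagging:
# track favorable-outcome rates per group over a sliding window and
# warn when the rate ratio between groups falls below a threshold.
from collections import deque

class BiasMonitor:
    def __init__(self, window_size=1000, threshold=0.8):
        self.window = deque(maxlen=window_size)  # recent (group, outcome) pairs
        self.threshold = threshold               # e.g., the "four-fifths rule"

    def record(self, group, favorable):
        """Log one model decision; return a warning string if bias is detected."""
        self.window.append((group, favorable))
        rates = self._favorable_rates()
        if len(rates) < 2:
            return None  # need at least two groups to compare
        low, high = min(rates.values()), max(rates.values())
        if high > 0 and low / high < self.threshold:
            return (f"Unfair outcome detected: favorable-rate ratio "
                    f"{low / high:.2f} is below {self.threshold}")
        return None

    def _favorable_rates(self):
        counts, favorables = {}, {}
        for group, favorable in self.window:
            counts[group] = counts.get(group, 0) + 1
            favorables[group] = favorables.get(group, 0) + int(favorable)
        return {g: favorables[g] / counts[g] for g in counts}

# Example: feed live decisions to the monitor as the model makes them.
monitor = BiasMonitor(window_size=200, threshold=0.8)
decisions = [("under_65", True)] * 8 + [
    ("over_65", True), ("over_65", False), ("over_65", False)]
for group, favorable in decisions:
    alert = monitor.record(group, favorable)
    if alert:
        print(alert)
```

A sliding window keeps the check responsive to recent drift in a model’s behavior rather than averaging it away over the model’s entire history.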

Second, IBM is offering consulting services intended to scrub bias from decision making through stronger business processes and human-AI interfaces.

IBM is chasing an ever-growing market. In a study of 5,000 executives, the company found that 82 percent of enterprises — and 93 percent of “high-performing enterprises” — are “considering or moving ahead with AI adoption” in the hunt for increased revenues. But more than half fear liability issues, and a similar share say they lack the skills to bring AI into their business.

One noteworthy but easily overlooked nugget in IBM’s announcement: The new software’s simple usability could end up “reducing dependency on specialized AI skills.”

What does that mean? This technology is itself an AI solution to bias, a rotten fruit of human error. But it is also a response to the embryonic AI workforce and the high price of entry for corporations that want to launch such initiatives. Add it all up, and this resembles an attempt to correct several problems of AI with more AI.


