Google Finds that AI Can Improve Ophthalmologists' Accuracy


“AI and physicians working together can be more accurate than either alone,” said lead researcher Rory Sayres, Ph.D.


A new study from the Google Artificial Intelligence (AI) research group found that when physicians and an algorithm work together to screen patients for diabetic retinopathy, both the physician's and the algorithm's accuracy improve.

The researchers wanted to see if their algorithm could do more than simply diagnose disease, so they created a new computer-assisted system that could “explain” the algorithm’s diagnosis. The system improved ophthalmologists’ diagnostic accuracy and the algorithm’s accuracy.

To keep physicians from either over-relying or under-relying on the machine, the researchers developed two types of assistance to help physicians read the algorithm’s predictions: grades, and grades plus heatmap.

Grades were a set of five scores representing the strength of evidence for the algorithm’s prediction. Grades plus heatmap augmented those scores with a heatmap showing how much each pixel in the image contributed to the algorithm’s prediction.
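The study does not publish its implementation, but a minimal sketch can illustrate the idea: a classifier produces a score for each of the five diabetic retinopathy grades, and a pixel-attribution method produces the heatmap. Everything below is an assumption for illustration only; the toy model, the vanilla-gradient saliency method, and all names are placeholders, not Google's actual system.

```python
# Illustrative sketch only: a toy CNN stands in for the study's trained model,
# and vanilla gradient saliency stands in for whatever attribution method the
# researchers actually used. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

DR_GRADES = ["none", "mild", "moderate", "severe", "proliferative"]  # 5-point DR scale

# Toy classifier over five diabetic-retinopathy grades (placeholder for a trained model).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 5),
)
model.eval()

def grade_and_heatmap(image: torch.Tensor):
    """Return (per-grade scores, pixel-contribution heatmap) for one fundus image.

    `image` is a (3, H, W) tensor. The scores are a softmax over the five DR
    grades (the "grades" assistance); the heatmap is the absolute input gradient
    of the top grade's logit, reduced over color channels (the "heatmap" assistance).
    """
    x = image.unsqueeze(0).requires_grad_(True)    # add batch dim, track gradients
    logits = model(x)
    scores = F.softmax(logits, dim=1).squeeze(0)   # strength of evidence per grade
    top = scores.argmax()
    logits[0, top].backward()                      # gradient of top grade w.r.t. pixels
    heatmap = x.grad.abs().squeeze(0).amax(dim=0)  # (H, W) per-pixel contributions
    return scores.detach(), heatmap

if __name__ == "__main__":
    fundus = torch.rand(3, 224, 224)               # stand-in for a real retinal image
    scores, heatmap = grade_and_heatmap(fundus)
    for grade, score in zip(DR_GRADES, scores.tolist()):
        print(f"{grade:>14}: {score:.3f}")
    print("heatmap shape:", tuple(heatmap.shape))
```

Gradient saliency is only one way to estimate per-pixel contributions; whatever attribution technique the researchers used, the two outputs above mirror the two forms of assistance described in the study.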

Ten ophthalmologists read each image once under one of three conditions: unassisted, grades only, or grades plus heatmap.

For the grades-only condition, readers graded more accurately with model assistance than without. Grades plus heatmaps improved accuracy for patients with diabetic retinopathy, but reduced accuracy for patients without diabetic retinopathy.

Both forms of assistance increased readers’ sensitivity.

Assistance from the algorithm raised retina specialists’ accuracy above that of either the unassisted reader or the model alone, and it increased grading confidence and grading time across all readers. The degree of improvement depended on the physician’s level of expertise.

Without assistance, general ophthalmologists were significantly less accurate than the algorithm, grading with 46.3 percent accuracy versus the algorithm’s 57.9 percent. Retina specialists, at 62.3 percent, were not significantly more accurate than the algorithm. With assistance, general ophthalmologists matched but did not exceed the model's accuracy, while retina specialists began to exceed the model's performance.

“What we found is that AI can do more than simply automate eye screening, it can assist physicians in more accurately diagnosing diabetic retinopathy,” said lead researcher Rory Sayres, Ph.D., of Google Research. “AI and physicians working together can be more accurate than either alone.”

At a time when some physicians fear that AI could one day replace the radiologist, this study suggests the technology is best used in tandem with a physician.

The study will be published in the April edition of Ophthalmology.


