At Mount Sinai, Researchers Teach Machines the Language of Radiology

Before machine learning can try to replace radiologists, it has a lot to learn from them...like how to read their notes

(Eric Oermann, MD, of the Icahn School of Medicine at Mount Sinai (left), and John Zech, medical student at the Icahn School of Medicine at Mount Sinai (right). Screenshot from Mount Sinai promotional video).

Before machine learning can try to replace radiologists outright, it has a lot to learn from them...like how to read their notes. At the Icahn School of Medicine at Mount Sinai, however, a group of researchers is showing the algorithms the way.

Researchers from the school trained computer software by feeding it thousands of radiologist reports associated with head CT scans. Comparing the reports with other bodies of text, including books, news stories, and Amazon product reviews, they found that “the language used in radiology has a natural structure, which makes it amenable to machine learning,” according to Eric Oermann, MD.
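
That claim about structure can be made concrete with a small sketch. The toy code below is not the study's code, and it uses invented sentences rather than real reports: it fits an add-one-smoothed bigram model on a few report-style impressions and on a few general-prose sentences. The repetitive, templated phrasing of the report-style text yields a noticeably higher per-token log-probability, which is one simple sense in which such language is "amenable to machine learning."

```python
# Minimal sketch (not the study's code): compare how predictable report-style
# language is versus general prose under a simple bigram model. All sentences
# below are invented for illustration only.
from collections import Counter
import math

radiology = [
    "no acute intracranial hemorrhage or mass effect",
    "no acute intracranial hemorrhage mass effect or midline shift",
    "no evidence of acute intracranial hemorrhage or acute infarct",
]
general = [
    "the reviewer said the battery life on this phone was disappointing",
    "markets rallied after the central bank announced new policy guidance",
    "the novel follows three generations of a family in a coastal town",
]

def bigram_logprob_per_token(sentences):
    """Average log-probability per token under an add-one-smoothed bigram model
    trained on the same sentences (higher, i.e. less negative = more predictable)."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split()
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams)
    total, logp = 0, 0.0
    for s in sentences:
        toks = ["<s>"] + s.split()
        for prev, cur in zip(toks, toks[1:]):
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
            logp += math.log(p)
            total += 1
    return logp / total

print("radiology-style text:", round(bigram_logprob_per_token(radiology), 3))
print("general prose:       ", round(bigram_logprob_per_token(general), 3))
```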

Oermann is a neurosurgeon who focuses on artificial intelligence (AI). He's also an instructor in the Department of Neurosurgery at the Icahn School and the senior author of a new study based on the machine learning program. He believes that models built on these large radiological text datasets will be important to the future of the field.

Rather than simply trying to identify patterns in the images, the models use deep learning and natural language processing to pair recurring concepts and terms in the written reports with their corresponding manifestations in the scans. Oermann compared the data to oil and said these methods were a way of teaching the algorithms to "refine" and extract it. One of the models in the study was nearly 91% accurate.

“With this method, a large labeled corpus can be generated for applications such as deep learning,” the authors concluded.
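
As a rough illustration of that idea, and not the authors' actual pipeline, the sketch below trains a simple scikit-learn text classifier on a handful of hand-labeled report impressions and uses it to assign "silver" labels to unlabeled reports; those labels could then be attached to the head CT scans the reports describe, producing a large labeled corpus for an image model. All report strings, the binary "critical finding" label, and the model choices are assumptions made for this example.

```python
# Minimal sketch of "label the images via the report text" (not the study's
# pipeline). A text classifier learned from a few hand-labeled impressions
# assigns silver-standard labels to unlabeled reports, and hence to the head
# CT scans those reports describe. All example strings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small hand-labeled seed set: 1 = report describes a critical finding.
seed_reports = [
    "acute subdural hematoma with midline shift",
    "large right mca territory infarct with mass effect",
    "no acute intracranial hemorrhage or mass effect",
    "unremarkable noncontrast head ct",
]
seed_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(seed_reports, seed_labels)

# These stand in for the "thousands" of unlabeled reports; each predicted
# label would be attached to the CT scan its report describes.
unlabeled_reports = [
    "interval increase in left subdural hematoma, new midline shift",
    "no evidence of acute hemorrhage, infarct, or mass effect",
]
silver_labels = clf.predict(unlabeled_reports)
print(list(zip(unlabeled_reports, silver_labels)))
```

The appeal of such a design is that a modest amount of manual report labeling can bootstrap labels for thousands of scans, at the cost of some label noise introduced by the text classifier.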

“This study really is laying the groundwork for training that next generation of machine learning tools,” Oermann said in a brief video from Mount Sinai that accompanied its official announcement of the study.

Whether radiologists will survive the rise of artificial intelligence and machine learning has been a point of debate, though mostly outside the specialty. Experts have frequently told Healthcare Analytics News™ that the technology will either eliminate radiologists or make their lives significantly easier.

The Radiological Society of North America, which publishes Radiology, the journal that carried Oermann's new study, even hosted a machine learning exhibit at its 2017 annual meeting a few months ago.

“We’re going to be the ones that are going to help train these algorithms, and we’re going to be central to helping to validate whether they’re working efficiently, effectively, and accurately,” Adam E. Flanders, MD, told HCA News at the time. Flanders helped set up the exhibit. Like Oermann, he was hopeful about the technology.

“We’re looking at it with a lot of excitement, not a lot of fear and dread,” he said.
