A new study found that manual editing is critical.
Speech-recognition software has emerged as a valuable tool for doctors and other clinicians, but the technology doesn’t appear ready to be used as the sole component of the transcription process, according to a new study.
Writing for the JAMA Network, researchers from Brigham and Women’s Hospital, Harvard Medical School, and other prestigious institutions noted that, in 217 randomly chosen clinical notes, 7.4 percent of the words were incorrectly transcribed by the automated software. But the error rate dropped to 0.4 percent after a transcriptionist reviewed the clinical notes and to 0.3 percent after a physician signed off on them.
“The comparatively low error rate in signed notes highlights the crucial role of manual editing and review in the [speech recognition]-assisted documentation process,” the researchers concluded.
Speech-recognition software, of course, has grown more common among providers looking to cut down the time clinicians spend in electronic health record (EHR) systems and, consequently, to reduce burnout. Although the technology began to show promise in the 1980s, it took time for the advancement to reach a level at which health systems were comfortable using it at scale. Now, however, 90 percent of hospitals are gearing up to expand their use of the tech, according to the study.
As such, the authors set out to study the accuracy of speech-recognition software, homing in on back-end systems, which entail automated voice capture and text translation, along with editing performed by a professional transcriptionist and physician review.
The cross-sectional study analyzed 217 clinical notes, across two hospitals, in 2016. Researchers reviewed the documents at four separate stages, noting the time taken to dictate and review each note.
Across the three stages — raw speech-recognition output, transcriptionist review, and physician sign-off — errors involved clinical information in 15.8 percent, 26.9 percent, and 25.9 percent of cases, respectively, according to the results. What’s more, 5.7 percent, 8.9 percent, and 6.4 percent of those errors were deemed “clinically significant,” the study found.
“Although the adoption of [speech-recognition] technology is intended to ease some of the burden of documentation, that even readily apparent pieces of information at times remain uncorrected raises concerns about whether physicians have sufficient time and resources to review their dictated notes, even to a superficial degree,” the researchers wrote.
From here, healthcare observers should further investigate how clinicians use the technology, its place in their workflows, and its accuracy. The findings also underscore the need for clinician training and education to reduce errors, the researchers said.