New Emotion-Detecting Voice Analysis Could Have Big Implications for Healthcare

Researchers in Russia said they’ve trained a computer program to detect the emotions behind a speaker’s words. That technology could help physicians better diagnose patients.

Virtually every physician has had a patient declare she “feels fine,” only to later discover the patient is in the middle of a serious medical issue.

In such situations, doctors are left to trust their intuition or emotional intelligence to reveal the truth behind the patient’s words. However, new research suggests computers could soon take over that task by analyzing a patient’s speech to determine emotions and possibly stress.

The promising data come from Russia’s National Research University Higher School of Economics (HSE), where scientists have shown that computer programs can be trained to recognize not just spoken words, but also the emotions behind those words. Think of it as high-definition voice recognition.

The findings have implications for the healthcare industry, as well as a host of other fields.

The study attacks the problem of audio analysis in a counterintuitive way—through graphics. The team used audio samples from actors expressing a range of human emotions, then converted the audio into spectrograms, or visual representations of frequencies and other audio variables.
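The article doesn’t include the team’s code, but the audio-to-spectrogram conversion it describes is a standard signal-processing step. Here is a minimal Python sketch, assuming the librosa library; the file name and parameters are illustrative, not taken from the study:

```python
# Minimal sketch of the audio-to-spectrogram step described above.
# The use of librosa, the file name, and all parameters are assumptions;
# the study does not specify its tooling.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load a short speech clip (the path is a placeholder)
audio, sr = librosa.load("speech_sample.wav", sr=22050)

# Compute a mel-scaled spectrogram: a 2-D "image" of frequency
# content over time, suitable for image-based classifiers
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)  # convert power to decibels

# Render the spectrogram as an image file
fig, ax = plt.subplots()
img = librosa.display.specshow(mel_db, sr=sr, x_axis="time",
                               y_axis="mel", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("Mel spectrogram of speech sample")
fig.savefig("spectrogram.png")
```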

The result was a series of reference images used to evaluate speech and classify it into one of 8 emotions: neutral, calm, happy, sad, angry, scared, disgusted, and surprised. Ultimately, the computer program correctly identified human emotion with 71% accuracy, well above the 12.5% expected from random guessing among 8 classes.
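The article doesn’t describe the classifier’s architecture, so the following is only an illustrative sketch of how an image-based model might map spectrograms to those 8 emotion labels, written in PyTorch; every layer size and input dimension here is an assumption. It also makes the chance baseline explicit: guessing among 8 classes succeeds 1 time in 8, or 12.5%.

```python
# Illustrative sketch only: a small convolutional classifier mapping
# spectrogram images to the study's 8 emotion classes. The actual HSE
# architecture is not described in this article; layer sizes, input
# dimensions, and the dummy input below are assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "scared", "disgusted", "surprised"]

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel spectrogram
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size output for any input
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Random guessing among 8 classes is right 1/8 = 12.5% of the time,
# which is the chance baseline the 71% figure is compared against.
model = EmotionCNN()
dummy = torch.randn(1, 1, 128, 256)  # batch of one 128x256 spectrogram
pred = model(dummy).argmax(dim=1)
print(EMOTIONS[pred.item()])  # predicted emotion label
```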

Alexander Ponomarenko, MSc, a researcher and lecturer at HSE and a co-author of the study, told Healthcare Analytics News™ the technology is still in its early stages, but the smarter analysis of patient emotions could have clear implications for healthcare.

“The more we know about the patient, the better,” he said.

Ponomarenko posited that the technology could be particularly useful in mental health settings, such as when a patient is undergoing psychotherapy.

“Maybe the computer system can be used to automatically track a patient's condition and automatically log emotional state during the treatment,” he said.
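As a hedged illustration of that idea, and not part of the published work, such a logger might simply append each predicted label with a timestamp; the function names and the CSV format here are hypothetical:

```python
# Hypothetical sketch of the automatic emotional-state logging
# Ponomarenko describes: each analyzed utterance's predicted label
# is appended to a per-session CSV log with a timestamp.
import csv
from datetime import datetime, timezone

def log_emotion(emotion: str, logfile: str = "session_log.csv") -> None:
    """Append a timestamped emotion label to the session log."""
    with open(logfile, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), emotion]
        )

# Example: labels arrive as a trained classifier processes utterances
log_emotion("calm")
log_emotion("sad")
```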

Indeed, earlier research has suggested a similar strategy of leveraging computer-aided emotion detection to diagnose patients suffering from psychological issues.

A 2011 study by Japanese researchers looked at voice recognition as a way to better identify patients suffering from post-traumatic stress disorder among the military veteran community.

It found that “the techniques of emotion recognition may be used for screening of mental status in military situations.” However, the study noted that considerable additional research would be needed before the technology could be reliably deployed in real-world scenarios.

The new research brings healthcare professionals a step closer to that goal.

Ponomarenko said stress levels could be a target of voice analysis software, though he said it wasn’t the focus of the current research.

“I think that it is possible to detect the level of stress in the voice, but it needs detailed research,” he said. “Also, the hypothesis about a connection between patient health and emotions should be tested as part of a special study.”

In the meantime, Ponomarenko and colleagues said there is additional work to be done to perfect the emotional categorization capacity of the software. At this point, happiness and surprise appear to be the toughest emotions to categorize. The researchers said happiness was often misidentified as fear or sadness. Surprise was often misinterpreted as disgust.

The study is titled, “Emotion Recognition in Sound.” It was first presented at the International Conference on Neuroinformatics in late August. The findings were published last month in the book, “Advances in Neural Computation, Machine Learning, and Cognitive Research.”
