
AI Model Passively Detects Cardiac Arrest through Smart Speakers


The AI tool had a sensitivity of 97.24%.


A new proof-of-concept tool to monitor people for cardiac arrest while they are asleep detected agonal breathing events 97% of the time from up to 20 feet away, according to the findings of a study published in npj Digital Medicine.

Using the artificial intelligence tool, the researchers obtained an area under the curve of 0.9993, with a sensitivity of 97.24% and a specificity of 99.51%. The false positive rate ranged from 0% to 0.14% over more than 80 hours of polysomnographic sleep data.

“Cardiac arrests are a very common way for people to die, and right now, many of them can go unwitnessed,” said Jacob Sunshine, M.D., from the anesthesiology and pain medicine department at the University of Washington. “Part of what makes this technology so compelling is that it could help us catch more patients in time for them to be treated.”

Sunshine and his research team developed the tool using real instances of agonal breathing (an abnormal pattern of gasping, labored breaths) captured from 911 calls to Seattle’s Emergency Medical Services. The research team collected 162 calls made between 2009 and 2017 and extracted 2.5-second audio segments to produce 236 clips.

The researchers augmented the number of agonal breathing instances with label-preserving transformations, which the study authors wrote is a common machine-learning method for sparse datasets. They augmented the data by playing the recordings over the air at distances of about three, 10 and 20 feet. The recordings included interference from indoor and outdoor sounds at different volumes, and some clips included a noise cancellation filter.
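The study's augmentation was physical (playing clips over the air with real interference), but the underlying idea of a label-preserving transformation can be illustrated digitally. The sketch below mixes a clip with background noise at a random signal-to-noise ratio; the function name, SNR range, and array sizes are illustrative assumptions, not details from the paper.

```python
import numpy as np

def augment_clip(clip, noise, rng, snr_db_range=(5.0, 20.0)):
    """Label-preserving augmentation: mix a breathing clip with
    interfering background noise at a random SNR. The content that
    determines the label (the agonal breath) is left intact."""
    snr_db = rng.uniform(*snr_db_range)
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so the mixture hits the sampled SNR.
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10.0)))
    return clip + scale * noise

rng = np.random.default_rng(0)
clip = rng.standard_normal(20_000)   # stand-in for a 2.5-second agonal clip
noise = rng.standard_normal(20_000)  # stand-in for indoor/outdoor interference
augmented = augment_clip(clip, noise, rng)
```

Each pass with a different noise source or SNR yields a new positive sample, which is how a few hundred clips can be expanded into thousands.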

Smart devices, including an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4, captured the played-back audio, yielding 7,316 positive samples.

The negative dataset consisted of 83 hours of audio captured during polysomnographic sleep studies across 12 patients. Streams included instances of hypopnea, central apnea, obstructive apnea, snoring and breathing. The streams also contained interfering sounds that could be present while a person is sleeping, like a podcast, sleep soundscape and white noise.

The model was trained on one hour of audio from the sleep study plus other interfering sounds, yielding 7,305 negative samples.

The remaining 82 hours of sleep data, comprising nearly 118,000 audio segments, were used to validate the model's performance.

The researchers applied k-fold cross-validation to the models and obtained an area under the curve of 0.9993, with a sensitivity of 97.24% and a specificity of 99.51%.
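For readers unfamiliar with these metrics, sensitivity is the fraction of true agonal-breathing segments the model flags, and specificity is the fraction of normal-sleep segments it correctly passes over. A minimal sketch of the computation, using a toy fold of labeled segments (the data here is invented for illustration):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # agonal segments correctly flagged
    fn = np.sum(y_true & ~y_pred)   # agonal segments missed
    tn = np.sum(~y_true & ~y_pred)  # normal sleep correctly ignored
    fp = np.sum(~y_true & y_pred)   # normal sleep falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# Toy fold: 4 agonal segments (1) and 4 normal-sleep segments (0).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

In k-fold cross-validation this computation is repeated on each held-out fold and the results are aggregated, which guards against the model simply memorizing its training segments.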

Then, the researchers ran the trained model over the full audio stream collected in the sleep lab to evaluate the false positive rate, which came to 0.144%. A frequency filter, which required two agonal breaths within a span of 10 to 20 seconds, reduced the false positive rate to 0.00085%.
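Because agonal breathing recurs while a random noise rarely does, requiring a second detection in a 10-to-20-second window suppresses most isolated false positives. A sketch of that confirmation rule, assuming the model emits a timestamp for each flagged segment (the function name and exact window logic are assumptions, not the paper's implementation):

```python
def frequency_filter(detection_times, min_gap=10.0, max_gap=20.0):
    """Confirm a detection only if another detection lands 10-20 seconds
    away; an isolated positive is treated as noise and dropped."""
    confirmed = []
    for t in detection_times:
        if any(u != t and min_gap <= abs(u - t) <= max_gap
               for u in detection_times):
            confirmed.append(t)
    return confirmed

# Two detections 12 seconds apart confirm each other;
# the lone detection at 100 s is discarded.
alerts = frequency_filter([5.0, 17.0, 100.0])
```

The trade-off is latency: the system waits up to 20 seconds for a confirming breath before alerting, in exchange for a roughly 170-fold drop in false positives.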

To better evaluate the model outside of the sleep lab, the researchers recruited 35 individuals who used their smartphones to record themselves while sleeping, for a total of 167 hours of audio.

The research team retrained the model with an additional five minutes of data from each participant. The false positive rate without the frequency filter was 0.217%, corresponding to 515 of the more than 236,000 audio segments used as test data.

The false positive rate reached 0.001% when the research team applied the frequency filter.

While the researchers said this is a good proof of concept, the team needs more cardiac arrest-related 911 calls to improve the algorithm's accuracy and ensure that it generalizes across a larger population.

“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said co-corresponding author Shyamnath Gollakota, Ph.D., an associate professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. “We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event and alerts anyone nearby to come provide CPR.”

If there’s no response, the device could automatically call 911, Gollakota added.



© 2024 MJH Life Sciences

All rights reserved.