Can DeepFaceLIFT Replace Old Models for Identifying Pain?

The novel tech represents an improvement on the visual analog scale, but does not account for several patient variables.

Researchers at the Massachusetts Institute of Technology have created a novel system to help physicians distinguish between patients in genuine pain and those who are addicted to opioids and display drug-seeking behaviors (DSB).

The new pain-measurement system goes by the moniker DeepFaceLIFT, and MIT researchers say it uses technologically supported pain metrics to bring the medical community a step closer to curbing the DSB phenomenon.

The two-stage system aims to replace the self-reported visual analog scale (VAS), which is widely considered a unidimensional and context-dependent model for gauging a patient’s pain. What’s more, patients seeking narcotics can easily overstate their pain on this scale, wasting physicians’ time and resources.

To combat the problems associated with the self-reported pain scale, DeepFaceLIFT combines neural-network and Gaussian-process regression models to personalize the estimation of self-reported pain through hand-crafted personal features and multi-task learning.
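The two-stage idea described above can be sketched in miniature. This is a hypothetical illustration, not the authors' code: synthetic data stands in for facial features, a small neural network produces an intermediate pain estimate, and a Gaussian-process regressor then refines it using assumed personal attributes (age, gender, complexion) to yield a personalized VAS estimate.

```python
# Hypothetical sketch of a two-stage personalized pain estimator
# (NN stage followed by a Gaussian-process stage); all data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-ins: 100 "sequences", each summarized as a 10-dim facial-feature
# vector, plus 3 personal features (e.g. age, gender, complexion).
X_face = rng.normal(size=(100, 10))
X_personal = rng.normal(size=(100, 3))
y_vas = rng.uniform(0, 10, size=100)   # self-reported VAS scores (0-10)

# Stage 1: a neural network maps facial features to an intermediate score.
stage1 = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                      random_state=0)
stage1.fit(X_face, y_vas)
intermediate = stage1.predict(X_face).reshape(-1, 1)

# Stage 2: Gaussian-process regression refines the estimate using the
# personal features -- this is what "personalizes" the prediction.
X_stage2 = np.hstack([intermediate, X_personal])
stage2 = GaussianProcessRegressor(kernel=RBF(), random_state=0)
stage2.fit(X_stage2, y_vas)
vas_pred = stage2.predict(X_stage2)
print(vas_pred.shape)  # one VAS estimate per sequence
```

The split matters for interpretability: because the second stage consumes an explicit intermediate score alongside named personal features, one can inspect how much each personal attribute shifts the final estimate.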

“We show on the benchmark dataset for pain analysis (The UNBC-McMaster Shoulder Pain Expression Archive) that the proposed personalized model largely outperforms the traditional, unpersonalized models,” study authors said. “Additionally, DeepFaceLIFT automatically discovers the pain-relevant facial regions for each person, allowing for an easy interpretation of the pain-related facial cues.”

The authors used the UNBC-McMaster Shoulder Pain Expression Archive dataset, which contains 200 image sequences gathered from 25 subjects, totaling 48,398 frames annotated with Prkachin and Solomon Pain Intensity (PSPI) scores, along with sequence-level observed pain index (OPI) scores. The DeepFaceLIFT method was then personalized based on a subject’s complexion, age, and gender — features the authors say were selected because they are straightforward to label and affect how individuals express pain.
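The PSPI score mentioned above is defined by Prkachin and Solomon as a sum of facial action unit (AU) intensities from the Facial Action Coding System; a minimal implementation of the standard formula:

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin-Solomon Pain Intensity from FACS action-unit intensities.

    AU4 is brow lowering, AU6/AU7 orbital tightening, AU9/AU10 levator
    contraction (each scored 0-5), and AU43 is eye closure (0 or 1).
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43

print(pspi(3, 2, 4, 1, 0, 1))  # 3 + 4 + 1 + 1 = 9
```

Because PSPI is computed per frame from observable facial actions, it gives the dataset an objective, frame-level pain signal against which subjective VAS self-reports can be compared.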

Despite incorporating these personal features into DeepFaceLIFT’s interpretable learning model, the study does not address all potential conflicts that could arise from differences in how individuals express pain. Patients with autism spectrum disorders (ASD), chronic pain, or other conditions that affect pain expression were not accounted for in the study.

Because the study analyzed the expressions of only 25 subjects from a single dataset, DeepFaceLIFT likely improves on traditional pain measurement but is not a foolproof method for automatically estimating subjective, self-reported VAS pain scores.

“In the future, we plan to further investigate the relationships between VAS and other pain scores (such as PSPI) as well as how they relate to facial AUs,” the authors said. “As context is another important aspect not explored in this paper, we plan to derive more advanced statistics that could potentially capture additional information and improve estimates of subjective pain.”

With the goal of advancing applications of pain estimation in clinical settings, the authors plan to continue refining DeepFaceLIFT’s personalization process by expanding its recognition capabilities.

The study, titled “DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain,” was recently published in the Journal of Machine Learning Research.