Patients’ preferences for receiving clinical diagnoses from artificial intelligence (AI) and from human clinicians were generally unaffected by the COVID-19 pandemic.
The findings may guide policies that are relevant to the development of AI-based healthcare.
Wai-Kit Ming, M.D., Ph.D., M.P.H., and colleagues aimed to quantify and compare patients’ preferences for AI clinicians and traditional human clinicians before and during the COVID-19 pandemic, and to assess whether those preferences shifted under the pressure of the pandemic.
The team designed a web-based questionnaire to collect participants’ demographic information and investigate patients’ preferences for different diagnosis strategies. The questionnaire presented seven similar hypothetical scenarios, and respondents chose a preferred diagnosis strategy for each one. The investigators used propensity score matching to match two groups of respondents — a 2017 group and a 2020 group — with similar demographic characteristics. In total, 2,048 respondents completed the questionnaire and were included in the analysis.
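The paper does not publish its matching code; as a rough illustration only, propensity score matching of two survey cohorts on demographics can be sketched as below. All data, cohort sizes, and variable names here are hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical demographics (age, sex, education level) for two cohorts
n17, n20 = 300, 100
X17 = np.column_stack([rng.normal(40, 12, n17), rng.integers(0, 2, n17), rng.integers(1, 5, n17)])
X20 = np.column_stack([rng.normal(42, 12, n20), rng.integers(0, 2, n20), rng.integers(1, 5, n20)])

X = np.vstack([X17, X20])
group = np.concatenate([np.zeros(n17), np.ones(n20)])  # 0 = 2017 cohort, 1 = 2020 cohort

# Propensity score: estimated probability of belonging to the 2020 cohort
# given demographics, from a logistic regression
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]
ps17, ps20 = ps[group == 0], ps[group == 1]

# Greedy 1:1 nearest-neighbor matching on the propensity score, without replacement
available = list(range(n17))
matches = []
for i, p in enumerate(ps20):
    j = min(available, key=lambda k: abs(ps17[k] - p))
    matches.append((i, j))
    available.remove(j)

print(len(matches))  # one matched 2017 respondent per 2020 respondent
```

Matching on the score rather than on raw covariates is what lets a small later cohort be paired to demographically similar members of a larger earlier one, as in the study's 2017/2020 design.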
Participants could choose among different levels of healthcare service for each diagnosis attribute. Patients were first prompted to identify which diagnosis methods or attributes had a large impact on their decision. The team then included six diagnosis attributes, each with its own levels, in the questionnaire: diagnostic method, outpatient waiting time before the start of the diagnosis process, diagnosis time, accuracy, follow-up after diagnosis, and diagnostic expenses.
The discrete choice experiment had two parts, the first of which required respondents to fill in their demographic information, including age, sex, and educational level. The second part required participants to consider seven different scenarios. For each one, respondents were asked to imagine they were in an outpatient queue waiting for a diagnosis and to choose a preferred diagnosis strategy. At the end of the questionnaire, respondents were asked to estimate the number of years it would take for AI clinicians to surpass human clinicians.
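To make the scenario structure concrete, a discrete choice experiment scenario pairs alternative strategies built from attribute levels. The sketch below uses hypothetical levels loosely modeled on the six attributes named above (the study's actual levels and its fractional design are not reproduced here):

```python
import itertools
import random

# Hypothetical attribute levels (illustrative only, not the study's actual levels)
attributes = {
    "method":    ["AI", "human clinician", "AI + human clinician"],
    "waiting":   ["0 min", "20 min", "40 min"],
    "diag_time": ["instant", "1 day"],
    "accuracy":  ["90%", "99%"],
    "follow_up": ["yes", "no"],
    "expense":   ["$50", "$100", "$200"],
}

random.seed(0)

# Full factorial of all level combinations; a real DCE would use a
# fractional factorial design to keep the task manageable
full_factorial = list(itertools.product(*attributes.values()))

# Seven scenarios, each pairing two alternative diagnosis strategies
scenarios = [random.sample(full_factorial, 2) for _ in range(7)]
print(len(scenarios))
```

Each respondent sees one pair per scenario and picks the preferred strategy, which is the choice data later fed into the preference models.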
In 2017, 1,317 patients completed the questionnaire, and 84.7% of them believed AI clinicians would surpass or replace human clinicians. Of those recruited in 2017, 40.1% were matched to the 528 respondents recruited in 2020.
Respondents in both groups generally considered accuracy the most important diagnosis attribute, with importance values of 38.53% in the 2017 group and 40.55% in the 2020 group. Diagnosis time was the least important attribute (2.69% in 2017 and 1.16% in 2020). Respondents in both groups preferred combined diagnoses from both AI and human clinicians over AI-only or human-clinician-only diagnoses (2017: OR, 1.645; 95% CI, 1.535-1.763; 2020: OR, 1.513; 95% CI, 1.413-1.621).
A latent class model identified three classes with different attribute priorities. In the first class, preferences for combined diagnoses and for high accuracy were constant across 2017 and 2020. In the second class, the matched 2017 and 2020 data were similar: combined diagnoses from both AI and human clinicians and an outpatient waiting time of 20 minutes were consistently preferred. In the third class, preferences diverged, with respondents in 2017 preferring human clinician diagnoses and respondents in 2020 preferring AI diagnoses.
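Latent class choice models are usually fit with specialized econometric software; as a loose stand-in for the idea, a Gaussian mixture over simulated per-respondent preference weights shows how respondents can be grouped into classes with distinct attribute priorities. Everything here (profiles, weights, counts) is simulated, not study data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Three hypothetical preference profiles over (accuracy, waiting time, AI preference)
profiles = np.array([
    [2.0, 0.2, 0.5],   # accuracy-driven respondents
    [0.8, 1.5, 0.5],   # waiting-time-sensitive respondents
    [0.5, 0.3, 2.0],   # AI-leaning respondents
])

# Simulate 600 respondents drawn from the three profiles, plus noise
labels = rng.integers(0, 3, 600)
W = profiles[labels] + rng.normal(0, 0.2, (600, 3))

# Fit a 3-component mixture and assign each respondent to a class
gm = GaussianMixture(n_components=3, random_state=0).fit(W)
pred = gm.predict(W)
print(len(set(pred)))
```

A proper latent class model would estimate class-specific choice coefficients jointly with class membership, but the grouping intuition is the same.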
The study, “Preferences for Artificial Intelligence Clinicians Before and During the COVID-19 Pandemic: Discrete Choice Experiment and Propensity Score Matching Study,” was published online in the Journal of Medical Internet Research.