Automated social media accounts were more likely than human users to advocate for electronic cigarette use, a new study found.
Don’t believe everything you read.
It turns out that social bots, which use artificial intelligence (AI) to push certain narratives online, have been peddling unproven information on social media about vaping’s ability to help people quit smoking cigarettes. According to a new study on Twitter bots from the University of Southern California’s (USC) Keck School of Medicine, automated accounts were “significantly more likely to post hashtags that referenced smoking cessation and new products compared to human users.”
What does that mean? Although social media is gaining greater standing as a public health research tool, investigators must distinguish between actual posts and those promoted by AI, the study authors wrote. What’s more, it is clear that social bots are marketing new electronic cigarette products, sometimes as an alternative to smoking, the researchers noted.
“Social bots can pass on health advice that hasn’t been scientifically proven,” Jon-Patrick Allem, PhD, lead author and a researcher in the school’s Department of Preventive Medicine, told USC. “The jury is still out on if e-cigarettes are useful smoking cessation tools, but studies have shown that the chemicals in vape juice are harmful.”
Scientists, he added, are examining whether vaping damages the respiratory and cardiovascular systems. And given how the anti-vaccine movement has taken off largely online, it appears that bad information can affect offline behavior, he said.
From December 2016 through April 2017, Allem and his team crawled Twitter, isolating tweets with words like “e-cigarette,” “vaping,” and “ejuice,” according to the study. Then they scrutinized retweets, mentions, follower ratios, content, and emotion levels to pinpoint which Twitter users were human and which were social bots, employing a “BotOrNot” algorithm as the final judge.
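The pipeline the team describes, keyword filtering followed by bot classification, can be sketched in miniature. The snippet below is a hypothetical illustration only: the crude follower-ratio heuristic stands in for the study's actual "BotOrNot" classifier, and all keywords, thresholds, and sample tweets are invented for the example.

```python
# Illustrative sketch (hypothetical data and thresholds): filter tweets by
# e-cigarette keywords, then flag likely bots with a crude heuristic that
# stands in for the study's BotOrNot classifier.

KEYWORDS = {"e-cigarette", "vaping", "ejuice"}

def matches_keywords(text):
    """Return True if the tweet text mentions any tracked keyword."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

def likely_bot(followers, following, tweets_per_day):
    """Toy heuristic: accounts that follow far more users than follow them
    back, and that tweet at very high volume, score as probable bots."""
    ratio = following / max(followers, 1)
    return ratio > 10 and tweets_per_day > 50

# Invented sample tweets, one bot-like and one human-like.
sample = [
    {"text": "Quit smoking with this new ejuice! #quitsmoking",
     "followers": 3, "following": 900, "tweets_per_day": 120},
    {"text": "Sunday vaping with friends #vapelife",
     "followers": 250, "following": 180, "tweets_per_day": 4},
]

for tweet in sample:
    if matches_keywords(tweet["text"]):
        label = "bot?" if likely_bot(tweet["followers"],
                                     tweet["following"],
                                     tweet["tweets_per_day"]) else "human?"
        print(label, tweet["text"])
```

A production version would replace the heuristic with a trained classifier and pull the follower, retweet, content, and sentiment features the researchers mention, but the two-stage shape, filter by topic, then score each account, is the same.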
The results: Social bots were more apt to use hashtags like #quitsmoking and #health, which are often posted by people who said they kicked their smoking habits through vaping. Humans typically posted hashtags such as #vape, #vapelife, and #vapenation, drawing on behavior, identity, and community. The bots also advertised vaping supplies, according to the study.
“Social bots may perpetuate misinformation about the efficacy of e-cigarettes in cessation, thus requiring education campaigns to serve as a vehicle to correct this misinformation,” the authors wrote.
But even the hashtags used by humans could serve to bond vape users, placing them in an echo chamber and normalizing the action, the researchers added. Most of these tweets came from the Mid-Atlantic and the Southwest, they found.
Going forward, Allem said he intends to explore other ways in which social bots are endorsing bad behaviors.
“There are many unhealthy choices social bots can promote, and our future research will focus on other areas, such as tanning beds, supplements, fad diets, or sugary drinks,” he said. “People need to be aware of these fake social media accounts, and public health campaigns should be implemented to counteract the most dangerous unhealthy behaviors these bots are encouraging.”
The study, “E-Cigarette Surveillance With Social Media Data: Social Bots, Emerging Topics, and Trends,” was published yesterday in the Journal of Medical Internet Research.
It is but the latest inquiry into how social media at large is either affecting public health or could be used to identify concerns. Researchers have scoured Twitter, specifically, in recent months for insights into attention deficit/hyperactivity disorder, the opioid crisis, and depression, often capitalizing on the power of AI.