
Twitter Bots Are Spreading Bad Health Information
Artificial intelligence was more likely than humans to advocate for electronic cigarette use, a new study found.
Don’t believe everything you read.
It turns out that social bots, which use artificial intelligence (AI) to push certain narratives online, have been peddling unproven information on social media about vaping’s ability to help people quit smoking cigarettes. According to a new study from the Keck School of Medicine of the University of Southern California, these automated accounts were more likely than human users to promote e-cigarettes as a way to quit smoking.
What does that mean? Although social media is full of personal success stories, the claim that vaping helps people quit smoking has not been scientifically established.
“Social bots can pass on health advice that hasn’t been scientifically proven,” said Jon-Patrick Allem, PhD, the study’s lead author and a researcher in the school’s Department of Preventive Medicine.
Scientists, he added, are examining whether vaping damages the respiratory and cardiovascular systems. And given how the anti-vaccine movement has taken off largely online, it appears that bad information can affect offline behavior, he said.
From December 2016 through April 2017, Allem and his team crawled Twitter, isolating tweets with words like “e-cigarette,” “vaping,” and “ejuice,” according to the study. Then they scrutinized retweets, mentions, follower ratios, content, and emotion levels to pinpoint which Twitter users were human and which were social bots, employing a “BotOrNot” algorithm as the final judge.
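For readers curious what that classification step might look like, here is a rough, hypothetical sketch of filtering tweets by those keywords and scoring the posting accounts with the BotOrNot service, which has since been renamed Botometer. This is not the study’s actual code: the tweepy and botometer Python packages, the placeholder credentials, the 0.5 threshold, and the response field used for the score are all assumptions for illustration.

```python
# Illustrative sketch only -- not the study's pipeline. Assumes the third-party
# tweepy and botometer packages and placeholder API credentials.
import tweepy
import botometer

KEYWORDS = ["e-cigarette", "vaping", "ejuice"]

twitter_auth = {
    "consumer_key": "YOUR_KEY",              # placeholder credentials
    "consumer_secret": "YOUR_SECRET",
    "access_token": "YOUR_TOKEN",
    "access_token_secret": "YOUR_TOKEN_SECRET",
}

# Botometer (formerly BotOrNot) client; the RapidAPI key is a placeholder.
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="YOUR_RAPIDAPI_KEY",
                          **twitter_auth)

api = tweepy.API(tweepy.OAuth1UserHandler(**twitter_auth))


def likely_bot(screen_name, threshold=0.5):
    """Return True if the account's automation probability exceeds threshold.

    NOTE: the response schema varies across Botometer API versions; the
    "cap" -> "universal" field used here is an assumption.
    """
    result = bom.check_account("@" + screen_name)
    return result["cap"]["universal"] > threshold


# Pull a small sample of recent tweets matching the keywords and label each author.
query = " OR ".join(KEYWORDS)
for tweet in tweepy.Cursor(api.search_tweets, q=query, lang="en").items(50):
    label = "bot?" if likely_bot(tweet.user.screen_name) else "human?"
    print(label, tweet.user.screen_name, tweet.text[:80])
```

In practice, researchers combine a score like this with the behavioral signals the study describes (retweets, mentions, follower ratios, content, and emotion levels) rather than relying on a single threshold.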
The results: Social bots were more apt to use hashtags like #quitsmoking and #health, which are often posted by people who said they kicked their smoking habits through vaping. Humans typically posted hashtags such as #vape, #vapelife, and #vapenation, which draw on behavior, identity, and community. The bots also advertised vaping supplies, according to the study.
“Social bots may perpetuate misinformation about the efficacy of e-cigarettes in cessation, thus requiring education campaigns to serve as a vehicle to correct this misinformation,” the authors wrote.
But even the hashtags used by humans could serve to bond vape users, placing them in an echo chamber and normalizing the action, the researchers added. Most of these tweets came from the Mid-Atlantic and the Southwest, they found.
Going forward, Allem said he intends to explore other ways in which social bots are endorsing bad behaviors.
“There are many unhealthy choices social bots can promote, and our future research will focus on the other areas such as tanning beds, supplements, fad diets, or sugary drinks,” he said. “People need to be aware of these fake social media accounts, and public health campaigns should be implemented to counteract the most dangerous unhealthy behaviors these bots are encouraging.”
The study, “E-Cigarette Surveillance With Social Media Data: Social Bots, Emerging Topics, and Trends,” was published in JMIR Public Health and Surveillance.
It is but the latest inquiry into how social media at large is either affecting public health or could be used to identify concerns. Researchers have scoured Twitter, specifically, in recent months for insights into a range of other public health questions.