
A Half-Dozen AI Lessons from 2017
Hype, reality, mental health diagnoses, ethical concerns, and more.
Artificial intelligence (AI) and machine learning have been two of the most buzzed-about concepts of recent years, and they will undoubtedly remain so. In 2017, more examples of their potential applications came to light, as did more calls for realism. Here are the Healthcare Analytics News™ staff's choices for the best AI lessons of the year.
6. AI might help prevent suicide.
People generate billions of data points on Twitter and Facebook daily. By studying the language used on Twitter, researchers in several studies found they could quickly and accurately identify mental health conditions ranging from depression to post-traumatic stress disorder (PTSD). Could similar analysis flag people at risk of suicide?
Facebook hopes so.
“We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live,” the company’s vice president of product management, Guy Rosen, wrote in late November. The program, already in effect, uses pattern recognition to detect posts or videos in which users may be expressing suicidal thoughts.
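To make “pattern recognition” concrete, here is a minimal sketch of how a text classifier can flag concerning language in posts for human review. This is not Facebook’s actual system: the example posts, the TF-IDF-plus-logistic-regression model, and the review threshold are all assumptions invented purely for illustration.

```python
# Minimal sketch of pattern-recognition-based text flagging.
# NOT Facebook's system; data, model, and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = concerning, 0 = not).
posts = [
    "I can't take this anymore, nobody would miss me",
    "What a great day at the beach with friends",
    "I feel completely hopeless and alone",
    "Excited to start my new job on Monday",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression act as the "pattern recognizer".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Flag new posts whose predicted risk exceeds a review threshold.
new_posts = ["Everything feels pointless lately"]
risk = model.predict_proba(new_posts)[0][1]
if risk > 0.5:  # threshold chosen arbitrarily for this sketch
    print(f"Post flagged for human review (score={risk:.2f})")
```

In a real deployment, a flag like this would route the post to trained human reviewers rather than trigger any automated action.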
5. Watson might be a genetics wizard.
Plenty of ink has been spilled over IBM Watson’s potential and limitations as a predictive or suggestive tool for cancer treatment (HCA News has covered it closely as well).
A study published in July in Neurology: Genetics demonstrated that Watson for Genomics could interpret a cancer patient’s genome sequencing data in a small fraction of the time that human experts needed to perform the same analysis.
“Clinical and research leaders in cancer genomics are making tremendous progress towards bringing precision medicine to cancer patients, but genomic data interpretation is a significant obstacle, and that’s where Watson can help,” said Vanessa Michelini, Watson for Genomics Innovation Leader at IBM Watson Health.
4. AI could help protect workers from injury.
Healthcare figures might not often read construction industry research journals, but Automation in Construction recently published a fascinating study with health implications. Injuries among physical laborers like masons can cost the healthcare system billions of dollars annually, not to mention cause great pain and distress for the workers themselves.
A team at a Canadian university is using wearable motion sensors and machine learning to study how experienced masons move, in the hope that safer, expert techniques can be taught to newer workers before injuries occur.
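As a rough illustration of that approach, the sketch below classifies simulated sensor windows as expert-like or injury-prone movement. Everything here, including the synthetic accelerometer data, the summary features, and the random-forest model, is an assumption for demonstration; it does not reproduce the researchers’ actual methods.

```python
# Illustrative sketch: classifying safe vs. risky lifting motions
# from wearable-sensor data. All data and choices here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one window of accelerometer readings (n_samples x 3 axes)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Fake sensor windows: 100 readings x 3 axes per recorded movement.
safe = [rng.normal(0.0, 0.5, (100, 3)) for _ in range(50)]
risky = [rng.normal(0.8, 1.2, (100, 3)) for _ in range(50)]

X = np.array([extract_features(w) for w in safe + risky])
y = np.array([0] * 50 + [1] * 50)  # 0 = expert-like, 1 = injury-prone

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([extract_features(rng.normal(0.8, 1.2, (100, 3)))]))
```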
3. AI can treat both physical and mental conditions.
Here is a pair of excellent studies that displays AI’s treatment potential for both physical and mental conditions. The first tackled the physical side.
On the mental health front, work out of the University of Southern California and Carnegie Mellon University showed that veterans with PTSD were more open in disclosing their symptoms to an AI-driven virtual interviewer than they were on standard health assessments.
2. The future is full of science fiction-y ethical concerns.
A team of neuroscientists, neurotechnicians, clinicians, ethicists, and engineers called the Morningside Group wrote a lengthy warning in Nature about brain-computer interface (BCI) technology.
“People could end up behaving in ways that they struggle to claim as their own,” they wrote. BCI technology could be vulnerable to exploitation by hackers, they warned, and it could fundamentally alter individual agency and people’s “private mental life.” They also worried that BCI-enhanced humans could themselves become weapons in international warfare. Fearing something of an arms race, the group called for ethical guidelines to govern the technology’s development.
1. You can be both an optimist and a realist about AI.
Stanford’s Jonathan H. Chen, MD, PhD, and Steven M. Asch, MD, MPH, wrote an important commentary in June about tempering inflated expectations for machine learning in medicine.
“I’m hearing very smart people buying into ideas when they don’t quite know what they’re buying into, basically. They’re hearing promises that they’re not going to follow up on, and I don’t think they quite understand what those mean,” Chen said.
“I actually think AI has huge potential for medicine,” he said. “But if there’s a backlash because people overhype what’s possible too early on, it becomes harder to invest in the longer-term work.”