A Half-Dozen AI Lessons from 2017

Hype, reality, mental health diagnoses, ethical concerns, and more.

Artificial intelligence (AI) and machine learning have been among the most buzzed-about concepts in recent years, and they will undoubtedly continue to be. In 2017, more examples of their potential applications came to light, as did more calls for realism. Here are the Healthcare Analytics News™ staff's choices for the best AI lessons of the year.

6. Social media can help identify mental health conditions.

People generate billions of data points on Twitter and Facebook daily. By studying the language used on Twitter, separate studies found they could quickly and accurately identify mental health conditions ranging from depression and PTSD to ADHD. “You worry a little about companies and insurers monitoring people’s posts…there’s a negative side,” one of the authors of the latter study told HCA News. “The positive side is that it can actually help, we hope.”

Facebook hopes so.

“We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live,” the company’s vice president of product management, Guy Rosen, wrote in late November. The program, already in effect, uses pattern recognition to detect posts or videos in which users may have expressed thoughts of suicide, so that first responders can be notified.
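Facebook hasn’t published the details of its model, but the general idea, pattern recognition over the text of a post, can be illustrated with a minimal text-classification sketch. The toy dataset, the TF-IDF plus logistic regression model, and the 0.5 threshold below are all assumptions made purely for illustration, not the company’s system; scikit-learn is assumed to be installed.

```python
# Illustrative sketch only: a generic text classifier that flags posts for
# human review. This is NOT Facebook's actual system; the toy data, model,
# and threshold are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: 1 = flag for review, 0 = do not flag.
posts = [
    "I can't see a way forward anymore",
    "nobody would miss me if I were gone",
    "great game last night, what a finish",
    "trying a new pasta recipe this weekend",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each post into a weighted bag of words and word pairs;
# logistic regression then scores how strongly a new post resembles
# the flagged examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "I don't think I can keep going"
score = model.predict_proba([new_post])[0][1]
print(f"review score: {score:.2f}",
      "-> route to human reviewer" if score > 0.5 else "-> no action")
```

In practice, any such score would only route a post to trained reviewers rather than trigger an automatic response.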

5. Watson might be a genetics wizard.

Plenty of ink has been spilled over IBM Watson’s potential and limitations as a predictive or suggestive tool for cancer treatment (HCA News even focused on it in our second-ever cover story). But treatment paths aren’t Watson’s only use in the fight against cancer.

A study published in July in Neurology Genetics demonstrated that Watson could analyze a tumor sample using whole genomic sequencing (WGS) and produce actionable clinical data in about 10 minutes. Comparatively, the study showed that human analysis required 160 hours to achieve similar results.

“Clinical and research leaders in cancer genomics are making tremendous progress towards bringing precision medicine to cancer patients, but genomic data interpretation is a significant obstacle, and that’s where Watson can help,” said Vanessa Michelini, Watson for Genomics Innovation Leader at IBM Watson Health.

4. AI could help prevent injuries among physical laborers.

Healthcare figures might not often read construction industry research journals, but Automation in Construction recently published a fascinating study with health implications. Injuries among physical laborers like masons can cost the healthcare system billions of dollars annually, not to mention cause great pain and distress for the workers themselves.

A team at a Canadian university is studying how expert masons contort their bodies while they work to avoid injury. The masons are asked to build small concrete walls while wearing motion sensor suits. The data from the sensors is fed into an algorithm that can distinguish “expert” from “novice” poses, and the team hopes to develop a training program that uses video recordings and motion sensor suits to give apprentices instantaneous feedback on their movements.
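The study doesn’t spell out its features or model at this level of detail, but the shape of the pipeline, sensor readings in, an “expert” or “novice” label and a feedback cue out, can be sketched in a few lines of Python. The feature names, synthetic data, and random-forest classifier below are assumptions made purely for illustration; numpy and scikit-learn are assumed.

```python
# Illustrative sketch only: classifying "expert" vs. "novice" poses from
# motion-sensor features. The feature names, synthetic data, and random
# forest are assumptions; the study's real sensors and algorithm are not
# described here at this level of detail.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each pose is summarized by three numbers from the sensor suit:
# trunk flexion (deg), knee flexion (deg), and lower-back load moment (N*m).
# Experts tend to bend at the knees and keep the trunk more upright.
expert_poses = np.column_stack([
    rng.normal(20, 5, 200),    # trunk flexion
    rng.normal(70, 10, 200),   # knee flexion
    rng.normal(40, 8, 200),    # back load
])
novice_poses = np.column_stack([
    rng.normal(55, 10, 200),
    rng.normal(30, 10, 200),
    rng.normal(90, 15, 200),
])
X = np.vstack([expert_poses, novice_poses])
y = np.array([1] * 200 + [0] * 200)  # 1 = expert-like, 0 = novice-like

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Instantaneous feedback on an apprentice's latest pose sample:
new_pose = np.array([[50.0, 35.0, 85.0]])
verdict = clf.predict(new_pose)[0]
print("expert-like form" if verdict == 1
      else "novice-like form: bend the knees, keep the trunk upright")
```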

3. AI can treat both physical and mental conditions.

Here’s a pair of excellent studies that display AI’s treatment potential for both physical and mental conditions. In the first, stroke patients get playful with robots, mimicking their motions and vice versa in a form of gamified physical therapy. The work is being done at Ben-Gurion University of the Negev (BGU) in Israel. “People report that when they play the mirror game they feel a sense of togetherness and closeness,” study author Dr. Shelly Levy-Tzedek said. “I think of it as a robotic revolution in rehabilitation.”

On the mental health front, work out of the University of Southern California and Carnegie Mellon University showed that veterans with post-traumatic stress disorder (PTSD) were more open about their symptoms when talking with a simulated human being than when answering anonymous surveys. “By receiving anonymous feedback from a virtual human interviewer that they are at risk for PTSD, they could be encouraged to seek help without having their symptoms flagged on their military record,” USC’s Gale Lucas said.

2. The future is full of science fiction-y ethical concerns.

A team of neuroscientists, neurotechnicians, clinicians, ethicists, and engineers called the Morningside Group wrote a lengthy warning in Nature about brain-computer interface (BCI) technology.

“People could end up behaving in ways that they struggle to claim as their own,” they wrote. The group warned that BCI technology could be vulnerable to exploitation by hackers and could fundamentally alter individual agency and people’s “private mental life.” They also worried that BCI-enhanced humans could themselves become weapons in international warfare. Fearing something of a superhuman arms race, the Morningside Group suggested that the world may need to cook up international treaties (like those that regulate nuclear weapons) for the use of BCI technologies.

1. You can be both an optimist and a realist about AI.

Stanford’s Jonathan H. Chen, MD, PhD, and Steven M. Asch, MD, MPH, wrote an important commentary in June about avoiding the “trough of disillusionment” while riding the AI hype coaster.

“I’m hearing very smart people buying into ideas when they don’t quite know what they’re buying into, basically. They’re hearing promises that they’re not going to follow up on, and I don’t think they quite understand what those mean,” Chen told HCA News in September. He and his colleague considered the lessons of the “AI winter” of the 1980s. AI’s capabilities must be respected, but the technology can’t be expected to solve all of healthcare’s problems.

“I actually think AI has huge potential for medicine,” he said, but, “If there’s a backlash because people overhype what’s possible too early on, it becomes harder to invest in the longer-term work.”
