
How AI Could Thwart The Next Large-Scale Cyberattack

The University of Aberdeen has received nearly $10 million to use AI to determine the human behaviors that lead to attacks, and how to modify them.

The University of Aberdeen is taking a leading role in preventing another large-scale cyberattack like WannaCry, which paralyzed the United Kingdom’s National Health Service in May, from happening again.

The Scotland-based university announced on its website Tuesday that it has received the equivalent of nearly $10 million to work with the UK Engineering and Physical Sciences Research Council (EPSRC) on preventing future malware attacks. Over three years, researchers will examine the human behaviors that open the door to hackers.

In a news release from the university, lead researcher Dr. Matthew Collinson said most cyberattacks begin with the exploitation of human behavior, usually in the form of a phishing email.

“One of the main problems faced by companies and organizations,” Collinson said, “is getting computer users to follow existing security policies, and the main aim of this project is to develop methods to ensure that people are more likely to do so.”

The grant will help support the university’s Supporting Security Policy with Effective Digital Intervention (SSPEDI) project, which will study better ways for organizations to ensure compliance with their cybersecurity protocols and policies. Researchers hypothesize that success will hinge on accounting for the “individual personalities and motivations” of end users: a critical component of compliance, they say, is making end users understand the risks involved and persuading them to comply.

The research funded by the grant will also complement ongoing artificial intelligence (AI) research at the university. Collinson said in the news release that the two efforts will support each other.

“In terms of AI, we will investigate how intelligent programs can be constructed which can use dialogue to explain security policies to users, and utilize persuasion techniques to nudge users to comply,” Collinson said.
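The project has not published an implementation, but the idea Collinson describes can be sketched as a simple dialogue agent that first explains a policy and then chooses a persuasion message tailored to the individual user. The class names, profile fields, policy text, and message templates below are hypothetical illustrations, not SSPEDI code.

```python
# Hypothetical sketch of a dialogue-style "nudge" agent: explain a security
# policy, then pick a persuasion message matched to the user's motivation.
# All names, fields, and text here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class UserProfile:
    name: str
    motivation: str       # e.g. "time-pressed", "risk-averse", "skeptical"
    past_violations: int  # how often this user ignored the policy before


POLICY = "Report any unexpected email attachment to IT before opening it."

PERSUASION_TEMPLATES = {
    "time-pressed": "Reporting takes under a minute; recovering from ransomware can take weeks.",
    "risk-averse": "A single opened attachment can be enough to trigger an outbreak like WannaCry.",
    "skeptical": "You have skipped this step {violations} time(s); each one is an opening for an attacker.",
}


def explain_policy() -> str:
    """Dialogue turn 1: state the policy in plain language."""
    return f"Policy reminder: {POLICY}"


def nudge(user: UserProfile) -> str:
    """Dialogue turn 2: choose a persuasion message for this user's motivation."""
    template = PERSUASION_TEMPLATES.get(
        user.motivation,
        "Following this policy protects both you and your colleagues.",
    )
    return template.format(violations=max(user.past_violations, 1))


if __name__ == "__main__":
    user = UserProfile(name="alex", motivation="time-pressed", past_violations=2)
    print(explain_policy())
    print(nudge(user))
```

A real system of this kind would presumably learn the user model and message selection rather than hard-code them, but the two-turn structure (explain, then persuade) mirrors the approach described in the quote.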

Collinson said researchers will use sentiment analysis to gauge participants’ attitudes toward security policies. This approach will help pinpoint what makes end users less likely to follow security protocol.
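The article does not name the tooling behind this step. As one hedged illustration of what scoring attitudes toward a policy might look like, the sketch below uses NLTK’s off-the-shelf VADER analyzer on invented sample comments; both the library choice and the data are assumptions.

```python
# Illustrative sketch of the sentiment-analysis step: score free-text comments
# about a security policy to flag users with negative attitudes, who may be
# less likely to comply. NLTK's VADER analyzer and the sample comments are
# assumptions, not details from the SSPEDI project.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

comments = {
    "alice": "The new two-factor login is quick and makes me feel safer.",
    "bob": "Changing my password every month is pointless and wastes my time.",
}

analyzer = SentimentIntensityAnalyzer()

for user, text in comments.items():
    # The compound score ranges from -1 (very negative) to +1 (very positive).
    score = analyzer.polarity_scores(text)["compound"]
    attitude = "negative" if score < -0.05 else "positive" if score > 0.05 else "neutral"
    print(f"{user}: {attitude} ({score:+.2f}) -> {text}")
```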
