The company's DeepMind subsidiary and the UK's Royal Free health system may not have done enough to inform patients that their data was in use. The two have been working to develop an alert application that analyzes the risk of kidney disease.
As the United States settled in to celebrate its independence, a subsidiary of one of its most successful companies was bringing ire to English institutions. British-based DeepMind, a Google-owned AI firm, apparently had access to roughly 1.6 million identifiable patient records through a questionable arrangement with the UK’s Royal Free National Health Service Foundation Trust.
In developing Streams, an app that analyzes blood tests to determine the risk of kidney disease, DeepMind collaborated with the Royal Free. The Information Commissioner's Office (ICO) on Monday announced that “the Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind.”
The ICO’s statement pointed to “several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be part of the test.”
Identifiable data is integral to what Streams does. As DeepMind’s site points out, the app needs identifying information such as name, age, date of birth, and NHS number to convey to physicians exactly which patient’s test has indicated a need for further kidney care. A posting from February on the Royal Free’s site champions the application for saving nurses’ time and potentially averting patient risks associated with acute kidney injury (AKI) complications.
Ultimately, DeepMind aims to prove Streams successful and apply its working principles to warning systems for other health conditions. Although it is an AI firm, the company says Streams does not apply AI techniques to its available patient data, claiming that it had intended to but that the “state of data and information flow in the NHS was not as good” as hoped.
The data at hand, however, did contain sensitive personal information, including diagnoses of bipolar disorder, depression, and HIV. DeepMind’s access to the information has been known since New Scientist learned the details of the arrangement more than a year ago, spurring the ICO investigation that this week determined the data sharing to be unlawful.
Information Commissioner Elizabeth Denham published a blog post in response to the situation, stressing that the “price of innovation didn’t need to be the erosion of legally ensured fundamental privacy rights.” She encouraged better discretion on behalf of NHS Trust members looking to collaborate with tech firms.
DeepMind itself is not accused of wrongdoing, but its own independent review panel noted that the original 2015 arrangement that resulted in this controversy could have been more detailed, and that the firm could have done more to inform the public of what it was doing.
"People are concerned about the power of big technology firms, and we felt that we should hold DeepMind to a very high standard because of its link to Google," said Dr. Julian Huppert, who heads the independent review panel.
Naturally, this is not Google's first run-in with the legal issues surrounding voluminous data aggregation projects. In 2013, German privacy regulators fined the company after Google Street View was found to have collected data from Wi-Fi routers during its mapping expeditions. The company’s policies regarding data collection for targeted advertising have long been a source of consternation as well. Given the company’s history of privacy controversy, and the fraught world of patient data security, it will certainly have close eyes upon it in future healthcare endeavors.