The FDA and other regulators must navigate changes brought by artificial intelligence and other tech while protecting intellectual property rights.
Duke University researchers are trying to determine how regulators can balance trade secrets and high-tech due diligence.
As artificial intelligence (AI) and big data become increasingly enmeshed in the healthcare system, the U.S. Food and Drug Administration (FDA) and other regulators are being asked to evaluate technical software and algorithms while at the same time protecting the intellectual property interests of software developers.
The tension between the need for trade secrecy and the need for a thorough understanding of healthcare interventions grips those at the forefront of healthcare technology. But potential solutions to the problem have yet to be fully explored.
Now, researchers at Duke University say they are trying to find a path forward. The Duke Law Center for Innovation Policy and the Duke-Robert J. Margolis, MD, Center for Health Policy have been awarded a $196,000 grant from the Greenwall Foundation to investigate how emerging AI-based treatments can be explained to patients, caregivers and physicians without the need to spill trade secrets. The Greenwall Foundation is a New York-based organization that supports research into bioethics.
In a press release announcing the grant, Arti Rai, J.D., a law professor and co-director of the Center for Innovation Policy, said the need for trade secrecy is not new, adding that "the importance of trade secrecy as an innovation incentive may have increased as a consequence of challenges associated with securing and enforcing software patents."
This conundrum has raised questions for players at all stages of the development process, from software developers and FDA regulators to patients, providers and insurers.
Rai will work alongside Gregory Daniel, Ph.D., MPH, the deputy director for policy and a clinical professor at Duke-Margolis, as well as the center's managing associate, Christina Silcox, Ph.D. The trio will publish a white paper designed to provide legal and regulatory background information on AI-based clinical diagnostic support software. A number of other Duke faculty members will also work on the project, which will culminate in the white paper, articles in peer-reviewed publications, and a conference at Duke's office in Washington, D.C.
The FDA has cleared a number of software products that incorporate AI in recent years, but challenges remain for health-tech companies.
"Developers in this space will need to learn about FDA clearance and approval processes," Silcox said. "Thus far, all of the AI-enabled software has been cleared through the 510(k) pathway or the De Novo pathway, both of which are less complicated and less expensive than the premarket approval required for Class III devices."
The 510(k) submission process allows companies to skip the premarket approval (PMA) process if they can show their product is as safe and effective as a product that is already on the market and that did not require PMA.
The De Novo process was created for products that are novel, meaning they have no direct predicate, but are considered low or moderate risk.
Class III devices sustain or support human life, carry a risk of substantial injury or aim to prevent serious impairment of human health. Products in this category are required to undergo greater scrutiny and a lengthier approval process.
Silcox said the questions covered under the grant aren't concerns for some distant future; they sit at the center of a "burgeoning area in healthcare today."
Silcox’s colleague Daniel said in a press release that it’s critical to approach that landscape with clear knowledge of both the medical and legal issues.
“By working in collaboration with Duke Law, we can move much more quickly to identify real-world policy approaches to support emerging technologies that incorporate AI in helping physicians and patients make better healthcare decisions,” he said.