
Pharma Looks to Convert Bigger Data into Smaller Costs


Can drug manufacturers use colossal swaths of data to deliver new pharmaceuticals more efficiently and cost-effectively? And if so, will those savings some day make their way to the patients?


There is a perception that a drug is just chemicals in a capsule or a serum, and that once the first has been made and approved, infinitely more can be produced at modest cost.

But that first one typically costs hundreds of millions, if not billions, of dollars.

As consumer drug prices have risen steadily over time, so too have pharma's R&D costs. A 2013 Forbes analysis estimated the R&D cost of bringing a single drug to market at $350 million, a figure that may be pulled down by smaller companies with only a single drug in their portfolios. For makers that released more than 3 drugs over the decade the study covered, the median development cost per drug leaped to over $4 billion; for those that released more than 4, Forbes found, the cost hit $5 billion. A 2012 Nature Reviews Drug Discovery study stated that “the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling about 80-fold in inflation-adjusted terms.”

But what if drug manufacturers could use colossal swaths of data to deliver new pharmaceuticals more efficiently and cost-effectively? And if they did, would those savings some day make their way to the patients?

COURSE CORRECTION

Big data and the pursuit of precision medicine may help the pharmaceutical industry course-correct. As firms across industries begin to harness and analyze the wide swaths of data at their disposal, there is a belief that pharma can make better drugs at reduced development costs.

“Less than 5% of the data that’s available is analyzed, so just think of the opportunity there,” Erwin’s Danny Sandwell says. Sandwell knows more than a bit about ubiquity in the world of data: His company has become synonymous with the modeling of large data sets (a search of any job site for “Erwin modeler” turns up countless reinforcements of the point).

Traditional systems, he explains, can only be pushed so far: “There is a limit to the scale, in terms of how much data you can stuff into it and have it perform. We started to see a lot of new data that was especially coming from the web, where it wasn’t just sales data, general ledger, that type of stuff: It was becoming less formalized, loose information.” He cites an influx of data in all different forms, from videos to tweets to handwritten documents. “Big data was technology that started to respond to these other massive streams that could not be managed by traditional [tools].”

Matt Brauer, a senior scientist in Genentech’s Research and Early Development arm, approaches big data from a biology and genomics perspective. Beyond development’s role of analyzing data points across countless studies, genomics itself is inherently a big-data problem.

“We can have a trial with a thousand patients, and that’s not a huge trial,” Brauer says. “But if we start getting genomic data on those patients, we’re talking about millions of data points now.”

Genentech is looking for biomarkers to partition its patient populations. “We discover with this data ways to be more precise in the treatments that we give them,” he says. This lowers the likelihood of patients spending money on drugs that won’t work for them and assuming the risks associated with those drugs; it also makes the trial process less expensive, he added.

“We can basically have the same statistical power with a smaller trial, which means the trials are less expensive, which means the drug development process is less expensive. The more we can get ahead of time with our patient population, the cheaper it is to develop the drug, and the less risk the patients ultimately take.” Beyond biomarkers, similar principles apply in the search for pathways to target in drug creation, Brauer says.
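
To make the arithmetic behind that claim concrete, here is a minimal back-of-the-envelope sketch in Python. The effect sizes are hypothetical, not Genentech figures; the point is simply that a biomarker-enriched population with a larger average treatment effect needs far fewer patients to reach the same statistical power.

import math
from scipy.stats import norm

def patients_per_arm(effect_size, alpha=0.05, power=0.80):
    # Standard normal-approximation sample-size formula for a two-arm trial
    # comparing means, two-sided test at the given significance and power.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Unselected population: responders are diluted, so the average effect is small.
print(patients_per_arm(effect_size=0.2))   # 393 patients per arm

# Biomarker-enriched population: same design, larger average effect.
print(patients_per_arm(effect_size=0.5))   # 63 patients per arm

Under these made-up assumptions, enrichment cuts the required enrollment by roughly a factor of six, which is the mechanism behind the "same statistical power with a smaller trial" point.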

THE LIMITS AND PROMISE OF BIG DATA

There are some limitations, however. Genomics is still a developing field, and although there are now millions of genomic data points from thousands of patients, the meaning of every marker and mutation is not yet understood. Speakers at the June Precision Medicine Summit in Boston reiterated that big data, as it pertains to genomics, is in its infancy. Still, the speakers said, the work is already producing results.

In addition to fine-tuning cohorts and identifying pathways, big data can help identify new uses for old drugs. Atul Butte, MD, PhD, is the director of the Institute of Computational Health Sciences at the University of California, San Francisco. He is also the founder of NuMedii, where he uses proprietary methods he developed at Stanford to mine big data for known disease biomarkers and gene-expression signatures, matching them to drugs that may address them.
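
The general signature-matching idea behind that kind of computational repurposing can be sketched simply. The following is only a loose illustration, not NuMedii's proprietary method; the gene names and values are invented for the example. Each candidate drug is scored by how strongly its gene-expression changes oppose the disease's expression signature, and the most strongly opposing drugs become candidates for follow-up testing.

# Disease signature: genes up- or down-regulated in the disease (invented values).
disease_signature = {"GENE_A": 2.1, "GENE_B": -1.4, "GENE_C": 0.9}

# Expression changes induced by each candidate drug (also invented).
drug_signatures = {
    "drug_1": {"GENE_A": -1.8, "GENE_B": 1.2, "GENE_C": -0.5},  # reverses the disease pattern
    "drug_2": {"GENE_A": 1.5, "GENE_B": -0.9, "GENE_C": 1.1},   # mimics the disease pattern
}

def reversal_score(disease, drug):
    # More negative means the drug's expression changes oppose the disease's.
    return sum(disease[gene] * drug.get(gene, 0.0) for gene in disease)

ranked = sorted(drug_signatures, key=lambda name: reversal_score(disease_signature, drug_signatures[name]))
for name in ranked:
    print(name, round(reversal_score(disease_signature, drug_signatures[name]), 2))
# drug_1 scores -5.91 and ranks first, flagging it as a repurposing candidate.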

Butte notes that there is competition: companies are forming rapidly in Boston, Pittsburgh, and the San Francisco Bay Area, all determined to leverage the power of machine learning and artificial intelligence in medicine.

“These contract research organizations are all over the world, and there are now websites that aggregate these companies: Assay Depot, Science Exchange. There are many others. So, you could learn how to do all of this testing for yourself or you could find a company that might be able to do the work for you. When the price is so low that you can order the same experiment from so many independent companies, the quality issue just goes away,” he said. The rapid rise of these companies also offers the ability to cross-check the results from one organization with the results from another, and another, and another, he said.

Progress has reached the point that the quality of work done by many traditional research institutions and laboratories is also being questioned, and in some cases found not to be reproducible, while newer, cheaper research groups can potentially do the job in less time and at higher quality.

CHALLENGES AHEAD

But despite the hype and possibility, big data still comes with a wealth of challenges. Although the pursuit of precision medicine becomes more evident and effective by the day, it is still developing. It isn’t clear what savings, if any, will be passed down to patients, and there remains fine-tuning to be done in how the data is applied.

Brauer says he believes the cost-effectiveness is already detectable in Genentech’s work. “We’re starting to design trials in a different way now, because we can get basic biomarker data.” Still, he doesn’t think the potential has come close to being reached yet. As such, he says, many pharmaceutical companies have begun to develop operations that more closely resemble Silicon Valley startups, replete with people who know coding first and, maybe, biology second.

A big hindrance to data’s potential in pharmaceuticals lies in the synthesis of the data. This is a 2-pronged problem: the sheer volume of data and data transparency.

“Pharma companies, payers, and electronic health records are sources of data, but each of these are siloed right now,” Butte says. A 2013 McKinsey report encourages “breaking the silos that separate internal functions and enhancing collaboration with external partners,” furthering the notion that data isn’t just siloed between organizations but often within them.

The external sharing discrepancy can cause jams before the approval stage, which becomes extremely costly if a company receives a rejection and is forced to scuttle its investment or sink more into additional studies to prove safety and efficacy.

Another difficulty is just how big the data is. The concept has had growing pains, according to Sandwell. “The challenge with big data is that in a lot of cases it’s sort of a black box, it’s very tough to integrate with a lot of traditional data because it’s so different. And in business, people saw it as something really cool, but they weren’t going to bet the farm on it and they weren’t going to trust it in the way they did this other data for the last 30 or 40 years.”

Brauer also spoke to the desire to avoid letting big data become a black box. Many consider pharma a late adopter when it comes to data analytics: given the industry’s consistent profits, it didn’t face the burning imperative that other sectors did.

Some experts argue that the initial cost of development only partially explains the complex conditions that keep pharmaceuticals expensive for US consumers. A 2016 JAMA review of high consumer costs contends that R&D costs serve as an excuse for pharmaceutical pricing rather than a true reason, citing the industry’s profit margins.

Regardless, the data is there, growing, mostly unexplored, and full of potential. Brauer brings it back to a question of ethics.

“I think we sort of have an ethical obligation to get as much value out of that data as possible,” he says. “This is not science fiction, there’s a path forward for this, and I think in the next few years we’ll start to see that.”
