David R. Henderson  

FDA and Patent System Discourage Research on Early Stage Cancer


Last week I highlighted a comment by Econlog reader Jim Glass. Since then, I've looked closely at the study he cited. The study, titled "Do Firms Underinvest in Long-Term Research? Evidence from Cancer Clinical Trials," was published in the American Economic Review, Vol. 105, No. 7, 2015. Here's an ungated version.

The title is accurate but misleading. It gives no hint that either the patent system or the FDA contributes to the problem. Of course a title can do only so much in a few words. But even the abstract, although it mentions the patent system, is silent about the FDA's role in causing the problem.

Fortunately, the study is much better than the title and the abstract. The authors are Eric Budish of the University of Chicago's Booth School of Business, Benjamin N. Roin of MIT's Sloan School of Management, and Heidi Williams of MIT's Economics Department.

Here are their opening sentences:

Over the last five years, eight new drugs have been approved to treat lung cancer, the leading cause of US cancer deaths. All eight drugs targeted patients with the most advanced form of lung cancer, and were approved on the basis of evidence that the drugs generated incremental improvements in survival. A well-known example is Genentech's drug Avastin, which was estimated to extend the life of late-stage lung cancer patients from 10.3 months to 12.3 months. In contrast, no drug has ever been approved to prevent lung cancer, and only six drugs have ever been approved to prevent any type of cancer.

Why would this be? Think about a drug company that invests in R&D on a drug to prevent lung cancer. If the test of efficacy is extension of life, think how many years--and how many test subjects--the experiment would require. It could well be decades. How long do patents last? 20 years. So if the company thought it had a winner for preventing cancer, and patented that winner, the odds are low that it would have any meaningful time left in which to collect the monopoly proceeds.
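The point about the patent clock can be made concrete with a stylized calculation (the trial durations below are my own illustrative numbers, not figures from the paper):

```python
# The patent clock starts at filing, so years spent in clinical trials
# eat into the exclusivity window left to recoup R&D costs.

PATENT_TERM = 20  # years from filing

def effective_exclusivity(trial_years: float) -> float:
    """Years of patent protection remaining after trials finish."""
    return max(PATENT_TERM - trial_years, 0.0)

# A late-stage treatment trial measuring survival in very sick patients
# might take a few years; a prevention trial tracking healthy subjects
# until mortality differences emerge could take decades.
for label, years in [("late-stage treatment", 3), ("prevention", 25)]:
    print(f"{label}: {effective_exclusivity(years):.0f} years of exclusivity left")
```

With a 25-year trial, the patent expires before the drug ever reaches market, which is exactly the incentive problem the authors describe.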

Who decides that the relevant test for efficacy is extension of life? The FDA. Why couldn't the FDA use other measures? It could. The authors suggest instead that the FDA use surrogate endpoints, and point out that the FDA has allowed surrogate endpoints to be used for drugs to prevent other diseases.

They write:

This evidence suggests that--in the case of hematologic cancers--apparently-valid surrogate endpoints were effective in increasing R&D investments on innovations that would otherwise have had long commercialization lags, and that the resulting increases in R&D translated (in this case) into real gains in patient health. While much attention has been focused on the risks and costs of using surrogate endpoints that may imperfectly correlate with real improvements in patient health, our analysis is--to the best of our knowledge--the first attempt to use the historical record to quantify how the availability and use of a valid surrogate endpoint affected R&D allocations and patient health outcomes.

The example of the Framingham Heart Study is helpful in illustrating the potential value of surrogate endpoints. Heart disease is the leading cause of death in the US, but since 1968 the age-adjusted rate of deaths from heart disease has dropped by 50 percent. Although some of these gains are due to lifestyle changes, much of the decline in heart disease has been attributed to improved pharmacological preventives and treatments for cardiovascular disease, including the development of beta-blockers, ACE-inhibitors, and statins (Weisfeldt and Zieman 2007). Patients use these drugs to reduce the morbidity and mortality from heart disease, but very few of these drugs reached the market based on clinical trials using morbidity or mortality as the endpoint. Rather, almost all were approved based on evidence that these drugs lowered either blood pressure or LDL (low-density lipoprotein) cholesterol - outcomes that can be measured much more quickly than morbidity and mortality (Psaty et al. 1999). These surrogate endpoints were first identified by the Framingham Heart Study, a large-scale, multi-decade, federally-funded observational study which found that high blood pressure and LDL cholesterol are critical risk factors in cardiovascular disease. Subsequent clinical trials helped to validate these prognostic factors, which led the FDA to accept them as surrogate endpoints in cardiovascular trials (Meyskens et al. 2011). Researchers have argued that without these surrogate endpoints, it is unclear whether drugs such as beta-blockers, ACE-inhibitors, and statins would have reached the market as treatments for heart disease (Lathia et al. 2009; Meyskens et al. 2011). Note that public subsidies--such as federal support for the Framingham study--were likely important in this context, because any individual firm's investment in discovering and validating surrogate endpoints would generate benefits that largely spill over to other firms.
Both our empirical evidence on the effects of surrogate endpoints for hematologic cancers and this historical case study for heart disease suggest that research investments aimed at establishing and validating surrogate endpoints may have a large social return.


How big is that return? Huge. They write:
In total, this calculation suggests that among this cohort of patients - US cancer patients diagnosed in 2003 - the longer commercialization lags required for non-hematologic cancers generated around 890,000 lost life-years.

If we value each lost life-year at $100,000 (Cutler, 2004), the estimated value of these lost life-years is on the order of $89 billion per annual patient cohort. Applying a conservative social discount rate of 5% and assuming that patient cohorts grow with population growth of 1%, the net present value of the life-years at stake is $89 billion divided by (.05 - .01) = $2.2 trillion.

It is important to note that this life-lost estimate is rough at best. Our point estimate of the value of life lost per annual patient cohort is $89 billion, with a 95 percent confidence interval that ranges from $7 billion to $172 billion; the net present value point estimate of $2.2 trillion has a 95 percent confidence interval that ranges from $170 billion to $4.2 trillion.
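The arithmetic in the quoted passage is a standard growing-perpetuity valuation, and it checks out. Here is a short sketch reproducing it (all figures are taken from the quote above):

```python
# Back-of-the-envelope from Budish, Roin, and Williams: 890,000 lost
# life-years per annual cohort, valued at $100,000 each (Cutler 2004),
# capitalized at a 5% discount rate with 1% cohort growth.

def npv_growing_perpetuity(annual_value: float, discount: float, growth: float) -> float:
    """Present value of an annual stream growing forever at rate `growth`."""
    return annual_value / (discount - growth)

lost_life_years = 890_000
value_per_life_year = 100_000                          # Cutler (2004)
per_cohort = lost_life_years * value_per_life_year     # $89 billion
npv = npv_growing_perpetuity(per_cohort, 0.05, 0.01)

print(f"per cohort: ${per_cohort / 1e9:.0f} billion")  # $89 billion
print(f"NPV: ${npv / 1e12:.1f} trillion")              # ≈ $2.2 trillion
```

The wide confidence interval the authors report reflects uncertainty in the underlying life-years estimate, not in this capitalization formula.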


Interestingly, an extensive MIT press release on their study also fails to mention the negative role of the FDA. And while the release gives three alternative ways to deal with the problem--more use of surrogate endpoints, government funding, and altering the patent length--it does not even hint that use of surrogate endpoints would require FDA approval.

Fortunately, the study does.

HT2 Jim Glass, Dan Klein, and Jason Briggeman.


COMMENTS (22 to date)
Daniel Klein writes:

The MIT press release is titled:

"Study: Firms 'underinvest' in long-term cancer research"

That press release and the article itself show the Orwellian nature of elite research that tiptoes around the baneful consequences of the FDA-administered banned-till-permitted system created by fool politicians.

Peter writes:

David,

I think you are judging the FDA here unfairly. Note that the authors of the paper on which you are commenting wrote:
"research investments aimed at establishing and validating surrogate endpoints may have a large social return"

Does the FDA allocate public research money among possible medical research projects? I have never heard that it does. (Could you cite some evidence that it does?) If it does not, then blaming it here makes no sense.

Maybe you want the FDA to approve drugs based on NON-VALIDATED surrogate endpoints. It already does that (and has been doing it for many years). For instance, in the area of type II diabetes treatment. The cardiologist Steven Nissen (chair of cardiovascular medicine at the Cleveland Clinic) argued in an editorial published in the Annals of Internal Medicine (Nov 6 2012) that reliance (in drug approval decisions) on a NON-VALIDATED surrogate endpoint, namely HbA1c (a measure of blood glucose control), retarded progress in the development of diabetes drugs. He wrote:

"An anachronistic regulatory policy requiring only that new diabetes drugs show that they lower blood glucose levels without obvious safety problems, not that they improve clinical outcomes, allowed industry to avoid performing studies on CV [cardiovascular] outcomes [heart attacks, stroke, death]. Thoughtful academics have criticized the reliance on biochemical measures as a surrogate for clinical benefit, because numerous surrogates have failed to show a consistent link with actual clinical outcomes."
Cardiovascular Effects of Diabetes Drugs: Emerging From the Dark Ages. Annals of Internal Medicine, 2012, 157(9) (you can find this 2-page editorial with a Google search)

I agree that public money should be invested in establishing and validating surrogate endpoints. However, like Nissen, I don't see how approving drugs based on non-validated surrogate endpoints leads to therapeutic progress.

Peter writes:

Maybe of interest (ungated PDFs can all be found through Google):

Use of surrogate outcomes in US FDA drug approvals, 2003–2012: a survey. BMJ Open 2015;5:e007960. doi:10.1136/bmjopen-2015-007960

Therapeutics letter: The limitations and potential hazards of using surrogate markers. Oct-Dec 2014

Surrogate Outcomes in Clinical Trials: A Cautionary Tale. JAMA Internal Medicine, 173 (8), APR 22, 2013

Editorial: Licensing drugs for diabetes: Surrogate end points are not enough, robust evidence of benefits and harms is needed. BMJ, 11 Sept 2010, Vol. 341

The idolatry of the surrogate. BMJ, 2011;343:d7995 doi: 10.1136/bmj.d7995, 28 December 2011

David R. Henderson writes:

@Peter,
I think you are judging the FDA here unfairly.
If so, that means that the three authors are judging it unfairly also. But I strongly recommend that you read their paper.

John Goodman writes:

Excellent post

Peter writes:

David,

thanks for posting on this paper (I had not heard about it). It is interesting and important.

But regarding my first post: The authors' analysis proceeds from the assumption that we have valid surrogate endpoints. Then they show that allowing valid surrogate endpoints in drug approval strictly increases commercialization activity, firm profits, and social welfare.

In footnote 24 (p.15) they state "The use of invalid surrogate endpoints could increase R&D investments but not generate any corresponding gains in survival". And hence would not increase social welfare. It would instead redistribute wealth from patients to the pharma industry.

So when you speak ill here of the FDA you are holding it responsible for the lack of valid surrogate endpoints. That does not seem fair since the FDA - to my knowledge - does not have a budget for medical research from which it could fund the discovery and validation of surrogate endpoints.
It is misleading to say, as you state in your blog post's title, that the FDA discourages early stage cancer research.

Yes, when the FDA requires valid surrogate endpoints for drug approval, it does discourage early stage cancer research. But this is a double-edged sword: If it were to allow non-validated surrogate endpoints we would end up with drugs that provide negative, zero or positive patient benefits and no way to know which drug belongs in which of these 3 classes. The net impact on social welfare would be indeterminate. The solution seems to be to allocate public research money for this public good: discovery and validation of surrogate endpoints.

Examples of drugs, approved on the basis of a non-validated surrogate endpoint (HbA1c), that provided negative patient benefits: troglitazone (marketed as Rezulin) and rosiglitazone (Avandia) for the treatment of type II diabetes.

David R. Henderson writes:

@Peter,
So when you speak ill here of the FDA you are holding it responsible for the lack of valid surrogate endpoints.
No. I’m holding it responsible for setting the standards it sets.
That does not seem fair since the FDA - to my knowledge - does not have a budget for medical research from which it could fund the discovery and validation of surrogate endpoints.
It doesn’t need a budget for that. It can just ask the drug companies and expert panels to suggest such end points.
Examples of drugs, approved on the basis of a non-validated surrogate endpoint (HbA1c), that provided negative patient benefits: troglitazone (marketed as Rezulin) and rosiglitazone (Avandia) for the treatment of type II diabetes.
Any method of approval will have Type 2 errors. That’s not enough of an argument. And notice, I am guessing given your examples, that these drugs are no longer widely used. So the system of letting them out there and then letting them fail on their own demerits worked.
@John Goodman,
Thanks.

Peter writes:

David,
I’m holding it [the FDA] responsible for setting the standards it sets.
If you look at this paper, you'll find that the FDA does not seem to have a consistent standard on allowing surrogate endpoints in drug approval:
Use of surrogate outcomes in US FDA drug approvals, 2003–2012: a survey. BMJ Open 2015;5:e007960
Moreover, neither the authors of the AER paper nor you have provided an argument to the effect that we would be better off by consistently allowing non-validated surrogate endpoints in drug approval.

It [the FDA] doesn’t need a budget for that [discovery and validation of surrogate endpoints]. It can just ask the drug companies and expert panels to suggest such end points.
Validating surrogate endpoints requires clinical trials, which cost money to run. The issue is not just suggesting such end points. And expert opinion cannot substitute for validation (as you would certainly point out if I said something like this: Let's increase the minimum wage because well-credentialed economists say it's a good policy.)

Any method of approval will have Type 2 errors. That’s not enough of an argument.
Likewise, even the least stringent methods of approval will have some benefits. That's not enough of an argument either. I actually said more than that when I wrote:
"Yes, when the FDA requires valid surrogate endpoints for drug approval, it does discourage early stage cancer research. But this is a double-edged sword: If it were to allow non-validated surrogate endpoints we would end up with drugs that provide negative, zero or positive patient benefits and no way to know which drug belongs in which of these 3 classes. The net impact on social welfare would be indeterminate."

Vacslav writes:

David,

Cancer research and the process of drug development and approval in general all suffer from deeper problems.

Pharma and FDA and medical research all share the idea that an aggregate result over a sufficiently large population is a good proxy for scientific truth. It is believed that behind every individual medical outcome there is a hidden data generating process which is revealed by averaging over many repetitions, preferably in a randomized trial. Aggregating over many individual results is taken as the sole tool capable of extracting scientific truth from the noisy background. And any reasonable improvement in the aggregate, average outcome is considered a worthy achievement.

The problem with this approach is that uncertainty behind medical outcomes is not just noise around the true but hidden outcome. It's not a bug. It's a feature.

Medical research, drug development, and regulators all fail to recognize the true nature of medical uncertainty. This costs enormous sums of money, numerous lives lost unnecessarily, and a great deal of misdirected effort.

I study and explore these ideas in my book "Probabilistic medical decision making: a primer for scientists, professionals and policy makers."

Bedarz Iliaci writes:
high blood pressure and LDL cholesterol are critical risk factors in cardiovascular disease.

Medical science is walking back from its previous claims regarding LDL. Eggs and butter are now regarded as unproblematic and even good. Those that never bought into the LDL phobia are vindicated.

Michael Byrnes writes:

I don't really agree with the FDA efficacy requirements, but this post (and perhaps the paper it is based on?) seems to tell a very one-sided story. Any policy, even communism, will look good if its negative effects are ignored or greatly understated.

The best example of the negative consequences of being overly accepting of surrogate endpoints is the widespread use of cholesterol-lowering drugs for primary prevention. People want to avoid heart attacks, but no one cares about maintaining appropriate levels of LDL cholesterol per se (except insofar as we believe that doing so will prevent heart attacks).

The scientific evidence suggests that some, but by no means all, cholesterol-lowering drugs can indeed prevent heart attacks when used in secondary prevention in some patient cohorts. Specifically, statins (but not necessarily other types of cholesterol lowering drugs) can reduce the risk of death in middle-aged men who have already had a heart attack or other sign of severe CVD.

By contrast, the data on risk reduction for cholesterol lowering drugs with other mechanisms of action (eg, non-statins), for statins in primary prevention, and even for statins in secondary prevention in other cohorts (for example women of any age, elderly men) are far weaker. But, because the FDA accepted lowering of cholesterol as a surrogate endpoint, cholesterol lowering drugs are widely used in these populations despite their questionable benefit for prevention of death.

Use of the wrong surrogate endpoint can have disastrous consequences.

There's another problem with the argument in this post: it's not at all clear that patent protection specifically is responsible for the focus on later stage cancers. Pharma companies want to run trials that are successful in proving efficacy, and they want to run trials that don't cost a fortune. Even without the patent issue, this is going to bias research towards later stages of disease, when trials fully powered to detect a treatment benefit can often be run with fewer patients and for shorter durations. It's plausible that patent protection is a factor, but not necessarily "the" factor.

By the way, there is potentially a way that this question (role of patent protection apart from other factors in driving development towards late stage disease) could be studied. In recent years, companies are finding ways of exploiting FDA regulations to limit the impact of the loss of patent protection. One way is around biologic therapies - the requirements for getting a biosimilar drug approved are more complex than those of a typical generic. Another is that some FDA approvals require a closed distribution system (a "risk evaluation and mitigation" system). A drug approved under such a system can only be provided to registered patients - so potential competitors have no means of acquiring the drug for the purpose of conducting biosimilarity research. If patent protection was the driving force in the focus on late disease stage therapies one might expect to see more of a focus on earlier disease stages where these workarounds exist.


Nathan W writes:

Markets aren't very good at long-term investments. Why? Because shareholders want payouts next year, the year after, or maybe in ten or twenty years. But how many shareholders can wait 40 years?

This is the right place for government action, in the form of funding for cancer prevention. But, don't they already do that?

I think the comparison to heart-health drugs is not very useful, because there are known precursors to heart disease, but there are no known precursors to cancer (except lifestyle decisions and a handful of genetic markers).

ThaomasH writes:

This discussion of FDA policies and procedures needs to be put into a proper cost benefit framework for both the type one and type two errors. If it shows systemic errors, those errors should be traced to errors in the incentive framework.

At times Henderson seems to be looking only at the lost benefits of not approving safe and effective drugs and to be overlooking the costs of approving ineffective or unsafe drugs.

Perhaps this has some implications for NIH investments and patent policy as well as the need to continue drug evaluation after approval, particularly those based on biochemical endpoints.

Stuart Buck writes:

It is odd to keep reading articles on the FDA and drug development that seem to be completely oblivious to the actual facts. For example, one investigation noted that 74% of the cancer drugs approved by the FDA over the past decade relied on surrogate endpoints.

One could imagine an argument that this is still too low a percentage, but such an argument could only come from authors who didn't think that "most cancers use surrogate endpoints only on a limited, somewhat ad hoc basis."

More specifically, here's a response to the econ article from someone who is actually familiar with cancer drug development:

http://www.vinayakkprasad.com/other-writing/

Good quotes:

The reality is, when validated, we already use surrogate approvals. Where they are not validated, we don’t use surrogates—but that is for good reason: who would want to release novel chemoprevention drugs that decrease substance X levels, but have unknown effects on cancer death and survival? The bar for asking a healthy person to use a medication should be higher not lower than the bar for symptomatic patients with a dire condition.
How are these articles reviewed? Do any academic oncologists read these articles? Is it OK to have a complicated model in economics—even if it tells you nothing about reality? What is the point of such precise mathematical equations, when the inputted data is so coarse? And... who is giving out these genius awards? Have they not heard of John PA Ioannidis?
David R. Henderson writes:

@Stuart Buck,
Thanks for giving that cite by Vinay Prasad. Here’s the link.
He makes a lot of good points that I was unaware of. He has a lively and incisive writing style that’s refreshing in academia.
You highlighted a quote from him above that I would like to highlight part of. It’s this one:
The bar for asking a healthy person to use a medication should be higher not lower than the bar for symptomatic patients with a dire condition.
The word “asking” is interesting. I don’t think anyone is asking that healthy people use medication. The debate is over whether to allow them to, which is very different.
I think this gets at one of the main differences between your outlook and mine: you tend to believe in a central planner that decides what drugs we are allowed (not asked) to take. I oppose a central planner that has that power over us.

Stuart Buck writes:

People do not generally self-prescribe cancer medications. They do what their doctor recommends. So to me, this issue is very much about when people should be advised to take medication. As Vinay Prasad says, it is one thing to say that people on their deathbeds should be free to take whatever has the slightest chance of working. It is another thing entirely to say that otherwise-healthy 45-year-olds should be advised by doctors to take drugs that may lower some supposed marker of cancer when we have no idea whether that drug even lowers the actual death rate from that cancer, let alone whether it has other long-term lethal side effects that outweigh any supposed benefit.

It's not that I am such a believer in central planning, but neither do I think it makes a whole lot of sense to envision otherwise healthy middle-aged people self-medicating with highly toxic drugs that probably have no long-term benefit and often cause long-term harm (without any way of knowing either fact, if there are no procedures for gathering evidence from randomized trials).

David R. Henderson writes:

@Stuart Buck,
I accept your substitute of the word “advised” for “asked.” That’s much better.
It's not that I am such a believer in central planning, but neither do I think it makes a whole lot of sense to envision otherwise healthy middle-aged people self-medicating with highly toxic drugs that probably have no long-term benefit and often cause long-term harm (without any way of knowing either fact, if there are no procedures for gathering evidence from randomized trials).
Then the crucial piece of evidence for whether you’re a believer in central planning is whether you are willing to allow people to take drugs in the absence of such evidence. I gather that you are not willing. Am I wrong?

Stuart Buck writes:

I would never punish any individual for putting anything in their own bodies, including recreational drugs. But I would retain the requirement that before drug companies market a drug as putatively preventing some disease -- thereby setting into motion an entire ecosystem wherein doctors recommend that drug, and health insurance companies and Medicare all have to pay tens of thousands of dollars for that drug -- we ought to have some rigorous evidence that the drug has ANY benefit for anyone.

Without the clinical trials on efficacy, we might as well say that Medicare should pay $10,000 for a homeopathic cancer remedy made out of water with minute traces of sugar. Such a "drug" would be just as beneficial as some cancer drugs and would do far less harm via toxic side effects.

Charley Hooper writes:

@ Stuart Buck,

The reality is, when validated, we already use surrogate approvals. Where they are not validated, we don’t use surrogates—but that is for good reason: who would want to release novel chemoprevention drugs that decrease substance X levels, but have unknown effects on cancer death and survival?

AL amyloidosis is a fatal condition. NT-proBNP, a surrogate marker, is strongly correlated with improved cardiac function and prolonged survival. However, the FDA requires survival studies for this condition, even though a top physician said, "using NT-proBNP instead of mortality could halve the length of AL trials, from six to three years."

Who would want a drug approved based on NT-proBNP? I would. These patients are already going to die. Some of the drugs being developed for AL amyloidosis have already been shown to be very safe. Why not give these patients a chance? And a choice?

R Richard Schweitzer writes:

Start the limitation of the period of patent protections from the date when use is permitted.

John Hayes writes:

Stuart Buck,

I would like to make a different observation than David's. You seem to jump between two scenarios: either a drug is approved and known to be effective, or it's somewhere between harmful and useless. Most things are somewhere in between: approved drugs are partially useful (or effective in specific scenarios), and most other stuff isn't going to be that harmful. More important, almost no one wants to poison themselves; even when considering approved medication, people worry about self-poisoning, and they're liable to stop taking a drug if its side effects feel too much like poison. Avoiding side effects is a direction of pharma research as valuable as broader treatment, because treatments that are discontinued aren't effective.

All in all, I think the FDA takes too much credit for the safety of drugs. Individual decisions (which we could characterize as the market) would quickly eliminate ineffective or harmful drugs. Add a money-back guarantee on drugs and you would have private incentives to purge ineffective pharma products.

Richard writes:

Not just drugs. Incredibly, the FDA must approve improved cancer screening tests (e.g., improved blood or urine tests for prostate cancer) before they may be offered in the US. This results in their being denied to US patients for years after they become widely available in Europe.
