Playing Hide-and-Seek With Psychiatric Drug Studies
by Carl Elliott
If I were in charge of distributing NIH grant money, I’d be sending a lot of it to researchers like Erick Turner, a psychiatrist at Oregon Health & Science University and a former FDA reviewer. You might remember Turner’s name from a terrific study of antidepressants he led a few years ago that wound up in the New England Journal of Medicine. The question he asked was simple. Does the published medical literature accurately reflect the available data on antidepressants? The methodology Turner and his colleagues used was equally simple (although exhaustively detailed and time-consuming to execute). They compared published studies of antidepressants with the studies listed in FDA drug approval packages. What did they find? Well, if a study was positive for the antidepressant, it was almost certainly published. But if the study came out poorly for the antidepressant, it was probably never published — and when it was, the published version was given a positive spin.
This week, Turner and his colleagues performed the same trick with second-generation antipsychotic drugs, this time for PLoS Medicine. Although their results for the antipsychotics were not as pronounced as their results for the antidepressants, the pattern was similar. Of the 24 FDA-registered trials, 4 were never published, and, of course, those unpublished trials had all turned out poorly for the sponsor. (Three of them failed to show that the study drug was better than a placebo, and the other showed that the drug was statistically inferior to the control drug.) Of the 20 trials that were published, 15 were positive, and those that were negative showed evidence of bias in favor of the sponsor’s drug. “Some of what we found could constitute spin, some would fall into the category of shenanigans,” Turner told ABC News. “The take-home message is there are loopholes in the publication process by which doctors may be relying on information that’s incomplete or somehow skewed. The drug’s effects may be exaggerated or its safety concerns may be downplayed.”
Turner’s article adds to a growing body of evidence showing just how biased the medical literature on antipsychotic drugs has come to be. An especially striking paper, published in 2006 in the American Journal of Psychiatry, was called “Why olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine.” (Or, to translate that title into brand names, “Why Zyprexa beats Risperdal, Risperdal beats Seroquel, and Seroquel beats Zyprexa.”) The researchers looked at head-to-head comparative trials of antipsychotic drugs and found that 90 percent of the time, the sponsor’s drug beat its competitors. If Lilly sponsored the trial, Zyprexa won. If Janssen sponsored the trial, Risperdal won. And if AstraZeneca sponsored the trial, Seroquel won.
Critics of the drug industry often point to studies like this as evidence that we can’t trust what we read in medical journals anymore. That’s probably true, but there is also a deeper ethical problem. All of these psychiatric drug studies involve human subjects, who are taking real risks when they sign up for clinical trials. How many of them would consent to a trial if they understood that the sponsor was going to bury or spin the results in order to market its drug?