Meta-analyses are great research tools because they let researchers pool data across multiple published studies and see whether there are stronger (or weaker) effects that no single study has found on its own.
So it’s always interesting to read about something a meta-analysis finds in the data that individual studies didn’t quite find.
Today, British researchers reported, unsurprisingly, that antidepressant data show the drugs are not as effective as thought. I say unsurprisingly because the researchers made a series of decisions that pretty much guaranteed their end result.
First, they went to the original datasets and included unpublished data too. Unpublished data is usually unpublished for a reason — for instance, the study was poorly designed (failing to account for some variable that made the conclusions useless), or it had insignificant findings (e.g., placebo worked just as well as Drug A). Including all those studies with insignificant results will, by simple averaging, bring down the pooled efficacy of any drug being examined. There is no drug on the market today that doesn’t have a study (likely unpublished) showing the drug had no significant effect on whatever it was being studied for.
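The averaging point above can be sketched in a few lines. This is a toy illustration with made-up effect sizes, not data from the actual meta-analysis: mixing near-null unpublished trials into the pool drags the average effect down.

```python
# Toy sketch (all numbers hypothetical): averaging in unpublished
# near-null trials pulls a drug's pooled effect size down.

published = [0.45, 0.50, 0.40]    # effect sizes from published trials (made up)
unpublished = [0.05, 0.00, 0.10]  # near-null unpublished trials (made up)

pooled_published = sum(published) / len(published)
pooled_all = sum(published + unpublished) / len(published + unpublished)

print(f"published only: {pooled_published:.2f}")  # 0.45
print(f"all trials:     {pooled_all:.2f}")        # 0.25
```

A real meta-analysis would weight each study (e.g., by inverse variance) rather than take a plain average, but the direction of the shift is the same.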
Second, the researchers looked at data in a single slice of time (1987–1999). While their findings are true for that time period, in the intervening 19 years many additional studies on the effectiveness of the seven SSRI antidepressants (only four of which made it into this study) have been published. Does that mean the researchers’ findings are invalid? No, it just means that the FDA trial data — the dataset that should be the strongest and make the most compelling argument for a drug’s approval by the FDA — was pretty darned weak when pooled and looked at together. It would be interesting if the researchers could do a similar analysis of the 19 years’ worth of data acquired since then and see whether they found similar results (an impossibility, by the way, because nearly all drug companies still don’t release unpublished data on their drugs).
Third, researchers love to argue details and specifics. Is a 1.8-point change on the Hamilton depression scale clinically significant, or do you need a 3-point change? Well, the British National Institute for Clinical Excellence (NICE) published a clinical guideline in 2004 saying you need that 3-point difference, and since those folks are far smarter than I am, I agree with them. But of course the U.S.-based FDA doesn’t use British guidelines for determining clinical efficacy and, ultimately, drug approval (although it may consult such guidelines).
Patients taking a placebo, or sugar pill, had nearly an 8-point improvement on the Hamilton depression scale, a clinician-based rating of a patient’s depression. People taking one of the four studied antidepressants had nearly a 10-point improvement on the same scale. So while people taking an antidepressant did somewhat better than their sugar-pill counterparts, the roughly 2-point difference isn’t likely a change a patient could feel or that others would notice.
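The arithmetic above is simple enough to check directly. The exact improvement values below are assumptions chosen to match the article’s “nearly 8” and “nearly 10” points (and its 1.8-point figure); the 3-point threshold is NICE’s 2004 guideline:

```python
# Drug-vs-placebo difference on the Hamilton scale, compared against
# NICE's 3-point clinical-significance threshold.

placebo_improvement = 7.8  # assumed: "nearly an 8 point improvement"
drug_improvement = 9.6     # assumed: "nearly a 10 point improvement"
nice_threshold = 3.0       # NICE 2004 clinical guideline

difference = drug_improvement - placebo_improvement
print(f"difference: {difference:.1f} points")                      # 1.8 points
print(f"clinically significant? {difference >= nice_threshold}")   # False
```

Both groups improved a lot; it’s only the gap between them that falls short of the clinical-significance bar.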
The upshot of this research is to show how weak these four antidepressants’ data were, and that the FDA approved these drugs despite that weakness. Perhaps the weakness could not be seen individually, in each study’s data; if that’s the case, the FDA should now conduct its own internal meta-analyses on a single drug (or class of drugs) every year, to ensure its decisions remain valid in a more objective, empirical light.
- Antidepressants: Meet the New News, Same as the Old News from CL Psych
- British Researcher Gives Thumbs Down To Anti-Depressants from Furious Seasons