More on Infamous Paxil Study 329
In a rare behind-the-scenes disclosure (the result of a lawsuit), the public is getting one of its first looks at the lengths some pharmaceutical companies will go to in order to publish positive results about their drugs. And they do it using the very peer-review process that is supposed to prevent abuses by researchers and drug companies and to provide other professionals (and the public) with objective data, the same peer-review process the U.S. Food and Drug Administration (FDA) relies on in approving medications as safe and effective.
CL Psych provides us with a further analysis of Paxil Study 329, one in which the researchers apparently went to great lengths to find efficacy. Why the re-examination of this study?
Because another study was just published in the International Journal of Risk and Safety in Medicine. The new study examined the internal documents, full dataset, and drafts that were released in connection with a lawsuit against the makers of Paxil. The damning findings from the new study?
5.1. Were the results for study 329 positive or negative?
There was no significant efficacy difference between paroxetine and placebo on the two primary outcomes or six secondary outcomes in the original protocol. At least 19 additional outcomes were tested. Study 329 was positive on 4 of 27 known outcomes (15%). There was a significantly higher rate of SAEs with paroxetine than with placebo. Consequently, study 329 was negative for efficacy and positive for harm.
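The numbers quoted above reward a moment of arithmetic. A back-of-the-envelope calculation shows why finding 4 "positive" results among 27 tested outcomes is unimpressive: with that many outcomes, chance alone is expected to produce some significant-looking hits. This sketch is illustrative only; it assumes independent outcomes each tested at the conventional 0.05 significance level, which was not literally the case in Study 329.

```python
# Illustrative arithmetic only (assumes independent outcomes,
# each tested at alpha = 0.05; not a reanalysis of Study 329).

ALPHA = 0.05
N_OUTCOMES = 27   # 8 protocol-specified + at least 19 added later
N_POSITIVE = 4    # outcomes reported as positive

# Share of outcomes that came out positive (the "15%" in the text).
positive_share = N_POSITIVE / N_OUTCOMES

# Under the null hypothesis (drug does nothing), how many outcomes
# would we expect to cross the significance threshold by chance?
expected_false_positives = N_OUTCOMES * ALPHA

# Probability that at least one outcome looks significant by chance.
p_at_least_one = 1 - (1 - ALPHA) ** N_OUTCOMES

print(f"Positive outcomes: {N_POSITIVE} of {N_OUTCOMES} ({positive_share:.0%})")
print(f"Expected chance positives: {expected_false_positives:.2f}")
print(f"P(at least one chance positive): {p_at_least_one:.0%}")
```

Under these (admittedly simplified) assumptions, roughly 1.35 false positives are expected and the odds of at least one are about 75%, which is why adding outcomes after the fact is such an effective way to manufacture "significant" findings.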
5.2. Did selective reporting occur?
Claims that paroxetine was “generally well tolerated and effective” arose from selective reporting of the 15% of outcomes that were positive and selective under-reporting of the other efficacy and SAE findings. The JAACAP paper has been defended on the grounds that readers could see in the results table that the two outcomes described as primary elsewhere (but not in that table) were negative.
However, readers are more likely to be influenced by the abstract than by the tables of a clinical trial report, as evidenced by the continued retransmitting of the false impression that study 329 found “significant efficacy on one of the two primary endpoints”. A likely cause of this misunderstanding is the conflation of ‘remission’ and ‘responder’ and especially the false statement that “paroxetine separated statistically from placebo at end point among 4 of the parameters: [including] response (i.e. primary outcome measure) . . .”.
In other words, the researchers carefully picked over the data and presented in the published study only the findings most favorable to the drug whose maker paid for the study: Paxil. This exposes the major, gaping hole in the peer-review process: journals can only evaluate what they are told. If researchers conceal the true design of a study (or its negative data), then journals get a biased picture, and then happily publish that picture, completely oblivious to the truth.
Another surprising finding was that the study wasn’t written by the authors listed; it was ghostwritten by someone with a master’s degree. You need look no further than the first draft to see the proof. I don’t know whether this is standard operating procedure for studies of this size, but you would expect such authorship to be noted, as it is in traditional publishing.
You can read all about the picking apart of Study 329 over at Healthy Skepticism. The scary thing is that nobody knows how widespread these kinds of biases are in the published research. This is one study out of thousands of similar peer-reviewed, published studies. Could other published studies suffer from similar problems? And if so, to what degree is the published literature tainted by these kinds of underhanded methods?
We may never know.
Grohol, J. (2008). More on Infamous Paxil Study 329. Psych Central. Retrieved on October 26, 2016, from http://psychcentral.com/blog/archives/2008/04/30/more-on-infamous-paxil-study-329/