Frank L. Schmidt, a respected professor and researcher at the University of Iowa, gave a talk at the Association for Psychological Science’s 20th convention on Saturday about how scientific data can lie. Yes, that’s right, empirical data — even that published in respected, peer-reviewed journals — regularly do not tell the truth.
Schmidt’s talk was well attended, filling one of the largest ballrooms at the Sheraton Hotel and Towers in Chicago, where the convention is being held. Although the presentation was uneven, Schmidt’s main points came across.
One of those points is that the naive interpretation of multiple datasets is often the most correct one, an application of Occam’s razor (“the simplest solution is usually the best answer”). Schmidt claims that good research finds the simple structure underlying complex data.
He summarized two main reasons why data can “lie” in research: sampling error and measurement error.
Schmidt’s biggest criticism was directed at psychological science’s fetish for significance testing, i.e., statistical significance. He wishes that psychology would move away from its reliance on and fascination with statistical significance, because it is a weak, biased measure that says little about the underlying data or hypothesis.
Schmidt described six myths surrounding significance testing. One myth is that a good (low) p value indicates a significant, reliable effect, when it is heavily driven by the study’s power. Another is that a non-significant result means no relationship exists between the variables; in truth, it may simply mean that the study lacked sufficient power to detect one.
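To see the power myth in action, here is a minimal simulation (Python standard library only; the effect size, sample size, and number of trials are arbitrary choices for illustration): even with a real, modest effect, most small studies fail to reach p < .05, so a non-significant result hardly rules out a relationship.

```python
import math
import random

def two_sample_z_p(x, y):
    """Approximate two-sided p-value for a difference in means,
    using a z-test approximation with sample variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
true_d = 0.3   # a real but modest effect (in SD units)
n = 20         # small samples -> low power
trials = 2000
sig = 0
for _ in range(trials):
    a = [random.gauss(true_d, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if two_sample_z_p(a, b) < 0.05:
        sig += 1
print(f"true effect d={true_d}, n={n} per group: "
      f"significant in {sig / trials:.0%} of simulated studies")
```

With these settings the analytic power is only about 15%, so roughly five of every six simulated studies "find nothing" despite the effect being real.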
Schmidt’s solutions are simple — report effect sizes (point estimates) and confidence intervals instead, and de-emphasize significance testing altogether.
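As a rough illustration of that reporting style, the sketch below computes Cohen's d for two independent samples along with an approximate confidence interval. The standard-error formula for d is a common textbook approximation (not taken from Schmidt's talk), and the sample scores are invented.

```python
import math

def cohens_d_with_ci(x, y, z=1.96):
    """Cohen's d for two independent samples, with an approximate
    95% confidence interval (normal approximation for the SE of d)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))  # pooled SD
    d = (mx - my) / sp
    se = math.sqrt((nx + ny) / (nx * ny) + d * d / (2 * (nx + ny)))
    return d, (d - z * se, d + z * se)

# Made-up scores purely for demonstration
d, (lo, hi) = cohens_d_with_ci([5, 6, 7, 8], [3, 4, 5, 6])
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The point estimate says how big the effect is, and the interval says how precisely it is known, which is exactly the information a bare "p < .05" hides.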
He ended by lambasting the newfound emphasis on meta-analyses in psychological research, specifically calling out the journal Psychological Bulletin. In a yet-to-be-published study, he and other researchers examined all 199 meta-analyses published in Psychological Bulletin from 1978 to 2006.
The researchers found that 65% of these meta-analyses used a “fixed effects” model. Schmidt claimed that fixed effects models underestimate relationships in the data (by as much as 50%) and lead researchers to overestimate the precision of their estimates (that is, to understate the error in them). Instead, Schmidt prefers “random effects” models, which better account for variation between studies.
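The distinction between the two models can be sketched as follows, using the standard inverse-variance fixed-effect estimate and the DerSimonian–Laird random-effects estimate; the study effects and variances are made up, and this illustrates the general fixed-vs-random distinction rather than Schmidt's own psychometric meta-analysis procedure.

```python
import math

def pool_fixed(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its SE."""
    w = [1 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, math.sqrt(1 / sum(w))

def pool_random(effects, variances):
    """Random-effects pooled estimate, with between-study variance
    (tau^2) estimated by the DerSimonian-Laird method."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w2 = [1 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w2, effects)) / sum(w2)
    return est, math.sqrt(1 / sum(w2))

# Hypothetical study effects and sampling variances
effects = [0.10, 0.35, 0.20, 0.60, 0.05]
variances = [0.01, 0.02, 0.015, 0.04, 0.01]
fe, fe_se = pool_fixed(effects, variances)
re, re_se = pool_random(effects, variances)
print(f"fixed:  {fe:.3f} (SE {fe_se:.3f})")
print(f"random: {re:.3f} (SE {re_se:.3f})")
```

When the studies genuinely disagree, as here, the random-effects standard error comes out larger: the fixed-effect model's tighter interval is exactly the spurious precision Schmidt warned about.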
He also noted that 90% of the studies examined made no correction for measurement error, which is one of the major reasons, in his view, that data can “lie” in psychological research.
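One standard correction of this kind is Spearman's correction for attenuation, which estimates the true correlation by dividing the observed one by the square root of the product of the two measures' reliabilities; the numbers below are hypothetical, chosen only to show how much an unreliable measure shrinks an observed correlation.

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the true
    correlation between two constructs from the observed correlation
    and the reliabilities of the two measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = .30 between two scales with
# reliabilities of .80 and .70
r = disattenuate(0.30, 0.80, 0.70)
print(f"corrected correlation: {r:.2f}")
```

Here a seemingly modest observed correlation of .30 corresponds to a true correlation of about .40 once measurement error is accounted for, so ignoring the correction understates the real relationship.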
Given this analysis, Schmidt is suggesting that a great many meta-analyses published in peer-reviewed journals reach incorrect or faulty conclusions.
Sadly, this state of affairs is unlikely to change any time soon. While many psychological journals have adopted stricter publication standards in line with Schmidt’s suggestions, many others have not and seem to have no intention of changing.
What this means for the average person is that you cannot trust a study simply because it appears in a peer-reviewed journal and is then publicized across the media as “fact” via press release. Such “facts” are malleable, changeable, and sometimes simply wrong. Only through careful reading and analysis of such studies can we understand the value of the data they present.