The Complexity of Psychology Research
A lot of times, I write about the results of some new psychology research study or scientific analysis. I boil the results down to digestible findings and try to wrap the whole thing up in simple, common-sense terms.
But what I don’t write about is often more fascinating than what I do.
The science of psychological research is, in itself, a complex and regularly contested issue. For every new study published, another will come out that directly refutes, or at the very least calls into question, its findings.
One of the journals I subscribe to from the Association for Psychological Science is called Perspectives on Psychological Science. This journal publishes scholarly debates about the merits of certain aspects of the science of psychology. Every issue is chock-full of experts in their given fields publishing dueling peer-reviewed articles and studies, actively arguing over what the data really say.
Now, I love a good academic debate as much as the next researcher. But I find the whole exercise a bit frustrating. Take a typical exchange from the journal:
- Researchers A and B publish a meta-analysis of some topic area in psychology.
- Journal editors get topic experts C & D to write a critical analysis and commentary about the meta-analysis.
- Researchers A and B reply to the criticism in a response.
As a professional with no specific knowledge of the topic area, after such an exchange I’m left scratching my head: Who’s right, the original researchers or their critics? After reading some 20 or 30 pages, my head is swimming, and both sides appear to make reasoned, logical arguments. But since I don’t know the topic area the way these researchers do, I can’t reach a satisfactory conclusion.
This is one of the challenges in any field of science, and perhaps even more so in the study of psychology, where every one of a researcher’s assumptions can be challenged (“Look at the way you defined negative affect, it’s no wonder you found the results you did!”).
It’s hard for me to write about these debates because on some level, they seem so esoteric.
So while I had intended to write a summary of a meta-analysis of the experimental research on rejection, after reading the meta-analysis and its critique, I found I didn’t know what I could tell you the research definitively “says.” But I’ll give you a little flavor of the exchange:
A picture of the rejected state can be constructed from these findings. Rejection makes people feel bad. Mood is affected by rejection, as demonstrated by the moderate effect size. […]
The mood effect has direct implications for our understanding of how to counsel rejected individuals. Rejection is an emotionally distressing experience—it does not make people emotionally numb. As such, clinical psychologists and counselors should take steps to help people feel less distressed and to improve their mood. Improving mood is particularly important as mood can impact many other areas of behavior and functioning. However, such mood alleviation may not be the ultimate answer, because there is no evidence that mood mediates the effects of rejection.
This mood effect leaves open the possibility that people may try to improve their emotions to recover from rejection. This possibility has been ignored by the self-regulation account, as the previous failures to find a mood effect suggested there was no mood to regulate. We now know from this meta-analysis that mood is something that needs to be considered. Mood regulation has now become a distinct possibility (Gerber & Wheeler, 2009) [Emphasis added].
The reply to this by the critics:
The debate about emotion loses some of its importance given that emotion is essentially irrelevant to the behavioral effects of rejection, as all sides (including Gerber and Wheeler) agree. So if emotion does exist, it does not seem to matter, at least in terms of the behavioral consequences. Gerber and Wheeler’s focus on emotions following exclusion thus adheres to a recent tradition in the field that some of us have criticized (Baumeister, Vohs, & Funder, 2007): namely, the exploration of cognitive and affective phenomena that have little demonstrable relevance to anything that actually happens. […]
Thus, the main contribution of Gerber and Wheeler has been to compile a biased sample of studies and misinterpret their results so as to provide ostensible but unwarranted support for the prevalence of emotional reactions that have no known consequences. Their conclusions about emotion, numbness, and control should be disregarded. The publication of their meta-analysis based on erratic and incomprehensible codings, omissions of substantial amounts of relevant data (mostly contrary to their theory), distorted and unjustified interpretations, and misuse of cited sources casts doubt on the ability of journal reviewers to evaluate meta-analyses and hence contain a strong implicit warning about reliance on meta-analyses in general (Baumeister et al. 2009) [Emphasis added].
Ouch. That hurt.
So the first set of researchers conducted a meta-analysis that seemed to show that rejection makes people feel bad. Great finding, that. Anyone who’s ever been rejected (in a relationship, for a job, etc.) could’ve told them that. But they did a large review of studies published about rejection and thought that they had found good empirical support for this finding.
Not so, according to the second set of researchers. And, they said, even if the meta-analysis were valid, it wouldn’t matter anyway.
Gerber & Wheeler had a follow-up reply that basically said the critics didn’t know what they were talking about. And in response to the critique that they had excluded unpublished and non-significant results from the meta-analysis, the researchers included this sniping footnote:
“The only research group not represented by unpublished results is the Baumeister group, despite personal requests for such studies.”
And we thought academia lacked any excitement or bloodshed!
Baumeister, R.F., DeWall, C.N., & Vohs, K.D. (2009). Social rejection, control, numbness, and emotion: How not to be fooled by Gerber and Wheeler (2009). Perspectives on Psychological Science, 4(5), 489-493.
Gerber, J., & Wheeler, L. (2009). On being rejected: A meta-analysis of experimental research on rejection. Perspectives on Psychological Science, 4(5), 468-488.
Grohol, J. (2009). The Complexity of Psychology Research. Psych Central. Retrieved on October 27, 2016, from http://psychcentral.com/blog/archives/2009/09/23/the-complexity-of-psychology-research/