The Challenges of Accurate Reporting on Video Game Research

Recently, a group of approximately 230 media scholars, psychologists and criminologists sent an open letter to the American Psychological Association (APA) asking it to retire its flawed policy statements on media and video game violence, and to refrain from similar statements in the future.

This effort is an expression of concern with the way in which research in this field has been communicated by professional advocacy groups such as the APA to the general public.

In short, previous policy statements have exaggerated the strength and consistency of media effects, implied scientific consensus where there was none, and arguably done much damage to the credibility of our field in the process.

The roots of our problems in the field of media violence are probably many. There are obvious moral crusading and politicking elements that have moved the field away from objectivity and into a rigid ideology. Some anti-media scholars have taken to smearing colleagues who disagree with them as “industry apologists.” It is unfortunate to see what has become of the culture of our research field. Certainly, it is reasonable for scholars to come to different conclusions about whether media contributes to aggression, but I think it’s more remarkable how aggressive some aggression researchers are. Perhaps conducting media violence research makes one more aggressive than watching media violence itself.

One concern I’ve had with this field is the way that data often are presented misleadingly to the public. In a recent article published in American Psychologist, I discuss how groups such as the APA and the American Academy of Pediatrics promulgated comments that were little better than scientific urban legends. These include the notions of a scientific consensus or of consistent effects, as well as long-discredited comparisons with medical research, such as smoking and lung cancer, and claims that the interactive nature of video games makes them different from other media.

Some Problems with Meta-Analysis

I also have concerns about the way in which meta-analysis has been improperly used in this field. Granted, meta-analysis is probably misused quite often, particularly in an “average effect size wins” approach to resolving academic debates. Using meta-analysis in this fashion is clearly biased in favor of those who believe in an effect, and it’s not hard to show why.

Let’s imagine we have the hypothesis that asparagus causes depression. Researchers run ten different studies of this hypothesis, all identical in sample size, methodology, etc. Five of these find correlations in the range of r = .3 (a small, but practically significant correlation). The others find nothing.

Throw these together into a basic meta-analysis and the average effect size would be r = .15. Asparagus haters everywhere (myself included) declare victory. But, scientifically, this is nonsense. Meta-analysis is being used to wash away a 50 percent failure to replicate rate, something that, in fact, is quite dismal for the hypothesis in question.
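To make that arithmetic concrete, here is a minimal Python sketch of the scenario. The numbers are hypothetical, and a real fixed-effect meta-analysis would average correlations on the Fisher z scale weighted by sample size; with ten identical studies, though, the weights cancel and the result is essentially the same simple average.

```python
import math

# Hypothetical results from ten identical studies of the
# asparagus-depression hypothesis: five find r = .30, five find nothing.
effects = [0.30] * 5 + [0.00] * 5

# Naive average of raw correlations: (5 * .30 + 5 * .00) / 10 = .15
naive_mean = sum(effects) / len(effects)

# Standard practice averages Fisher z-transformed correlations
# (weights of n - 3 cancel here because all samples are equal in size).
mean_z = sum(math.atanh(r) for r in effects) / len(effects)
pooled_r = math.tanh(mean_z)

failure_rate = effects.count(0.0) / len(effects)

print(f"Naive mean r = {naive_mean:.2f}")          # 0.15
print(f"Pooled (Fisher z) r = {pooled_r:.2f}")     # ~0.15
print(f"Failed replications: {failure_rate:.0%}")  # 50%
# The pooled effect looks 'real,' yet half the studies found
# nothing: the averaging step hides the 50 percent replication failure.
```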

As John Grohol noted in a recent blog post, one example of the misuse of meta-analysis was the 2010 meta-analysis of video game violence by Anderson and colleagues. Dr. Grohol notes a number of problems with this meta-analysis, such as selection bias in the included studies. The authors of the Anderson et al. meta-analysis have tended to be proponents of the idea that a comprehensive search for unpublished studies should be conducted. Yet in a recent exchange to be published in a forthcoming edition of European Psychologist, one of the authors (Dr. Brad Bushman) acknowledged that they made no such comprehensive search for unpublished studies (something those of us in the field have known for some time).

Media Reporting on Meta-Analyses

However, I have further concerns about the way this meta-analysis is often communicated, in a manner that implies a consistency in this research field that does not exist.

For instance, in a recent editorial for CNN, Dr. Bushman described their meta-analysis as follows:

My colleagues and I conducted a comprehensive review of 136 articles reporting 381 effects involving over 130,000 participants around the world. These studies show that violent video games increase aggressive thoughts, angry feelings, physiological arousal (e.g., heart rate, blood pressure), and aggressive behavior. Violent games also decrease helping behavior and feelings of empathy for others. The effects occurred for males and females of all ages, regardless of what country they lived in.

From this sweeping generalization, readers might be pardoned for thinking that the 136 articles all came to the same conclusion or that the 130,000 participants all responded to violent video games in the same way. This was not remotely the case. Instead, the meta-analysis was used to sweep away failed replications and paint a picture of consistency. This is the “average effect size wins” misuse of meta-analysis: if you run a meta-analysis and get something different from zero, why not go ahead and imply that the entire field is consistent?

Dr. Bushman also failed to note that these were mainly bivariate relations he was reporting, and that, in many cases, controlling for something as simple as gender and (in longitudinal studies) Time 1 aggression greatly reduced the effect size estimates, often to trivial values. At times I’ve seen presentations based on data from the 2010 meta-analysis used to imply that video game violence is second only to gang violence as a cause of youth violence (even though most video game studies measure aggression, not violence), and well ahead of factors like abusive parenting. That’s obviously nonsense, even if you worry about violent video games.
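To see why the bivariate-versus-controlled distinction matters, consider a first-order partial correlation with made-up numbers (these are illustrative assumptions, not values from the 2010 meta-analysis): if both violent game exposure and aggression correlate with gender, the game-aggression link can shrink to near zero once gender is controlled.

```python
import math

# Illustrative (hypothetical) bivariate correlations; not values
# taken from Anderson et al. (2010):
r_xy = 0.20  # violent game exposure (x) <-> aggression (y)
r_xz = 0.45  # violent game exposure (x) <-> gender (z)
r_yz = 0.40  # aggression (y) <-> gender (z)

# First-order partial correlation of x and y, controlling for z:
#   r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
partial = (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(f"Bivariate r = {r_xy:.2f}")    # 0.20
print(f"Partial r  = {partial:.2f}")  # ~0.02, a trivial value
```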

A meta-analysis that sweeps away inconsistencies in a field and is used to make grand proclamations of impending doom is bad science. But it makes for great headlines, which is what I suspect our field has sadly boiled down to: running studies not to conduct objective science, but to produce headlines that will frighten parents, policy makers and other scholars as much as possible, in line with a particular moral crusade.

It’s this cultural decay within our field that led such a large group of scholars to express their concern to the APA. Let’s hope the APA listens.

 

