Violence & Video Games: A Weak, Meaningless Correlation

Do violent video games lead to greater violence among those who play them?

While the actual answer is complex, the simple answer is easy: of course not. Just look at the overall decline in youth violence rates. Even as video game sales have increased across the board, rates of violence among youths have declined.

But a 2010 meta-analysis (Anderson et al.) on violent video games (VVGs) can’t be ignored. So let’s take a look at what they found.

Long-time readers of World of Psychology know that in research, it's not always the results that paint the picture. Often it's the manipulations and rationales behind a study's specific design that shed light on the likely findings, long before a single data point is collected.

So whenever a set of researchers goes outside the normative practice of standard meta-analytic procedures, a few red flags go up.

The first decision you have to make in a meta-analysis (that is, a study of previous research on a given topic) is which studies you will actually include in your analysis and which you will ignore. These are referred to as your “inclusion” and “exclusion” criteria, and for most researchers, they're pretty straightforward.

Anderson et al. (2010)[1] arguably began stacking the deck here by including unpublished studies they gleaned haphazardly from other research and database searches. They also subdivided their analysis into two groups: one that included 129 studies that did not meet a set of “best practices” for this analysis, and another set of what they defined as higher-quality research. (Who defined these “best practices”? The researchers did, of course!)[2]

Once a researcher has gotten rid of all the troublesome studies that might weaken their findings (by defining exclusion or inclusion criteria as needed), it's pretty easy to pool the remaining studies and find something significant.

Which is exactly what Anderson et al. did, in my opinion.
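To see how much that inclusion decision alone can matter, here's a toy illustration. The numbers below are entirely made up (they are not the actual studies from this meta-analysis), and the sample-size weighting is a simplification of real meta-analytic procedure, which typically weights by inverse variance. But the basic mechanic is the same: exclude the weaker studies, and the pooled effect grows.

```python
# Toy illustration with hypothetical numbers: how inclusion criteria
# can shift a pooled effect size. Each tuple is (effect size r, sample
# size n) for a made-up study.
studies = [
    (0.02, 200), (0.05, 150), (0.08, 300),   # weak effects
    (0.20, 100), (0.25, 120), (0.30, 80),    # stronger effects
]

def pooled_r(studies):
    """Sample-size-weighted average of the effect sizes (a simplified
    stand-in for proper inverse-variance meta-analytic weighting)."""
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

print(f"All studies:      r = {pooled_r(studies):.2f}")   # r = 0.12

# Now suppose the weaker studies happen to fail our self-defined
# "best practices" screen...
best_practices = [(r, n) for r, n in studies if r >= 0.15]
print(f'"Best practices": r = {pooled_r(best_practices):.2f}')   # r = 0.25
```

None of this proves that Anderson et al. cherry-picked; it simply shows why self-defined quality criteria deserve extra scrutiny.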

Is This Really a Strong Meaningful Correlation, Though?

Anderson et al. spend a lot of time talking about effect sizes and strength of correlations, both in their original study and in their rebuttal letter (Bushman et al., 2010) to two critics of that study (Ferguson & Kilburn, 2010). Whenever researchers spend so much time trying to make a small effect size sound bigger than it is, that's another red flag for me.

They divide their analysis into three groupings. The first is the artificial experimental studies conducted in a lab, meant to simulate some sort of real-world behavior and test specific hypotheses. The second is the cross-sectional studies, where a person is given a survey assessing their aggressiveness, hostility, attitudes, etc., and asked how often they play video games, how violent the content is, and so forth. The third is the longitudinal studies, where a second assessment is made of the same group of people later on, to see whether the effects hold up over time.

Now, here are the overall effect sizes for those three groups. In each cell, the first number represents the “highest quality” research, while the number in parentheses represents the analysis of all studies the researchers examined:

| Outcome | Experimental | Cross-sectional | Longitudinal |
| --- | --- | --- | --- |
| Aggressive behavior | .21 (.18) | .26 (.19) | .20 (.20) |
| Aggressive emotions | .29 (.18) | .10 (.15) | .08 (.08) |
| Aggressive thoughts | .22 (.21) | .18 (.16) | .12 (.11) |

Notice that the strongest effect sizes generally show up in the artificial, experimental studies (usually conducted on college-aged subjects), and that the correlations tend to be smaller for the longitudinal studies? This suggests, to me anyway, that the long-term impact of playing violent video games isn't really all that worrisome.
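To put those numbers in perspective, here's a quick back-of-the-envelope sketch. It assumes the reported effect sizes can be read as correlations (r), in which case squaring them gives a rough estimate of the proportion of variance in the outcome they explain:

```python
# Rough sketch: read each effect size as a correlation r, then square it
# to estimate the proportion of variance explained (r squared).
# These are the "highest quality" longitudinal estimates from the table above.
longitudinal = {
    "aggressive behavior": 0.20,
    "aggressive emotions": 0.08,
    "aggressive thoughts": 0.12,
}

for outcome, r in longitudinal.items():
    print(f"{outcome}: r = {r:.2f}, variance explained = {r ** 2:.1%}")

# aggressive behavior: r = 0.20, variance explained = 4.0%
# aggressive emotions: r = 0.08, variance explained = 0.6%
# aggressive thoughts: r = 0.12, variance explained = 1.4%
```

In other words, even taking the researchers' own “best practices” numbers at face value, violent video game exposure accounts for somewhere between roughly half a percent and 4 percent of the long-term variance in these outcomes.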

Now, the researchers argue (twice), “However, as numerous authors have pointed out, even small effect sizes can be of major practical significance.” This is true, especially in contexts like the number needed to treat to prevent a disease, or some other kind of intervention aimed at a population-wide problem.
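For anyone unfamiliar with that framing, here's a quick hypothetical (the numbers are invented purely for illustration) of why a small absolute effect can still matter at population scale:

```python
# Hypothetical: a cheap intervention drops a disease rate from 2.0% to 1.5%.
# The per-person effect is tiny, but it adds up across a large population.
baseline_risk = 0.020   # risk without the intervention
treated_risk = 0.015    # risk with the intervention

arr = baseline_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr                        # number needed to treat

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")                      # ARR = 0.005, NNT = 200
print(f"Cases prevented per million: {arr * 1_000_000:,.0f}")   # 5,000
```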

It becomes less true when you argue that the small correlations you've discovered can somehow impact real-world behavior, without ever clearly explaining how.

After all, video games, like 'em or hate 'em, are a form of free speech, protected by the First Amendment of our Constitution. You could no sooner ban them than you could ban guns from our country.

But the researchers inadvertently answer my question — and make my case for me — in their concluding statements:

Furthermore, when dealing with a multicausal phenomenon such as aggression, one should not expect any single factor to explain much of the variance. There are dozens of known risk factors for both short-term aggression and the development of aggression-prone individuals. To expect any one factor to account for more than a small fraction of variance is unrealistic. [Emphasis added]

Which is precisely why all this focus on violent video games is exactly what it appears to be: stupidity masquerading as something important. It doesn't really matter one whit whether violent video games contribute to aggressive thoughts and behaviors, because so many other factors contribute to such thoughts and behaviors.

Warning labels on violent video games (last time I checked, violent video games already carried such labels) won't change much behavior, just as movie ratings don't keep teens from watching R-rated movies.

Instead of pointing the finger and blaming others for one tiny factor that may contribute to such aggressive behavior, we’d be far better off spending time on factors that can have an immediate, real impact in a teen’s life. Playing video games with them. Setting reasonable limits on video game time. Interacting and talking with them more about the things that matter most to them. You know — real connection.

And so while the correlation may mean something to the researchers who endlessly argue (and care) about such minutiae, I’ll keep going back to the statistics that actually make a difference to ordinary folks:

Violent video games might have a small correlation with aggressive behavior, emotions and thoughts, but it’s a weak and ultimately meaningless connection that makes little difference in the real world.


References

Anderson, C.A., et al. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, 136, 151-173.

Bushman, B.J., Rothstein, H.R., & Anderson, C.A. (2010). Much ado about something: Violent video game effects and a school of red herring: Reply to Ferguson and Kilburn. Psychological Bulletin, 136, 182-187.

Ferguson, C.J., & Kilburn, J. (2010). Much ado about nothing: The misestimation and overinterpretation of violent video game effects in Eastern and Western nations: Comment on Anderson et al. (2010). Psychological Bulletin, 136, 174-178.

Footnotes:

  1. This is the same study that Keith Ablow recently referred to as a ‘recent study.’
  2. Well, are these “best practices” criteria at least objective? The researchers would have you believe they are, since they relied on “two independent raters” to code the studies. But then let’s look at some of the criteria themselves:

    “…compared levels of the independent variable were appropriate for testing the hypothesis…”

    How is “appropriate” defined?

    “… outcome measure could reasonably be expected to be influenced by the independent variable if the hypothesis was true…”

    “Reasonably be expected” by whom? What is the measure of “reasonableness” used herein? Undefined.