So you wonder how “Internet Addiction Disorder” is faring these days? The recent research is no more persuasive. We recently received a copy of a newsletter published by one of the proponents of this disorder, which noted,

The CyberPsychology & Behavior Journal has been a wonderful academic journal and resource for those in the online psychology field. As many of you may know, articles focus on ways that virtual reality can be used in psychotherapy, experiments in digital literacy, to articles on online dependencies and Internet addiction. In the August issue, a new study conducted at Cardiff University in the United Kingdom completed a validation test of Young’s Internet Addiction Scale.

The exact reference is: The Psychometric Properties of the Internet Addiction Test by L. Widyanto and M. McMurran, CyberPsychology & Behavior, Vol. 7, No. 4, 2004, pp. 443-450.

Of course, they failed to mention what the researchers actually found in their study. So we took a look at it…

First, the researchers note how they obtained their sample, but not how the study was advertised online. Was it, “Take this study and help in Internet addiction research,” or “Take this study to help us measure people’s Internet usage”? The wording of that pitch bears directly on the bias in your sample population. I’m not sure how the study got through peer review without this information (and the fact that I sit on this journal’s editorial board doesn’t help!). As the study’s authors note in their discussion section, regardless of how the sample was obtained, it remains self-selecting and biased. That means the entire study’s results and conclusions need to be taken with a large, healthy grain of salt, because what the researchers measured may be unique to the particular population they surveyed.

Despite using seven different methods for obtaining subjects online, the researchers managed to collect only 92 responses in 7 weeks. Compared to other research using online samples, that is a small number. The sample was also inexplicably skewed toward females (66.3%), which is unrepresentative of the Internet population in general. Also of particular interest for our purposes, nearly 60% of the sample used the Internet in their profession. (As someone who uses the Internet in my profession, I can assure you my responses to such a questionnaire would be nothing like the general population’s!)

The heart of the study was validating the psychometric properties of the standard measure of this disorder, the Internet Addiction Test (which, I’d like to remind readers, was created simply by adopting the criteria for “compulsive gambling” and swapping out a few words). The researchers discovered six factors in the test, each measuring a different aspect of the purported disorder. Only one of those six factors, however, accounts for the majority of the variance in the test. Typically, when designing a valid psychometric instrument, you want your factors to be as equally weighted as possible. Because one dominant factor carries most of the weight here, a handful of items on the test (five out of 20) can arbitrarily tag you as “addicted” to the Internet. Not good.
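To make the variance-weighting point concrete, here is a minimal sketch of how a factor analysis can reveal one dominant factor swallowing most of a test’s variance. The data below are synthetic, not the study’s actual responses, and the loadings are made-up assumptions purely for illustration: we simulate 20 test items driven mostly by a single latent factor, then inspect how the variance splits across components of the item correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 500 respondents answer 20 items. One strong latent
# factor drives every item; five weaker factors contribute only a little.
n_respondents, n_items = 500, 20
dominant = rng.normal(size=(n_respondents, 1))   # the one strong factor
minor = rng.normal(size=(n_respondents, 5))      # five weak factors
loadings_dom = rng.uniform(0.7, 0.9, size=(1, n_items))   # assumed loadings
loadings_min = rng.uniform(0.0, 0.3, size=(5, n_items))
noise = rng.normal(scale=0.5, size=(n_respondents, n_items))
items = dominant @ loadings_dom + minor @ loadings_min + noise

# Eigen-decompose the item correlation matrix; each eigenvalue's share of
# the total is the proportion of variance that component explains.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # sort largest first
explained = eigvals / eigvals.sum()

print(f"First component explains {explained[0]:.0%} of the variance")
```

With loadings like these, the first component dwarfs the rest, which is exactly the kind of lopsided structure a well-balanced instrument should avoid.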

The study notes two other findings of interest. One is the confirmation of a previous finding “which indicates that users who had only started using the Internet were neglecting their social lives more compared to longer term users” (something I theorized back in 1999). Second, no correlation was found between the interactivity of an Internet function and the addictiveness of that function (contradicting earlier research).

As a side note, I always find it interesting that when researchers find something that disagrees with previous research, they immediately blame their sample size or sampling techniques, yet they don’t mention those same problems when discussing findings that agree with other research. Unless you’ve specifically tested or accounted for it, sampling and sample-size problems affect both positive and negative results in the same manner. You cannot ignore the problems for data that are agreeable and then emphasize them for data that are disagreeable.

The Internet Addiction Test, as it stands today, is not a valid psychometric instrument. Since it was first published in a book (not in a peer-reviewed journal), it is perhaps unsurprising that the instrument cannot withstand scientific scrutiny. It currently has issues with both reliability and validity. Anybody who has taken this test and assumed it meant they were indeed “addicted” to the Internet should seriously reconsider the proposition and the label. Internet addiction is not a recognized mental disorder, and the jury is still very much out on whether it ever will be.

 


    Last reviewed: By John M. Grohol, Psy.D. on 16 Apr 2005
    Published on PsychCentral.com. All rights reserved.
