Psych Central’s Online Resources Review Guidelines
We’ve been indexing and reviewing online resources since 1992, beginning with Internet newsgroups in psychology and support topics (like support groups for depression). In 1995, we expanded our efforts to include websites, and in 1996 we formalized internal guidelines for such reviews, which our resource editors (at the time, I was working for another company) used to evaluate and rate websites. Yes, this was long before HONCode, and a year before the American Medical Association (AMA) released its own guidelines and criteria, because we recognized early on that with billions of pages of information out there, consumers (and professionals) would need some guideposts along the way. Psych Central has always acted as that guidepost.
Over the years, we’ve informally discussed many of these guidelines, but we’re now publishing them so that others can make use of them. Perhaps they will improve readers’ own efforts at telling “good” sites from “bad” ones.
A Word of Caution
No matter what type of guidelines a person or organization may publish, it is perfectly easy (and becoming easier) to publish a website with a particular agenda that adheres to every single guideline we and others publish or recommend, and is still 100% wrong and a very unhelpful resource (while posing as otherwise).
We run across these sites every day in our online travels: perfectly legitimate-looking websites with perfectly legitimate-looking designs, navigation, articles and other content, all while offering biased or misleading information. How is this possible? Because in health and mental health, biases are subtle and rarely stated. For instance, a researcher who has spent an entire career pursuing one possible explanation for the cause of depression has a lot of time, ego, and publications invested in that explanation, regardless of whether there is enough data to back it up. But the researcher isn’t going to say, “Hey, make sure you comb through my data results carefully (and don’t just take my word for it in the discussion section of my peer-reviewed, published research study), because it’s easy to portray data in ways non-statisticians may not fully appreciate.”
Websites are even more challenging, because you may never even know who’s publishing a site or what their intentions are. I know, because I’ve spent the past 16 years in this industry and know this specific niche like the back of my hand. I have a pretty good understanding of every major mental health resource out there, and can delineate their positives and negatives and even take a stab at their biases (as others can easily do for Psych Central). But trying to explain all of this in a 300-word summary of a site is nearly impossible (though we try). The big mental health portals are easy, because I either ran them myself at one time or did consulting work with them in one form or another. It’s when you start digging down into smaller websites, many of which are run by ordinary people, that it gets more and more difficult.
The History of Online Ratings
We actually began rating websites in 1995, but didn’t put our criteria into writing until 1996, when we hired others to help us keep up with all of the online resources available. We did this because we knew we had industry-specific mental health knowledge that folks like Yahoo (at the time the big online player in search via its directory, which is still available but largely overlooked these days) didn’t have and didn’t care about. We also recognized that health and mental health concerns weren’t just like other information online: bad or poor information could actually hurt someone.
So when we moved to rating websites, we rated them on content as well as presentation. There were so many poorly designed websites at the time that, while they may have had really good information, it could be difficult or distracting to actually access. We meticulously reviewed Web resources and gave each an appropriate rating based upon four main categories: Content, Presentation, Ease-of-Use, and Overall Experience.
In hindsight, of course, presentation and ease-of-use, while nice, are not really vital if what a site publishes is a load of bull. You can have the prettiest, most Web 2.0 website available, and it can still publish material that has little to no basis in research or fact. We now identify a lot of bad websites by a common set of attributes: a slick, clean “Web 2.0” design, AdSense ads (the hallmark of MFA, or “made for AdSense,” websites built around Google’s advertising program), and content with no research basis. We most often find these sites linked from social news sites like Digg.com, which lets its users vote on articles and ranks them by popularity. In fact, we don’t think it would be difficult to find an inverse relationship between popularity (as measured by a site like Digg.com) and the legitimacy of a site’s health or mental health content.