We all like to think we can tell reality from fiction, and that we recognize “information” when we see it. But do we really?

The world of Web 2.0 is making such differentiation increasingly difficult, as publishers begin to blur the lines between facts and opinions.

Facts are pieces of information with an objective reality. “The sky is blue” is a fact. An opinion, on the other hand, is one’s view, judgment or appraisal of a thing. Opinions are valuable, certainly, but they generally mean less without knowing something about the person giving them. For instance, if someone I hold in low regard gives me their opinion, I will likely discount it out of hand. If that same person provides me with a fact, however, there is nothing to discount: they are providing me with a piece of objective reality.

Someone seeking health information online generally wants a fairly balanced understanding of a health issue or mental health condition. That has become harder, however, because publishers and websites sometimes blur the lines between facts and opinions. We readers, too, sometimes confuse facts with opinions, or emphasize one over the other, to the detriment of understanding.

Publishers Can Help or Hurt

A website publisher is in a unique position to help someone understand what is fact and what is opinion. To help readers make these determinations, publishers and websites must work hard to define clearly where they are presenting factual information and where they are presenting other people’s opinions. Sometimes this isn’t as clear-cut as it might seem. For instance, if you compile a large number of people’s subjective ratings about a product or service, the ratings remain subjective (opinions), not objective (facts).

So even if a website collates a bunch of people’s opinions and gives them numbers, that doesn’t elevate the information presented to “fact.” It is still opinion. And such opinion is no substitute for medical advice, professional advice, or an empirical research study. Treating it as fact combines two basic logical fallacies (appeal to belief and biased sample) into a common but potentially dangerous piece of reasoning: if most people in a group believe something to be true, then it must be true. Remember that most people once believed the Sun revolved around the Earth.

An Example: Rate My Medication

Let’s say a website reports that 90% of its members say Medication X gives them a severe headache. Most people would look at that figure and say, “Wow, I should stay away from that medication if I can avoid it!” What isn’t reported in such simple data collection is whether the sample is representative of the population, or whether the medication actually had anything to do with a person’s likelihood of getting a headache. Headaches are very common in the general population; nearly everyone gets them from time to time. Randomized controlled trials have methods in place to determine whether a symptom is likely caused by a specific medication or not.

People’s self-reports on websites, however, generally do not follow such careful data collection methods. So if I start taking a medication and then notice an increase in headaches, I’m likely to attribute the symptom directly to the new medication. What I didn’t mention (since nobody asked me when I filled out the website’s form) is that I also started two other medications at the same time, and stopped drinking coffee. So while I attributed my headaches to only one of the medications, the real cause could have been any of those other changes.
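To make the trap concrete, here is a minimal simulation sketch in Python. Every number in it is a hypothetical assumption, not data from any real study: in this toy model, Medication X causes no headaches at all and caffeine withdrawal is the real culprit, yet a one-question form still blames Medication X for every headache.

    import random

    random.seed(0)
    N = 100_000
    BASELINE = 0.15           # weekly headache risk for anyone, medication or not
    WITHDRAWAL_EFFECT = 0.40  # added risk from quitting coffee (assumed)
    MED_X_EFFECT = 0.00       # in this toy model, Medication X causes nothing

    blamed_on_x = 0
    for _ in range(N):
        quit_coffee = random.random() < 0.5  # half of patients also quit coffee
        p = BASELINE + (WITHDRAWAL_EFFECT if quit_coffee else 0.0) + MED_X_EFFECT
        if random.random() < p:
            blamed_on_x += 1  # the form only asks about Medication X

    print(f"Members reporting headaches 'on Medication X': {blamed_on_x / N:.0%}")
    # Prints roughly 35%, even though the drug's true effect here is zero.

A randomized controlled trial escapes this trap by comparing against a control group that shares the same baseline risks and the same confounding life changes.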

But We Have Hundreds of Thousands of Users!

Some people believe that sheer numbers of users overcome faulty data collection methods. Seasoned researchers know better. If you’re not asking all of the right questions from the outset, the data will only be as good as the questions you do ask. Without the full breadth of necessary questions, a website or publisher has no idea how good (or bad) its data really is. Without that knowledge, the data must be considered suspect and possibly junk. If such data is presented at all, it should be presented in context: that it was not collected scientifically and therefore may have little or no validity.
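A quick sketch shows why raw numbers don’t cure bias. Again, every figure here is a hypothetical assumption: suppose 15% of patients truly get headaches on a drug, but patients with a bad experience are five times as likely to bother leaving a rating.

    import random

    random.seed(1)
    TRUE_RATE = 0.15  # true share of patients who get headaches (assumed)

    def site_estimate(n):
        """Simulate n ratings; unhappy patients are 5x as likely to respond."""
        reports = []
        while len(reports) < n:
            headache = random.random() < TRUE_RATE
            responds = random.random() < (0.50 if headache else 0.10)
            if responds:
                reports.append(headache)
        return sum(reports) / n

    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9,}: site says {site_estimate(n):.1%} (truth: {TRUE_RATE:.0%})")
    # Every estimate hovers near 47%, not 15%: a bigger sample shrinks the
    # noise, never the bias.

Hundreds of thousands of users simply reproduce the same wrong answer with more decimal places.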

When rating whether someone else is “Hot or Not” on the web, or whether a news story is worth reading, the lack of validity and reliability in the data doesn’t much matter. These kinds of ratings are for entertainment, not for making serious judgments about one’s life or health.

But what happens when a respectable publisher puts these kinds of ratings on a website (perhaps with some appropriate disclaimers), inviting people to make exactly those kinds of decisions about their care or treatment?

Without a careful discussion of the issues surrounding the validity and reliability of the data (as we have just done), it is the equivalent of providing harmful medical advice. No website should put itself and its users in such a position, because while it may be “sexy” to be Web 2.0-compliant, it is potentially harmful to people who are vulnerable. Context, professional discussion and careful consideration (including acknowledgment of these issues) can help provide an important balance to these concerns.

 
