Finding Flaws in Social Media Research

Researchers need to be wary of the serious pitfalls of working with huge social media data sets, according to computer scientists at McGill University in Montreal and Carnegie Mellon University in Pittsburgh.

Flawed results can have huge implications: Thousands of research papers each year are now based on data gleaned from social media.

“Many of these papers are used to inform and justify decisions and investments among the public and in industry and government,” said Dr. Derek Ruths, an assistant professor in McGill’s School of Computer Science.

For behavioral scientists, the growth of social media has seemed an unprecedented opportunity to capture, and then analyze, copious amounts of information about human behavior.

Many scientists believe such ripe data sets can help predict human behavior on a level never before imagined. In recent years, studies have claimed the ability to predict everything from summer blockbusters to fluctuations in the stock market.

But in an article published in the journal Science, Ruths and Dr. Jürgen Pfeffer of Carnegie Mellon’s Institute for Software Research highlight several issues involved in using social media data sets, along with strategies to address them. Among the challenges:

  • Different social media platforms attract different users — Pinterest, for example, is dominated by females aged 25-34 — yet researchers rarely correct for the distorted picture these skewed populations can produce (a reweighting sketch follows this list);
  • Publicly available data feeds used in social media research don’t always provide an accurate representation of the platform’s overall data — and researchers are generally in the dark about when and how social media providers filter their data streams;
  • The design of social media platforms can dictate how users behave and, therefore, what behavior can be measured. For instance, on Facebook the absence of a “dislike” button makes negative responses to content harder to detect than positive “likes”;
  • Large numbers of spammers and bots, which masquerade as normal users on social media, get mistakenly incorporated into many measurements and predictions of human behavior;
  • Researchers often report results for groups of easy-to-classify users, topics, and events, making new methods seem more accurate than they actually are. For instance, efforts to infer the political orientation of Twitter users achieve barely 65 percent accuracy for typical users — even though studies (focusing on politically active users) have claimed 90 percent accuracy (the simulation after this list illustrates the gap).
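
To make the first bullet concrete, here is a minimal Python sketch of post-stratification weighting, one standard way to correct for a skewed user population. The demographic categories, population shares, and sample composition below are all hypothetical, and the Science article does not prescribe this particular method; it is simply one plausible correction of the kind the authors call for.

    # Post-stratification sketch: reweight platform users so the weighted
    # sample matches known population demographics. All numbers here are
    # hypothetical, chosen only to illustrate the mechanics.
    from collections import Counter

    # Hypothetical share of each demographic group in the general population.
    population_shares = {"women_25_34": 0.09, "men_25_34": 0.09, "other": 0.82}

    # Hypothetical users observed on the platform, skewed toward women aged
    # 25-34 (as on Pinterest).
    observed_users = ["women_25_34"] * 60 + ["men_25_34"] * 10 + ["other"] * 30

    sample_shares = {
        group: count / len(observed_users)
        for group, count in Counter(observed_users).items()
    }

    # A user's weight is population share over sample share: over-represented
    # groups are down-weighted, under-represented groups are up-weighted.
    weights = {
        group: population_shares[group] / share
        for group, share in sample_shares.items()
    }

    for group, weight in sorted(weights.items()):
        print(f"{group}: weight {weight:.2f}")

Over-represented groups (here, the women aged 25-34 who dominate the simulated platform) receive weights below 1 and count for less in any subsequent aggregate estimate, while under-represented groups are weighted up.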
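
The last bullet can be illustrated with a small simulation. This is a sketch of the statistical point, not any study's actual evaluation code: the 90 percent and 65 percent rates are taken from the figures quoted above and simply baked into the simulated classifier.

    # Simulation sketch: accuracy measured only on easy-to-classify users
    # overstates performance on typical users. All data here is simulated.
    import random

    random.seed(0)

    def simulate_user(active):
        """Return (true_label, predicted_label) for one simulated user.

        Politically active users are assumed easy to classify (90% accuracy);
        typical users are assumed hard (65%), mirroring the figures above.
        """
        truth = random.choice(["left", "right"])
        p_correct = 0.90 if active else 0.65
        if random.random() < p_correct:
            predicted = truth
        else:
            predicted = "right" if truth == "left" else "left"
        return truth, predicted

    def accuracy(user_flags):
        results = [simulate_user(active) for active in user_flags]
        return sum(t == p for t, p in results) / len(results)

    # Evaluating only on politically active users makes the method look
    # strong, while a pool of typical users tells a different story.
    print(f"active-only accuracy:  {accuracy([True] * 10000):.2f}")   # ~0.90
    print(f"typical-user accuracy: {accuracy([False] * 10000):.2f}")  # ~0.65

Reporting only the first number, as a study restricted to politically active users effectively does, overstates how well the method would work for the typical Twitter user.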

Investigators say many of the problems are also common to other fields such as epidemiology, statistics, and machine learning.

“The common thread in all these issues is the need for researchers to be more acutely aware of what they’re actually analyzing when working with social media data,” Ruths says.

Social scientists have honed their techniques and standards to deal with this sort of challenge before.

“The infamous ‘Dewey Defeats Truman’ headline of 1948 stemmed from telephone surveys that under-sampled Truman supporters in the general population,” Ruths notes.

“Rather than permanently discrediting the practice of polling, that glaring error led to today’s more sophisticated techniques, higher standards, and more accurate polls. Now, we’re poised at a similar technological inflection point. By tackling the issues we face, we’ll be able to realize the tremendous potential for good promised by social media-based research.”

Source: McGill University