How Facebook's Squishy Ethics Got Them Into Trouble

Ah, how quickly folks backpedal when they’re caught doing something a little less than transparent. And perhaps something a little bit… squishy, ethics-wise.

That’s what Facebook “data scientist” Adam D.I. Kramer was doing on Sunday, when he posted a status update to his own Facebook page trying to explain why Facebook ran a bad experiment and manipulated — more than usual — what people saw in their news feed.

For some Tuesday-morning humor, let’s take a look at what Kramer said on Sunday, versus what he wrote in the study.

Let’s start with examining the proclaimed motivation for the study, now revealed by Kramer:

We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.1

To what end? Would you manipulate the news feed even further, making it seem like everybody’s life was a cherry on top of an ice cream sundae, and show even less negative content?

It makes little sense that a for-profit company would care about this, unless they could have some actionable outcome. And any actionable outcome from this study would make Facebook seem even less connected to the real world than it is today.2

In the study (Kramer et al., 2014), the researchers themselves describe just how broad and manipulative their experiment was:

We show, via a massive (N = 689,003) experiment on Facebook…

The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. [...] Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced.

In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing.

If you were a part of the experiment, posts with an emotional content word in them had up to a 90 percent chance of being omitted from your news feed. In my book, and most people’s, that’s pretty manipulative.
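For the technically minded, here’s roughly what that kind of manipulation boils down to. This is a minimal sketch based only on the study’s description; the word lists, the function names, and the idea of hashing a User ID into a fixed omission rate are my own assumptions for illustration, not Facebook’s actual code (the paper doesn’t say how the ID-to-probability mapping worked).

```python
import hashlib
import random

# Toy word lists; the real study used the LIWC dictionary to flag emotional content.
POSITIVE_WORDS = {"love", "happy", "great"}
NEGATIVE_WORDS = {"sad", "angry", "awful"}

def omission_rate(user_id: int) -> float:
    """Map a User ID to a fixed omission probability between 10% and 90%,
    as the study describes. The hashing scheme here is a guess."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform value in [0, 1]
    return 0.10 + 0.80 * fraction                 # scale into [0.10, 0.90]

def filter_feed(posts: list[str], user_id: int, suppress: set[str]) -> list[str]:
    """Drop each post containing a targeted emotion word with the user's
    omission probability, independently on every feed load (the study says
    omission applied 'for that specific viewing')."""
    rate = omission_rate(user_id)
    kept = []
    for post in posts:
        words = set(post.lower().split())
        if words & suppress and random.random() < rate:
            continue  # post silently omitted from this viewing
        kept.append(post)
    return kept

# Example: a user in the "reduced positive content" condition
feed = ["I love this weather", "Meeting at 3pm", "Feeling sad today"]
print(filter_feed(feed, user_id=12345, suppress=POSITIVE_WORDS))
```

Run it a few times and the “I love this weather” post vanishes from some viewings and not others, which is exactly the sort of silent tinkering a user has no way to detect.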

Now look how Kramer (aka Danger Muffin) minimizes the impact of the experiment in his Facebook-posted explanation:

Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500)…

Ah, we go from “up to a 90 percent chance” to “very minimally deprioritizing a small percentage of content.” Isn’t it amazing how creatively one can characterize the exact same study in two virtually contradictory ways?

Was it Significant or Not?

The study itself makes multiple claims and conclusions about the significance and impact of their findings (despite their ludicrously small effect sizes). Somehow all of these obnoxious, over-reaching claims got past the PNAS journal reviewers (who must’ve been sleeping when they rubber-stamped this paper) and were allowed to stand without qualification.
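To see why “statistically significant” and “meaningful” are not the same thing, here’s a quick illustration. The numbers below are invented purely to make the statistical point; they are not the study’s actual values. With a sample anywhere near N = 689,003, even a trivially small difference sails past the p < 0.05 bar:

```python
from scipy import stats

# Purely hypothetical summary numbers (NOT the study's actual values):
# percentage of positive words per post, split into two very large groups.
n_per_group = 344_500          # roughly half of the study's N = 689,003
mean_control, mean_treated = 5.00, 4.96
sd = 5.0                       # assume a similar spread in both groups

t, p = stats.ttest_ind_from_stats(mean_treated, sd, n_per_group,
                                  mean_control, sd, n_per_group)
cohens_d = (mean_treated - mean_control) / sd

print(f"p-value:   {p:.5f}")     # ~0.0009: "statistically significant"
print(f"Cohen's d: {cohens_d}")  # -0.008: an effect far too small to notice
```

With samples that large, almost any difference registers as significant; the effect size is what tells you whether the difference actually matters, and these effect sizes were tiny.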

In Kramer’s explanation posted on Sunday, he suggests their data didn’t really find anything people should be concerned about anyway:

And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it…3

Which directly contradicts the claims made in the study itself:

These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks [...]

Online messages influence our experience of emotions, which may affect a variety of offline behaviors.

Look-y there — no qualifiers on those statements. No saying, “Oh, but this wouldn’t really impact an individual’s emotions.” Nope, in my opinion, a complete contradiction of what one of the researchers is now claiming.

But Was It Ethical?

A lot of controversy has surrounded whether this sort of additional manipulation of your Facebook news feed is ethical, and whether it’s okay to embed a global research consent form into a website’s terms-of-service agreement. (Facebook already manipulates what you see in your news feed via its algorithm.)

First, let’s dispense with the red-herring argument that this research is no different from the internal research companies do for usability or design testing. That kind of research is never published, and it is never done to examine scientific hypotheses about human emotional behavior. Suggesting the two are the same thing is comparing apples to oranges.

Research on human subjects generally needs to be signed off on by an independent third party called an institutional review board (IRB). These are usually housed at universities and review all the research being conducted by the university’s own researchers to ensure it doesn’t violate things like the law, human rights, or human dignity. For-profit companies like Facebook generally do not have an exact IRB equivalent. If a study on human subjects wasn’t reviewed by an IRB, whether it was ethical or moral remains an open question.

Here’s “data scientist”4 Kramer’s defense of the research design, as noted in the study:

[The data was processed in a way] such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.

However, Kashmir Hill suggests that the Facebook Data Use Policy was changed 4 months after the study was conducted to explicitly allow for “research” use of Facebook data.

They also seem to have fudged the question of getting a university IRB’s approval for the study. Hill earlier reported that Cornell’s IRB didn’t review the study.5 None of the researchers have stepped forward to explain why they apparently told the PNAS editor they had run it by a university’s IRB.

The UK Guardian’s Chris Chambers offers up this summary of the sad situation:

This situation is, quite frankly, ridiculous. In what version of 2014 is it acceptable for journals, universities, and scientists to offer weasel words and obfuscation in response to simple questions about research ethics? How is it acceptable for an ethics committee to decide that the same authors who assisted Facebook in designing an interventional study to change the emotional state of more than 600,000 people did, somehow, “not directly engage in human research”?

Icing on the Cake: The Non-Apology

Kramer didn’t apologize for doing the research without users’ informed consent. Instead, he apologized for the way he wrote up the research:

I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused.

People aren’t upset that you did the research; they’re upset that you did the research on them without their knowledge or consent. And sorry, Facebook: burying “consent” in thousands of words of legal mumbo-jumbo may protect you legally, but it doesn’t protect you from common sense. Or from people’s reactions when they find out you’ve been using them like guinea pigs.

People simply want a meaningful way to opt out of having experiments conducted on them and their news feeds without their knowledge or consent.

Facebook doesn’t offer this today. But I suspect that if Facebook wants to continue doing research of this nature, it will be offering this option to its users soon.

An Ethics Case Study for All Time

This situation is a perfect example of how not to conduct research on your users’ data without explicit consent from them. It will be taught in ethics classes for years — and perhaps decades — to come.

It will also act as a case study of what not to do as a social network if you want to remain trusted by your users.

Facebook should offer all users a genuine apology for conducting this sort of research on them without their explicit knowledge and permission. They should also change their internal research requirements so that all studies conducted on their users go through an external, university-based IRB.

 

Further reading

Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach?

Facebook Added ‘Research’ To User Agreement 4 Months After Emotion Manipulation Study

 

Reference

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS. www.pnas.org/cgi/doi/10.1073/pnas.1320040111

 

Footnotes:

  1. Which they already told us in the study: “A test of whether posts with emotional content are more engaging.”
  2. Facebook seems less connected with my real life, seeing as my own news feed seems to have largely gone from posts about people’s lives to “links I find interesting” — even though I never click on those links!
  3. Which is a researcher-squirrely way of saying, “Our experiment didn’t really find any noteworthy effect size. But we’re going to trumpet the results as though we did (since we actually found a journal, PNAS, sucker enough to publish it!).”
  4. I use quotes around this title, because all researchers and scientists are data scientists — that’s what differentiates a researcher from a storyteller.
  5. In fact, they note Hancock, a named author on the paper, only had access to results — not even the actual data!

 

