I've had a lively and intelligent debate with my friend Shel Holtz via email about Facebook's recent feed experiment, published in the most recent PNAS. To recap what Facebook did: it adjusted the emotional content of the news feeds of approximately 700,000 users. For one group, negative emotions were largely removed from the stream; for another, positive emotions were removed. The net effect was that users who saw mostly negative content posted updates that were themselves more negative (and vice-versa). In other words, in the language of the study, the emotional content of our social media stream is "contagious."
There has been quite an uproar over this study, with many voices on the web calling it unethical. Shel recently wrote a thoughtful post on the subject, and he raised a very interesting and potentially troubling point: isn't this just A/B testing? Is there any difference between what Facebook did, and Buzzfeed testing 25 different headlines to see which one got the most clicks? Essentially, isn't this all just part of optimizing your site, and don't we as marketers want to manipulate emotions? Isn't that the point of what we do?
I don't agree that this was a simple A/B test, but the distinction is a fine one. I want to briefly address a comment Shel left me on his blog, not because I enjoy picking fights with Shel (I think this is our first!) but because his point of view is extremely reasonable and logical, and entirely emblematic of a legitimate, contrarian take on this study.
> Let's say I'm an advertiser. I create three different versions of a commercial. My goal is to get people all weepy. (You've seen these -- starving children for World Vision, injured animals for the ASPCA, polar bears on small pieces of ice for the World Wildlife Fund.) I run a different commercial in each of three geographic areas to see which one results in more donations, then choose that one for national distribution.
>
> It is clearly an experiment to see if I can manipulate people's emotions.
>
> Sorry, I just don't see what Facebook did as any different from this at all. It happens all the time. The only difference here was that in this case, in addition to figuring out what makes for a better News Feed (as Facebook does daily with hundreds and hundreds of similar tests), a couple researchers were invited to participate so they could write a paper about it.
Here is where Shel is 100% dead-on: Facebook has been optimizing its content like this since the beginning. So has Buzzfeed. In fact, any site responsible for attracting eyeballs is practically obligated to do this kind of testing, isn't it? And ultimately, to Shel's point, if I am filming a TV spot for a charity, you betcha I want to make you "feel bad."
But there is a subtle difference, and it lies in a bit of faulty parallelism in Shel's first paragraph, above. In his example, the advertiser wants to see which commercial makes people "all weepy." But that's not what gets measured, is it? What was measured, in that example, was donations. Human subjects weren't actually studied; a financial variable was, and the content was optimized for that variable.
So how would you *actually* measure the stated goal, making people "all weepy"? You'd call me. I would recruit a number of qualified respondents, inform them that they were being studied, and request their informed consent. I would then try to make them cry, with content, or onions, or *Ordinary People*, or whatever. And if I were going to publish the results in an academic journal, I would have to run my methods past a review board.
Facebook tried to make people weepy, not to optimize content. They didn't observe; they intervened. They did not get informed consent, and they did NOT submit their data collection methods for review, merely the already-collected data. Was anyone harmed by this? That may be up to a court someday. I offer only two things to consider. One is the "eggshell skull" principle: it doesn't matter how fragile a victim might have been; damage is damage. The other: do you have a teenager? Does that teenager sometimes have a tough time with their emotions? If Facebook were eliminating positive content from their feed, wouldn't you want to know about it?
Yes, we get manipulated every day by the media. But when a study moves from bits and bytes to human subjects, the federal Common Rule comes into play. Is manipulation unethical? I leave that to wiser minds. Is this *study* unethical? That, I think, is a different story.