This defense of Facebook by Tal Yarkoni is telling in so many ways. Let me count some of them.

Yarkoni begins by taking note of the results of the experiment:

The largest effect size reported had a Cohen’s d of 0.02–meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.
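
(For anyone who wants the arithmetic behind that height analogy spelled out, here is a back-of-the-envelope restatement using only the numbers Yarkoni cites; the mean-difference notation is my own shorthand, not anything from his post.)

```latex
% Cohen's d is a standardized mean difference:
\[ d = \frac{\bar{x}_1 - \bar{x}_2}{s} \]
% With d = 0.02 and s \approx 2.8 inches (the height SD Yarkoni cites),
% the implied shift in the mean is:
\[ \bar{x}_1 - \bar{x}_2 = d \cdot s = 0.02 \times 2.8\,\text{in} \approx 0.056\,\text{in} \approx \tfrac{1}{20}\,\text{in} \]
```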

This seems to be missing the point of the complaints about Facebook’s behavior. The complaints are not “Facebook successfully manipulated users’ emotions” but rather “Facebook attempted to manipulate users’ emotions without informing them that they were being experimented on.” That’s where the ethical question lies, not with the degree of the manipulation’s success. “Who cares if that guy was shooting at you? He missed, didn’t he?” — that seems to be Yarkoni’s attitude.

Here’s another key point, according to Yarkoni:

Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions.

It may be true that “many people” assume that Facebook added content, but I have not seen even one say that. Does anyone really believe that Facebook is generating false content and attributing it to users? The concern I have heard people express is that they may not be seeing what their friends or family are rejoicing about or lamenting, and that such hidden information could be costly to them in multiple ways. (Imagine a close friend who is hurt with you because you didn’t commiserate with her when she was having a hard time. After all, the two of you are friends on Facebook, and she posted her lament there — you should have responded.)

But here’s the real key point that Yarkoni makes — key because it reveals just how arrogant our technological overlords are, and how deep their sense of entitlement:

It’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about Facebook “manipulating” user experience in support of research realize that Facebook is constantly manipulating its users’ experience. In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook…. So I don’t really understand what people mean when they sarcastically suggest — as Katy Waldman does in her Slate piece — that “Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed”. Where does Waldman think all that positive and beautiful stuff comes from in the first place? Does she think it spontaneously grows wild in her news feed, free from the meddling and unnatural influence of Facebook engineers?

Well, I’m pretty sure that Katy Waldman thinks “all that positive and beautiful stuff comes from” the people who posted the thoughts and pictures and videos — because it does. But no, says Yarkoni: All those stories you told about your cancer treatment? All those videos from the beach you posted? You didn’t make that. That doesn’t “come from” you. Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn’t be so widely used — for content that is provided wholly by its users.

Of course “every single change Facebook makes to the site alters the user experience” — but not all changes are ethically or substantively the same. Some manipulations are more extensive than others; changes in user experience can be made for many different reasons, some of which are better than others. That people accept without question some changes while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do. Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal, just as they would if they took their car to the body shop and got it back painted a different color. And in that latter case they would justifiably be upset even if the body shop pointed out that there was small print in the estimate form they signed permitting the shop to change the color of their car.

One last point from Yarkoni, and this one is the real doozy: “The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life.” Get that? In Yarkoni’s ethical cosmos, Facebook, Google, and Amazon — and presumably every other company you do business with, and for all I know the government (why not?) — can manipulate you all they want as long as they “improve your quality of life” according to their understanding, not yours, of what makes for improved quality of life.

Why do I say their understanding and not yours? Because you are not consulted in the matter. You are not asked beforehand whether you wish to participate in a life-quality-improving experiment, and you are not informed afterwards that you did participate. You do not get a vote about whether your quality of life actually has been improved. (Our algorithms will determine that.) The Great Gods of the Cloud understand what is best for you; that is all ye know on earth, and all ye need know.

In addition to all this, Yarkoni makes some good points, though they’re generally along the other-companies-do-the-same line. I may say more about those in another post, if I get a chance. But let me wrap this up with one more note.

Tal Yarkoni directs the Psychoinformatics Lab in the Psychology department at the University of Texas at Austin. What do they do in the Psychoinformatics Lab? Here you go: “Our goal is to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data.” The key term here is “large-scale,” and no one can provide vast amounts of this kind of data as well as the big tech companies that Yarkoni mentions. Once again, the interests of academia and Big Business converge. Same as it ever was.

Comments

  1. I think this post is full of rather uncharitable characterizations of what I said in my post, which I would encourage readers to read and judge for themselves. That said, I'll comment on several points here.

    “Who cares if that guy was shooting at you? He missed, didn’t he?”

    Please read the comments on my blog. I was quite clear that I was not pointing out the small effect size as an argument against ethical concerns, but to give people an intuitive understanding of what we're talking about.

    Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn't be so widely used — for content that is provided wholly by its users.

    I think it's quite clear from what I wrote that I'm in no way suggesting that the content doesn't come from the users. What I was asserting is that the news feed has, since its inception, always been filtered by Facebook per an algorithm that nobody knows. This means that there is no "default" or "natural" news feed for people to have expectations about. The news feed has continuously changed, and undoubtedly often in ways that are much, much sharper departures than what we're seeing here. That point has been completely neglected by Waldman and others, who write as if they believe that the existing news feed was something like an unadulterated window into what one's friends and family are posting. It isn't, and has never been.

    By way of illustration, imagine it turned out that Facebook was already displaying news items that were heavily biased to accentuate the positive. Would you then complain that this is unethical, and that Facebook should bring back more negative items? What if it turned out that, to the contrary, showing people more negative items made them click on ads more? My point is very simply that there is no "normal" here outside of what Facebook's algorithm determines you should see.

    That people accept without question some changes while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do

    This is nothing but an ad hominem attack. If I didn't want people to think, I wouldn't have written my post. I will charitably assume that the same is true in your case.

    Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal,

    Of course they do. And I'm arguing that Facebook *hasn't* unilaterally changed the deal, because it would be naive in the extreme to think that this kind of A/B testing isn't being done all the time, everywhere, by virtually every major company. Moreover, it's not clear how it's qualitatively different from the way marketers have *always* tried to manipulate people into buying their products.

    As I said in my post, I think it's well worth having broader discussions about the kind of society we want to have, and how we want our data to be used. But to pretend that this particular case is unusual, or came out of nowhere, strikes me as naive. [continued below]

  2. One last point from Yarkoni, and this one is the real doozy: “The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life.” Get that? In Yarkoni’s ethical cosmos, Facebook, Google, and Amazon — and presumably every other company you do business with, and for all I know the government (why not?) — can manipulate you all they want as long as they “improve your quality of life” according to their understanding, not yours, of what makes for improved quality of life.

    I really don't know how you got from what I said to what you suggest I'm saying. In the very sentence right after the one you quote, I point out that your mother and your boss also constantly try to manipulate your behavior, but you don't resent them purely on those grounds. You would resent them if they tried to manipulate you for their own gain and gave you nothing in return. And I think that's exactly the case with Facebook, Amazon, Google, and so on. Nowhere did I imply (and certainly not think!) that it's Facebook that gets to decide whether you're getting anything out of the exchange. To the contrary, I don't think anyone's vote but yours counts. If you don't want your data used in this way (despite authorizing it when you agreed to the Terms of Service), I think that's a perfectly reasonable sentiment, and the right response to that is to (a) close your Facebook account and (b) write your congressional representatives to request that they pass better data privacy laws. It seems very obvious to me that only you are the person who gets to decide whether you gain something from Facebook's service or not.

    Tal Yarkoni directs the Psychoinformatics Lab in the Psychology department at the University of Texas at Austin. What do they do in the Psychoinformatics Lab? Here you go: “Our goal is to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data.” The key term here is “large-scale,” and no one can provide vast amounts of this kind of data as well as the big tech companies that Yarkoni mentions. Once again, the interests of academia and Big Business converge. Same as it ever was.

    I appreciate the plug. If anyone's interested in any of my research, they're welcome to contact me. I don't appreciate your impugning my motives or work though, and would like to note that I have never received any research funding or data from industry, and for that matter, have never collaborated with anyone at Facebook, Twitter, or any other social media company.

  3. Thanks for this response, Dr. Yarkoni. I am sure that I did indeed read some things uncharitably, for which I apologize (more on this later). But I still have some major concerns about your position. Details:

    “I was quite clear that I was not pointing out the small effect size as an argument against ethical concerns, but to give people an intuitive understanding of what we're talking about.” I don't think you were clear about that at all, but I am happy to accept this as a clarification and withdraw any complaints on that score.

    “I think it's quite clear from what I wrote that I'm in no way suggesting that the content doesn't come from the users. What I was asserting is that the news feed has, since its inception, always been filtered by Facebook per an algorithm that nobody knows.” I certainly agree with that point, which is, I think, non-controversial. However, this is what you said in your post: “Where does Waldman think all that positive and beautiful stuff comes from in the first place?” And your answer was that it comes from the Facebook algorithm, which it clearly does not. A filter is not a source, and you were not clear about that, I think. It seems to me that you were so concerned to demonstrate Facebook’s role in what people see in their timeline that you erased the content that users provide. I do not suggest that you did that intentionally.

    “This is nothing but an ad hominem attack. If I didn't want people to think, I wouldn't have written my post. I will charitably assume that the same is true in your case.” That is an absolutely valid criticism, and I apologize. Let me rephrase my complaint in what I hope is a more constructive way by quoting you further:

    “If you don't want your data used in this way (despite authorizing it when you agreed to the Terms of Service), I think that's a perfectly reasonable sentiment, and the right response to that is to (a) close your Facebook account and (b) write your congressional representatives to request that they pass better data privacy laws. It seems very obvious to me that only you are the person who gets to decide whether you gain something from Facebook's service or not.”

    To which I would reply that this is an unnecessarily binary way of thinking about the issue. It’s absurd to suggest that the only alternatives people have are to accept whatever Facebook is doing or close their Facebook accounts. It is perfectly legitimate for people to protest some of Facebook’s policies — especially the policies that are buried in thousands of words of ToS legalese, a practice that has come under criticism in many courts — and try to get Facebook to change those policies, so they can continue to use a service they value.

    [cont'd]

  4. … Similarly, human experiences aren’t divided into the binary of manipulative/not-manipulative. You write, “I point out that your mother and your boss also constantly try to manipulate your behavior, but you don't resent them purely on those grounds. You would resent them if they tried to manipulate you for their own gain and gave you nothing in return.” If by “manipulate” you mean “change,” that’s one thing; but that’s not the usual connotation of “manipulate.” To manipulate someone is to seek to alter their behavior by covert or deceitful means, without acknowledging that that’s what you’re trying to do — which is precisely what Facebook did in this case. If my mother or my boss tried to manipulate me, according to this typical use of the term, I would absolutely resent it. But if they came to me directly and told me that there are ways in which I needed to change, I would have no cause for resentment.

    So I was wrong to say that you were trying to prevent people from thinking, and again, I apologize. But I think the unintentional effect of the simplistic binaries of your argument — accept or shut down, manipulate or don't manipulate — is to shut down the kind of thinking we need in this situation. People have every right and every reason to argue for Facebook to behave in non-manipulative ways, according to that standard meaning of “manipulate.” But this requires complex negotiations, give and take on both sides, not A/B thinking.

    As for me, I did choose the “close my account” option, in early 2007, and have never regretted it — I think Facebook is a deeply corrupt company and even, in a non-trivial sense of the term, evil — but I understand why people don’t want to do that if they don’t have to.

    “I don't appreciate your impugning my motives or work though, and would like to note that I have never received any research funding or data from industry, and for that matter, have never collaborated with anyone at Facebook, Twitter, or any other social media company.” Thanks for this clarification. I think what I need to apologize for here is a lack of clarity: I did not mean to impugn your motives, but to suggest that someone whose professional goal is “to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data” has a pretty strong incentive to be sympathetic to other organizations that share that goal. If that’s wrong, or offensive, please let me know.

    And again, thanks for the reply.

  5. What a fetid swamp this back-and-forth is. I hardly dare wade in, considering it’s a battle between ironic mockery and snark. However, I have one idea to reinforce, which is that the opt in or out (wholesale) choice presented by FB and other social media is a de facto limitation users absolutely must accept. There is utterly no negotiation of the ToS. Further, it is no shield that this practice is widespread. I get why it’s been normalized, but the same can be said of genocide under the right conditions. Prof. Jacobs opines (correctly) that FB is deeply corrupt and nontrivially evil. The obvious choice is to opt out completely, but there has never been a lack of people willing to be herded into selling their souls for something sugarcoated.

  6. "Fetid swamp"? I thought it was a reasonably polite exchange of views.

    Re: "There is utterly no negotiation of the ToS," there can be, but if and only if there are widespread and credible threats to leave the service altogether. However, few users' threats are credible and Facebook knows it.

  7. ".. the news feed has, since its inception, always been filtered by Facebook per an algorithm that nobody knows."

    Who says this isn't bothering people already? People give Facebook some leeway over this for the following reasons:

    a. The News Feed algorithm is there to provide the user with "more interesting content," i.e., this manipulation happens for the user's own good, not Facebook's.

    b. The algorithm is secret. So no one can figure out whether it serves Facebook primarily, which would be a problem, as was the case with the "study".

    c. Not sure if I'm correct, but I believe you do retain the option of a non-filtered news feed (the "Most Recent" view, as an alternative to "Top Stories"). But maybe I'm wrong, or this is no longer the case.

    d. They have no other choice, unless they want to leave Facebook.

    So clearly, people are already wary of how Facebook manipulates their communication, and in this instance Facebook has overstepped.
