The Problem Isn’t Just Facebook: It’s You Too.

Lately, news of the Facebook emotional contagion study and of Facebook Messenger’s permissions has flooded feeds and inboxes. The former was a paper published in the Proceedings of the National Academy of Sciences in June 2014. The paper described an experiment in which the algorithm controlling some users’ news feeds was altered to show more “happy” or “sad” posts. The subsequent posts made by those users were then tracked to determine whether the emotional tone of their news feed affected the emotional tone of the content they produced. The latter news story comes primarily from click-bait articles claiming that Facebook has “crossed the line” with its “new” messenger mobile application by requiring extraordinary permissions on users’ phones. While the list of permissions required by Facebook’s mobile messenger app is accurate, the descriptions of those permissions are not. Nevertheless, these articles have left my news feed inundated with declarations of boycotts and exclamations about the tyranny of Facebook.

A parody image of the game Peasant's Quest from Homestar Runner.

Face off Against the Facebook Dragon (and lose, because Facebook always wins) in End User’s Quest! Made by Social Scientists and the Media

I want to present a radically different perspective on Facebook. This isn’t to say that my friends and anthropologically minded colleagues have not raised important points about ethics, power, and control when it comes to the Internet. However, I feel that many discussions about Facebook (and other large web presences, e.g., Google, OkCupid, or Twitter) have become routine and tired in public social science. Each new feature, experiment, interface element, or app is discussed through tropes from tragic fairy tales. Facebook is portrayed as a big, greedy dragon taking advantage of the peasants, stealing their data, and toying with their lives. The users are treated as peasants running about on fire, helpless as they wait for a hero who will never come.

Yes, the ethics of data collection and usage are important discussions. But, the framing of that discussion must step away from painting Facebook as evil and tyrannical and the user as helpless. This kind of narrative does little to aid problem solving. Rather, it exacerbates the problems by alienating the Internet’s creatives (whether they be the web designers of small businesses or the developers at Facebook) and creating paradoxical relationships between end users and developers that hinder fruitful discussions.

Privacy on the Internet is a myth. So, why do we keep telling it?

The problem is, there is no such thing as privacy on the Internet.1 At least not in the common way most end users think about privacy, that is, as something that is unknown to others or shared as a secret through trusted relationships. When we discuss privacy on the Internet, we describe it as control over viewership, as in “who can view my data.” The problem arises when this becomes conflated with privacy as secrecy. When users become convinced that posting on Facebook with the privacy setting “friends” means that only their friends can ever view or use the post, they become vulnerable and lose control through their own ignorance of the system.

Posting anything, anywhere on the Internet isn’t like whispering a secret in a friend’s ear and trusting they won’t tell. It is more like yelling into a bullhorn across Times Square to a friend on the other side. Most people won’t care about what you are saying and will just ignore you. But, anyone who is part of that landscape could listen in if they wanted to. There is no privacy—in the sense of secrecy—in this act. When users post to any website, there is a team of people, including system admins, data managers, and research and development staff, who have access to that post in raw form, as part of a larger data set, or as part of descriptive aggregate data.

We need to start talking about data security instead of data privacy. Security means that there are technical and social procedures in place to prevent data from being accessed or shared in unintended ways. Most importantly, talking about security does not render the system or its administrators and owners invisible. Security can be a negotiation between the user and the platform provider to agree on terms for how the user’s data is protected and shared. Privacy, especially among American users, is a personal and emotional topic that many feel is non-negotiable.

When Facebook Messenger is installed on an Android device, Android requires that the user be notified of which aspects of the phone will be used by the application. This process is what created the recent panic surrounding the app. Facebook Messenger needs access to take pictures and videos, record audio, directly call phone numbers, receive text messages (SMS), and read user contacts. The information that has been circulating suggests that this means Facebook will use your phone’s camera and microphone to spy on you and that it will start texting and calling your contacts while impersonating you. Some articles have even gone so far as to suggest that there is no legitimate reason for these permissions. Before going any further, let me clear this up: none of this is true. How these permissions are actually used is documented by Facebook, and those reasons are pretty straightforward. For example, Facebook Messenger needs access to your camera and microphone so that you can send photos and videos (with audio) to friends from within the app. Even the original author of the article that spawned the latest string of posts, when confronted with this, stated that his point was that “hackers” could take advantage of you via the app, not that Facebook would.
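For context, an Android app declares every capability it may ever use in its manifest file, and Android builds that install-time notice from this list. Here is a minimal, illustrative sketch of such a declaration (the package name is hypothetical and this is not Facebook’s actual manifest, though the permission identifiers are the real Android names for the capabilities listed above):

```xml
<!-- Illustrative AndroidManifest.xml excerpt. On Android of this era,
     permissions are granted all-or-nothing at install time: the user
     either accepts the entire list or does not install the app. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.messenger">
    <uses-permission android:name="android.permission.CAMERA" />        <!-- take pictures and videos -->
    <uses-permission android:name="android.permission.RECORD_AUDIO" />  <!-- record audio -->
    <uses-permission android:name="android.permission.CALL_PHONE" />    <!-- directly call phone numbers -->
    <uses-permission android:name="android.permission.RECEIVE_SMS" />   <!-- receive text messages (SMS) -->
    <uses-permission android:name="android.permission.READ_CONTACTS" /> <!-- read user contacts -->
</manifest>
```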

Unsurprisingly, the discussion of the Facebook Messenger app has centered on privacy. The focus on privacy generally leads users and social scientists alike to start from this framework when discussing the dangers of using technology. But, how do we have a fruitful discussion of realistic dangers in the framework of privacy?

The Android permissions notice makes it clear that privacy is not possible. In order for your data to be transmitted through Facebook, it has to pass through the non-private space of Facebook. This makes sense. If you take a photograph with your phone, it is not possible to share it (other than physically handing someone your phone) without also sharing it with your service provider and the application that you use to share it, such as Facebook or Instagram or Google Chat. But, the framework of privacy tells us that our data should not be shared with anyone but those we intend. So, how do we reconcile a system that tells us privacy is impossible with our insistence on having privacy? Primarily, we pretend that we do have privacy when we do not. We become complacent with our privacy settings and ignore the non-private bits of the system that we do not regularly see. However, we cannot avoid the privacy issue all of the time. When we install a new app, perform upgrades, or read an article online that points it out, we are forced to confront it. This is not a productive way to work through these issues.

If we were to approach Facebook Messenger from a position of security rather than privacy, we could enter into a realistic dialogue about how data is actually used. We could talk about the connections between operating systems like Android and apps like Facebook Messenger to understand how secure the systems are and adjust our usage of them accordingly. Privacy then becomes the result of the user’s actions and decisions more than the nature of the system.

By telling people they have no responsibility to understand how the Internet works, we make them more vulnerable, not less.

The discussions of the Facebook emotional contagion study have focused primarily on one ethical issue: the experiment’s participants were unaware they were involved. It is common practice in academic research to obtain informed consent prior to conducting any experiment on human subjects. However, how does this translate to Facebook?

The IRB that reviewed the project for the academic researchers involved approved the study because Facebook’s terms of service state very clearly that Facebook uses data for “troubleshooting, data analysis, testing, research and service improvement,” and because altering the news feed algorithm and tracking the results is common practice at Facebook. However, many online discussions of the study claimed that the terms of service were not sufficient for consent because they are too difficult to understand and users do not read them. Additionally, many claimed that users did not know that the news feed was filtered, let alone regularly manipulated.

Terms of service are generally long and complicated. However, when you have a service as complex as Facebook or Google, there is really no way to convey everything in a single blurb. It is fanciful to expect Facebook to provide a robust and complex service while also having a short, uncomplicated terms of service that maintains transparency. Facebook has made fairly significant improvements to its terms of service over the years by reducing the content, adding graphics, and simplifying the language. This isn’t to say there is no room for improvement, because there is much Facebook can do to improve. However, the discussion of the emotional contagion study has largely left no room for terms of service to be considered as a method for consent because the terms of service are, in the language of the Internet, tl;dr.

This argument essentially means that Facebook should provide an impossible document and that users have no responsibility for their own knowledge of it even if Facebook could produce it. This would be like an anthropologist handing an interlocutor an informed consent form, then having an authority say with a wink, “You don’t have to read that; it is too difficult for you to understand. Just go ahead and sign it, and we will talk later about how evil that anthropologist is for invading your privacy.”

Some suggested that Facebook should have notified users before or after the study that they were participants. Yet, in the context of normal operating procedures at Facebook and virtually every other website or app provider, this is onerous given the frequency of these types of tests and experiments. In order for any website or application to function and improve (including improving security), it must conduct regular tests and experiments and monitor activity. (Even this website is monitoring you right now with Google Analytics.)
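That monitoring is usually just a few lines of boilerplate. Below is the standard Google Analytics (analytics.js) embed that sites paste into their pages, lightly reformatted and commented for readability; the “UA-XXXXXXXX-X” property ID is a placeholder:

```html
<script>
  // The standard analytics.js loader: define a command queue named `ga`
  // and asynchronously pull in Google's tracking script.
  (function (i, s, o, g, r) {
    i['GoogleAnalyticsObject'] = r;
    i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); };
    i[r].l = 1 * new Date();
    var a = s.createElement(o), m = s.getElementsByTagName(o)[0];
    a.async = 1; a.src = g; m.parentNode.insertBefore(a, m);
  })(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

  ga('create', 'UA-XXXXXXXX-X', 'auto'); // placeholder property ID
  ga('send', 'pageview');                // report this page view to Google
</script>
```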

OkCupid was right: “If you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.” You cannot have services that are tailored to each user without those services experimenting on you constantly. Every single change to functionality or design is tracked and monitored to ensure that it is supporting the service’s needs and goals.
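In practice, many of these experiments amount to little more than deterministic bucketing plus logging. Here is a minimal sketch (the function name, user ID, and experiment name are all hypothetical) of one common way a site might split users between an old and a new design:

```typescript
import { createHash } from "crypto";

// Deterministically assign a user to an experiment variant by hashing the
// user ID together with the experiment name. The same user always lands in
// the same variant, so their experience stays consistent across visits.
function assignVariant(userId: string, experiment: string, variants: string[]): string {
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  return variants[digest.readUInt32BE(0) % variants.length];
}

// Hypothetical experiment: does a redesigned "share" button get more clicks?
const variant = assignVariant("user-12345", "share-button-redesign", ["control", "redesign"]);
console.log(variant); // served to the user and logged next to their click metrics
```

Because the assignment is a pure function of the user ID and the experiment name, the user never has to be asked anything; the service simply compares metrics between the two groups.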

Further, it isn’t just the services on the Internet that do this kind of experimentation on users. When you shop for groceries with your rewards card, you are being experimented on. When you enter the library through a gate counter, you are being experimented on. When you call a support line for your cable, you are being experimented on. Even your friends might be experimenting on you for blog posts and news articles.

Given the pervasiveness of this practice, why are users and social scientists so surprised by the emotional contagion study? Much as they do with privacy, users tend to ignore the labor and practices that build the services they use, preferring to imagine these services coming into being fully formed from established knowledge. Likewise, many social scientists work within contexts of social justice and power and thus ignore the nature of web development in favor of narratives about the exploited user. So, how these services are developed is not commonly considered by users or the academy. When the emotional contagion study resulted in a published paper, the practice was thrust into the foreground of people’s discourse, making people feel suddenly violated. This was exacerbated by there being no clear connection between the experiment and Facebook’s for-profit activities, which users tend to accept more readily as part of the business of the Internet.

There is much that Facebook and other services could do to improve communication and transparency. However, when we treat Facebook as a tyrannical, power-hungry dragon and users as helpless, ignorant peasants, we actually make users more vulnerable. Services like Facebook must study their users in order to provide the services those users want. Facebook must provide some sort of documentation of what that means, such as terms of service. However, when we tell people that they are incapable of understanding and not responsible for knowing how that technology works or what those terms of service mean, we are creating an environment for exploitation.

I am not saying we should blame the victims of legitimate digital malfeasance. I am saying that we (social scientists and other public writers) are part of the problem.

If we want to truly tackle the issues that the Internet presents, including surveillance, financial exploitation, and malicious manipulation, I suggest that it would be more fruitful to begin with more holistic questions about the nature of relations between the Internet’s creatives, end users, and the technology between them. We must escape the tropes that constrain our discussions to evil web presences and their innocent end users.

I am not saying we should blame the victims of legitimate digital malfeasance. However, I am saying that Facebook did not create the complex systems through which we introduce technology into our lives. We (as a society) did. We helped create users who are uneducated about how Internet technologies work. We helped create the environment in which we use technologies without learning what they do. We created myths that allow us to ignore the realities of data security. Facebook lives and operates in this space. But we created it and allowed it to fester.

So, we are not the prince gallantly running in to save the peasants from the dragon. We are the king who disbanded the army and now allows the dragon to nest next door.


1 I was called out on Reddit for not discussing encryption and similar security technologies. I think this is a good discussion. To read it or jump in, head on over to Reddit.

4 Comments

  • AD says:

So why do they need to “directly call phone numbers”? The problem is that the wording is not specific enough… it is much too open of an agreement.

    • Directly call phone numbers: This permission allows you to call a Messenger contact by tapping on the person’s phone number, found in a menu within your message thread with the person — (Official FB Source)

However, more to the point, your issue isn’t actually with Facebook. Rather, the issue you have is with Android and the phone manufacturers that use it, which set the scope of permissions. Android apps get all-or-nothing access. Google, however, appears to users as providing transparency and thus is the “good guy,” while Facebook, which has only two choices, (1) provide the requested features and ask for permissions people find suspicious or (2) fail to provide the requested features, looks like the “bad guy.”

      So, I stand by my original argument. The issue here isn’t just Facebook, it is the whole of multiple technologies and the myths we tell about them that lead to these types of situations. 🙂

  • Charlie says:

    I agree completely. One of the big misunderstandings of those (including me) who have had internet social networking for the majority of their social lives is to think that the social networking platform is some sort of public (in the sense of Times Square) space. It is less like that, and more like you are at someone’s house. Their house: their rules. Facebook fully owns and controls the software and platform you are using to network, and to think that they would not tweak it for their own personal gain is foolish. Facebook is not providing a social networking platform out of the goodness of their hearts. That is not to say that we should stop using it, just that we need to be aware that our profile is not “ours” really.

Yes, fair point, Charlie! I was going mostly for describing a scene with lots of people in it. 😉 But, I think your analogy works better in expressing the control and structure of the venue (i.e., Facebook). Maybe I should say it is like visiting a privately owned club. Your ID gets checked on the way in. There are secluded booths and a large dance floor. And, there’s lots of noise. Ha!
      Seriously, though. Thank you for your comment, very helpful!

