What Everyone Likes

September 22, 2020

I've had a lot of conversations over the last few months about how for-profit social media actively interferes with efforts to solve humanity's big challenges. Along the way, I've been surprised by how many people agree with the basic premise--Something Has Gone Wrong. What has not surprised me is that most people find it hard to take that analysis a little further and come up with actions they can take to be part of the solution. For-profit social media is an intimidating hydra to imagine opposing. In this post, I'm going to present a way of understanding the issue that helps point out ways to take action.

The first priority is to understand the problem. Social media raises questions about privacy from other users, privacy from network operators, and privacy from third parties (advertisers, government propaganda machines, etc.). Most of those questions about privacy raise further questions about security. But to start, we're going to set those questions aside. Instead, we're going to take the position of someone who is trying to influence public opinion--for instance, to destabilize a wealthy and powerful country by interfering with its elections and political legitimacy.

Sir Terry Pratchett's novel Night Watch gives a wonderful small-scale example of how this kind of manipulation works. In the scene quoted below, we find ourselves at a political soiree (similar to a $10k / head fundraising dinner) where a sneaky plan is being carried out to turn opinion against the current ruler.

On her apparently random walk to the buffet table, Madam happened to meet several other gentlemen and, like a good hostess, piloted them in the direction of other small groups. Probably only someone lying on the huge beams that spanned the hall high above would spot any pattern, and even then they'd have to know the code. If they had been in a position to put a red spot on the heads of people who were not friends of the Patrician, and a white spot on those who were his cronies, and a pink spot on those who were perennial waverers, then they would have seen something like a dance taking place.

There were not many whites.

They would have seen that there were several groups of reds, and white spots were being introduced into them in ones, or twos if the number of reds in the group was large enough. If a white left a group, he or she was effortlessly scooped up and shunted into another conversation, which might contain one or two pinks but was largely red.

Any conversation entirely between white spots was gently broken up with a smile and an "Oh, but now you must meet--" or was joined by several red spots. Pinks, meanwhile, were delicately passed from red group to red group until they were deeply pink, and then they were allowed to mix with other pinks of the same hue, under the supervision of a red.

In short, the pinks met so many reds and so few whites that they probably forgot about whites at all, while the whites, constantly alone or hugely outnumbered by reds or deep pinks, appeared to be going red out of embarrassment or a desire to blend in.

I love this passage because it describes the mechanism of manipulating opinion so clearly. The manipulators do not argue, they don't invoke us-vs-them, and they certainly don't try to use logic to achieve their goal. Instead, they try to shape how the world appears to their victims. It's gaslighting at a cottage-industry scale, with socialites carefully orchestrating the party so that it appears to everyone as if one side is unanimously agreed to be correct. Because this tactic relies on deliberate deception of its victims, I would describe it as immoral--a bad thing to do.

I bring this up because when I talk about social media, people often want to know what they can do. This is a noble impulse; much of the progress we've seen in the real world is the result of people earnestly asking this question. In the real world, if you are a motivated person near a place of crisis, there is usually a way for you to use your resources and your body to improve the situation a little.

However, when it comes to artificial, human-created systems like for-profit social media, this is not the case. If we use Sir Terry's analogy, it's as if each person at the party is blindfolded and must rely on the network itself for their picture of what's going on. Since the network's profitability depends on its ability to get users in front of advertisers, the network operators try to make sure that users see whatever maximizes engagement. This means that the network tries to surface whatever content will get the most sustained attention from its users, and makes it possible for advertisers and others to buy access to that attention.

Notice that this dynamic differs in an important way from the "real world" scenario I outlined earlier, where you can have a positive impact on any crisis that you or your resources can reach. Because the social network's algorithm largely controls who sees what, it can surface only those things likely to increase engagement. Unless the algorithm assigns high value to what you say, you may never appear to anyone else at all.

Now, if we believe in Hanlon's Razor--that is, "never attribute to malice that which is adequately explained by stupidity"--then we're supposed to assume (at least at first) that no one is being intentionally nefarious. Let's make that assumption and look at where it leads us.

The first thing we can imagine is very simple and requires no malice at all. A project manager walks over to an engineer's desk and says, "I think that we can tell how good our service is by how much time people spend on it. Stands to reason, right? If people enjoy our service, they'll spend more time here, and if they spend more time here, our advertisers are happier and we're making bank. Win-win-win. So what I want you to do is, I want you to design an algorithm that helps people find the content that gets them to stay longest." I just made this up, but I think it's a pretty fair representation and it doesn't require bad intent. However, it does require the conflation of "time users spend interacting with the service" and "value that users get from the service." In PR terms, this is convenient--it's the publicly palatable rationale for any decision that gets made to support the goal.
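
To make that conflation concrete, here's a minimal sketch of what "optimize for time spent" might look like once it turns into code. Everything in it--the class, the field names, the ranking rule--is my own invention for illustration, not anything a real network has published:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    predicted_dwell_seconds: float  # the model's guess at how long this user will linger

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Order the feed purely by expected time on site.

    Note what is *not* measured here: nothing asks whether the user found a
    post useful, true, or kind. "Longest predicted dwell" silently stands in
    for "most valuable."
    """
    return sorted(candidates, key=lambda c: c.predicted_dwell_seconds, reverse=True)
```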

So what does that algorithm look like? The first important fact here is that we don't know--the big networks keep their algorithms pretty close to the vest. But we do know bits and pieces, and we can make some general assumptions that are probably pretty close.

First, the algorithm would pay attention to how different items are received when they're posted. Do people read all the way to the end? Do they like, share, comment, or subscribe? Any item that generates this kind of engagement would be promoted to a wider audience.
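
As a rough sketch, that kind of signal-counting might look like the following. The signal names, weights, and threshold are all made up--real networks don't publish theirs:

```python
# Hypothetical engagement signals and weights -- illustrative only.
SIGNAL_WEIGHTS = {
    "read_to_end": 1.0,
    "like": 2.0,
    "comment": 4.0,
    "share": 6.0,
    "subscribe": 8.0,
}

def engagement_score(counts: dict[str, int]) -> float:
    """Collapse raw interaction counts into a single number."""
    return sum(SIGNAL_WEIGHTS.get(signal, 0.0) * n for signal, n in counts.items())

def should_promote(counts: dict[str, int], threshold: float = 50.0) -> bool:
    """Promote a post to a wider audience once its score clears a bar."""
    return engagement_score(counts) >= threshold

print(should_promote({"read_to_end": 20, "like": 10, "comment": 3}))  # True
```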

Next, the algorithm would make predictions about what is likely to generate engagement from each user. To do this, it needs a model of the item ("what is this post 'about,' and what other features does it have?") and a model of the user ("what features are common to the items this person spends time engaging with?").
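
A toy version of that prediction step, with invented feature names and weights, could be as simple as scoring the overlap between the item's features and the user's learned preferences:

```python
# A toy predictor: the item is a bag of features, the user profile is a
# learned weight per feature, and "predicted engagement" is their overlap.
# All names and numbers here are invented for illustration.
item_features = {"woodworking": 1.0, "video": 1.0, "outrage_language": 0.0}

user_profile = {           # learned from this user's past behavior
    "woodworking": 0.9,    # reliably reads woodworking posts to the end
    "video": 0.4,
    "outrage_language": 0.1,
}

def predicted_engagement(item: dict[str, float], user: dict[str, float]) -> float:
    return sum(weight * user.get(feature, 0.0) for feature, weight in item.items())

print(predicted_engagement(item_features, user_profile))  # higher score => shown sooner
```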

Note that none of this requires any kind of intent at all on the part of the network operator. No one need set out to say "If someone is interested in woodworking, or hate speech, or rejecting science, show them things like that." There is not even any guarantee that that's what will happen--the algorithm might find that controversy and disagreement generate engagement, and so it might promote posts that are the opposite of what a user thinks. The algorithm can only know three things: what the user does, what the item "looks like," and what the goal is.

So what does an item "look like" to an algorithm like this? We have a couple of fascinating windows into this. Have you ever seen Netflix show you a category of movies with a crazy name like "Summer Camp Movies From 1980-1995" or "Atmospheric Revenge Thrillers With Two Male Co-Stars?" These categories are likely assembled by looking at labels associated with things you have watched in the past. Each movie would have many labels: "Movies over 2 hours long," "Movies with a female lead," "Movies featuring natural settings," etc. The algorithm would look for overlap--labels shared by several movies you watched. It would also include your behavior--if you started watching ten different horror movies but turned them all off after the opening credits, the algorithm might not recommend more horror movies.
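
Netflix hasn't published how those micro-genres are actually built, but a crude label-overlap recommender in the spirit of that description might look something like this (the titles, labels, and scoring rule are all made up):

```python
# Illustrative only: made-up titles, labels, and a deliberately crude rule.
# Count labels from titles the viewer finished, subtract labels from titles
# they abandoned, then rank the catalog by how many "liked" labels each
# title shares.
from collections import Counter

catalog = {
    "Camp Crystal Summer": {"summer camp", "1980s", "horror"},
    "Lakeside Reunion": {"summer camp", "1990s", "drama"},
    "Quiet Vengeance": {"revenge", "atmospheric", "thriller"},
}

finished = [{"summer camp", "1980s", "comedy"}, {"summer camp", "1990s", "drama"}]
abandoned = [{"horror", "1980s"}]  # started, then switched off after the credits

label_affinity = Counter()
for labels in finished:
    label_affinity.update(labels)
for labels in abandoned:
    label_affinity.subtract(labels)

def score(title_labels: set[str]) -> int:
    return sum(label_affinity[label] for label in title_labels)

for title, labels in sorted(catalog.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(title, score(labels))
```

Notice that the abandoned horror titles actively push horror-labeled movies down the list--exactly the kind of behavioral signal described above.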

An interesting example of what can go wrong with an algorithm like this is the recent scandal over whether Twitter's image-cropping algorithm has a white bias. People noticed that the algorithm Twitter was using to automatically crop an image seemed to reliably center white faces instead of the faces of BIPOC. Twitter stated that the algorithm had been "test[ed] for bias" but said that they would try to fix the problem. In situations like this, it's very hard to understand why the algorithm made the choices it did. Machine learning algorithms need to be trained on lots of data--a 2018 Twitter blog post states:

A region having high saliency means that a person is likely to look at it when freely viewing the image. Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes. In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast. This data can be used to train neural networks and other algorithms to predict what people might want to look at.

Using these criteria, along with what we know about racism and subconscious bias in the United States, it's not a stretch to imagine that the algorithm was drawing a "logical" conclusion--most of Twitter's audience might "rather" see a white person than a Black person, if you judge solely by what generates engagement. And there have been other examples of this happening. In each case, there is no evidence that the designers intended the results they got. Instead, deploying an algorithm that used certain information to draw conclusions, and instructing the algorithm to try to achieve a certain result, led to consequences that no one thought to avoid.
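
Twitter's real system is a neural network trained on that eye-tracking data; as a toy illustration of just the final step, here's a sketch that centers a fixed-size crop on whatever point a saliency map scores highest. The point is that the cropping code inherits whatever the saliency model learned--it never asks why one region scored higher than another:

```python
import numpy as np

def crop_around_max_saliency(image: np.ndarray, saliency: np.ndarray,
                             crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size window on the single most salient pixel,
    clamped so the window stays inside the image.

    Whatever biases the saliency model absorbed from its training data are
    inherited here unexamined.
    """
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]
```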

So when we think about using and advocating for better social media, we need to understand that the conversation can't just be about specific instances of racism or bias appearing in the output of these algorithms. We also need to focus on the practice of introducing algorithms that can operate as feedback loops, magnifying the bias and inequality that already exist--and on all of the other problems that follow when you let "engaging with" something stand in for "valuing" it. And when we are using these systems, we should be mindful that we're not only looking at what other users of the service are posting (and we're definitely not looking at what "everybody" is posting, or what "public opinion" looks like). We're looking at what an algorithm thinks will keep our eyes on the screen. And it doesn't know or care why.