On Security, Part 001: Social Media Values

January 1, 2021

A couple of months ago, I wrote a post describing my approach to security. In it, I distinguished between a value, meaning something physical or conceptual that is worth protecting, and a security mechanism, meaning a technique like "use a password" for protecting a value. I also suggested that a security designer has three responsibilities:

  1. To identify the characteristics of value in the context of the system.
  2. To identify a set of likely threats to that value.
  3. To implement efficient controls that guard against the threats, ideally by raising the cost of an attack higher than the likely reward.
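
To make these three responsibilities a little more concrete, here is a minimal sketch of how they might be represented as data. Every name in it (Value, Threat, Control, worthDeploying) is a hypothetical illustration for this post, not an existing library:

```typescript
// Hypothetical model of the three responsibilities. Nothing here is a real
// library; it only makes the cost-versus-reward reasoning explicit.

interface Value {
  name: string;        // what we are protecting, e.g. "user session"
  description: string; // why it matters in this system
}

interface Threat {
  target: Value;          // the value this threat puts at risk
  attackerReward: number; // estimated payoff of a successful attack
}

interface Control {
  guards: Threat;     // the threat this control guards against
  attackCost: number; // estimated cost the control imposes on an attacker
}

// Responsibility 3: a control is efficient when it raises the cost of an
// attack above the attacker's likely reward.
function worthDeploying(control: Control): boolean {
  return control.attackCost > control.guards.attackerReward;
}
```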

These points are intentionally abstract--they do not correspond to specific security choices like allowed password lengths. Instead, they are meant to expose the underlying beliefs and values that each specific security mechanism is meant to support. By going on the record about what we think we are protecting with our security choices, we make it easier to evaluate questions and suggestions in an accountable and verifiable way[1]. In this post, I'm going to address the first point--identifying values--in the context of social media systems. By making these values explicit, my later decisions about security mechanisms can be evaluated, re-evaluated, and refined in a consistent way.

Characteristics of Value in Social Media

"Social media" is a big category, so we're going to have to stick to high-level values, where the differences between something like a blogging platform and something like facebook don't matter very much. The following are not in any particular order; I suspect that everyone has their own ordering.

These points are my first attempt at articulating the values that should be protected within social media systems. I expect to return to this list often to justify my security choices. I have probably gotten some things wrong and missed others, so I'll need to revise this list as I discover errors. Finally, tensions will inevitably arise between these values--certain decisions will require elevating one value over another in a given context. While that cannot be avoided, one explicit goal is to acknowledge when it occurs, and to respect pluralism, letting each person follow their own conscience.

Footnotes

  1. Computer security writing suffers from a lack of shared context. For instance, in writing like this you can find a quote like "I went over some reasons for keeping access tokens out of the browser," which is an extremely confusing statement if you include "session tokens" in the category of "access tokens" (the first sketch following these notes illustrates the distinction). This context-fragmentation is a huge barrier to productive debate even when we are talking about a specific security mechanism such as "tokens." The situation gets much worse when we make assumptions about the values that we mean to protect.

    For instance, let's take the informative and often-cited article about not using JWTs for session tokens. This article, like many others, argues that one reason for preferring an older technology over a new one is that when using a new technology "you will either have to roll your own implementation (and most likely introduce vulnerabilities in the process), or use a third-party implementation that hasn't seen much real-world use."

    This is a useful point to consider, and within the context of that article it is a well-qualified statement. But when we abstract it into the rule "use an existing, tested technology rather than a new, untested one," a significant problem appears: the rule omits the possibility that the underlying values of the existing, tested technology, rather than its implementation, might be the problem. For instance, whenever you see a service that lets you "sign in with Google" or another common third party, it is likely using a technology called OAuth. OAuth is considered extremely secure--it lets Google worry about keeping your password secure, so that when you want to log in to a different site, that site can simply verify your identity with Google rather than making you remember yet another password.

    But what if part of your value set is that it's dangerous to give a company like Google that level of control over such a broad swathe of the web? Well, newer protocols like IndieAuth exist, which mostly follow the OAuth pattern but make it easier for anyone to run their own personal "Sign in with X" service (the second sketch following these notes shows the key difference). This means that Google no longer controls your online identity: losing access to your Google account doesn't mean that you lose access to other services, and Google no longer gets notified every time you log in to any service.

    If we uncritically apply the rule "use tested technologies, not new untested ones," then we risk missing this entire category of nuanced, value-driven conversations. I would modify that rule a little bit to make it more safely applicable: "When your values suggest using a given security mechanism, try to use the best-tested implementation of that security mechanism you can find. But do not use any security mechanism, even if well-tested, without interrogating its implicit values." ↩︎

  2. Because information can be copied without limit once it has been shared, it is not possible to physically guarantee that it will stay confidential. However, the next best thing is to create clear expectations about the allowed boundaries for information to spread, and to establish context-sensitive sanctions for violations of those boundaries (the final sketch below gives a toy model of this idea). ↩︎
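
To make the token distinction in the first note concrete, here is a minimal sketch of the two patterns that the quoted sentence can blur together. It assumes an Express-style Node server; the routes and the user are hypothetical:

```typescript
import express from "express";
import { randomBytes } from "crypto";

const app = express();

// Server-side session store: the browser only ever holds an opaque key.
const sessions = new Map<string, { userId: string }>();

// Pattern 1: a session token in an httpOnly cookie. Browser JavaScript can
// never read it, so "keeping it out of the browser" happens automatically.
app.post("/login", (_req, res) => {
  const token = randomBytes(32).toString("hex");
  sessions.set(token, { userId: "alice" }); // hypothetical user
  res.cookie("session", token, { httpOnly: true, secure: true, sameSite: "lax" });
  res.sendStatus(204);
});

// Pattern 2: an access token returned in the response body. If client-side
// code stores it somewhere scripts can reach (say, localStorage), any script
// running in the page can steal it -- the risk the quoted sentence is about.
app.post("/token", (_req, res) => {
  res.json({ access_token: randomBytes(32).toString("hex") });
});

app.listen(3000);
```

Whether a given author files both of these under "access tokens" is exactly the kind of missing shared context the note describes.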
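
The OAuth/IndieAuth contrast in the first note comes down to where the authorization endpoint comes from. The sketch below simplifies the IndieAuth discovery step considerably (the real protocol also checks HTTP Link headers, among other things), but it shows the structural difference:

```typescript
// With "Sign in with Google"-style OAuth, the authorization endpoint is a
// fixed URL operated by one company:
const GOOGLE_AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth";

// With IndieAuth, the endpoint is discovered from a URL the user controls,
// so anyone with a web page can run their own "Sign in with X" service.
// Naive sketch: this regex only handles rel-before-href ordering and skips
// all error handling.
async function discoverAuthorizationEndpoint(userUrl: string): Promise<string | null> {
  const response = await fetch(userUrl);
  const html = await response.text();
  const match = html.match(
    /<link[^>]*rel=["']authorization_endpoint["'][^>]*href=["']([^"']+)["']/i
  );
  return match ? match[1] : null;
}

// Usage: discoverAuthorizationEndpoint("https://example.com/") might return
// "https://example.com/auth"; the login flow then proceeds OAuth-style
// against an endpoint the user, not Google, controls.
```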
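
Finally, a toy model of the idea in the second note: a boundary can be stated explicitly even though it cannot be physically enforced. All names here are hypothetical:

```typescript
// The audience field records the intended boundary for a post. Nothing can
// physically stop re-sharing, but a stated boundary makes violations legible
// -- and therefore sanctionable in a context-sensitive way.
type Audience = "public" | "followers";

interface Post {
  author: string;
  body: string;
  audience: Audience; // an expectation, not a physical guarantee
}

// If a stranger can read a followers-only post, the boundary was crossed
// somewhere, even if no software ever malfunctioned.
function boundaryViolated(post: Post, viewerIsFollower: boolean): boolean {
  return post.audience !== "public" && !viewerIsFollower;
}
```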