Psychochild's Blog

A developer's musings on game development and writing.

9 March, 2017

Series on online anonymity and privacy, part 3
Filed under: — Psychochild @ 9:32 AM

In my last post, I talked a bit about the harm in removing anonymity and privacy. In this post, let’s look at why trying to remove them is futile, and why enforcement is nigh impossible in practice.

The previous posts provide a lot of context and background for this one, so I recommend you read those first.

It is useless to try to remove anonymity and privacy

That’s my primary thesis for this series: I think that trying to remove anonymity and privacy in online spaces is pointless. Far from reducing bad behavior, it would simply harm the online medium.

There are three main reasons I think this. First, as I covered before, a lack of anonymity and privacy would expose the vulnerable. Instead of having these voices participate at a level they feel comfortable with, we would simply lose them. If anyone who doesn’t want to expose themselves fully gets excluded, then we lose a lot of potentially important voices.

Second, enforcing the elimination of anonymity would be impossible. Since we’re talking about changing a fundamental assumption of online interaction, there needs to be a way to enforce this change. If you can’t enforce the rule, then what is the point of even having the rule? I’ll explain later why enforcement is effectively impossible.

Third, we’ve already seen that a lack of anonymity simply doesn’t work. There are already spaces where we use our real names and persistent identities. And the awful behaviors we want to eliminate still exist.

Enforcing a lack of anonymity is impossible

Online anonymity is the current default, although many places try to get you to establish an identity. Most sites online require you to “sign up” for an account to use a service. This provides some sort of persistent identity, even if it’s not necessarily tied to your offline identity. Some sites, like Facebook, try to tie your online identity closely to your offline one. Other sites, such as my blog, let you post with whatever identity you want. But if you don’t provide a previously approved identity, your comment will be held in the moderation queue until I approve it.

If we were to adopt Facebook’s preferred system and eliminate online anonymity, we would need some way to enforce it. We would need some sort of persistent identity and a way to make sure that people didn’t create “fake” identities for purposes of harassment and abuse. And here’s the important question: who would enforce this persistent identity? The biggest problem is that there is no good way to enforce this.

The first answer to spring to mind is probably the government. But in the U.S., we tend to be wary of the government enforcing identity too much. There have been endless debates about “national ID cards” and fears of the government tracking us through such a system. The few systems we do have for identity, such as Social Security numbers and driver’s licenses, tend to be continually scrutinized. So, the government probably isn’t the best answer, at least in the U.S. That said, some places do use an ID number: in Korea, many sites require users to provide a “resident registration number”. In theory, this allows sites to track people and make sure they don’t create false accounts. In practice, it just encourages light identity theft; many people talk about “borrowing” grandma’s number to create an alt account.

The other option would be to have some sort of central repository for identities, perhaps controlled by a corporation or an organization with participating corporations. The problem here is that the information would be too tempting. Hackers and identity thieves would love to get access to this, so it would be expensive to maintain and defend. And, even if the repository started with noble intentions, Facebook has demonstrated that there is big money to be made from identities. Eventually someone would see the “money being left on the table” and want to profit from this information. And, we should remember that Facebook did go through a period where they tried to eliminate “fake” accounts, only to find out there are a wide variety of legal names out there that don’t fit the “norms” we have for names.

What if such an organization were controlled by individuals instead of corporations? Well, the answer to that is easy: if we could trust other individuals, we wouldn’t need a way to enforce identities in the first place. All it takes is one bad actor gaining control, and all our information could be abused, exposed, or used for profit.

What about some sort of mythical encryption system that enforces identities? The problem is that encryption is not very easy to use. Even something as easy as HTTPS doesn’t get used as much as it should. Requiring people to use some sort of encryption, especially if it’s at all cumbersome, is going to put people off.
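To make the single-point-of-failure problem concrete, here’s a minimal sketch of what a centralized identity scheme might look like. This is purely illustrative, not a real protocol: a hypothetical central authority signs identity tokens with a secret key, and participating sites verify them. All of the names and the scheme itself are my own assumptions for the sake of the example.

```python
import hmac
import hashlib

# Hypothetical central authority's signing key. Every verifying party
# depends on this one secret -- exactly the tempting, high-value
# target described above.
AUTHORITY_SECRET = b"central-authority-key"

def issue_token(real_name: str) -> str:
    """The authority vouches for a name by signing it."""
    sig = hmac.new(AUTHORITY_SECRET, real_name.encode(), hashlib.sha256).hexdigest()
    return f"{real_name}:{sig}"

def verify_token(token: str) -> bool:
    """A site checks that the authority really signed this name."""
    name, _, sig = token.rpartition(":")
    expected = hmac.new(AUTHORITY_SECRET, name.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("Jane Doe")
assert verify_token(token)
assert not verify_token("Jane Doe:forged-signature")
```

Even in this toy version, the problems from the preceding paragraphs show up: compromise of the one central key breaks the whole system, and nothing here stops a person from registering under multiple names anyway.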

In the end, it’s going to take a lot of work to verify and enforce a lack of anonymity. And, there is simply no good way to organize enforcement that isn’t prone to abuse in one way or another.

Taking away anonymity doesn’t work

I think there’s a fundamental problem with the assumption that taking away anonymity will reduce or eliminate a lot of bad behavior. We can already see that this simply does not work in spaces where anonymity is reduced.

Facebook’s business model relies on you sharing your personal information with the site so that it can sell access to marketers. So it strongly encourages you to use your real name and identifiable details. Yet people still do mean, cruel, and thoughtless things on Facebook even with their real identities exposed. A quick Google search for “stupid facebook criminals” turns up all sorts of articles about people bragging on Facebook about breaking the law. And even with that reduced anonymity, I don’t think anyone would claim Facebook is a site free from harassment, abuse, and other bad behavior.

But, what about other sites? Twitter allows people to sign up with sockpuppet accounts, so obviously this is the reason why Twitter seems to be so toxic, right? In my opinion, the real culprit is the mob mentality. Because abuse is broadcast, people feel emboldened that they are not alone in their vile behavior toward a target. They see abuse, harassment, etc. and think that this is acceptable behavior because they see others doing it. This is especially true when the target is high profile; others can see the abuse being heaped upon the person, and they feel more confident joining the mob in hurling more abuse.

But, what about moderation? Well, hiring people is expensive and would make running some services prohibitively costly. It’s easy to say there should be a human monitoring things, but the reality is that so much information is generated that it’s hard to monitor it all. Even if you only review “reported” violations, this is still difficult, as people will attempt to use such tools to abuse and harass others. Automated systems don’t work because, as I’ve said repeatedly in the past, code can’t determine intent. The phrase “You’re such a liar!” will be interpreted differently between casual acquaintances and close friends. And because some of the context that determines the nature of a relationship may happen outside the site, automated systems can’t tell whether this phrase is an in-joke between good friends or a bit of harassment.
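The context-blindness of automated moderation can be shown with a deliberately naive toy filter. This is my own illustrative sketch, not any real moderation system: it flags messages by keyword, so it necessarily treats friendly banter and genuine harassment identically, because the relationship between sender and recipient is invisible to the code.

```python
# Toy keyword-based moderation filter -- illustrative only.
FLAGGED_TERMS = {"liar", "idiot"}

def looks_abusive(message: str) -> bool:
    """Flag a message if it contains any flagged term.

    The filter only ever sees the text. It has no access to who is
    talking to whom, or what their relationship is.
    """
    words = {w.strip("!?.,'\"").lower() for w in message.split()}
    return bool(words & FLAGGED_TERMS)

# Identical text, completely different intent -- the filter cannot
# tell these apart:
banter = "You're such a liar!"   # in-joke between close friends
attack = "You're such a liar!"   # harassment from a stranger
assert looks_abusive(banter) == looks_abusive(attack) == True
```

Smarter systems can weigh more signals, but the core problem stands: the context that distinguishes an in-joke from abuse often doesn’t exist anywhere in the data the system can see.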

In the next post, I’ll talk about how we designers should look at privacy, and how we can deal with it better in our spaces.

This series took a lot of time to write, research, and edit. Consider supporting me on Patreon and give me the freedom to spend more time writing on topics like this. Thanks!


  1. My only objection to this is that it’s a big leap from saying “people still do mean, cruel, and thoughtless things on Facebook” to “Taking away anonymity doesn’t work”. It would be just as easy to say that education doesn’t work (because these people all went to school, yet still act this way) or that community management doesn’t work (because we’ve seen bad behaviour even in games and communities where we tried hard to cultivate positive behaviour). I don’t think the negative behaviour can ever be stopped outright, but it can be mitigated through all of these methods.

    Comment by Kylotan — 14 March, 2017 @ 3:08 AM

  2. Kylotan wrote:
    It would be just as easy to say that education doesn’t work (because these people all went to school, yet still act this way) or that community management doesn’t work (because we’ve seen bad behaviour even in games and communities where we tried hard to cultivate positive behaviour). I don’t think the negative behaviour can ever be stopped outright, but it can be mitigated through all of these methods.

    The difference is that education and community management don’t really directly harm other people. Yes, someone may decide not to get an education despite being sent to school, but unless you’re a fan of child labor it’s hard to see a way where others are “harmed” by the person being sent to school.

    However, I think I’ve shown that removing anonymity or privacy from everyone causes harm. So, if we’re harming people there must be more need demonstrated than “some of the behavior could be mitigated.”

And, I’m not sure that even if some of the behavior were mitigated, it would make a significant change. Part of why we perceive a problem with bad behavior online is because it’s often broadcast. People aren’t necessarily going to measure a modest decrease in bad behavior; they’ll still see bad behavior and wonder why nobody has done anything about it. People who get harassed aren’t going to suddenly be fine if harassment goes from X times per week to X-1 times per week, for non-trivial values of X. There needs to be a significant drop in bad behavior before people will consider the decrease meaningful.

    Hope that clarifies.

    Comment by Psychochild — 14 March, 2017 @ 10:51 PM

Standard Disclaimer

I speak only for myself, not for any company.


Posts Copyright Brian Green, aka Psychochild. Comments belong to their authors.
