9 March, 2017
In my last post, I talked a bit about the harm in removing anonymity and privacy. In this post, let’s look at why trying to remove it is futile, and why enforcement is nigh impossible in practice.
The previous posts provide a lot of context and background for this one, so I recommend you read those first.
It is useless to try to remove anonymity and privacy
That’s my primary thesis for this series: trying to remove anonymity and privacy in online spaces is pointless. Far from reducing bad behavior, it would simply harm the online medium.
There are three main reasons I think this. First, as I covered before, a lack of anonymity and privacy would expose the vulnerable. Instead of having these voices participate at a level they feel comfortable with, we would simply lose them. If anyone who doesn’t want to expose themselves fully gets excluded, we lose a lot of potentially important voices.
Second, enforcing the elimination of anonymity would be impossible. Since we’re talking about changing a fundamental assumption of online interaction, there needs to be a way to enforce this change. If you can’t enforce the rule, then what is the point of even having the rule? I’ll explain later why enforcement is effectively impossible.
Third, we’ve already seen that a lack of anonymity simply doesn’t work. There are already spaces where we use our real names and persistent identities. And the awful behaviors we want to eliminate still exist.
Enforcing a lack of anonymity is impossible
Online anonymity is the current default, although many places try to get you to establish an identity. Most sites require you to “sign up” and create an account to use a service. This provides some sort of persistent identity, even if it’s not necessarily tied to your offline identity. Some sites, like Facebook, try to tie your online identity closely to your offline one. Other sites, such as my blog, let you post with whatever identity you want. But, if you don’t provide a previously approved identity, your comment will be held in the moderation queue until I approve it.
If we were to adopt Facebook’s preferred system and eliminate online anonymity, we would need some way to enforce it. We would need some sort of persistent identity and a way to make sure that people didn’t create “fake” identities for purposes of harassment and abuse. And here’s the important question: who would enforce this persistent identity? The biggest problem is that there is no good way to do it.
The first answer to spring to mind is probably the government. But, in the U.S., we tend to be wary of the government enforcing identity too much. There have been endless debates about “national ID cards” and the fear of the government tracking us through such a system. The few identity systems we do have, such as Social Security numbers and driver’s licenses, are continually scrutinized. So, the government probably isn’t the best answer, at least in the U.S. Some places do use an ID number, though; in South Korea, many sites require users to provide a “resident registration number”. In theory this allows sites to track people and make sure they don’t create false accounts. In practice, it just encourages light identity theft; many people talk about “borrowing” grandma’s number to create an alt account.
The other option would be some sort of central repository for identities, perhaps controlled by a corporation or an organization of participating corporations. The problem here is that the information would be too tempting a target. Hackers and identity thieves would love to get access to it, so it would be expensive to maintain and defend. And, even if the repository started with noble intentions, Facebook has demonstrated that there is big money to be made from identities. Eventually someone would see the “money being left on the table” and want to profit from this information. And, we should remember that Facebook did go through a period where it tried to eliminate “fake” accounts, only to find out there is a wide variety of legal names out there that don’t fit the “norms” we have for names.
What if such an organization were controlled by individuals instead of corporations? The answer to that is easy: if we could trust other individuals, we wouldn’t need a way to enforce identities in the first place. All it takes is one bad actor gaining control, and all our information could be abused, exposed, or used for profit.
What about some sort of mythical encryption system that enforces identities? The problem is that encryption is still not easy to use. Even something as simple as HTTPS isn’t used as widely as it should be. Requiring people to use some sort of encryption, especially if it’s at all cumbersome, is going to put them off.
In the end, it’s going to take a lot of work to verify and enforce a lack of anonymity. And, there is simply no good way to organize enforcement that isn’t prone to abuse in one way or another.
Taking away anonymity doesn’t work
I think there’s a fundamental problem with the assumption that taking away anonymity will reduce or eliminate bad behavior: we can already see that it simply does not work, even in spaces where anonymity is reduced.
Facebook’s business model relies on you sharing your personal information with the site so that it can sell marketers access to you. So, it strongly encourages you to use your real name and identifiable details. Yet people still do mean, cruel, and thoughtless things on Facebook, even with their real identities exposed. A quick Google search for “stupid facebook criminals” turns up all sorts of articles about people bragging on Facebook about breaking the law. And even with that reduced anonymity, I don’t think anyone would claim Facebook is a site free from harassment, abuse, and other bad behavior.
But, what about other sites? Twitter lets people sign up with sockpuppet accounts, so obviously that must be why Twitter seems so toxic, right? In my opinion, the real culprit is mob mentality. Because abuse is broadcast, people feel emboldened that they are not alone in their vile behavior toward a target. They see abuse, harassment, and the like, and conclude it is acceptable behavior because others are doing it. This is especially true when the target is high profile; others can see the abuse being heaped upon the person, and they feel more confident joining the mob in hurling more of it.
But, what about moderation? Hiring people is expensive and would make running some services prohibitively costly. It’s easy to say a human should be monitoring things, but the reality is that so much content is generated that it’s impossible to monitor it all. Even if you only review “reported” violations, this is still difficult, as people will attempt to weaponize those very reporting tools to abuse and harass others. Automated systems don’t work because, as I’ve said repeatedly in the past, code can’t determine intent. The phrase “You’re such a liar!” will be interpreted differently between casual acquaintances and close friends. And because some of the context that determines the nature of a relationship happens outside the site, an automated system can’t tell whether the phrase is an in-joke between good friends or a bit of harassment.
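To make that concrete, here’s a minimal, hypothetical sketch of the kind of keyword filter an automated moderation system might use (the word list and function name are invented for illustration). Because it only looks at the words in the message, it flags the sentence identically whether it comes from a close friend teasing you or a stranger harassing you:

```python
# Toy keyword-based moderation filter -- a sketch, not a real system.
# It illustrates why word matching alone can't recover intent.

FLAGGED_WORDS = {"liar", "idiot"}  # hypothetical block list

def is_abusive(message: str) -> bool:
    """Flag a message if any word appears on the block list."""
    words = {w.strip("!?.,'\"").lower() for w in message.split()}
    return bool(words & FLAGGED_WORDS)

# The exact same sentence, two very different contexts:
teasing = "You're such a liar!"  # in-joke between close friends
attack  = "You're such a liar!"  # harassment from a stranger

# The filter can't tell them apart.
print(is_abusive(teasing), is_abusive(attack))  # prints: True True
```

The context that distinguishes the two messages (who sent it, what the relationship is, what was said elsewhere) isn’t in the text at all, so no amount of tuning the word list fixes this.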
In the next post, I’ll talk about how we designers should look at privacy, and how we can deal with it better in our spaces.
This series took a lot of time to write, research, and edit. Consider supporting me on Patreon and give me the freedom to spend more time writing on topics like this. Thanks!