The Internet Police

Legal Perspectives on Tech Series

September 1, 2019


It is probably just a matter of time before people get in trouble solely for their social media posts. Imagine this scenario:

An American who, like many people, is a frequent user of social media, gets involved one night in a political argument with someone she does not know. The back-and-forth grows heated over the course of several hours. Finally, the American gets tired, posts her last word, and retires from the exchange.

The next day, this American checks her computer and finds that she no longer has an account on her social media site. She checks her email and finds a notice from the site explaining that her profile was removed for violating the site’s standards of conduct the night before. Later that afternoon, she gets a phone call from the FBI requesting that she be interviewed by two young special agents investigating her for a federal crime.

This American’s political discussion has resulted in a double whammy: she has suddenly been locked out of social media, and now she has to deal with possible criminal liability for her online posts.

Is this scenario plausible?

Consider first the pressures that are felt by the social media companies. Much of American society today believes that entities like Facebook and Twitter need to prevent their services from being used by criminals. Russian election interference is on the minds of many people, and Silicon Valley is beginning to be sued by victims who claim that their loved ones were killed in part because communication platforms were exploited by terrorists. The remedy for this pressure is to bolster the tech companies’ compliance departments and to incentivize them to write algorithms that minimize the risk of their platforms being exploited by criminals. I believe these companies could write code that constantly monitors their customers’ online communications. The algorithm could identify the name of the customer, exactly what she posted, and why the communication might require action by the company. A set of humans could then be responsible for determining whether what the system flags is actionable. If it is, the humans make a unilateral decision to de-platform the customer. A form email to the customer follows.
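To make that loop concrete, here is a minimal sketch, in Python, of the kind of pipeline this paragraph imagines: an automated pass that records who posted, what was posted, and why it was flagged, followed by a human decision to de-platform and a form email. Everything in it (the FlagRecord fields, the keyword rules, the function names) is a hypothetical illustration, not any company’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of a flagged post: who posted, what was posted, and
# why the algorithm thinks it might require action by the company.
@dataclass
class FlagRecord:
    customer: str
    post_text: str
    reason: str

# Hypothetical keyword rules standing in for whatever detection logic a real
# platform would use; actual signals would be far more sophisticated.
RULES = {
    "threat of violence": ("kill you", "bomb"),
    "targeted harassment": ("you are worthless", "i will find you"),
}

def screen_post(customer: str, post_text: str) -> Optional[FlagRecord]:
    """Automated pass: return a FlagRecord if any rule matches, else None."""
    lowered = post_text.lower()
    for reason, phrases in RULES.items():
        if any(phrase in lowered for phrase in phrases):
            return FlagRecord(customer, post_text, reason)
    return None

def human_review(flag: FlagRecord) -> bool:
    """Stand-in for the human compliance decision described above; a real
    system would queue the record for a reviewer rather than prompt here."""
    answer = input(f"De-platform {flag.customer} for '{flag.reason}'? [y/N] ")
    return answer.strip().lower() == "y"

def handle_post(customer: str, post_text: str) -> None:
    flag = screen_post(customer, post_text)
    if flag is not None and human_review(flag):
        print(f"Account for {flag.customer} disabled; form email queued.")
```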

What about the FBI? Americans expect the FBI to keep them safe from terrorism. When the FBI fails, it becomes a scandal. Aggressive Congressional oversight is initiated, public hearings are held, and the Inspector General gets involved. There may even be an Independent Commission. All of these entities want to determine how the FBI missed the warning signals so that it can be reformed and such catastrophic errors avoided in the future.

For terrorism at least, we seem to be moving towards a consensus that, in order for the FBI to do its job effectively, it needs the cooperation of social media companies. What about crime in general? U.S. law since 1970 has required American financial institutions to report suspicions that their customers are engaged in crime. When they fail to do this, they get into trouble with their regulators. Currently, social media companies are not federally regulated, but it is possible this could change. If they are, might they be required to “know their customers” (as banks are now) and to notify law enforcement of possible criminal conduct by them?

The factor that makes this scenario so plausible is that Silicon Valley will be the first to see crimes that can be committed exclusively on the communication platforms it offers to the public. Those platforms will occasionally generate troublesome communications, which tech company compliance officers might review in deciding whether to take some corporate action. In some cases, there will be no action. In others, it might just be a matter of cutting the customer off under the terms of service, or some other more minor form of discipline. In the most extreme cases, they might refer the matter to the FBI for further proactive action. For that third scenario, the company (and all such companies offering similar services) will need to know exactly what the trigger point is for FBI involvement.
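As a purely illustrative sketch, the three outcomes just described can be thought of as a triage function. The severity score, the numeric thresholds, and the names below are all assumptions; where the FBI-referral line actually sits is exactly the open question.

```python
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    TERMS_OF_SERVICE_DISCIPLINE = auto()  # suspension or some other minor step
    REFER_TO_FBI = auto()                 # the extreme case this article examines

def triage(severity: int) -> Action:
    """Map an assumed 0-10 severity score onto the three outcomes above.
    The thresholds are placeholders, not any real legal or corporate standard."""
    FBI_THRESHOLD = 9         # assumed value; the real trigger point is unknown
    DISCIPLINE_THRESHOLD = 5  # assumed value
    if severity >= FBI_THRESHOLD:
        return Action.REFER_TO_FBI
    if severity >= DISCIPLINE_THRESHOLD:
        return Action.TERMS_OF_SERVICE_DISCIPLINE
    return Action.NO_ACTION
```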

This question intrigued me as I thought about this future vision. If American law enforcement and tech companies get closer due to their commonality of interests, what might the police tell Silicon Valley about what type of online communications should be referred to them for action?

This article focuses on the most extreme cases – where an individual commits a federal crime exclusively by typing a message into a computer or cell phone. It does not address whether an online post might be a single overt act in a wide-ranging criminal conspiracy, since the significance of such a post would depend on what the FBI knows about the scheme, which the FBI would not disclose. The tech companies, as good as they are, will not be able to program their systems to uncover such non-obvious criminal posts.

Are Americans ever prosecuted solely for their online communications? The answer is yes, because of the enforcement of two federal criminal statutes.