Lessons from the Information War: Applying Effective Technological Solutions to the Problems of Online Disinformation and Propaganda

Legal Perspectives on Tech Series

September 1, 2019


The U.S. government-industry relationship regarding online propaganda and disinformation has traditionally been informational. Both groups have relied upon formal information-sharing forums such as the Global Internet Forum to Counter Terrorism, the Global Counterterrorism Forum, and Tech Against Terrorism to enable discussions of strategic intent. And when the U.S. government is aware of an imminent threat of violence online, it may legally inform the social media platforms. But social media companies have historically appeared to keep governments at arm’s length, both to retain control of their own business processes and to reinforce their image as independent, free information-sharing platforms.

The seven technologies above may offer mutually beneficial opportunities for collaboration between government and industry, with the goal of limiting the effects of online propaganda and disinformation. Together, these technologies avoid the trap of politically judging online content, open new market opportunities for social media platforms and related developers, and give the government an opportunity to demonstrate resolve by driving broad adoption. Perhaps most importantly, these technologies are market-driven rather than punitive, encouraging continued growth of an important business sector.

While the technological solutions offered here are intended to support collaboration between government and industry, the international community seems to be heading in the other direction. Governments are enacting legislation that empowers them to compel social media platforms, under threat of penalty, to take down harmful content online. The definition of “harmful,” however, is subjective and shifts with malleable government policy.

Singapore’s new Protection from Online Falsehoods and Manipulation Act, for example, enables the state to enforce the removal of “false” content on apps such as WhatsApp or Telegram. Germany’s Network Enforcement Act, in force since 2018, holds social media companies responsible for content considered illegal under Germany’s existing hate speech laws, such as “incitement to hatred.” And the latest initiative, the Christchurch Call, asks social media companies to voluntarily commit to responding to reports of “terrorist and violent extremist” content online.

These directive content moderation policies conflict with the U.S. public’s sense of free speech rights and even conjure nightmares of totalitarianism. Yet commentators just as consistently warn of the dangers of ceding online content review to a commercial industry. While the German Network Enforcement Act initially enjoyed strong public approval, the legislation now faces mounting legal challenges, perhaps demonstrating the limits of societal support. The reality of enforcement seems to have diverged radically from the legislation’s original intent, and that lesson matters for any U.S. technology implementation in this area: technology may serve as a helpful tool for treating the skin-deep symptoms of propaganda and disinformation, but not as a comprehensive cure for the deeper illness.