Terrorism Lawsuits Against Social Media Companies
Over the last two decades, there has been growing concern over the use of Google, Facebook, X (Twitter), and other social media platforms in international and domestic terrorism. Many victims and survivors have sued social media companies, claiming that the companies' services bore a causal relationship to the political violence that harmed them. The obstacle for most of these suits is that social media companies enjoy immunity from publisher liability, granted by Congress in Section 230 of the Communications Decency Act. Meanwhile, experts have expressed concern about the long-term impact of social media on society, suggesting that some level of liability would incentivize social media companies to clean up their content.
When the Supreme Court agreed in 2023 to review a pair of terrorism lawsuits against social media companies, it appeared poised to address the continuing viability of Section 230 immunity and perhaps open the floodgates for more victims' lawsuits against tech companies. Did this occur? What did the Court say about the need for civil litigation against social media companies? Is Congress more likely to enter the fray with Internet regulation in light of the Court's rulings?
To discuss these dynamics, the Program on Extremism at The George Washington University hosted an online discussion on Thursday, June 13, 2024, at 10:00 am ET. The event was moderated by PoE Senior Research Fellow Jeff Breinholt and featured insights from:
- Will Mackie, Adjunct Professor of Law, Washington & Lee;
- Annie E. Kouba, Associate at Motley Rice;
- Katie A. Paul, Director, Tech Transparency Project; Co-director and Founder, ATHAR Project.
On June 13, 2024, The George Washington University Program on Extremism hosted an event titled “Terrorism Lawsuits Against Social Media Companies.” Panelists Katie Paul, Will Mackie, and Annie Kouba discussed how social media companies can be held liable for terrorist activities on their platforms and the future of terrorism lawsuits against social media companies.
Katie Paul discussed how terrorist organizations have profited from their use of social media, while neither the platforms nor the organizations are held accountable.
Social media platforms offer terrorist organizations opportunities to recruit, fundraise, organize, and disseminate propaganda. However, the platforms are not liable for allowing terrorist ideology to be promoted on their services; they can generally be held liable only if they actively create content or spaces for terrorist organizations. Section 230 of the Communications Decency Act largely immunizes online platforms from civil liability based on third-party content, and it allows platforms to edit or blur content without having to take an entire offensive post down. As a result, Section 230 has become central to many conversations about mitigating terrorists' use of social media.
This creates a window through which social media platforms can incidentally aid the efforts of terrorist organizations operating on their services, and Paul highlighted several examples. First, terrorist groups have made use of premium subscriptions, allowing their accounts to carry the "blue check" that denotes verification. Second, platforms have allowed antiquities trafficking on sites like Facebook Marketplace, which offers a space for illicit trades and thereby creates a financing channel for groups like the Taliban and Al Qaeda. More broadly, terrorist organizations and networks have used social media for funding and recruitment and to spread propaganda and misinformation, and legal loopholes allow this to continue while neither the platforms nor the organizations are held accountable.
One egregious example is that Facebook has automatically generated Facebook groups for terrorist organizations, helping broaden their reach and influence and allowing extremists to connect and broadcast material to users on the platform. The issue is well known, and Facebook officials have been questioned about it in three separate congressional hearings. Here, Facebook itself is helping terrorist organizations increase their presence and influence, yet it is not held accountable.
Will Mackie outlined how the Supreme Court has ruled on cases regarding terrorists' use of social media platforms.
The internet has vastly expanded over the past few decades and can influence people across the globe, yet it remains relatively unregulated and unchecked, allowing extremist ideologies to spread to a vast audience. Before the Justice Against Sponsors of Terrorism Act (JASTA), it was not possible to sue platforms on a theory of secondary liability. After JASTA was passed in 2016, social media companies could potentially be held liable for aiding and abetting terrorist operations on their platforms.
In 2023, the Supreme Court decided two cases on this topic. In Twitter v. Taamneh, the primary question was whether Twitter (now X) aided terrorist organizations by allowing them to promote their content on the platform. The case called into question Twitter's potential liability based on its recommendation algorithm as well as its inaction. The Court held that the algorithm did not constitute aiding and abetting because it treated all content the same regardless of its nature, and that Twitter's inaction likewise did not give rise to liability. The Court therefore ruled in favor of Twitter.

In Gonzalez v. Google, the case addressed not only aiding and assisting terrorists, but also whether Google could be held liable for publishing content notwithstanding Section 230 of the Communications Decency Act. Specifically, it examined whether Islamic State beheading videos should be allowed on YouTube for public viewing. If a platform specifically advances this type of content and treats terrorist users differently from other users, it could lose the protection offered by Section 230 and face liability. The Court determined that was not the case here and ruled in favor of Google and YouTube.
Annie Kouba explained that while there is widespread support for reforming Section 230, neither Congress nor the Supreme Court has succeeded in making change.
While opinions in Congress vary on how to reform Section 230, there is widespread bipartisan support for protecting children and safeguarding the security of the United States, and politicians from both sides of the aisle agree that reform is necessary. So far, however, enacting change has proven a challenge for Congress, leaving Section 230 unaltered and social media platforms largely free from responsibility.
However, Kouba maintained that Twitter and Google were found not liable because of the way the cases against them were pleaded. Given the lack of specificity connecting the platforms to particular attacks and actions, the Supreme Court read the suits as seeking to hold the corporations responsible for every terrorist attack carried out by networks that use their platforms. The Court ruled in favor of the social media giants because it determined that this was an overreach.
Kouba argued that an easier way to hold social media platforms accountable might be to target their conduct rather than their content. Section 230 protects social media companies from liability for content shared on their platforms, but challenging these corporations on the basis of their conduct may offer a path forward. Such cases would include those in which platforms provide support or resources to terrorist organizations that are not provided to all other users of the platform.