Artificial Intelligence and Radicalism: Risks and Opportunities
In recent years, AI has become the subject of intense debate over its relationship to radicalism and terrorism, and over the potential for exploiting a technology primarily designed to ease human life. AI is advancing at an unprecedented speed, and while it can be exploited by terrorists, it can also be a tool to combat terrorism.
To discuss these matters, the Program on Extremism organized an online event featuring two distinguished speakers who have dedicated substantial time to exploring the risks and opportunities of AI. Hugo Micheron, a prominent French political scientist, and Ghaffar Hussain, an expert on AI, radicalization, and global jihadism, shared their insights on how AI technologies impact radicalization and terrorism. Through keynote addresses and a panel discussion, attendees gained a comprehensive understanding of both the potential benefits and the ethical and practical challenges posed by AI in this critical area.
This event was crucial for professionals, academics, and anyone interested in the future of AI and its role in global security. Attendees had the opportunity to engage in meaningful dialogue about how AI can serve as a powerful tool against radicalism and terrorism while navigating the complexities and responsibilities that come with its use.
Speakers:
- Hugo Micheron, Professor, Paris School of International Affairs
- Ghaffar Hussain, Extremism Expert and Writer
On July 16, 2024, the Program on Extremism (PoE) at The George Washington University hosted an event titled “Artificial Intelligence and Radicalism: Risks and Opportunities.” PoE Senior Research Fellow Omar Mohammad moderated a discussion between Ghaffar Hussain and Hugo Micheron on the impact AI technologies have on radicalization and terrorism, their potential benefits for countering extremism, and the various challenges they pose. Ghaffar Hussain is an expert on AI, radicalization, and global jihadism, and Hugo Micheron is a professor at the Paris School of International Affairs. The following is a summary of their remarks:
Hugo Micheron
Hugo Micheron described a long-standing pattern of extremists investing in technology, exploiting sophisticated camera equipment, and using social media for recruitment. He explained that the rise of AI has prompted two major changes that extremist groups can now exploit in their radicalization efforts. First, because leaders of extremist organizations are no longer constrained by issues such as language barriers, the scale of possibilities has expanded rapidly; propaganda materials can now be produced faster, from only a few keywords. The second significant change resulting from AI’s rapid advancement is the introduction of deepfakes: images, videos, or recordings that have been digitally modified to misrepresent someone’s words or actions. Micheron is particularly concerned by deepfakes’ ability to revive old ideas and materials, since they allow people to “engage” with deceased leaders of extremist groups. He argued that this landscape, in which AI provides new opportunities for extremist groups, necessitates increased training of social scientists and engineers on the potential effects AI can have on their work.
Micheron then discussed how the ability to trace and, consequently, monitor AI-generated content would be hugely beneficial from a counterterrorism perspective. However, he explained, this is nearly impossible to achieve because watermarks, a feature that allows computers to detect AI-generated content, are easily erased. Moreover, the fact that much extremist content is legal presents an additional challenge for counter-radicalization strategies. Micheron emphasized that jihadist groups in particular have become increasingly discreet, highlighting their use of nonviolent rhetoric, which prevents their content from being removed for illegality or violations of social media policies. These challenges, coupled with the increase in radicalization driven by the war in Gaza and the content surrounding it, led Micheron to believe the world is experiencing its first global information war.
Micheron discussed a variety of ways to address extremists’ use of AI. First, he emphasized the importance of partnerships between experts and social media companies to ensure these platforms understand which keywords and prompts are used to produce extremist propaganda. Because AI has greatly reduced the cost of developing cutting-edge technology that can rapidly analyze vast amounts of data, Micheron further argued that counterterrorism experts must engage with the AI revolution.
Looking toward the future, he concluded, counterterrorism experts must work closely with the large technology companies leading the AI revolution to train their models to prevent the creation and proliferation of extremist content.
Ghaffar Hussain
Extremists have long used technology for recruitment, cataloging materials, and disseminating their ideologies, and their sophisticated, savvy use of technology has often surpassed that of many corporations. Ghaffar Hussain pointed out that technology has advanced rapidly and that extremist groups take advantage of it. He offered insight into recent uses of technology to radicalize people toward violence by sharing two anecdotes. First, Hussain described a British national named Jaswant Singh Chail, whose interactions with an AI chatbot named Sarai eventually encouraged him to attempt an attack on the royal family in 2021 to avenge Britain’s exploitation of India. Second, Hussain recounted the similar story of a Belgian man in 2023 who expressed his concerns about the environment to a chatbot named Eliza, which encouraged him to take his own life. Another example of extremist groups using modern technology for recruitment is their use of online gaming platforms: extremist groups can create bots that play games online and identify which players are potential targets for radicalization based on their online activities. Hussain expressed concern over the complexity of regulating extremist content, as some content is legal yet still very effective at radicalizing individuals.
However, Hussain also offered several suggestions for addressing the threat posed by AI and extremism. Since it is impossible to prevent extremists from using AI, exploring alternatives is necessary. One option is to let users opt into recommendation algorithms, rather than making algorithms a default element of social media platforms. These algorithms use AI to show users content the system predicts they will be interested in, based on their previous engagement; this creates echo chambers and inadvertently promotes radicalization as users are progressively fed more extreme content. Another option is to notify users when they view material produced by AI, allowing them to consume media responsibly and to better evaluate which sources are trustworthy. A third possibility is to counteract extremists’ use of AI by producing positive AI materials, such as chatbots with positive messages, AI tools that counter propaganda and misinformation, and AI systems that detect and label AI-generated content.
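To make the echo-chamber dynamic Hussain describes more concrete, the sketch below is a minimal, purely illustrative toy recommender in Python. It assumes a hypothetical content pool placed on a single one-dimensional "ideology" axis and a user who always engages with the most extreme item shown; it is not any platform's actual system. The point is that an algorithm which simply favors content similar to past engagements keeps narrowing the feed around whatever the user has engaged with.

```python
import random

# Hypothetical content pool: each item has a position on a -1.0 .. +1.0
# "ideology" axis. These items and the axis are invented for illustration.
CONTENT = [(f"item_{i}", random.uniform(-1.0, 1.0)) for i in range(200)]

def recommend(history, k=5):
    """Return the k items closest to the average position of past engagements."""
    if not history:
        return random.sample(CONTENT, k)
    center = sum(pos for _, pos in history) / len(history)
    return sorted(CONTENT, key=lambda item: abs(item[1] - center))[:k]

# Simulate a user who always engages with the most extreme item in the feed.
history = []
for step in range(10):
    feed = recommend(history)
    clicked = max(feed, key=lambda item: abs(item[1]))
    history.append(clicked)
    center = sum(pos for _, pos in history) / len(history)
    print(f"step {step}: engagement center = {center:+.2f}")
```

Under these assumptions, the printed engagement center settles near one pole of the axis within a few steps; this is the narrowing effect that the opt-in proposal described above is meant to interrupt.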
Hussain’s analysis offers insight into how credibility, regulation, and the dissemination of information intersect in the effort to combat extremist ideologies online. Social media functions as an attention economy, meaning that flashy, attention-grabbing content tends to outperform content that is not. With this in mind, Hussain explained that this is not a problem we can ‘fix,’ but rather an opportunity for all actors to participate in an AI revolution comparable to the development of the internet. Rather than leaving extremists to use AI to their sole advantage, positive actors can use the same tools to advocate for positive messages. In Hussain’s view, the threat posed by AI and extremism can be managed rather than stopped altogether, and the outlook is not entirely bleak: AI is the next frontier of innovation and offers opportunities for positive impact and counter-extremism policy.