The Coming AI and Extremism Threat


June 6, 2024

Extremist groups have been, and always will be, early adopters of new technologies in their pursuit of more followers and greater prominence for their cause. I recall my amazement when I was first introduced to ‘Azzam.com’ in the mid-90s, when the internet was barely a thing and yet jihadists had managed to set up flashy websites giving battlefield updates from the theatres in which they were active. Azzam.com, launched in 1994, is named after Abdullah Azzam, who is regarded as the godfather of global jihad and acted as a mentor for Usama bin Laden. The Neo-Nazi web forum ‘Stormfront’, set up by a former member of the Ku Klux Klan, was established in late-1996. For context, the BBC did not have a fully-fledged news website until 1997.

The internet, and associated technological tools, offer extremists and other malicious actors distinct advantages that are invaluable given the nature of their causes. Firstly, they can remain anonymous yet reach out to, and influence, an unlimited number of people around the world. Secondly, they can operate in a de-territorialised space in which governance and regulation are weak and always playing catch-up. Thirdly, they can control the narrative and exclude any dissenting voices who might challenge what they promote. Fourthly, they can create the illusion of mass appeal and support for otherwise fringe causes and groups, which lends a degree of credibility where there was none previously.

The birth, rise and exponential growth of Artificial Intelligence (AI) is likely to be the most significant and impactful technological revolution in human history. For the first time in our existence as a species there will be a force that is not only able to perform cognitively complex tasks but can, in most cases, complete them far better and faster than we can. Tasks which took months will now take seconds, and the seemingly impossible will soon be a few mouse clicks away for anyone with a device and access to the internet. Given Moore’s Law, which observes that computer processing power roughly doubles every two years, we can expect the rudimentary AI we have today to become rapidly more sophisticated in the coming years.

Given the accessibility and widespread availability of AI, the technology is also likely to be used for harmful purposes. Bad actors and extremist groups could easily deploy it to create and disseminate misinformation and extremist propaganda on an unprecedented scale. Given the sophistication of AI tools and their rapidly evolving nature, this will be very difficult to stop. Furthermore, given the business model of most large social media platforms, this harmful content could be fed into recommendation algorithms that amplify whatever is likely to appeal to users. With state agencies often unable to keep up with technological developments in legislative terms, we could be heading for a time in which AI is used to usher in a period of political and social chaos and instability.

This issue is explored in a forthcoming book entitled ‘Cyber Security in the Age of Artificial Intelligence and Autonomous Weapons’, which includes a chapter examining the various ways in which extremist groups could use AI and what can be done about it. What follows is a summary of the key findings of that chapter.

There are four main ways in which the phenomenon outlined above is likely to play out, namely: generative AI, chat-bots, gaming and predictive analytics. Extremists could use generative AI to create all manner of content, from propaganda videos and images to music and translations. Material that previously took weeks or months for individuals with a degree of technical expertise to produce will now be simple for anyone to create. Extremists could program chat-bots to mimic the worldview of their propagandists. AI-generated accounts could be deployed on gaming platforms to identify and attract potential recruits, and AI-powered analytical tools could be used to home in on those most vulnerable to radicalisation.

With regards to regulating extremism on Big Tech platforms such as Facebook, X and YouTube, meaningful progress is unfortunately hindered by the trade-off between societal harm and corporate profits, with the latter often being the dominant factor. These platforms rely on retaining user attention because that drives advertising revenue, so they are reluctant to take steps that reduce the time users spend on their platforms engaging with content they find attractive. Unfortunately, this applies to all types of users and a huge variety of content, including material that seeks to build sympathy for extremist causes.

One feature that could be imposed on online platforms, forums and messaging apps is the requirement to clearly identify AI bot accounts with some sort of marker. Users should have a right to know whether they are interacting with a human or an AI bot, since that is highly likely to change the manner in which they respond to the account in question. This quite obvious measure has yet to be taken because it would likely reduce engagement on platforms, and that means less advertising revenue. Algorithmic amplification, i.e. when platforms promote content they deem to be attention-grabbing, would also need to be tackled in order to reduce the reach and efficacy of extremist AI bot accounts. Platforms should be pressured to make it easier to switch off these algorithms so that an online experience free of promoted and suggested content becomes the default, rather than the exception it currently is.

With Big Tech, as with fast food chains, the business model is the problem, and the steps needed to tackle it require political will that is currently either lacking, muddled by unclear intentions or simply confused by a lack of understanding. Both the UK and the EU have recently introduced legislation designed to tackle online harms and, whilst these pieces of legislation are steps in the right direction, they have their limitations since they rely on users finding extremist content problematic and reporting it. This will not always be the case, especially with sympathisers who like the content they are consuming, or where the content is more targeted in its online dissemination. The EU legislation is primarily focused on the larger tech platforms and on content that is publicly accessible and, whilst that seems very sensible on the surface, it leaves gaps: the online world is splintering, and many of the new platforms and forums emerging would not be as affected by the legislation.

These legislative measures also introduce a degree of ambiguity, since terms such as ‘misinformation’ and ‘harmful’ can be debated and disputed. This opens the door to abuse, as governments could label content they do not like as ‘misinformation’, as has already happened on numerous occasions. The result is a Big Tech/Big Government nexus that has the net effect of stifling free speech and political dissent in the name of tackling extremism, which in the long term discredits counter-extremism efforts across the board and empowers extremist narratives.

Tackling the extremist use of AI is going to be a difficult challenge given the manner in which the political and corporate landscape is currently configured. This is further compounded by the disunity found amongst civil society groups, who often remain mired in ‘culture war’ debates and lack a common sense of values and purpose with which to take on extremists of all stripes. This disunity and internal strife is also exploited by Big Tech platforms and is something extremist groups deploying AI are likely to take advantage of in the near future.

It seems we are almost uniquely unprepared for the extremist adoption of AI. As such, it is likely to start having an impact on our societies well before we can agree on effective ways to counter it. There is a lot that can be done, but it will only be enacted once we acknowledge the scale and nature of the threat. Current approaches to tackling online extremism are woefully inadequate; we need to be honest about why that is the case if we are to move this debate forward in any meaningful way.