The global security landscape is undergoing a fundamental transformation driven by the rapid advancement of artificial intelligence technologies, which have evolved from purely technical tools into strategic forces reshaping patterns of power and conflict. Artificial intelligence has emerged as a transformative capability offering substantial societal benefits, yet its inherently dual-use nature renders it a double-edged instrument.
A careful examination of historical precedents reveals a recurring pattern: terrorist organisations demonstrate a high degree of adaptability in exploiting emerging technologies to advance their radical agendas. Just as these groups previously leveraged online forums and encrypted communication platforms, they are now actively exploring and adopting artificial intelligence capabilities. This shift is no longer confined to speculative concern or theoretical risk. Rather, AI-enabled terrorism has moved from conceptual discussion into an experimental phase characterised by replication and rapid diffusion, raising acute concern among security institutions and governments that the technology may become a strategic enabler of unprecedented operational capability.
The convergence between artificial intelligence and the logic of asymmetric warfare is fundamentally altering the balance of power between states and non-state actors, significantly lowering the barriers to entry historically imposed by advanced military technologies. Field reporting and intelligence evidence indicate the development of a multi-domain adoption strategy spanning the informational, physical, and cyber spheres, necessitating a deeper analytical examination of how terrorism is being re-engineered in the age of intelligent systems.
The informational domain represents the most visible, widespread, and operationally mature application of AI among terrorist organisations. The emergence of generative AI has fundamentally transformed the economics and speed of propaganda production. What once required centralised media offices and specialised teams of editors and translators has now evolved into a scalable, industrialised process that can be executed by dispersed cells or even individual actors at negligible marginal cost.
In this context, AI functions as a force multiplier, eliminating the traditional trade-off between geographic reach and qualitative impact. This shift is particularly evident in the growing reliance of groups such as the Islamic State of Iraq and Syria (ISIS) on AI-enabled translation services. The critical distinction lies in the ability to overcome linguistic barriers with high precision. Unlike earlier machine translation systems, which often produced crude and incoherent outputs, modern large language models enable refinement and calibration that preserve both doctrinal terminology and the emotional cadence of the original text. This allows leadership communications to be disseminated to global audiences in multiple languages, including English and Urdu, within minutes of release.
Documented cases indicate the use of open-source transcription tools to translate and circulate leadership messages in near real time, enabling these organisations to maintain narrative coherence across fragmented global support networks. Supporting data also point to the archiving of thousands of AI-generated texts within extremist ecosystems, signalling a significant expansion in the integration of these tools into standard operational practices.
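To illustrate how low this barrier has become, consider the sketch below, which uses the openly available Whisper speech model to transcribe and translate an audio message in a single call. This is a hypothetical illustration of the class of open-source tooling described above, not a reconstruction of any specific group's workflow; the filename and model size are assumptions.

```python
# Illustrative sketch only: the open-source Whisper model transcribes speech
# and can render it directly into English in one call, which is why
# near-real-time multilingual dissemination no longer requires a media team.
# "statement.mp3" and the model size are hypothetical placeholders.
import whisper

model = whisper.load_model("small")           # downloads a modest checkpoint
result = model.transcribe("statement.mp3",    # any audio file, any source language
                          task="translate")   # transcribe, then output English
print(result["text"])                         # English text, ready to circulate
```

The point of the sketch is economic rather than technical: a task that once required a dedicated translation cell now runs unattended on a consumer laptop at negligible marginal cost.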
The risks associated with this transformation extend to synthetic media and deepfake technologies, which have moved from theoretical concern to documented components of extremist toolkits. These capabilities are being deployed to fabricate endorsements, generate confusion, and erode trust in institutions. Among the most concerning applications is the use of voice-cloning technologies by far-right and neo-Nazi groups to revive the speeches of historical figures such as Adolf Hitler. By training models on archival recordings, these actors generate new audio content in which the dictator delivers contemporary messages in English, while preserving the original vocal tone and rhetorical style. The objective is to “modernise” hate speech and render it more accessible to digitally native audiences accustomed to audiovisual media.
In parallel, ISIS has sought to modernise its visual output by using AI-generated news presenters. These synthetic figures closely resemble human broadcasters and deliver segments modelled on professional international news formats. This approach enables the group to project an image of a technologically advanced “Caliphate 2.0” without the need for physical infrastructure or exposing media personnel to operational risk.
Moreover, the transition toward what may be termed “interactive extremism” through chatbot systems represents a qualitative shift from passive propaganda consumption to active, personalised engagement. The danger of these systems lies in their capacity to tailor messaging to a user’s doubts and emotional state, thereby constructing a simulated relational dynamic that mirrors human manipulation strategies. Research indicates that extremist groups are exploring the deployment of machine learning models functioning as “religious” or ideological guides, trained exclusively on radical literature and embedded within private chat environments that provide a controlled space for potential recruits to engage with extremist ideas.
This tactic poses a particularly acute threat to vulnerable populations and minors, as such “always-on” systems can sustain continuous indoctrination and emotional grooming at a scale and persistence that human recruiters cannot replicate. In parallel, neo-Nazi networks are advancing what is often described as “Memetic Warfare”, leveraging image generation tools to produce high-density visual content capable of bypassing moderation filters. This proliferation places significant strain on conventional content-monitoring systems, which struggle to identify evolving variants of AI-generated hate symbols.
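The moderation problem just described can be made concrete with perceptual hashing, the standard technique behind industry hash-sharing databases. The minimal sketch below uses the open-source imagehash library; the filenames and distance threshold are assumptions. It shows both why hash matching catches trivial re-uploads and why AI-generated variants defeat it: each freshly generated image typically lands well outside the Hamming-distance threshold of every known hash.

```python
# Minimal sketch of hash-based image moderation, assuming the open-source
# `imagehash` and Pillow libraries. Filenames and threshold are illustrative.
from PIL import Image
import imagehash

THRESHOLD = 8  # max Hamming distance treated as "same image" (assumed value)

known_hashes = [imagehash.phash(Image.open(p))
                for p in ["banned_symbol.png"]]   # hypothetical blocklist

def is_known_variant(path: str) -> bool:
    """Flag images whose perceptual hash is near any blocklisted hash."""
    h = imagehash.phash(Image.open(path))
    # Subtracting two perceptual hashes yields their Hamming distance.
    return any(h - known <= THRESHOLD for known in known_hashes)

# A cropped or re-compressed re-upload stays within the threshold and is
# caught; a freshly AI-generated variant of the same symbol usually is not.
print(is_known_variant("suspect_upload.png"))
```

This asymmetry is the strain the text describes: defenders must continually re-harvest and re-hash new variants, while generation of fresh variants is effectively free.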
While informational operations constitute the most widespread application, the most lethal and strategically significant deployment of artificial intelligence lies in the physical domain, particularly through its integration into unmanned aerial systems. This domain is undergoing a decisive transition from remotely operated platforms that rely on skilled operators and remain susceptible to electronic disruption to fully autonomous systems capable of independently identifying and engaging targets.
The Houthi movement (Ansar Allah) in Yemen represents a leading example of these capabilities, having demonstrated a high degree of proficiency in adapting and deploying such systems at a scale comparable to that of state actors. This evolution was notably illustrated in the July 2024 attack on Tel Aviv, when a modified “Samad-3” drone reportedly travelled approximately 2,600 kilometres to strike a civilian target with precision, successfully evading advanced air defence systems. This incident provides compelling evidence that non-state actors now possess the capacity to project force across thousands of kilometres, achieving a level of strategic reach that was once the exclusive domain of conventional military forces.
The most significant technological leap in these systems lies in their ability to overcome electronic warfare measures, particularly within GPS-denied environments. Emerging platforms rely on AI-enabled visual navigation, using electro-optical sensors to perform pattern recognition and identify specific targets such as vessels or buildings. More critically, the adoption of scene-matching navigation systems enables drones to compare real-time terrain with preloaded satellite imagery, allowing precise positioning without reliance on satellite signals that are vulnerable to jamming. The resilience of these systems is further reinforced by diversified supply chains that integrate commercially available components, such as model aircraft engines, with advanced guidance software. This convergence underscores the persistent challenge of dual-use technologies.
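The underlying principle of scene-matching navigation is ordinary image registration, taught in introductory computer-vision courses. The deliberately simplified sketch below uses OpenCV template matching to show why the technique offers jamming nothing to attack: position is recovered by locating the live camera frame inside a stored reference map, with no external signal involved. The filenames, map scale, and georeferencing arithmetic are assumptions, and real systems are far more robust than this toy example.

```python
# Simplified illustration of scene-matching positioning with OpenCV.
# This sketch shows only the principle: localisation against preloaded
# imagery involves no radio link, so jamming has nothing to disrupt.
import cv2

ref_map = cv2.imread("satellite_reference.png", cv2.IMREAD_GRAYSCALE)  # preloaded imagery
frame   = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)         # downward-looking view

# Normalised cross-correlation scores every placement of the frame in the map.
scores = cv2.matchTemplate(ref_map, frame, cv2.TM_CCOEFF_NORMED)
_, confidence, _, (x, y) = cv2.minMaxLoc(scores)  # best-matching offset

# Converting the pixel offset to distance assumes a known map scale and
# orientation (hypothetical value here).
METRES_PER_PIXEL = 1.5
print(f"match confidence {confidence:.2f}; "
      f"offset ({x * METRES_PER_PIXEL:.0f} m, {y * METRES_PER_PIXEL:.0f} m) from map origin")
```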
Alongside long-range strike capabilities, the emergence of swarm attacks represents a significant tactical challenge for security and defence systems. Terrorist organisations are increasingly able to deploy rudimentary yet coordinated swarms designed to overwhelm and disrupt defensive architectures, drawing on the widespread availability of commercially produced drones, including Chinese-manufactured platforms such as the “Blowfish A3”. Marketed with autonomous functionalities, these systems can be programmed to saturate a designated area with concentrated firepower.
While human operators are inherently limited in their ability to coordinate large numbers of drones simultaneously, artificial intelligence enables automated target allocation and the execution of “fire-and-forget” attack patterns. This development is not confined to a single actor. Reports indicate that Hamas has experimented with pre-programmed loitering munitions, while ISIS continues to demonstrate sustained interest in advancing its drone capabilities. Taken together, these trends point to a likely intensification of such threats as these organisations regroup and adapt their operational frameworks.
In the cyber domain, the emergence of artificial intelligence has fundamentally altered the risk landscape, creating a low-risk, high-reward operational environment for terrorist actors. A new category of cyber weaponry has emerged: malicious large language models, which decouple the ability to conduct sophisticated attacks from the need for advanced technical expertise. Cybercriminal networks and extremist sympathisers have developed tools such as WormGPT and FraudGPT, representing forms of “dark AI” trained on malicious code datasets and deliberately designed to circumvent ethical safeguards.
These systems are marketed as illicit alternatives to mainstream commercial models. Their capabilities are substantial, including the generation of polymorphic malware that continuously alters its signature to evade detection by antivirus systems, as well as the production of highly convincing, multilingual phishing messages that are virtually indistinguishable from legitimate communications.
For terrorist organisations lacking advanced technical expertise, these tools constitute a strategic enabler, allowing individuals with modest skills to conduct targeted phishing campaigns against critical infrastructure and financial institutions, thereby effectively removing barriers to executing complex cyber operations. Moreover, these technologies have facilitated a transition from rudimentary, manually operated tools, such as the “Caliphate Cannon” previously employed by ISIS, to the full automation of the cyber kill chain.
Contemporary AI systems are capable of conducting reconnaissance by scanning thousands of digital addresses for vulnerabilities, subsequently weaponising these weaknesses by generating tailored exploit code, and ultimately executing attacks via botnet infrastructure at a speed that exceeds human defensive response capabilities. As a result, the scale of threat is no longer determined by the number of human operatives, but by the computational capacity available for automation. This risk extends into the domain of terrorist financing, where generative AI tools are used to create synthetic identities and highly realistic documentation, and deepfake technologies are deployed to bypass biometric verification systems in financial institutions, thereby facilitating money laundering and the funding of operations.
Perhaps the most dangerous and overlooked dimension of this evolving landscape is the use of artificial intelligence by non-state actors for purposes of control, governance, and repression, rather than solely for combat operations. The seizure of state biometric systems by such groups represents a catastrophic failure in data security with far-reaching consequences. The case of Afghanistan illustrates this starkly. Following the collapse of the government in 2021, the Taliban captured U.S. military HIIDE devices and their associated databases, which contained iris scans and fingerprint records of millions of citizens. Technologies originally designed for stabilisation and counterinsurgency were thus repurposed into instruments of coercion, enabling the systematic identification and targeting of former security personnel, translators, and political opponents through the use of biometric-enabled checkpoints.
This case underscores the dangers inherent in the dual-use nature of identity control technologies and reveals the troubling persistence of biometric data. While compromised passwords can be reset, individuals cannot alter their iris patterns or fingerprints, rendering them permanently vulnerable if such databases fall into the hands of adversaries. This threat is not confined to conflict zones but extends into Western democracies, where extremist and neo-Nazi groups are exploiting commercially available facial recognition tools such as PimEyes as instruments of doxxing.
These systems enable the identification of protesters and political opponents through publicly available images, facilitating targeted harassment and intimidation campaigns. In effect, this dynamic is giving rise to a form of “privatised surveillance state”, allowing such groups to monitor and coerce perceived adversaries without requiring formal state infrastructure. The proliferation of these algorithms also signals that anonymity in public spaces can no longer be assumed, fundamentally reshaping prevailing notions of privacy and personal security in the digital age.
Taken together, available evidence and field precedents indicate that the world is entering the early stages of an “algorithmic arms race” between terrorist organisations and security institutions, characterised by an inherent asymmetry that favours attackers. Artificial intelligence amplifies this imbalance by decoupling “impact” from “cost”. Economically, low-cost drones equipped with open-source software can challenge highly expensive defence systems, shifting the dynamic towards the strategic attrition of state resources. In terms of scale, automation enables a limited number of actors to generate media and cyber effects comparable to those of fully resourced digital operations, while simultaneously complicating attribution and facilitating plausible deniability.
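A back-of-the-envelope cost-exchange calculation makes the attrition logic explicit. The figures are purely illustrative assumptions chosen for the arithmetic, not data drawn from the sources cited below:

$$\text{cost-exchange ratio} \;=\; \frac{\text{cost per interceptor}}{\text{cost per drone}} \;\approx\; \frac{\$2{,}000{,}000}{\$20{,}000} \;=\; 100$$

On such assumptions, even a perfect interception rate leaves the defender spending two orders of magnitude more per engagement than the attacker, which is precisely the strategic attrition dynamic described above.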
Moreover, the prospect of “adversarial AI” introduces an additional layer of risk, as hostile actors may seek not only to exploit these technologies but also to undermine state AI systems through data poisoning, adversarial inputs designed to disrupt defensive algorithms, or attempts to extract sensitive information related to advanced scientific domains, including the development of pathogens or other high-risk capabilities. Algorithmic insurgency is no longer a future scenario but an unfolding reality, with its early contours already visible. Preparing for this transformation requires a fundamental reassessment of data security frameworks, the governance of open-source models, and the protection of dual-use technologies, as the boundary between sovereign capabilities and those of non-state actors continues to erode at an accelerating pace.
Burdett, Emma. “How LLMs Like WormGPT Are Reshaping Cybercrime in 2025.” Rapid7, June 26, 2025. Accessed January 27, 2026. https://www.rapid7.com/blog/post/ai-goes-on-offense-how-llms-are-redefining-the-cybercrime-landscape/.
Cubukcu, Suat, and Evan Jordan. “The Houthi Drone Supply Chain.” Orion Policy Institute, July 26, 2025. https://orionpolicy.org/the-houthi-drone-supply-chain/.
Vision of Humanity. “Preventing Terrorists from Using Emerging Technologies.” April 21, 2025. https://www.visionofhumanity.org/preventing-terrorists-from-using-emerging-technologies/.
Friedland, Alex. “IDF Uses AI to Accelerate Targeting According to Report, the White House Releases Guidance on Federal AI Use, and a Roundup of the Latest Computing and Funding News.” Center for Security and Emerging Technology, April 11, 2024. Accessed January 10, 2026. https://cset.georgetown.edu/newsletter/april-11-2024/.
Hu, Margaret. “The Taliban Reportedly Have Control of US Biometric Devices – a Lesson in Life-and-Death Consequences of Data Privacy.” Nextgov.com, August 30, 2021. Accessed January 23, 2026. https://www.nextgov.com/ideas/2021/08/taliban-reportedly-have-control-us-biometric-devices-lesson-life-and-death-consequences-data-privacy/184948/.
Hummel, Kristina, and Michael Knights. “The Houthi War Machine: From Guerrilla War to State Capture.” Combating Terrorism Center at West Point, September 17, 2018. Accessed January 18, 2026. https://ctc.westpoint.edu/houthi-war-machine-guerrilla-war-state-capture/.
Hummel, Kristina, Don Rassler, and Yannick Veilleux-Lepage. “On the Horizon: The Ukraine War and the Evolving Threat of Drone Terrorism.” Combating Terrorism Center at West Point, March 28, 2025. Accessed January 25, 2026. https://ctc.westpoint.edu/on-the-horizon-the-ukraine-war-and-the-evolving-threat-of-drone-terrorism/.
Makuch, Ben. “Extremists Are Using AI Voice Cloning to Supercharge Propaganda. Experts Say It’s Helping Them Grow.” The Guardian, December 21, 2025. https://www.theguardian.com/technology/2025/dec/21/ai-voice-cloning-nazis-islamic-state-extremism.
Mathur, Priyank, Clara Broekaert, and Colin P. Clarke. “The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence.” International Centre for Counter-Terrorism – ICCT, May 1, 2024. Accessed January 10, 2026. https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence.
Nelu, Clarisa. “Exploitation of Generative AI by Terrorist Groups.” International Centre for Counter-Terrorism – ICCT, June 10, 2024. Accessed January 6, 2026. https://icct.nl/publication/exploitation-generative-ai-terrorist-groups.
Nevola, Luca, and Valentin D’Hauthuille. “Six Houthi Drone Warfare Strategies: How Innovation Is Shifting the Regional Balance of Power.” ACLED, December 11, 2025. Accessed January 21, 2026. https://acleddata.com/report/six-houthi-drone-warfare-strategies-how-innovation-shifting-regional-balance-power.
OE Data Integration Network (ODIN). “Sammad-3 Yemeni Reconnaissance and Loitering Munition Unmanned Aerial Vehicle (UAV),” October 6, 2024. Accessed January 27, 2026. https://odin.tradoc.army.mil/WEG/Asset/Sammad-3_Yemeni_Reconnaissance_and_Loitering_Munition_Drone.
Schoemaker, Emrys. “The Taliban Are Showing Us the Dangers of Personal Data Falling Into the Wrong Hands.” The Guardian, September 7, 2021. https://www.theguardian.com/global-development/2021/sep/07/the-taliban-are-showing-us-the-dangers-of-personal-data-falling-into-the-wrong-hands.
Stevenson, James. “Technology Evolves the Tactics: Preparing for the Rise of Terrorist AI Harms.” Centre for Research and Evidence on Security Threats, October 15, 2025. Accessed January 10, 2026. https://crestresearch.ac.uk/comment/technology-evolves-the-tactics-preparing-for-the-rise-of-terrorist-ai-harms/.