THE NEW COUNTERTERRORISM TERRAIN
AI and the Future of Counterterrorism
Priyank Mathur
At 3:30 AM on a muggy October night in Mindanao, Philippines, Faizal (not his real name) awoke to the familiar sound of his own screams. Haunted by nightmares ever since his return from Syria a year earlier, the former ISIS fighter lay with his heart pounding, overcome by a sudden wave of guilt. He reached for his phone and scrolled automatically to the WhatsApp profile of his former roommate in Raqqa, tempted to call him. But he scrolled a little further and instead typed out a two-word message to a contact named “Aldous”: “Help me.” He told Aldous he felt both guilty for abandoning his brothers in Raqqa and angry at them for ruining his life. He confessed that he had recently thought about returning to Syria.
For the next 45 minutes, Aldous spoke empathetically with Faizal, ultimately convincing him that it was better to build a new life of peace in Mindanao than to return to his old life of violence. “Thank you, my friend,” Faizal messaged before going back to sleep. His friend Aldous wasn’t a person, however – “he” was a rehabilitation chatbot speaking not just with Faizal, but with 52 other former terrorists in the Southern Philippines and Indonesia.
Faizal’s story offers a preview of how artificial intelligence (AI) is beginning to transform counterterrorism. Much of the public discourse around AI and terrorism has rightly focused on mitigating threats: AI-generated propaganda, deepfakes, and the radicalization of vulnerable individuals by extremist chatbots. In a previous article, my co-authors and I explored how a 19-year-old British man named Jaswant Singh Chail was nudged toward an attempted assassination of Queen Elizabeth II by an AI chatbot he had created on the app Replika, exchanging over 5,000 messages with it in the weeks before the attack.[1] The threats are real, and they are growing.
At the same time, however, AI is already demonstrating enormous potential to improve the efficacy and efficiency of counterterrorism efforts. Three opportunities in particular stand to reshape the future of counterterrorism: the AI-powered rehabilitation of former terrorists, “Physical AI” that powers robots capable of executing high-risk counterterrorism operations, and “World Models” that strengthen preparedness against future attacks.
AI-Powered Rehabilitation
The challenge of rehabilitating former terrorists is one of the most consequential and least glamorous problems in counterterrorism. Thousands of former fighters from groups like ISIS and al-Qaeda have returned, or been repatriated, to countries across the Middle East, Southeast Asia, Central Asia, and Europe. Many do not meet the evidentiary threshold for criminal prosecution in their home countries and must be reintegrated into society. Potential recidivism among these returnees risks seeding the next wave of attacks. For the United States, which remains the primary target of many jihadist groups around the world, the failure to rehabilitate former fighters in countries like the Philippines, Indonesia, Tajikistan, or France is not a distant abstraction. It is an urgent and direct security concern.
But traditional rehabilitation programs face three common limitations related to scale, accessibility, and evaluation. Countries grappling with the return of thousands of former terrorists struggle to find enough qualified human practitioners to sufficiently counsel every individual. Human counselors who work normal hours and carry heavy caseloads are also not always accessible. If a former fighter wakes up with PTSD at 3 AM, it is unlikely their human counselor will be available to chat, let alone patient enough to engage in a 90-minute conversation. Traditional rehabilitation programs that rely solely on manual evaluations of participants also lack standardized, objective, and unbiased metrics to track each former fighter’s progress and risk level. Relying on case notes that are subjective and inconsistent across counselors risks producing an incomplete or misleading picture of an individual’s trajectory, making it harder to identify red flags.
AI-powered systems, though, can significantly strengthen a government’s rehabilitation efforts by offering almost limitless scale and accessibility as well as objective risk assessments. One example is Aldous, an AI-powered rehabilitation and risk assessment system developed by Mythos Labs. Aldous consists of two components: the Aldous Chatbot and the Aldous Risk Assessment Dashboard. The chatbot engages former terrorists in personalized, empathetic conversations that address the cognitive and emotional drivers of their radicalization and steer them away from violent ideologies. The dashboard enables the practitioners and law enforcement officials running the rehabilitation program to view real-time analysis of each individual’s radicalization stage, psychographic profile, risk level, and recommended deradicalization approaches – all auto-generated from chatbot conversations. Aldous was trained on real-world data from over 40 countries, including actual case files from terrorist rehabilitation programs. It has been deployed in pilot programs in the Philippines and Indonesia, two countries at the sharp end of the foreign fighter reintegration challenge.
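To make the division of labor concrete, the sketch below outlines how a two-part system of this kind could be wired together: a conversational front end feeding a risk-assessment layer that the dashboard would aggregate. It is a minimal illustration only. Aldous’s actual architecture, stages, and scoring methods are not public, and every marker, threshold, and data structure here is an invented placeholder; a production system would use fine-tuned models rather than keyword rules.

```python
# Hypothetical sketch of a chatbot-plus-risk-assessment pipeline.
# All stages, cues, and weights below are invented for illustration.
from dataclasses import dataclass, field

STAGE_MARKERS = {  # placeholder linguistic cues per radicalization stage
    "grievance": ["ruined my life", "betrayed", "angry"],
    "identification": ["my brothers", "our cause"],
    "mobilization": ["return to syria", "fight again", "join"],
}

@dataclass
class Assessment:
    stage: str
    risk: float                      # 0.0 (low) to 1.0 (high)
    flags: list = field(default_factory=list)

def assess(transcript: str) -> Assessment:
    """Score one conversation; a real system would use a fine-tuned model."""
    text = transcript.lower()
    flags = [cue for cues in STAGE_MARKERS.values() for cue in cues if cue in text]
    stage = "stable"
    for s, cues in STAGE_MARKERS.items():    # later (dict-order) stages dominate
        if any(cue in text for cue in cues):
            stage = s
    risk = min(1.0, 0.2 * len(flags) + (0.4 if stage == "mobilization" else 0.0))
    return Assessment(stage, risk, flags)

# Feed one exchange to the assessor; a dashboard would aggregate these over time.
print(assess("I am angry at my brothers. I keep thinking I should return to Syria."))
```

The design point is the separation of concerns: the conversation itself stays empathetic and open-ended, while a parallel analytic layer turns each exchange into structured signals that human practitioners can review.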
The results have been encouraging. Participants who engaged with the Aldous chatbot showed measurable reductions in assessed risk levels, and the system’s real-time risk assessments proved useful to the human practitioners overseeing the programs. But perhaps the most revealing insights came from qualitative feedback, specifically from the former fighters themselves. Two findings in particular stood out.
First, the chatbot is always available. Former terrorists suffering from PTSD, nightmares, or sudden ideological crises in the middle of the night (as “Faizal” did) have something to reach for other than the phone number of a former comrade. The human counselor has gone home. Aldous has not.
Second, the chatbot never quits. Former fighters often hold views about the world that are deeply, almost geologically, entrenched. Shifting those views requires not just expertise but extraordinary patience: the willingness to revisit the same arguments, absorb the same hostility, and re-engage the same individual dozens or even hundreds of times without losing interest or composure. Even the most well-trained human counselors eventually hit a wall. Ironically, chatbots don’t give up on people the way people give up on people.
These findings are consistent with a growing body of research suggesting that AI chatbots may be uniquely effective at changing deeply held beliefs. A landmark 2024 study published in Science by researchers at MIT Sloan, Cornell, and American University found that a brief, personalized conversation with an AI chatbot reduced individuals’ belief in conspiracy theories by 20 percent on average – an effect that persisted, undiminished, for at least two months.[2] The researchers found that the chatbot’s ability to marshal vast amounts of evidence and tailor its counterarguments to the specific reasoning of each individual gave it a persuasive edge that is difficult for any single human interlocutor to replicate. A follow-up study supported by the Anti-Defamation League applied a similar approach to antisemitic conspiracy theories, with comparable results.[3] If AI can move the needle on conspiracy beliefs in a lab setting, its application to the far more structured and supervised environment of terrorist rehabilitation programs is a logical and promising next step.
The scalability of this approach is difficult to overstate. An AI-powered rehabilitation system can be deployed in any country, in any language, trained on the ideological specifics of any terrorist group, and made available around the clock, all at a fraction of the cost of expanding human-led programs. It is not, however, a replacement for human practitioners and the unique judgment and experience they bring. It is, instead, an always-on complement to their abilities.
Taking Humans Out Of Harm’s Way
The rehabilitation of former fighters is a long-term problem. But counterterrorism also involves acute, high-stakes physical operations where lives are on the line in real time: defusing a bomb, investigating a chemical threat, clearing a building during a hostage rescue. These are the most dangerous tasks in the counterterrorism (CT) toolkit, and they have always required putting human operators in extreme peril. That calculus is beginning to change.
The catalyst is what the technology industry has begun calling “physical AI,” a new generation of AI models designed not just to process language or generate images, but to understand the physical world, reason about it, and direct machines to act within it. NVIDIA CEO Jensen Huang declared at CES 2025 that “the ChatGPT moment for robotics is nearly here,” announcing open foundation models that enable robots to perceive their environment, plan actions, and adapt in real time. Companies like Boston Dynamics, Figure AI, and NEURA Robotics are already building systems powered by these models. That same month, NVIDIA launched its Cosmos platform, a suite of world foundation models trained on 20 million hours of video, specifically designed to generate physics-aware synthetic environments for training robots and autonomous systems.[4]
What does this mean for counterterrorism? Consider bomb disposal. In February 2025, the UK Ministry of Defence conducted live trials of advanced robotic systems (including robotic canines built on Boston Dynamics’ Spot platform) that successfully detected and defused improvised explosive devices. The robots navigated stairs, opened doors autonomously, and fired disruptors at IEDs to render them safe, all while their human operators remained at a safe distance.[5] These are tasks that traditional remote-controlled robots struggle with, because they involve too many moving parts, too many unexpected variables, and too much need for real-time improvisation. Physical AI closes this gap by giving machines the ability to reason about their environment and adapt on the fly, much as a human operator would, but without the human being inside the blast radius.
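The underlying control pattern is worth making concrete. What separates physical AI from a teleoperated robot is a closed perceive-plan-act loop running on the machine itself. The sketch below is a toy illustration of that loop under invented assumptions: the sensor summaries, action set, and decision rules are placeholders for the learned perception and policy models a real system would use.

```python
# Toy perceive-plan-act loop: the control pattern behind physical AI.
# Scene summaries, actions, and rules here are stand-ins for learned models.
import random

def perceive() -> dict:
    """Stand-in for camera/LIDAR fusion: returns a symbolic scene summary."""
    return {
        "obstacle": random.choice(["none", "door", "stairs"]),
        "ied_visible": random.random() < 0.3,
        "confidence": random.uniform(0.5, 1.0),
    }

def plan(scene: dict) -> str:
    """Pick the next action; a learned policy would replace these rules."""
    if scene["confidence"] < 0.6:
        return "hold"                      # uncertain: wait and re-sense
    if scene["ied_visible"]:
        return "deploy_disruptor"          # render the device safe
    if scene["obstacle"] == "door":
        return "open_door"
    if scene["obstacle"] == "stairs":
        return "climb_stairs"
    return "advance"

for step in range(5):                      # the operator watches from a distance
    scene = perceive()
    action = plan(scene)
    print(f"step {step}: scene={scene} -> {action}")
    if action == "deploy_disruptor":
        break                              # task done; no human entered the blast radius
```

The loop, not any single rule, is the point: the machine continuously re-senses and re-plans, which is what lets it cope with the unexpected variables that defeat simple remote control.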
Beyond bomb disposal, AI-powered autonomous systems could investigate suspected chemical, biological, or radiological threats, entering contaminated environments too dangerous for first responders. They could support kinetic operations like building raids, providing real-time situational awareness and even physical assistance in the chaotic, unpredictable final moments of a tactical assault. Every bomb defused by a machine rather than a human technician, every contaminated site assessed by a robot rather than a first responder, would take one more human life out of harm’s way.
There are, of course, real risks. An AI-powered system that miscalculates during bomb disposal could trigger the very explosion it was meant to prevent. A robot that misjudges a tactical situation during a hostage rescue could cause casualties. These are not trivial concerns, and they deserve sober attention. But they must be evaluated against the right baseline: not against perfection, but against current human performance under the same conditions.
AI And Counterterrorism Resilience
Even the most sophisticated prevention architecture will not stop every attack, however. Terrorist groups are adaptive adversaries that probe for weaknesses, exploit seams, and occasionally succeed despite the best efforts of intelligence agencies and law enforcement. This has been true since long before AI existed, and it will remain the case. The question, then, is not only how to prevent attacks but how to prepare for them. Here, AI offers transformative potential.
Perhaps the most significant (and least understood) AI development relevant to counterterrorism preparedness is the rise of what the AI community calls “world models.” What is a world model, and how is it different from the large language models (LLMs) that power chatbots like ChatGPT? LLMs are sophisticated prediction engines that, given a string of words, predict what word comes next. They are extraordinarily good at this, which is why they can write essays, summarize reports, and hold conversations. But they do not understand the world in the way that a policymaker or an intelligence analyst does. They do not grasp that a bridge collapse in one city will disrupt supply chains in another, or that a currency shock in one market will cascade through a dozen others within hours. They model language, not systems.
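A toy example makes the prediction mechanic concrete. The bigram counter below does, in miniature, what an LLM does at vast scale: given the preceding context, it outputs the statistically most likely next word. The corpus and model are deliberately trivial and purely illustrative.

```python
# Miniature illustration of next-word prediction, the core mechanic of LLMs.
# Real models condition on long contexts with billions of parameters.
from collections import Counter, defaultdict

corpus = ("the attack disrupted the supply chain . "
          "the attack shocked the market .").split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1                # count word-pair frequencies

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))    # -> 'attack' (seen twice, beats single counts)
```

Nothing in this predictor knows what an attack or a market is; it only knows which words tend to follow which. That is the gap world models aim to close.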
World models, on the other hand, attempt to simulate how complex, interconnected systems behave over time. They build internal representations of how objects, institutions, and forces interact, how physical and social systems respond to disruption, and how the effects of a single event ripple outward through layers of interdependence. NVIDIA’s Cosmos platform, discussed above, was one of the first major world model initiatives; since then, Google DeepMind, Meta, and a growing ecosystem of startups have raced to build systems capable of modeling not just language but reality itself.
For counterterrorism preparedness, the implications are substantial. A specialized world model could help policymakers, law enforcement, and intelligence agencies simulate and plan for the second- and third-order consequences of major terrorist events before they occur. What happens to financial markets if a coordinated attack strikes a major financial capital? What are the cascading infrastructure failures if a critical transportation node is targeted? How do different government responses (a lockdown versus an all-clear, a presidential address versus silence) affect public behavior, social stability, and the likelihood of copycat attacks? These are the kinds of complex, interconnected questions that have traditionally been answered through intuition, historical analogy, and the educated guesses of senior officials operating under enormous stress. World models offer the possibility of answering them systematically, in advance, through simulation, giving decision-makers not just a plan but a stress-tested understanding of how that plan interacts with a dynamic, unpredictable environment.
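A drastically simplified sketch can illustrate the idea. The toy simulation below propagates a single disruption through a hand-written dependency graph to surface second- and third-order effects. A genuine world model would learn such dynamics from data rather than rely on a hard-coded graph; the nodes and edges here are invented for illustration.

```python
# Toy cascade simulation in the spirit of a world model: propagate the
# downstream effects of one disruption through a dependency graph.
# The graph itself is invented; a real model would learn these dynamics.
DEPENDS_ON = {                       # edge: A -> systems that rely on A
    "rail_hub":      ["freight", "commuters"],
    "freight":       ["retail_supply", "manufacturing"],
    "commuters":     ["workforce_attendance"],
    "retail_supply": ["consumer_prices"],
}

def cascade(initial_failure: str) -> list[str]:
    """Breadth-first propagation of a disruption through the graph."""
    affected, frontier = [initial_failure], [initial_failure]
    while frontier:
        nxt = []
        for node in frontier:
            for dep in DEPENDS_ON.get(node, []):
                if dep not in affected:
                    affected.append(dep)
                    nxt.append(dep)
        frontier = nxt
    return affected

# Simulate an attack on a transportation node, as in the scenario above.
print(cascade("rail_hub"))
# ['rail_hub', 'freight', 'commuters', 'retail_supply',
#  'manufacturing', 'workforce_attendance', 'consumer_prices']
```

Even this crude version shows the analytic payoff: the first-order effect (a disrupted rail hub) is obvious, but the third-order effects (consumer prices, workforce attendance) are the ones planners most often miss under stress.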
AI also stands to transform more immediate preparedness tools. Traditional crisis exercises (the tabletop simulations familiar to anyone in the national security community) have long relied on scripted scenarios that follow predictable patterns and rarely surprise participants in the way a real attack would. AI is enabling a new generation of simulations that are agentic – that is, populated by AI actors that make autonomous decisions – interactive (responding in real time to the choices of human participants), and capable of generating novel, emergent scenarios rather than following a predetermined script. Along similar lines, AI models can now be trained to emulate specific types of terrorist actors, such as a lone-wolf attacker, an ISIS cell leader, or a hostage-taker with particular ideological motivations, and used to train law enforcement and intelligence officials in negotiation and tactical decision-making. This gives practitioners an inexhaustible training partner that never breaks character, never runs out of new scenarios, and can be calibrated to reflect specific psychological profiles, cultural contexts, and ideological drivers.
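A skeletal version of such an exercise loop might look like the sketch below. The persona prompt and the adversary_reply function are placeholders of my own: in practice, the reply would come from a language model conditioned on a vetted psychological profile, with the same turn-based structure keeping the AI actor in character across the whole negotiation.

```python
# Skeleton of an agentic training exercise: a human trainee negotiates
# with an AI actor. adversary_reply is a placeholder for an LLM call
# conditioned on a persona prompt plus the running transcript.
PERSONA = ("You are role-playing a hostage-taker motivated by a specific "
           "grievance. Stay in character; escalate if the negotiator stalls.")

def adversary_reply(persona: str, history: list[str]) -> str:
    """Placeholder rule standing in for a persona-conditioned model call."""
    stalled = bool(history) and "wait" in history[-1].lower()
    return "No more waiting. Decide now." if stalled else "What is your offer?"

history: list[str] = []
for negotiator_line in ["We hear you. Please wait.", "We can meet one demand."]:
    history.append(negotiator_line)          # trainee's turn
    reply = adversary_reply(PERSONA, history) # actor's turn, in character
    history.append(reply)
    print(f"negotiator: {negotiator_line}\nactor:      {reply}")
```

Because the actor is regenerated fresh from the persona at every turn, it never tires, never improvises out of character, and can be swapped for a different profile by changing a single prompt.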
Right-Sizing The Risks
None of the applications of AI described above are risk-free, however, and it would be dishonest to pretend otherwise.
Rehabilitation chatbots trained on biased or incomplete data could misread the individuals they are counseling or, in a worst case, inadvertently reinforce the very narratives they are meant to counter. Overdependence on AI counselors could atrophy the human relationships and institutional knowledge that remain essential to deradicalization. Physical AI systems that malfunction in high-stakes CT operations could cause the very harm they were designed to prevent. Crisis simulations powered by flawed world models could generate scenarios that lead to misguided planning.
These risks are real. But the question policymakers should be asking is not whether AI will make mistakes; it will. Rather, the question is whether it will make fewer mistakes than humans currently do under the same conditions. If an AI system fails to defuse a bomb 0.5 percent of the time, but a human bomb technician fails to do so 1 percent of the time, the AI system is the safer option by the only metric that should matter.
Yet research in behavioral science has documented a phenomenon known as “algorithm aversion,” in which people lose confidence in algorithmic forecasters more quickly than in human ones after observing them make the same mistake, even when the algorithm demonstrably outperforms the human.[6] Critically, this aversion intensifies in high-stakes decision contexts – precisely the domain where AI-assisted counterterrorism operates.[7] When a human bomb technician is killed in the line of duty, it is mourned as a tragic but expected cost of dangerous work. When a robot fails at the same task, it is held up as evidence that the technology is not ready. This asymmetry is understandable; there is something viscerally unsettling about delegating life-and-death decisions to machines. But it is a bias that policymakers must consciously resist if they are to make sound decisions about the role of AI in counterterrorism.
The Path Forward
The terrorist threat has not paused while the United States and its allies have turned their attention to great power competition. AI offers a way to sustain and even enhance counterterrorism capabilities in an era of constrained budgets and competing priorities, but only if policymakers act deliberately.
Three imperatives stand out. First, the United States should invest in AI-powered rehabilitation tools and actively partner with allied governments that are already piloting them. Second, the Department of War and its allied counterparts should accelerate the development and real-world testing of physical AI systems for counterterrorism operations, building on the promising UK trials and integrating advances from the commercial robotics sector. Third, the national security community should begin integrating AI-driven crisis simulations and world models into its preparedness infrastructure – not as futuristic pilot projects, but as core operational tools that inform planning, training, and resource allocation.
AI technology is moving fast. As we have seen time and again – from the Islamic State’s early adoption of drones to terrorist exploitation of encrypted messaging apps and cryptocurrencies – terrorists do not wait for governments to finish deliberating before they harness new tools. AI will be no different. The question, therefore, is not whether AI will reshape counterterrorism. It is whether the United States and its allies will be ahead of that curve, or behind it.
-
[1] Priyank Mathur, Clara Broekaert, and Colin P. Clarke, “The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence,” International Centre for Counter-Terrorism (ICCT), April 2024, https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence
[2] Thomas H. Costello, Gordon Pennycook, and David G. Rand, “Durably reducing conspiracy beliefs through dialogues with AI,” Science 385, 2024, https://www.science.org/doi/10.1126/science.adq1814
[3] Anti-Defamation League, “Experimental AI Chatbots Significantly Reduce Belief in Antisemitic Conspiracy Theories, New ADL-Supported Study Shows,” November 20, 2025, https://www.adl.org/resources/press-release/experimental-ai-chatbots-significantly-reduce-belief-antisemitic-conspiracy
[4] NVIDIA Newsroom, “NVIDIA Launches Cosmos World Foundation Model Platform to Accelerate Physical AI Development,” January 6, 2025, https://nvidianews.nvidia.com/news/nvidia-launches-cosmos-world-foundation-model-platform-to-accelerate-physical-ai-development
[5] Ministry of Defence of the United Kingdom, “New Robots Lead the Way in Bomb Disposal Innovation,” February 5, 2025, https://www.gov.uk/government/news/new-robots-lead-the-way-in-bomb-disposal-innovation
[6] Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General 144, no. 1, 2015, 114–126
[7] S. Mo Jones-Jang and Yong Jin Park, “How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability,” Journal of Computer-Mediated Communication 28, no. 1, January 2023, https://academic.oup.com/jcmc/article/28/1/zmac029/6827859