The first quarter of 2026 marked a strategic turning point in the deployment of hard power and the management of geopolitical interaction. For decades, computing technologies remained largely confined to operational support roles such as processing intelligence data or guiding precision munitions. January and February, however, witnessed a structural shift as military planning moved away from human-dependent decision cycles toward managing autonomous algorithmic kill chains. This transformation was formally articulated in the “Artificial Intelligence Acceleration Strategy” issued by the United States Department of War (DoW) on January 9, 2026. The directive aims to entrench American military dominance by rapidly integrating AI across warfighting, intelligence, and enterprise operations, while transforming the defence apparatus into what officials describe as an “AI-first” military structure.

 

This doctrine rested on strict operational parameters prioritising overwhelming lethality, rapid execution, and objective-driven systems that place mission success above all other considerations, deliberately excluding social and political variables from algorithmic decision cycles to ensure decisive superiority in battlefield decision-making. This shift was reflected in two unprecedented operations: the capture of Venezuelan President Nicolás Maduro during Operation Absolute Resolve in January 2026, and the decapitation strike targeting Iran’s Supreme Leader Ayatollah Ali Khamenei during Operation Silent Holy City, conducted within Operation Epic Fury in February 2026.

 

These operations reflected the integration of large language models, dynamic data architectures, tactical assessment algorithms, and autonomous unmanned systems, fundamentally transforming the speed, precision, and geopolitical cost calculus of neutralising high-value targets. Together, they signal that AI has moved beyond a supporting analytical role to become a strategic architect of the battlespace and a driver of kinetic execution.

Roots of the Algorithmic War

The operational doctrine adopted by the U.S. in 2026 drew much of its methodological foundation from the tactical targeting architecture developed by the Israeli military, particularly Unit 8200, during the intensive operations in Gaza between 2023 and 2025. Within intelligence circles, this Israeli algorithmic targeting chain was often described as a “mass assassination factory,” forming a key conceptual basis for the newly formulated American approach.

 

The Israeli architecture that pioneered this “AI war” rested on three interlinked structural systems:

    1. The Gospel / Habsora system: an AI decision-support tool that processes vast surveillance datasets to generate an automated target bank of buildings and facilities. The system dramatically accelerated targeting, raising output from roughly 50 targets annually under human analysis to more than 100 targets per day.
    2. The Lavender database: an individual profiling system based on mass surveillance in Gaza and the West Bank. Through automated analysis of digital footprints such as social networks, communication records, and movement patterns, the algorithm evaluates individuals and places them on automated kill lists. At its operational peak, it reportedly tagged more than 37,000 potential targets.
    3. The “Where’s Daddy?” algorithm: a geolocation tracking system designed to monitor targets and trigger strikes once they return to their homes. This tactic has historically been associated with sharply elevated rates of collateral casualties among civilians and the families of those targeted.

The Strategic Framework of the Algorithmic War

To understand the deeper dimensions of the technological surge in Caracas and Tehran, it is essential to unpack the overarching strategic framework that legitimised these operations and accelerated their execution. In this context, the Artificial Intelligence Strategy issued by the U.S. DoW on January 9, 2026 constituted an offensive warfighting approach aimed at dismantling the bureaucratic barriers of conventional information technology. The doctrine rested on leveraging America’s asymmetric competitive advantages: its capital markets, its capacity for rapid innovation, and the vast repository of operational data accumulated over two decades of conflict.

 

To translate this strategy into an operational reality, several pace-setting projects were launched under strict timelines and direct individual leadership, with the following pathways as the most prominent:

    • Project Swarm Forge established a competitive mechanism aimed at expanding innovative combat capabilities by integrating elite military units with commercial technology developers.
    • Project Agent Network focused on engineering autonomous AI agents to manage the broad spectrum of battle, from the strategic planning of campaigns to the precise execution of kill chains.
    • Project Ender’s Foundry was designed to accelerate cognitive simulation cycles and feedback loops between software developers and field kinetic operators.
    • The Open Arsenal track targeted compressing the cycle of converting technical intelligence into deployable operational weapon systems, reducing it from several years to only a few hours.
    • The GenAI.mil initiative ensured secure and wide institutional access to leading generative-AI models, including Gemini and Grok, for operational cadres classified at impact level five and above.

 

In parallel with this technological leap, the 2026 National Defence Strategy provided the geopolitical mandate for these offensive postures. The strategy established a decisive break with what it described as utopian idealism, in favour of adopting strict realism, giving paramount priority to securing the American homeland through the Golden Dome of America and to pre-emptively defending vital interests in the Western Hemisphere, in line with Trump’s revision of the Monroe Doctrine.

 

More importantly, the strategy proceeded to classify drug cartels and human trafficking networks as foreign terrorist organisations and unlawful combatants. This legal designation granted the U.S. military unprecedented operational latitude to employ lethal force against narcotics networks, a conceptual framework that was clearly utilised to legitimise the direct targeting of the Maduro regime.

AI and the Architecture of the Battlefield

The shift toward military automation is no longer a technical luxury or a routine modernisation track in modern armies; it has become a governing strategic doctrine that reshapes rules of engagement and global power balances. In an operational environment marked by fluidity and escalating complexity, the concept of algorithmic warfare emerged as an alternative model designed to remove human cognitive bottlenecks in favour of autonomous and exceptionally rapid decision-making mechanisms. This new pattern did not merely enhance intelligence-processing efficiency; it established a decisive structural transition from a doctrine reliant on mass kinetic force to one centred on data, speed and directed lethality, in which generative models and AI systems manage the full spectrum of battle. The clearest operational embodiment of this shift emerges from analysing two kinetic pathways in early 2026, in which AI moved from a back-line advisory tool to a field commander designing and executing the most complex cross-border high-value-target operations, as demonstrated by the interventions in Caracas and Tehran.

First: Operation Absolute Resolve

Operation Absolute Resolve in January 2026 marked the first operational inauguration of the algorithmic-warfare doctrine, targeting the neutralisation of the regime leader in Caracas through a cross-border extraction. It was not merely a tactical raid, but a comprehensive strategic demonstration proving AI’s ability to synchronise cyber, electromagnetic and kinetic domains to penetrate a highly fortified sovereign environment.

 

Intelligence Collection and AI-Driven Fusion

This extended kinetic escalation served as strategic cover for the widest intelligence and ISR effort yet directed at the apex of the leadership hierarchy in Caracas. For several months, military planners constructed a multi-modal, highly precise intelligence model to track the kinetic footprint, behavioural patterns, and security protocols of Venezuelan President Maduro. Stealth aerial platforms, specifically the RQ-170 Sentinel unmanned aircraft, secured continuous coverage over the Venezuelan capital: its low radar cross-section enabled it to penetrate airspace protected by Russian S-300 systems and collect highly precise real-time telemetry that conventional platforms such as the MQ-9 Reaper could not obtain without exposure. Simultaneously, the National Geospatial-Intelligence Agency (NGA) processed extensive satellite-observation packages while the National Security Agency absorbed vast flows of signals intelligence (SIGINT).

 

Given that the vast volume of the data repository exceeded human cognitive capacity, the U.S. military turned to Palantir’s advanced tracking algorithms to engineer the target’s comprehensive digital signature. AI systems fused seemingly scattered inputs, from convoy-movement patterns and personal habits to daily routines, to form a predictive behavioural matrix. This data architecture was further reinforced through HUMINT, with field intersections provided by a CIA source infiltrating Maduro’s inner circle, enabling real-world validation of algorithmic outputs.

 

The most significant technical breakthrough in the planning phase of Operation Absolute Resolve was the extensive use of Anthropic’s generative-AI model Claude. According to declassified intelligence documents, the model was tasked with semantic analysis of thousands of hours of audio intercepts in Spanish and Persian to detect fractures and gaps in the Venezuelan military and security command chain. This measure marked a pivotal transition from descriptive intelligence to generative tactical wargaming, as analysts moved beyond traditional fixed briefings and began interactive dialogue with the model to generate dynamic raid scenarios grounded in game theory. By simulating highly complex variables, such as the cyber-blinding of Caracas’s power grid, AI granted commanders exceptional probabilistic evaluation of infiltration options and likely friction points.

Execution and Multi-Domain Synchronisation

The operation required precise operational integration across the cyber, electronic and kinetic domains. US Cyber Command initiated the mission by creating a secure corridor through targeted cyberattacks, bringing down Caracas’s power grid and blinding air-defence radars. This localised paralysis achieved a dual objective: neutralising the field-communication capabilities of surface-to-air systems and generating a sudden psychological shock within the national defence apparatus. Simultaneously, EA-18G Growler electronic-warfare aircraft flooded the electromagnetic spectrum with dense jamming signals, causing total blindness in the Venezuelan army toward detecting the incoming force.

 

Under the umbrella of this algorithmically managed electronic and cyber suppression, a tactical extraction force, composed of Delta Force elite units and the FBI Hostage Rescue Team, conducted an insertion into Maduro’s presidential compound. The complex operation involved more than 150 aerial platforms launched from 20 bases across the Western Hemisphere, supported by fifth-generation fighters, strategic bombers, and the 160th Special Operations Aviation Regiment. The mission resulted in the capture of Maduro and his wife and their transfer to the USS Iwo Jima ahead of their trial in New York on narcoterrorism charges.

Second: Operation Epic Fury

If the Caracas operation relied on AI for intelligence fusion and simulation, the joint American-Israeli assassination of Iranian Supreme Leader Ali Khamenei in February 2026 represented an unprecedented evolutionary leap in the history of conflict. Conducted within the broader Operation Epic Fury, it was recorded as the first senior leadership-decapitation mission to be managed and executed entirely through a closed-loop AI architecture.

 

The strategic environment of this strike was shaped by the outcomes of Operation Midnight Hammer in June 2025. With the near-total neutralisation of the nuclear programme, the operational compass shifted from capability degradation to leadership removal. Yet targeting Khamenei posed a complex intelligence challenge due to his strict counter-reconnaissance strategy, which included randomised location changes, multi-layered armoured security rings and a comprehensive electromagnetic-jamming umbrella. Historically, the margin of error in locating him exceeded five kilometres, making traditional targeting a major risk that could lead to catastrophic failure or significant civilian casualties.

Data-Driven Assassination Chain

To penetrate this complex security shield, U.S. Central Command (CENTCOM) deployed an integrated AI package that entirely eliminated human cognitive bottlenecks from the targeting cycle. The Silent Holy City architecture relied on a strict algorithm-driven pathway, limiting human involvement to the final, lethal authorisation step. The cognitive engine of the operation rested on an unprecedented fusion of a government version of the Claude 4 Opus language model with the Palantir Foundry platform, enabling both to function seamlessly within the Joint All-Domain Command and Control (JADC2) network.

 

The field execution of the operation rested on a strict algorithmic targeting chain organised into six decisive operational phases:

    1. Intelligence sensing: The Claude model processed around 2.3 petabytes of multi-layered data, including 120 million satellite-imagery packets, SIGINT feeds and HUMINT reports. With a one-million-token context window and a Persian-translation accuracy of 98.7%, this complex fusion was completed in only 90 minutes, a task that would theoretically require 328 human analysts working for 100 consecutive days.
    2. Target focusing: By applying predictive-lifestyle algorithms, the system detected a microscopic anomaly in the Supreme Leader’s security protocol: a 1.2-second delay during the convoy halt. From this gap, the future route was calculated with 98.7% accuracy, and a three-minute golden targeting window was identified during guard rotation as the target passed through an 800-metre radar-blind point en route to a mosque for dawn prayer.
    3. Decision simulation: Within eight minutes, AI generated 15 independent tactical plans and evaluated them autonomously against complex criteria, including penetration probability, asset survivability and collateral-damage reduction. The optimal plan received an algorithmic reliability score of 98.2%.
    4. Theatre coordination: Real-time synchronisation of tactical parameters and telemetry measurements was executed across all units, with cross-domain command latency not exceeding three seconds and data-transfer time below 200 milliseconds.
    5. Kinetic execution: The physical implementation relied on an ultra-small force package: a single MQ-9B SeaGuardian drone, an EA-18G Growler for localised electronic warfare and a miniaturised special-operations team of eight personnel for final laser designation. The drone launched two Hellfire missiles equipped with AI-driven terminal-guidance units that dynamically adjusted the strike path to ensure an error margin of less than one metre.
    6. Battle-damage assessment: Within 0.3 seconds of kinetic impact, autonomous visual-recognition software processing the drone’s live feed confirmed the complete destruction of the target and documented the absence of any collateral losses.
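The throughput figures quoted in the sensing phase can be sanity-checked with simple arithmetic. The sketch below takes only the numbers stated above (2.3 petabytes, 328 analysts, 100 days, 90 minutes); the per-analyst rate it derives is an implied figure, not a sourced one:

```python
# Back-of-envelope check of the sensing-phase figures quoted above.
# All inputs come from the text; the derived rates are illustrative only.

PETABYTE_GB = 1_000_000          # decimal petabyte expressed in gigabytes
data_gb = 2.3 * PETABYTE_GB      # 2.3 PB of multi-layered data
analysts = 328                   # quoted human-analyst equivalent
days = 100                       # quoted duration of the human effort
model_minutes = 90               # quoted machine processing time

analyst_days = analysts * days                   # total human effort in analyst-days
gb_per_analyst_day = data_gb / analyst_days      # implied per-analyst throughput
speedup = (days * 24 * 60) / model_minutes       # wall-clock ratio, machine vs. humans

print(f"{analyst_days} analyst-days, "
      f"{gb_per_analyst_day:.1f} GB per analyst-day, "
      f"{speedup:.0f}x wall-clock speedup")
# → 32800 analyst-days, 70.1 GB per analyst-day, 1600x wall-clock speedup
```

The implied rate of roughly 70 GB per analyst per day shows the claim is at least internally consistent, while the 1,600-fold wall-clock compression is the figure doing the real strategic work in the text.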

 

The algorithmic kill chain lasted only 11 minutes and 23 seconds. The operation demonstrated that advanced AI can substitute for mass kinetic force: by monopolising decisive superiority in decision speed and data processing, United States Central Command removed the apex of the adversary’s hierarchy, despite its fortifications, with a microscopic operational footprint.

From Human Support to Algorithmic Autonomy

The cross-assessment of Operation Absolute Resolve in Caracas and Operation Silent Holy City in Tehran reveals rapidly maturing operational employment of military AI within a period not exceeding two months. Although both operations relied on the same core technical architecture, specifically the integration of Palantir data platforms with Anthropic generative models, the application pathway jumped radically from human-directed intelligence fusion to machine-led autonomous tactical execution.

 

The sharpest paradox between the two scenes emerges in the equation of kinetic mass versus scale of achievement. Maduro’s extraction required a large classical display of force reflecting a traditional military doctrine that used AI merely as an auxiliary tool, producing a loud, destructive and logistically burdensome operation. By contrast, the assassination of Khamenei demonstrated a new strategic rule: the closer algorithmic certainty approaches 100%, the less the need for dense physical force, until it nearly disappears. This shift represents an unprecedented contraction of operational friction, as AI no longer serves as an intelligence adjunct but displaces the human element to become the sovereign authority in planning, intelligence and targeting. This operational superiority rested on a complex package of commercial and militarily adapted AI structures, including large language models, dynamic data-fusion platforms and autonomous aviation systems.

Geopolitical Reverberations: Restructuring Deterrence

These field transformations constituted a definitive deterrent message to adversaries and strategic competitors, foremost among them Beijing and Moscow. Washington demonstrated in practice that geographic barriers, sovereign borders, complex military fortifications, and electromagnetic-jamming umbrellas have lost their immunity to AI-driven intelligence. The ability of a language model to process 2.3 petabytes of multi-layered data to capture a transient three-minute security gap formally declares the obsolescence and demise of operational-security and counterintelligence doctrines that dominated the 20th century.

 

In the face of this strategic exposure, competing states will be compelled to accelerate the development of fully autonomous defensive networks and countermeasures, inevitably igniting a global race for algorithmic armament. The establishment of the U.S. DoW’s Tech Force initiative, aimed at attracting elite technical talent, constitutes an explicit institutional recognition that future geopolitical conflicts will be decided not by infantry divisions or heavy armour, but by data scientists and machine-learning engineers.

 

On the normative level, the deployment of these superior systems imposes structural challenges to international humanitarian law and to the ethics of classical warfare. The early Israeli experience with the Lavender and Gospel systems in Gaza revealed the profound risks of automation bias, whereby the human operator, overwhelmed by the massive surge of targeting outputs, becomes a mere rubber-stamping tool for life-and-death decisions made by the machine. And although the U.S. military employed these technologies to execute a precise surgical decapitation strike free of collateral losses in Iran, the same technical architecture remains bound to a logic of continuous mass surveillance and behavioural profiling based on metadata.

 

On the other hand, the intense clash between the Pentagon and Anthropic drew attention to a critical vulnerability in US defence supply chains: dependence on commercial technology companies governed by purely civilian ethical frameworks. And with advanced models such as Claude 4 Opus surpassing critical safety thresholds (ASL-3) and displaying capacities for strategic deception, self-preservation, and the direct engineering of weapons of mass destruction, sovereign control over these algorithms becomes a security and existential necessity.

 

In conclusion, the early-2026 military operations provide decisive strategic proof that AI has moved beyond the experimental phase to become the central nervous system of modern warfare. The newly formulated offensive doctrine compressed the kill chain from months of human deliberation to minutes of autonomous algorithmic execution. From the intelligence synergy between Palantir systems and Claude models that dispersed the fog of war and engineered the extraction of the Venezuelan leadership during Operation Absolute Resolve, to the autonomous processing of massive datasets and the penetration of Iranian defence networks that enabled a surgical decapitation strike against the Supreme Leader in Operation Silent Holy City using the A-GRA architecture and Hivemind software, the doctrine of displacing traditional firepower mass in favour of the triad of data dominance, algorithmic speed and operational autonomy is now firmly entrenched. Yet this decisive transfer of cognitive burden from the human mind to the machine portends an acceleration of military escalation beyond the limits of human control, generating structural fractures in international law and classical deterrence constraints. It imposes the inexorable reality that the power monopolising the most precise operational data and commanding the most advanced neural networks will dictate the geopolitical architecture of the 21st century.

References

Andersin, Emelie. “The Use of ‘Lavender’ in Gaza and the Law of Targeting: AI Decision-Support Systems and Facial Recognition Technology.” Journal of International Humanitarian Legal Studies 16, no. 2 (2025): 336–370. https://doi.org/10.1163/18781527-bja10119.

 

D’Urso, Stefano. “U.S. Air Force Integrates Open-Architecture for Mission Autonomy on CCAs.” The Aviationist, February 14, 2026. Accessed March 2, 2026. https://theaviationist.com/2026/02/14/usaf-integrates-a-gra-architecture-mission-autonomy-ccas/.

 

Eun-Joong, Kim, and Seo Bo-Beom. “U.S. Military’s AI Precision Arrest of Maduro in Flawless Operation.” The Chosun Daily, January 6, 2026. https://www.chosun.com/english/world-en/2026/01/06/RSIXQ24PCRDBNOSEEG6ABJNBEQ/.

 

Frazier, Allen. “Shield AI Unveils Fully Autonomous VTOL Fighter Jet.” Military.com, October 24, 2025. Accessed March 1, 2026. https://www.military.com/daily-news/investigations-and-features/2025/10/23/shield-ai-unveils-fully-autonomous-vtol-fighter-jet-what-it-means-military.html.

 

Kukreti, Shweta. “Did US Use Anthropic’s Claude AI in Iran Strikes Despite Trump’s Ban?” Hindustan Times, March 1, 2026. https://www.hindustantimes.com/world-news/us-news/did-us-use-anthropics-claude-ai-in-iran-strikes-despite-trumps-ban-101772345758921.html.

 

Lyons-Burt, Charles. “How Defense Tech Enabled Operation Absolute Resolve.” Executive Gov, March 2, 2026. https://www.executivegov.com/articles/defense-tech-nicolas-maduro-capture-isr-cyber.

 

Newdick, Thomas, and Howard Altman. “MQ-20 Avenger Tests ‘Hivemind’ AI in Orange Flag Exercise.” The War Zone, March 5, 2025. https://www.twz.com/air/mq-20-avenger-tests-hivemind-in-orange-flag-exercise.

 

Rogg, Jeffrey. “U.S. Intelligence in A Post-Maduro Venezuela.” Just Security, January 9, 2026. https://www.justsecurity.org/128064/us-intelligence-post-maduro-venezuela/.

 

U.S. Department of War. “War Department Launches AI Acceleration Strategy to Secure American Military AI Dominance.” Press release. U.S. Department of War, January 12, 2026. Accessed March 2, 2026. https://www.war.gov/News/Releases/Release/Article/4376420/war-department-launches-ai-acceleration-strategy-to-secure-american-military-ai/.
