Militaries use AI for rapid threat detection and target analysis, as adversaries deploy it to fabricate victories before verification catches up
As the regional war involving Israel, Iran, and the United States has expanded, artificial intelligence has emerged as one of the conflict’s clearest new frontiers. It is reshaping two parallel arenas at once: the physical conduct of war and the battle over perception. Militaries are increasingly using AI-enabled tools to process intelligence, shorten targeting cycles, and improve missile and drone defense. At the same time, social media platforms have become saturated with synthetic images, recycled footage, bots, and algorithmically amplified propaganda aimed at manufacturing narratives as quickly as missiles can alter facts on the ground.
The result is a conflict in which speed matters not only in the air and at sea, but also online. Increasingly, what is at stake is not just who can strike first, but who can define reality first.
John Keith King, a technology strategist and former US government communications engineer who worked on mission-critical command-and-control systems supporting senior national leadership, said artificial intelligence is already embedded across several layers of modern military operations.
“One of the most important uses is intelligence fusion,” King told The Media Line. He said AI can rapidly process huge volumes of satellite imagery, drone video, radar data, and communications intercepts, allowing military planners to identify missile launch sites, troop movements, and concealed infrastructure “with much greater speed and accuracy.”
That description broadly aligns with what officials and public reporting have revealed, even if the exact systems being used in the current war remain undisclosed. When Operation Epic Fury began on February 28, US and partner forces struck Iranian command-and-control facilities, air defenses, missile and drone launch sites, and military airfields. Adm. Brad Cooper, the commander of US Central Command, later said advanced AI tools were helping American forces process large volumes of data faster, while stressing that humans still make final decisions on when and what to shoot. A March 11 DefenseScoop report on Cooper’s remarks said the command did not publicly identify the specific AI systems being used.
King said one of AI’s most important battlefield functions is not replacing commanders, but accelerating what they can see and understand.
“AI is also heavily used for target identification and tracking,” he explained, saying computer vision systems can recognize vehicles, missile systems, aircraft, and other equipment from drone or satellite feeds and then monitor them continuously in real time.
He said that function is especially relevant in the kind of battlespace now defining the region.
“The region is characterized by missile arsenals, drone warfare, and dispersed military infrastructure,” King noted. AI, he said, helps analysts track mobile missile launchers, identify drone launch sites, and spot patterns that may indicate an imminent strike, dramatically accelerating detection and response.
For Israel, the public record is more fragmentary and more contested. International outlets reported in April 2024 that the United States was examining a report that Israel had used AI to identify bombing targets in Gaza. The Israel Defense Forces denied using an AI system to identify suspected Hamas members, saying information systems were only tools assisting human analysts in target identification. Separate reporting in 2025 claimed that US tech companies had provided AI and cloud services to the Israeli military during the wars in Gaza and Lebanon, contributing to a sharp increase in AI and computing support used to track targets more quickly. That does not by itself prove how those tools were used in the current Iran war, but it does suggest that Israel entered this regional escalation with an already expanded AI-enabled military infrastructure.
According to King, AI is also increasingly integrated into the offensive and defensive systems that define this war.
“Another major application is in autonomous and semi-autonomous platforms,” he said, noting that many drones and loitering munitions use AI-assisted navigation, object recognition, and threat avoidance to search wide areas, identify objects of interest, and relay targeting data while reducing operators’ workload.
“AI also plays a growing role in defensive systems,” he added. Missile defense networks, he said, rely on machine learning to detect incoming threats, filter radar noise, and prioritize interceptions, often within seconds.
That assessment aligns with the broader shape of the war. CENTCOM has described the campaign against Iran as heavily focused on drone and missile infrastructure, and officials have said the United States has had to defend against large-scale retaliatory barrages while rapidly striking launch sites and command nodes. Cooper said AI tools were helping leaders “cut through the noise and make smarter decisions faster than the enemy can react,” while emphasizing that final engagement authority remained human.
If AI is compressing military decision cycles, it is doing the same in the information sphere.
Yael Moshe, an OSINT team lead and intelligence product specialist for the Israeli Defense Ministry’s Coordinator of Government Activities in the Territories, said the digital front of the war is no longer secondary. It has become a battlefield of its own, fueled by AI-generated content and social media virality.
“I call it digital psychological terrorism,” she told The Media Line. Actors such as Iran, she said, are using AI and recycled footage to flood platforms like TikTok and Instagram, targeting young audiences with fabricated realities, including fake images of Tel Aviv in ruins and exaggerated depictions of Iran’s military power.
She said these campaigns operate on two tracks at once.
“This serves two distinct arenas: manufacturing a fake ‘victory picture’ for Iran’s domestic audience, while simultaneously sowing fear globally,” she noted.
That pattern has been documented in multiple reports. A pro-Iran propaganda network has used AI-generated disinformation and Epstein-related conspiracy framing to push anti-US and anti-Israel narratives to large online audiences. Fake AI content about the Iran war has also spread widely on X, including fabricated visuals of attacks, fake battlefield scenes, and manipulated imagery amplified by prominent accounts.
Moshe argued that much of the material is technically simple but operationally effective because it moves faster than verification.
“When we talk about fake news, we mostly see two simple tricks,” she explained: old videos from Syria or even video games are relabeled as current attacks, while AI is used to generate fake images of Israeli cities on fire. “It takes them 10 seconds to make, but by the time we prove it’s fake, millions of people have already seen it and believed it.”
She said the danger grows when such material escapes fringe channels and enters wider circulation.
Moshe said she is not personally shaken by such content because she knows the reality on the ground and understands psychological warfare. But "the true danger arises," she said, when fabricated material spreads across social media and bleeds into mainstream media. That, she warned, is when "a localized lie becomes a dangerous global narrative."
That dynamic has become more visible as the war has widened. AI-generated images have falsely claimed to show captured US soldiers in Iran, while old footage has been recirculated as new strikes on Tel Aviv. Together, these examples show that the information war is not only about persuasion, but also about saturation: flooding feeds so quickly and at such a scale that verification becomes reactive rather than preventive.
Moshe also pointed to the role of platform design itself.
“Seeing people cheer when missiles are fired at us is frustrating,” she said, but added that platforms such as TikTok and X reward extreme and hateful content because it attracts views. She also said much of the apparent support for such content is amplified artificially: “A lot of this cheering isn’t just real people—it’s fake accounts and bots pushing this hate on purpose to make it look like the whole world supports it.”
She noted that fake reports about Israeli leaders, including Prime Minister Benjamin Netanyahu, being killed are part of the same psychological architecture.
“Spreading fake news about Israeli leaders dying is a classic psychological trick,” she said. The aim, she added, is both to create panic inside Israel and to hand audiences in Iran or Gaza a fake “victory” to celebrate.
She also described how unrelated global trends are deliberately repurposed to widen reach. “And as for the Epstein files, since everyone in the world was searching for it, they started putting Epstein hashtags on their anti-Israel videos. They did this just to ‘hijack’ or jump on the trend and expose it to millions of completely unrelated people so they could see their propaganda. Plus, it’s a way to connect Israel to crazy global conspiracy theories.”
Many international outlets similarly found that pro-Iran networks had used Epstein-related content as part of a broader disinformation ecosystem tied to the war.
What emerges from both the military and digital fronts is the same underlying reality: algorithmic acceleration. On the battlefield, AI is helping militaries detect threats, identify targets, filter radar clutter, and compress the time between detection and action. Online, it is helping propagandists generate synthetic evidence, hijack attention, and create the illusion of consensus or victory before facts can catch up.
King warned that even on the military side, this speed comes with serious risks.
“While artificial intelligence can improve precision and situational awareness on the battlefield, it also introduces new strategic risks,” he noted. As AI shortens detection and response times, he said, human deliberation shrinks, increasing the danger of rapid escalation if systems move faster than political leaders can intervene.
He framed the broader shift in stark terms.
“Artificial intelligence is becoming the central nervous system of modern warfare,” he said. By fusing data from satellites, drones, electronic intelligence, and battlefield sensors into a real-time operational picture, AI compresses “the time between detection, decision, and action,” making wars increasingly shaped by algorithm-assisted decision cycles rather than traditional command timelines.
The same compression is now unfolding online. On social media, falsehoods can be manufactured, amplified, and normalized before journalists, officials, or analysts have time to disprove them. On the battlefield, AI may help identify launchers, prioritize intercepts, or accelerate strike planning. In both arenas, the defining feature is velocity.
As King put it: "AI will not replace military leadership, but it will increasingly shape how quickly leaders must make decisions."
And as Moshe warned, the problem is no longer only what happens on the ground, but how quickly falsehoods about it can become accepted truth.