State Actors Drive Much of the Visual Misinformation Surrounding the Iran War
Fabricated footage and disinformation campaigns distort perceptions of attacks and casualties, leaving global audiences in a digital fog.
Iran, Mar 07: Since the start of the Iran war, a surge of AI-generated and misleading videos has flooded social media, complicating global understanding of events. One widely circulated example showed crowds watching fire and smoke atop a high-rise in Bahrain, which was claimed to show damage from an Iranian strike. Forensic analysis, however, revealed the video was fake: anomalies such as cars fused together and a man’s elbow passing through a backpack exposed its artificial origin. Accounts linked to the Iranian government shared the clip to exaggerate the country’s military successes.
Experts say state actors are key drivers of this visual misinformation, crafting content with clear narratives to amplify certain perspectives on the war and casualties. Melanie Smith, senior director at the Institute for Strategic Dialogue, noted, “The content from state actors is very targeted, used to support statements about the conflict and geopolitical context.”
Pro-Iran social media accounts, echoing state media, have inflated destruction and death tolls, often using AI-generated visuals. At the same time, Russia-aligned campaigns like Operation Overload (also called Matryoshka or Storm-1679) have impersonated news outlets and intelligence agencies, spreading fear and confusion beyond Iran’s borders.
A lack of reliable on-the-ground reporting from Iran, caused by internet restrictions and censorship, worsens the situation. Todd Helmus of RAND noted that unlike in Ukraine, where civilians could document and share their experiences, “We’re missing that story from Iran,” creating a vacuum that misinformation fills. Opportunistic users unconnected to state campaigns have also contributed, passing off old footage, video game clips, and AI-generated content as real-time events.
AI has accelerated the spread of false content to unprecedented levels, making it harder for the public to distinguish reality from fabrication. Smith warned, “The volume of AI content is polluting the information environment in crisis settings to a terrifying degree.”
Social media platforms are taking measures to counter such misuse. X, for instance, plans to bar accounts from its revenue-sharing programs if they post AI-generated content from armed conflicts without disclosure: a first offense carries a 90-day suspension, and repeat violations bring a permanent ban. Emerson Brooking of the Atlantic Council cautioned users: “Social media platforms are an extension of the battlefield. Actors on all sides are spreading propaganda to manipulate perception. Your attention is an asset.”
The combination of AI tools, state-backed disinformation, and censorship has created a volatile online landscape, leaving audiences worldwide vulnerable to deception as the Iran conflict continues.