AI-Generated Satellite Images Fuel Disinformation in US-Iran Conflict

Researchers warn that AI-generated satellite images are being used to mislead the public and manipulate narratives during wartime.

Iran, Mar 09: A manipulated satellite image circulating online recently claimed to show a destroyed U.S. military installation in Qatar. However, investigators later confirmed that the image was generated using artificial intelligence, highlighting the growing challenge of AI-driven satellite-imagery disinformation during global conflicts.

The image, shared by Iran-aligned media outlets on the social media platform X, appeared to compare “before and after” views of a devastated U.S. radar facility. Analysts later determined that the picture was an altered version of an older Google Earth satellite image of a U.S. base in Bahrain.

The fabricated visual quickly spread across social media platforms and accumulated millions of views in multiple languages before experts identified the manipulation.

Researchers detect signs of AI manipulation

Open-source intelligence specialist Brady Africk noted that manipulated satellite imagery has become increasingly common during major geopolitical crises.

According to Africk, many altered images display recognizable signs of AI generation, including distorted perspectives, blurred details and fabricated visual elements that do not match real-world geography.

Other images are edited manually: signs of destruction are digitally added onto genuine satellite photographs to create the impression of damage that never occurred.
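Manual inspection for the telltale signs described above can be supplemented by simple forensic heuristics. One widely used technique is error-level analysis (ELA): a JPEG is recompressed at a known quality, and the per-pixel difference between the original and the recompressed copy is examined, since pasted-in or regenerated regions often recompress with a different error level than the rest of the frame. The sketch below, using the Pillow library, is an illustration of that general technique only; it is not a method attributed to the researchers quoted here, and the sample image is synthetic.

```python
from PIL import Image, ImageChops
import io

def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Recompress the image as JPEG and return the per-pixel difference.

    Edited or AI-generated regions often show a different error level
    than untouched areas, appearing as brighter or darker patches.
    """
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return ImageChops.difference(img.convert("RGB"), recompressed)

# Illustrative use on a synthetic image; a real check would open the
# suspect satellite photo instead of creating one.
suspect = Image.new("RGB", (64, 64), (120, 120, 120))
ela = error_level_analysis(suspect)
print(ela.size)
```

ELA is a heuristic, not proof: heavy recompression by social media platforms can wash out the signal, so analysts typically combine it with the geographic cross-checks described in this article.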

Fake visuals circulating across social media

Information warfare analyst Tal Hagin identified another fabricated image claiming that joint strikes by the United States and Israel had targeted aircraft in Iran.

The misleading image included incorrect geographic coordinates and other inconsistencies. Digital investigators also discovered a hidden SynthID watermark, an invisible marker that Google embeds to identify visuals produced by its AI systems.

Such posts circulated widely across platforms including Instagram, Threads and X, illustrating how quickly misleading visuals can spread during periods of conflict.

AI images complicate wartime information battles

Experts say the growing use of AI-generated visuals is undermining the work of legitimate open-source intelligence communities that rely on publicly available satellite images to verify military activity.

“During conflicts, it is often difficult to confirm the outcome of strikes,” Hagin explained, noting that OSINT tools have historically helped bypass censorship in countries such as Iran. However, the same ecosystem is now increasingly exploited by disinformation campaigns.

Similar incidents of manipulated satellite imagery were reported during the Russia‑Ukraine War and the brief military confrontation between India and Pakistan last year.

Experts urge caution when viewing wartime imagery

Researchers warn that AI-generated images can shape public opinion and even influence financial markets when widely shared without verification.

Satellite intelligence companies say authentic real-time imagery remains a critical tool for verifying claims and countering misinformation. In one recent case, satellite analysts confirmed that viral photos showing a major airport fire in Niamey were fabricated using AI technology.

Researcher Bo Zhao stressed that visual evidence presented during conflicts can strongly influence how audiences interpret events.

As AI tools become more advanced, he said, the public must approach online images with greater scrutiny and critical awareness to avoid falling victim to increasingly convincing digital fabrications.
