
This article was last updated on June 25, 2025
AI fake videos deployed at full scale in the war between Israel and Iran
Since the escalation between Israel and Iran, a large amount of disinformation has appeared online. Many fake photos and videos circulate on social media, usually made with artificial intelligence (AI).
This is apparent daily in the NOS newsroom, where specialists in open-source research (OSINT) review dozens of images whose authenticity often cannot be determined, or which are outright fake. By far the most disinformation about the war between Israel and Iran is shared on X.
The BBC makes the same observation. The British public broadcaster wrote about dozens of videos that try to emphasize the effectiveness of Iran’s response to Israeli attacks. Fake videos about the aftermath of attacks on Israeli targets have also been made and shared. Pro-Israeli accounts have spread disinformation as well.
The three most viewed fake videos found by the BBC have together been watched more than 100 million times across multiple platforms.
OSINT
The NOS OSINT specialists also note that a striking amount of imagery in this war has been manipulated with AI. The goal of their research is always the same: to find out where and when a photo or video was taken. They do this by verifying environmental features, determining the time of day, or comparing the image with other images on the internet.
In recent days they have also compiled a list of X accounts that consistently share manipulated imagery. When images are made or edited with AI, it is often immediately visible to the naked eye. But increasingly, more than that is needed to filter out manipulations.
In most cases this works, but yesterday it went wrong with one of the videos of the attack on a prison in Iran. Surveillance footage purportedly showed the explosion at the gate of the infamous Evin prison.
In addition to the aforementioned elements, context also plays a major role in checking visual material. In this case the attack itself had already been confirmed, and there was verified footage from after the explosion. That, combined with a relatively ‘good’ fake, allowed this footage to slip through at news outlets, including the NOS.
Wrongly, as it turned out afterwards: the clip was based on an old photo of the prison gate, used to generate a video with AI.
This is the AI-generated video of the ‘explosion at the prison’:
An AI expert told the BBC that this is the first time AI has been used on such a scale during a conflict.
The “super spreaders” of Iranian disinformation on X are growing fast and sometimes carry official-sounding names. They all have blue check marks, but it is not clear who manages them. The BBC notes that Grok, X’s AI chatbot, cannot always tell that something is fake.