The latest escalation between Israel and Iran isn’t just playing out on battlefields anymore. It’s unfolding across our social media feeds, where artificial intelligence has become the newest weapon of war. As missiles fly and tensions reach breaking points, something equally dangerous is happening online: reality itself is under attack.
BBC Verify, the broadcaster’s fact-checking unit, has been working around the clock to identify and debunk the flood of AI-generated content that’s spreading faster than the actual news. What they’re uncovering reveals a disturbing new chapter in how conflicts are fought in the digital age.
The New Battlefield: Your Social Media Feed
Picture this: you’re scrolling through your Twitter feed when you see a dramatic image of missiles raining down on Tel Aviv. The photo looks authentic, the lighting seems real, and thousands of people are sharing it with urgent captions. But there’s just one problem – it’s completely fake, created by artificial intelligence in a matter of minutes.
Researchers identified over 30 “false, misleading, or AI generated images and videos” that garnered over 35 million views in total during recent conflicts. That’s not just a few fake posts slipping through the cracks – that’s a coordinated information warfare campaign reaching more people than most television broadcasts.
The speed at which this fake material spreads is breathtaking. AI-generated and miscaptioned footage and images are circulating widely on social media as the Israel-Iran conflict continues, often appearing online before legitimate news outlets can even confirm what's actually happening on the ground.
The Technology Behind the Deception
Modern AI can create images and videos that are frighteningly realistic. We're not talking about the obviously fake content of a few years ago. Today's AI-generated media can fool experts, never mind regular social media users scrolling through their feeds at lightning speed.
The process is surprisingly simple. Someone with basic technical knowledge can use widely available AI tools to generate a dramatic war scene, add some smoke and explosions, maybe throw in some recognizable landmarks, and boom – you have content that looks like breaking news footage. The whole process takes minutes, not hours or days like traditional propaganda used to require.
One researcher traced an AI-created image back to its original creator, noting "this AI image was created by Mhamad Yusif, an AI Creator on June 13th, 2025" and pointing out, "If you look closely, you can even see his watermark still visible." The fact that people are literally signing their fake war footage like it's digital art shows how normalized this has become.
Why AI Disinformation is Different
Traditional propaganda required resources, time, and skill. You needed cameras, actors, sets, editing equipment, and technical expertise. Creating convincing fake footage was expensive and time-consuming, which naturally limited how much false content could be produced.
AI has shattered those barriers. Now, anyone with a smartphone and an internet connection can generate Hollywood-quality fake footage in minutes. The democratization of sophisticated content creation tools has also democratized the ability to manipulate public opinion on a massive scale.
What makes this particularly dangerous is the emotional impact. Social media is being flooded with AI-generated media that claims to show the devastation but is fake. These aren't dry policy debates or abstract political arguments; they're visceral images designed to trigger immediate emotional responses. Fear, anger, sympathy, outrage: all manufactured and delivered directly to your phone.
The Human Cost of Digital Deception
The real tragedy isn't just that people are being deceived; it's what happens next. When fake images of civilian casualties circulate, they can inflame tensions and potentially escalate real-world violence. When AI-generated footage shows military strikes that never happened, it can sway public opinion and policy decisions based on complete fiction.
Families become terrified for loved ones based on fake footage. Diplomatic relationships strain over incidents that never occurred. Public support for military action grows or shrinks based on manufactured evidence. The line between digital manipulation and real-world consequences has completely disappeared.
Even more concerning is how this flood of fake content is creating what experts call "truth decay." When people can't distinguish between real and fake content, they start doubting everything, including legitimate news reports and actual evidence. This erosion of shared reality makes democratic discourse nearly impossible.
The Verification Challenge
BBC Verify and similar fact-checking operations are fighting an uphill battle. The unit's specialists in fact-checking, video verification, and tackling disinformation publish updates across global and national news stories, but they're essentially playing defense against an army of AI content generators.
The verification process that used to take minutes now takes hours. Experts have to analyze metadata, check for visual inconsistencies, trace the origins of content, and cross-reference multiple sources. Meanwhile, the fake content spreads to millions of people in the time it takes to verify a single image.
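To make one of those steps concrete, here is a minimal sketch of a metadata check in Python. It assumes the Pillow library and a hypothetical local file called suspect_image.jpg, and it is only a first-pass signal: genuine photos re-uploaded through social platforms often have their metadata stripped as well.

```python
# Minimal sketch of one verification step: inspecting an image's EXIF metadata.
# Assumes the Pillow library is installed; "suspect_image.jpg" is a hypothetical file.
from PIL import Image, ExifTags

def summarize_exif(path: str) -> dict:
    """Return EXIF tags as a readable dict, or an empty dict if none are present."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("suspect_image.jpg")
if not tags:
    # AI-generated images usually carry no camera metadata, but absence alone
    # proves nothing: most platforms strip EXIF from real photos on upload too.
    print("No EXIF metadata found; treat provenance as unknown.")
else:
    for name in ("Make", "Model", "DateTime", "Software"):
        if name in tags:
            print(f"{name}: {tags[name]}")
```

Even that tiny check illustrates the asymmetry: generating the fake takes minutes, while every verification step adds time that the fake uses to keep spreading.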
Traditional fact-checking assumes that truth and falsehood are distinct categories. But AI-generated content exists in a gray zone. An image might be “fake” in the sense that it was artificially created, but “accurate” in depicting something that actually happened. Or it might show a real location but a fictional event. These nuances are lost when content is shared at viral speeds.
The Scale of the Problem
The numbers are staggering. During major news events, social media platforms see massive spikes in AI-generated content. Some researchers estimate that up to 30% of conflict-related imagery shared during major incidents now has some element of AI manipulation or generation.
This isn’t just about a few bad actors trying to spread propaganda. State-sponsored disinformation campaigns, terrorist organizations, political activists, and even well-meaning individuals sharing dramatic content they assume is real all contribute to this information ecosystem where fiction and reality are increasingly indistinguishable.
The global nature of social media means that AI-generated content about Middle Eastern conflicts can influence elections in Europe, affect stock markets in Asia, and shape public opinion in the Americas. Geographic boundaries don’t limit the impact of digital deception.
Platform Responses: Too Little, Too Late?
Social media companies are scrambling to address the problem, but their solutions feel inadequate to the scale of the challenge. Content moderation systems that were designed to catch obviously fake content struggle with sophisticated AI-generated media. By the time platforms identify and remove fake content, it has often already reached millions of users.
Some platforms are experimenting with AI-detection tools, but it’s essentially an arms race between AI content generators and AI detection systems. Every improvement in detection capabilities is quickly matched by improvements in generation quality. The technology to create fake content is advancing faster than the technology to identify it.
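For illustration, here is a hedged sketch of how such a detection tool might be wired into a moderation flow using the Hugging Face transformers library. The model name is a placeholder rather than a real checkpoint, and the label set and threshold are assumptions; the point is the workflow, and as the arms race described above implies, any specific detector's judgments go stale quickly.

```python
# Hedged sketch: scoring an uploaded image with an off-the-shelf classifier.
# "example-org/ai-image-detector" is a placeholder model name, not a real checkpoint.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/ai-image-detector")

def flag_if_suspect(image_path: str, threshold: float = 0.9) -> bool:
    """Return True when the detector labels the image as AI-generated above the threshold."""
    results = detector(image_path)  # list of {"label": ..., "score": ...} dicts
    suspect_labels = {"artificial", "ai-generated", "fake"}  # assumed label set
    return any(
        r["label"].lower() in suspect_labels and r["score"] >= threshold
        for r in results
    )

# A flagged image would typically be labeled or queued for human review rather
# than removed outright, because detectors also misfire on genuine photos.
print(flag_if_suspect("uploaded_image.jpg"))  # hypothetical upload
```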
Warning labels and fact-checking notifications help, but research shows they have limited effectiveness once content has already gone viral. People tend to remember the dramatic image or video more than the small text saying it might be fake.
The International Response
Governments are beginning to recognize AI disinformation as a national security threat. AI gives purveyors of disinformation the ability to rapidly profile social media audiences and identify their psychological vulnerabilities, making it a powerful tool for foreign interference in domestic affairs.
Some countries are considering legislation to regulate AI-generated content, but enforcement remains a massive challenge. How do you prosecute someone in another country for creating fake images? How do you balance free speech concerns with the need to combat disinformation? These legal frameworks are still being developed while the technology races ahead.
International cooperation on AI disinformation is in its infancy. Unlike traditional warfare, digital information warfare crosses borders instantly and continuously. The existing international legal framework simply wasn’t designed for conflicts fought with algorithms instead of armies.
What This Means for Democracy
Perhaps the most troubling aspect of AI disinformation is its impact on democratic processes. AI-powered disinformation was flagged as a threat to the 2024 elections, including in the U.S., where LLMs make it easy to churn out false or misleading content in high volume and at low cost.
Democracy depends on an informed citizenry making decisions based on accurate information. When that information environment is polluted with sophisticated fake content, the foundation of democratic decision-making crumbles. Voters can’t make informed choices if they can’t distinguish between real and manufactured evidence.
The speed of AI content generation also means that disinformation can be tailored and targeted in real time during breaking news events or election cycles. Traditional propaganda was slow and broad; AI disinformation can be fast and precise, hitting specific audiences with customized false narratives designed to maximize impact.
Learning to Navigate the New Reality
The solution isn't going to come from technology alone. Media literacy has been proposed as a counter: by fostering critical thinking, it can "prime" viewers to identify a deepfake when they encounter one organically. We need to fundamentally change how we consume and share information online.
This means developing new habits: pausing before sharing dramatic content, checking multiple sources, being skeptical of perfect-looking images from conflict zones, and understanding that if something seems too dramatic or convenient, it might be artificially generated.
Educational institutions, news organizations, and tech companies need to work together on media literacy programs that help people develop the skills to navigate an information environment where artificial intelligence can manufacture compelling fake content at unprecedented speed and scale.
The Road Ahead
The conflict between Israel and Iran will eventually de-escalate, but the AI disinformation techniques being perfected during this crisis will outlast any particular military engagement. We’re witnessing the emergence of new forms of information warfare that will shape conflicts, elections, and public discourse for years to come.
AI-generated content has already begun to work against us, rather than for us. To ensure this technology brings benefits rather than harms, we must institute immediate changes. The window for getting ahead of this challenge is rapidly closing.
The future of information, and perhaps of democracy itself, depends on how quickly we can adapt to a world where reality can be manufactured as easily as it can be recorded. The stakes couldn't be higher, and the clock is ticking.