Why This Matters
A series of viral, Lego-style animated clips praising Iran and attacking the United States is drawing attention to a new kind of digital propaganda: fast, cheap, and built with artificial intelligence tools that anyone can use.
The videos, which mix toy-like characters with graphic scenes of war and political scandal, are part of the online messaging battle surrounding the current conflict involving Iran and US-Israeli forces. Their childlike look and social-media-friendly format make them easy to share, even for viewers who may not realize they are watching state-linked content.
Experts say this trend shows how quickly propaganda has adapted to a world where most people get news and commentary through short, eye-catching clips rather than traditional media. It also raises questions for governments, tech platforms, and regular users about how to recognize and respond to foreign influence campaigns online.
Key Facts and Quotes
The videos are produced by Explosive Media, a social media outlet that creates flashy, AI-generated animations styled to resemble Lego movies, though more vivid and fast-paced. In an interview for the new BBC podcast Top Comment, a representative who asked to be called “Mr Explosive” initially denied working for the Iranian government and repeated the outlet’s earlier claim that it is “totally independent.”
Under further questioning, Mr Explosive acknowledged that Iran’s ruling regime is a “customer” of Explosive Media, according to the BBC report. That admission, which he had not made publicly before, directly links the operation to Iran’s government at a time when information campaigns are a key part of its response to the war.
The clips themselves are not subtle. One widely shared video shows former US president Donald Trump tumbling through a whirlwind of “Epstein file” documents as rap lyrics declare “the secrets are leaking, the pressure is rising.” Another depicts George Floyd under a police officer’s boot, paired with a voiceover claiming Iran is “standing here for everyone your system ever wronged.” Together, the scenes frame Iran as resisting what it portrays as an “almighty global oppressor”: the United States.
Propaganda scholar Dr Emma Briant, quoted in the BBC report, argues that calling this kind of content “slopaganda” (a term coined in academic work as a play on low-effort “AI slop”) understates its impact. She describes the output as “highly sophisticated” and warns that these AI-generated clips have been viewed hundreds of millions of times during the war, giving them a reach once reserved for major broadcasters.

What It Means for You
For US audiences, these videos may appear in feeds alongside jokes, movie clips, and genuine news, making them harder to identify as state-backed messaging. Even if aimed primarily at viewers in the Middle East, recommendation algorithms can push them worldwide, helping foreign governments shape perceptions of American politics, racism, and military power.
Media researchers advise viewers to ask basic questions when they encounter emotionally charged, animated political content: Who made this? Who paid for it? Who benefits if I share it? Policymakers and platforms, meanwhile, are debating new rules on labeling AI-generated media and foreign-funded content, steps that could change how propaganda like this spreads in the years ahead.
How do you think social media platforms should treat stylized AI political videos that blur the line between entertainment and state-sponsored messaging?
Sources
BBC News report and podcast segment by Matt Shea on Explosive Media and Iran-linked AI videos, 11 April 2026; published academic work on digital propaganda and AI by Dr Emma L. Briant and colleagues, 2017–2023.