Key Insights:
- According to the latest AI news, X will block revenue sharing for 90 days if creators post AI-made war videos without clear disclosure.
- Product chief Nikita Bier said the rule is meant to protect authenticity during wartime.
- Posts flagged by Community Notes or AI-detection signals can trigger enforcement; repeat offenders risk a permanent payout ban.
Recent AI news indicates that X has tightened the rules of its creator economy around AI-generated war footage.
X said it will kick creators out of its revenue-sharing program for 90 days if they post AI-made videos of armed conflict without clearly labeling them as AI. So the videos can still go up, but if you don’t disclose how they were made, you can lose your payouts.
AI News: X Introduces 90-Day Payout Ban for Undisclosed AI War Videos
The AI news update landed on Wednesday, when X’s head of product, Nikita Bier, explained that the main goal of the restriction is to protect the authenticity of what people see during fast-moving wartime moments.
He said wars create the perfect conditions for fake clips to spread. When people are scared and searching for updates, a convincing video can travel in minutes. And with today's AI tools, he added, almost anyone can make footage that looks real and "from the ground" even when it isn't.

That's what makes this policy different. X isn't just relying on labels, removals, or reduced reach. It's hitting creators where it matters most: their payouts.
The message is simple: if you want to keep earning, you have to clearly say when a war video was made with AI.
Monetization Enforcement Tied to AI Disclosure
According to recent AI news, X positioned the rule as a monetization enforcement policy, not a blanket crackdown on AI content. The platform said creators who share AI-generated conflict footage must clearly disclose the use of artificial intelligence.
If they do not, X can block their access to revenue-sharing for three months. This approach adds financial pressure to the existing set of moderation tools.
It also ties the policy to the platform’s creator economy, where monetization status can be as important as reach. As a result, the change hits creators where it hurts most, especially those who rely on X payouts as a consistent income stream.
X also outlined how enforcement may happen. Posts could trigger action if they are flagged through Community Notes, or if they are caught through metadata or other signals linked to generative AI tools.
In practice, that means the platform may not rely on a single detection method. It can combine community reporting, technical fingerprints, and pattern signals.
Then comes the escalation. X said repeat offenders who keep posting undisclosed AI-generated conflict videos could face permanent removal from the revenue-sharing program.
That threat raises the stakes beyond a temporary penalty, because it creates a hard line for accounts that build an audience around sensational war content.
Importantly, the policy targets a narrow slice of content. It applies specifically to videos depicting armed conflicts. It does not amount to a wider ban on AI-generated media across the platform.
Middle East Conflict Raises Misinformation Concern
That timing is no accident. Social feeds and group chats have been buzzing for days as Middle East tensions keep dominating the conversation.
Then came the Feb. 28 flashpoint, when the United States and Israel carried out joint airstrikes on Iran. Markets didn't take long to react: Bitcoin dropped to around $63,000.
However, the BTC price later bounced back and was trading near $70,000 at the time of the update, according to CoinGecko.
Taken together, this is the new reality reflected in AI news: AI can generate convincing war footage for clicks, and it can also support analysis behind the scenes.
X is betting that disclosure rules, backed by monetization penalties, can slow down the spread of misleading conflict content without banning AI outright.