YouTube adds new AI-generated content labeling tool
YouTube previously said it would require creators to disclose AI-generated material in 2024. The labels will require creators to be honest about synthetic content.

Today, YouTube announced a way for creators to self-label when their videos contain AI-generated or synthetic material.


The checkbox appears in the uploading and posting process, and creators are required to disclose “altered or synthetic” content that seems realistic. That includes things like making a real person say or do something they didn’t; altering footage of real events and places; or showing a “realistic-looking scene” that didn’t actually happen. Some examples YouTube offers are showing a fake tornado moving toward a real town or using deepfake voices to have a real person narrate a video.
On the other hand, disclosures won’t be required for things like beauty filters, special effects like background blur, and “clearly unrealistic content” like animation.
In November, YouTube detailed its AI-generated content policy, essentially creating two tiers of rules: strict rules that protect music labels and artists, and looser guidelines for everyone else. Deepfake music, like Drake singing an Ice Spice song or rapping lyrics written by someone else, can be taken down by an artist’s label if the label objects. As part of these rules, YouTube said creators would be required to disclose AI-generated material, but it hadn’t outlined exactly how they would do so until now. And if you’re an average person being deepfaked on YouTube, it could be much harder to get that content pulled — you’d have to fill out a privacy request form that the company would review. YouTube didn’t offer much about this process in today’s update, saying it is “continuing to work towards an updated privacy process.”
Like other platforms that have introduced AI content labels, the YouTube feature relies on the honor system — creators have to be honest about what’s appearing in their videos. YouTube spokesperson Jack Malon previously told The Verge that the company was “investing in the tools” to detect AI-generated content, though AI detection software is historically highly inaccurate.
In its blog post today, YouTube says it may add an AI disclosure to videos even if the uploader hasn’t done so themselves, “especially if the altered or synthetic content has the potential to confuse or mislead people.” More prominent labels will also appear on the video itself for sensitive topics like health, elections, and finance.
