
AI Content Disclosure: What to Tell Your Audience in 2026

Navigate AI content disclosure requirements for Instagram, TikTok, YouTube, and other platforms — practical guidance for creators using AI image tools.


Lensgo Team

February 25, 2026 · 6 min read

AI-generated images have become commonplace on social media, used by professional creators, brands, and casual users alike. As AI content has proliferated, platforms have introduced formal disclosure policies, audiences have developed stronger preferences about transparency, and in some jurisdictions disclosure requirements have been codified into law.

Understanding what to disclose, how to disclose it, and when disclosure is required helps creators navigate this evolving landscape confidently and ethically.

Platform-by-Platform Disclosure Requirements

Note: The following reflects platform policies as of early 2026 — these policies evolve rapidly. Always check each platform's current creator guidelines before publishing.

Instagram (Meta)

Instagram's AI content policy requires creators to disclose AI-generated or AI-altered content that could mislead viewers. Specifically:

  • Content that depicts real people doing or saying things they didn't do or say must be labeled
  • Realistic AI-generated content that viewers could plausibly mistake for real events requires disclosure
  • Meta has added an AI-label option in post settings for proactive disclosure

Practical guidance: Use the "AI-generated" label in Instagram's settings for realistic AI imagery, particularly for content depicting real-looking people, real-looking events, or sensitive topics.

TikTok

TikTok has required disclosure labels for AI-generated content since 2023 and also runs automatic AI content detection. TikTok's system:

  • Automatically applies "AI-generated" labels to content it detects as AI-created
  • Requires creators to manually label AI content that automated detection might miss
  • Has stricter requirements for content depicting real public figures

YouTube

YouTube requires disclosure of AI-generated content that is realistic and could be mistaken for real footage. The requirement applies particularly to:

  • AI-generated videos depicting real people
  • AI-synthesized news or informational content
  • Realistic AI content in areas like health, elections, or finance

YouTube's content settings include an option to indicate AI use, which affects how content is labeled for viewers.

LinkedIn

LinkedIn's disclosure guidelines for AI-generated content focus primarily on AI-written text rather than images. For AI images specifically, LinkedIn recommends transparency but does not yet have a mandatory image disclosure policy.

Create AI images ethically on Lensgo →

When Disclosure Is Most Important

Realistic Imagery Depicting Real People

This is the highest-stakes disclosure context. AI imagery showing real, identifiable people doing things they didn't actually do raises serious concerns about consent, defamation, and deception. Disclose clearly and consider whether the content crosses ethical lines beyond just disclosure requirements.

Journalism and News-Adjacent Content

AI images used in contexts where viewers expect documentary photography should be clearly labeled. News organizations have developed strict internal policies about AI image use: even when AI images serve as editorial illustrations rather than claimed photographs, clear labeling is expected.

Paid Partnerships and Advertising

AI images in sponsored content and advertising are subject to broader advertising disclosure requirements. Beyond AI disclosure, ensure FTC and platform compliance for paid partnerships. Many advertisers specifically address AI generation in influencer contracts.

Health and Safety Information

AI-generated imagery in health, medical, safety, or emergency contexts should be clearly labeled to prevent confusion with authoritative real documentation.

Building Trust Through Proactive Disclosure

Many creators have found that proactive, positive AI disclosure builds rather than undermines audience trust. The approach: frame AI as a creative tool rather than an attempt to deceive.

Example disclosure language:

  • "Created with AI image generation tools"
  • "AI-enhanced photography"
  • "This image was generated with AI assistance"
  • "Made with AI by [your name]"

Audiences are increasingly familiar with AI content. Many followers appreciate behind-the-scenes transparency about creative process, including tool use. Creators who share their AI workflows often see higher engagement from audiences interested in the process.

What Doesn't Require Disclosure

Not all AI-assisted content requires formal disclosure:

  • AI tools used purely for enhancement (noise reduction, color correction) are standard editing tools similar to Lightroom or Photoshop — no special disclosure required
  • AI background removal is a standard tool, not materially different from manually cutting a background
  • AI upscaling and resolution enhancement are standard post-processing steps
  • Stylistic AI filters applied to real photos (similar to traditional filters) are generally considered standard editing

Disclosure requirements apply when AI generation creates imagery that didn't exist in the real world, particularly realistic imagery that could be mistaken for documentary photography.

Staying informed about platform policies as they evolve, and building transparency into your content practice, protects your reputation and builds long-term audience trust. Create your AI content on Lensgo — with clear creative workflows that support confident disclosure.


Written by Lensgo Team

We're passionate about helping travel creators produce stunning visual content with AI.