
AI Content Ethics: When and How to Disclose AI-Generated Images

The emerging best practices and rules around AI content disclosure. What you're legally required to do, what's best practice, and how to disclose transparently.

Lensgo Team

April 1, 2026 · 11 min read

The rapid mainstream adoption of AI-generated imagery has outpaced both regulation and cultural norms. For creators, brands, and businesses using AI-generated visuals, uncertainty is understandable: what am I required to disclose? What should I disclose? What happens if I don't?

This guide provides a clear-eyed look at the current regulatory landscape, platform policies, and best practices for AI content disclosure as of mid-2026.

The Regulatory Landscape

United States

No federal law currently mandates disclosure of AI-generated content for commercial creative use. However, the FTC's guidelines on endorsements and testimonials still apply: if AI-generated content creates a false impression of a real event, real person, or real endorsement, it falls under standard truth-in-advertising deception principles.

Specific AI-related regulation is progressing at the state level. California has enacted disclosure requirements for AI-generated content in political advertising (see current California state law for specifics, as provisions continue to evolve). Several states have or are considering deepfake disclosure laws. The FTC has issued guidance indicating AI-generated testimonials should be disclosed.

For commercial content — marketing, advertising, e-commerce — the relevant standard is truthfulness in advertising: AI-generated images should accurately represent the product or service and not create materially false impressions.

European Union

The EU AI Act, now fully in effect, includes transparency requirements for AI systems that interact with people. AI-generated content intended to influence public opinion or that features deepfake-style face swaps requires disclosure. The Act also requires watermarking or labeling standards for certain categories of AI content. Specifics vary by content category and use case.

Platform Policies (as of early 2026 — policies change frequently; check each platform's current Creator Policy)

Instagram/Meta: Recommends labeling AI-generated content and has begun adding automatic AI labels to some detected generated content. Policies are actively evolving toward more mandatory labeling requirements.

TikTok: Requires labeling of realistic AI-generated content, with specific disclosure requirements for realistic-looking AI-generated video. Enforcement and specific requirements may have been updated — check TikTok's Creator Policy for current requirements.

YouTube: Requires disclosure of AI-generated content that could mislead viewers about real events, real people, or contains realistic depictions. Has an AI disclosure setting in upload tools — check YouTube's current Creator Guidelines for specifics.

LinkedIn: Recommendations and requirements around AI content disclosure continue to evolve — check LinkedIn's current Professional Community Policies for the most up-to-date requirements.

Platform policies in this area change frequently. Always check the current Creator Policy or Community Guidelines for each platform before publishing AI-generated content.

When to Disclose: A Decision Framework

Always Disclose

  • Political content: AI-generated images in political advertising or political commentary require disclosure in every jurisdiction that has implemented rules, at any regulatory level. This is the most legally and ethically clear category.
  • Realistic faces of real people: AI-generated images depicting real people (face swap, deepfake-style content) in any realistic context require disclosure. This applies even if the image is clearly creative/fun.
  • Testimonials and reviews: If an AI-generated person "reviews" a product or service, that's a fabricated testimonial — disclosure is required under FTC guidance.
  • News and documentary contexts: Presenting AI-generated content in a journalistic or documentary context without very clear disclosure is a serious ethical violation.
Best Practice to Disclose

  • Creative and entertainment content: Even when not legally required, labeling creative AI content ("Created with AI" or "AI-generated") is best practice. Most audiences appreciate transparency and are curious about the creative process.
  • Commercial advertising: Even where not technically required, transparent communication about AI-generated imagery in advertising builds trust.
  • Social media: Standard creator practice is evolving toward disclosure as default. Early adopters of transparency are building more trust than those who obscure it.
Disclosure Optional

  • Internal use: AI-generated images used internally (presentations, mockups, planning docs) don't require disclosure.
  • Obvious stylized content: AI-generated cartoon illustrations, anime art, and clearly artistic content where "real" is not the intended impression don't require explicit AI disclosure (though transparency is still good practice).
How to Disclose

Caption Disclosure (Social Media)

Simple and effective: include "Created with AI" or "AI-generated" in the caption. This can be at the end of the caption or integrated naturally: "A little AI travel inspo for your feed 🌍✨ (AI-generated)."

Hashtag Disclosure

Adding #AIGenerated, #AIArt, or #MadeWithAI is increasingly standard practice and creates a searchable signal for audiences who want to know.
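For creators who automate their posting workflow, the caption and hashtag conventions above can also be enforced in code. A minimal sketch in Python (the `add_ai_disclosure` helper and its tag list are illustrative examples, not any platform's API or requirement):

```python
# Illustrative helper: ensure a caption carries an AI-disclosure signal
# before publishing. Tag list and phrasing checks are examples only.
AI_DISCLOSURE_TAGS = ("#AIGenerated", "#AIArt", "#MadeWithAI")

def add_ai_disclosure(caption: str, tag: str = "#AIGenerated") -> str:
    """Append a disclosure hashtag unless the caption already discloses."""
    lowered = caption.lower()
    already_disclosed = (
        "ai-generated" in lowered
        or "created with ai" in lowered
        or any(t.lower() in lowered for t in AI_DISCLOSURE_TAGS)
    )
    return caption if already_disclosed else f"{caption}\n\n{tag}"

print(add_ai_disclosure("Sunset over Santorini 🌅"))
print(add_ai_disclosure("A little AI travel inspo (AI-generated)"))
```

A check like this is a safety net, not a substitute for judgment: platform labels and in-app disclosure tools (covered below) remain the most official route.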

Post Label or Sticker (Instagram/TikTok)

Both platforms offer AI content labels you can apply in-app. Using these platform tools is the most official way to mark content.

Platform Disclosure Settings

YouTube's upload process includes an AI content disclosure checkbox. Using it correctly ensures your audience and platform algorithms classify the content appropriately.

Why Transparency Wins

Beyond the regulatory requirements, there's a practical case for transparent AI disclosure: audiences respond better to it than creators fear.

When creators disclose AI generation, the typical response is curiosity ("how did you do this?"), appreciation for the transparency, and often increased engagement as audiences join the behind-the-scenes conversation. The fear that AI disclosure will undermine audience trust is generally not borne out; the opposite is more often true.

The creators and brands with the strongest long-term positions are those building audience trust through consistent transparency. That foundation is more durable than the short-term engagement gained by obscuring AI origins.

Create AI content with confidence — Lensgo.ai grants commercial rights to generated images for paid plan subscribers, subject to our Terms of Service.

Written by Lensgo Team

We're passionate about helping travel creators produce stunning visual content with AI.
