AI Content Ethics: When and How to Disclose AI-Generated Images
The rapid mainstream adoption of AI-generated imagery has outpaced both regulation and cultural norms. For creators, brands, and businesses using AI-generated visuals, uncertainty is understandable: what am I required to disclose? What should I disclose? What happens if I don't?
This guide provides a clear-eyed look at the current regulatory landscape, platform policies, and best practices for AI content disclosure as of mid-2026.
The Regulatory Landscape
United States
No federal law currently mandates disclosure of AI-generated content for commercial creative use. However, the FTC's truth-in-advertising and endorsement guidelines still apply: if AI-generated content creates a false impression of a real event, real person, or real endorsement, it can be treated as deceptive.
Specific AI-related regulation is progressing at the state level. California has enacted disclosure requirements for AI-generated content in political advertising (see current California state law for specifics, as provisions continue to evolve). Several states have or are considering deepfake disclosure laws. The FTC has issued guidance indicating AI-generated testimonials should be disclosed.
For commercial content — marketing, advertising, e-commerce — the relevant standard is truthfulness in advertising: AI-generated images should accurately represent the product or service and not create materially false impressions.
European Union
The EU AI Act, now fully in effect, includes transparency requirements for AI systems that interact with people. AI-generated content intended to influence public opinion or that features deepfake-style face swaps requires disclosure. The Act also requires watermarking or labeling standards for certain categories of AI content. Specifics vary by content category and use case.
Platform Policies (as of early 2026)
Instagram/Meta: Recommends labeling AI-generated content and has begun adding automatic AI labels to some detected generated content. Policies are actively evolving toward more mandatory labeling requirements.
TikTok: Requires labeling of realistic AI-generated content, with specific disclosure requirements for realistic-looking AI-generated video. Enforcement and specific requirements may have been updated — check TikTok's Creator Policy for current requirements.
YouTube: Requires disclosure of realistic AI-generated content that could mislead viewers about real events or real people. Has an AI disclosure setting in upload tools — check YouTube's current Creator Guidelines for specifics.
LinkedIn: Recommendations and requirements around AI content disclosure continue to evolve — check LinkedIn's current Professional Community Policies for the most up-to-date requirements.
Platform policies in this area change frequently. Always check the current Creator Policy or Community Guidelines for each platform before publishing AI-generated content.
When to Disclose: A Decision Framework
Always Disclose
Disclose when law or platform policy requires it: AI-generated content in political advertising, realistic depictions of real people (deepfake-style content), AI-generated testimonials, and realistic content that could mislead viewers about real events.
Best Practice to Disclose
Disclose commercial and marketing imagery that looks photorealistic, even where no rule strictly requires it. Under truth-in-advertising principles, disclosure helps ensure the image does not create a materially false impression of a product or service.
Disclosure Optional
Obviously stylized or artistic AI imagery that no reasonable viewer would mistake for a photograph generally does not require disclosure — though disclosing anyway builds the audience trust discussed below.
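The decision framework above can be sketched as a small helper function. This is an illustrative editorial sketch only — the input categories and tier names are assumptions drawn from this guide, not legal advice or any platform's official classification:

```python
# Illustrative sketch of the disclosure decision framework.
# Categories and thresholds are editorial assumptions, not legal advice.
from dataclasses import dataclass


@dataclass
class AIImage:
    depicts_real_person: bool        # deepfake-style depiction of a real person
    political_or_news_context: bool  # political advertising or news-like content
    photorealistic: bool             # could be mistaken for a real photograph
    commercial_use: bool             # marketing, advertising, e-commerce


def disclosure_level(img: AIImage) -> str:
    # Real people or political contexts: disclosure is broadly required
    # by platform policies and emerging law (e.g., EU AI Act, state rules).
    if img.depicts_real_person or img.political_or_news_context:
        return "always disclose"
    # Photorealistic commercial imagery: disclose as a best practice
    # under truth-in-advertising principles.
    if img.photorealistic and img.commercial_use:
        return "best practice to disclose"
    # Obviously stylized art: disclosure is optional but builds trust.
    return "disclosure optional"


print(disclosure_level(AIImage(True, False, True, False)))
# always disclose
```

In practice, when an image sits near a tier boundary, the safer choice is the stricter tier.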
How to Disclose
Caption Disclosure (Social Media)
Simple and effective: include "Created with AI" or "AI-generated" in the caption. This can be at the end of the caption or integrated naturally: "A little AI travel inspo for your feed 🌍✨ (AI-generated)."
Hashtag Disclosure
Adding #AIGenerated, #AIArt, or #MadeWithAI is increasingly standard practice and creates a searchable signal for audiences who want to know.
Post Label or Sticker (Instagram/TikTok)
Both platforms offer AI content labels you can apply in-app. Using these platform tools is the most official way to mark content.
Platform Disclosure Settings
YouTube's upload process includes an AI content disclosure checkbox. Using it correctly ensures your audience and platform algorithms classify the content appropriately.
Why Transparency Wins
Beyond the regulatory requirements, there's a practical case for transparent AI disclosure: audiences respond better to it than creators fear.
When creators disclose AI generation, the typical response is curiosity ("how did you do this?"), appreciation for the transparency, and often increased engagement as audiences join the behind-the-scenes conversation. The fear that AI disclosure will undermine audience trust is generally not borne out — the opposite is more often true.
The creators and brands with the strongest long-term positions are those building audience trust through consistent transparency. That foundation is more durable than the short-term engagement gained by obscuring AI origins.
Create AI content with confidence — Lensgo.ai grants commercial rights to generated images for paid plan subscribers, subject to our Terms of Service.