
New York AI Disclosure Law (June 2026): What Creators and Brands Must Know

By @paji_a · 14 min read

Key Takeaways

  • New York's AI Disclosure Law takes effect June 2026 -- the first US state law specifically requiring disclosure of AI-generated advertising content.
  • Penalties range from $1,000 to $5,000 per violation for first offenses, with repeat violations carrying fines up to $10,000 each.
  • The law applies to any advertising content that is "substantially generated" or "materially manipulated" by AI systems, targeting New York consumers.
  • Unlike federal FTC guidelines (which are enforcement guidance), this is a binding state statute with its own enforcement mechanism through the NY Attorney General.
  • Other states including California, Illinois, and Texas are drafting similar legislation. New York's law is likely the template for a national patchwork of state AI disclosure laws.

Disclaimer: This article is for informational purposes only and does not constitute legal advice. AI disclosure regulations are evolving rapidly across jurisdictions. Consult a qualified attorney for advice specific to your situation and jurisdiction.

In less than three months, New York's AI Disclosure Law goes into effect. Signed into law in January 2026 and effective June 1, 2026, it is the first state-level statute in the United States that specifically requires advertisers, agencies, and creators to disclose when AI generates or substantially modifies advertising content.

If you run advertising campaigns that target New York consumers -- and given that New York is the fourth-largest state by population and the largest media market in the country -- this law likely applies to you. Even if you are based in Texas, California, or London, if your content reaches New York audiences, you need to comply.

This guide explains what the law requires, who it applies to, what the penalties are, and what you need to do before June to be ready.

What the New York AI Disclosure Law Requires

The law, formally titled the New York Artificial Intelligence Transparency in Advertising Act, establishes three core requirements for advertising content that involves AI:

1. Mandatory Disclosure of AI Involvement

Any advertising content that is "substantially generated" or "materially manipulated" by an AI system must include a clear, conspicuous disclosure stating that AI was used in its creation. The disclosure must be:

  • Proximate: The disclosure must appear "in proximity to the content" -- meaning near the AI-generated content itself, not buried in a terms of service page or a distant footer.
  • Clear and conspicuous: The disclosure must be in a size, color, and location that a reasonable consumer would notice. Fine print does not qualify.
  • Unambiguous: The disclosure must use plain language. Acceptable examples include "Created with AI," "AI-generated content," or "This content was produced using artificial intelligence." Vague terms like "digitally enhanced" or "technology-assisted" do not satisfy the requirement.

2. Record-Keeping Obligations

Advertisers and agencies must maintain records documenting their use of AI tools in advertising content creation for a period of three years. These records must be available for inspection by the New York Attorney General's office upon request. The records should include:

  • Which AI tools were used.
  • What content was generated or modified by AI.
  • When the content was created and published.
  • What disclosures were applied.
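The statute does not prescribe a record format, so a lightweight structured log is enough. As a sketch only -- the field names below are illustrative, not taken from the statute -- a team could capture the four items above for each piece of content and retain the entries for three years:

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

# Illustrative record structure -- field names are ours, not the statute's.
# It captures the four items the law asks for: tools used, what was
# AI-made, creation/publication dates, and the disclosure applied.
@dataclass
class AIUseRecord:
    content_id: str
    ai_tools: str            # e.g. "ChatGPT, Midjourney"
    ai_generated_parts: str  # e.g. "ad copy, hero image"
    created_on: str          # ISO date
    published_on: str        # ISO date
    disclosure_text: str     # e.g. "Created with AI"

def append_record(path: str, record: AIUseRecord) -> None:
    """Append one record to a CSV log, writing a header on first use."""
    header = [f.name for f in fields(AIUseRecord)]
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=header)
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(record))
```

A shared spreadsheet serves the same purpose; the point is that every AI-involved piece of content gets a dated entry that can be produced on request.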

3. No Misleading Human Attribution

The law explicitly prohibits presenting AI-generated content as the genuine personal opinion, experience, or creation of a specific human individual when it is not. This provision targets fake testimonials, synthetic influencer content, and AI-generated reviews that are presented as if they came from real consumers.

Who Does It Apply To?

The law casts a wide net. It applies to three categories of parties involved in advertising:

Advertisers

Any business or individual that pays for or sponsors advertising content targeting New York consumers. This includes brands running social media campaigns, companies placing digital ads, and businesses commissioning sponsored content from creators. The advertiser's physical location does not matter -- a company based in San Francisco running Instagram ads that reach New York users is covered.

Advertising Agencies and Intermediaries

Agencies, marketing firms, and advertising platforms that create, produce, or distribute AI-generated advertising content on behalf of advertisers. If your agency uses AI tools to generate ad copy, create images, or produce video content for clients, you have compliance obligations under this law.

Content Creators

Individual creators and influencers who produce sponsored content using AI tools and whose audience includes New York consumers. If you are a creator with a national following and you use ChatGPT to draft your sponsored post or Midjourney to create your sponsored visual content, the law applies to you.

The Geographic Question

The law applies to advertising content "directed at or accessible to consumers in the state of New York." In practice, this means virtually all digital advertising with a US audience is covered, because it is nearly impossible to exclude New York residents from seeing digital content. The law does not require that you specifically target New York -- only that your content is accessible to New York consumers.

There is an exception for content that is geo-fenced to exclude New York, but the burden of proving that exclusion falls on the advertiser.

What Counts as AI-Generated Content?

The law defines two categories of AI involvement that trigger disclosure requirements:

"Substantially Generated" Content

Content where an AI system produced more than 50% of the final output. This includes:

  • Ad copy written by ChatGPT, Claude, Gemini, or similar language models.
  • Images created by Midjourney, DALL-E, Stable Diffusion, or other AI image generators.
  • Video content produced by AI video generation tools like Sora or Runway.
  • Audio content including AI-generated voiceovers (ElevenLabs, etc.).
  • AI translations of advertising content from one language to another.
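The statute sets the "substantially generated" bar at more than 50% of the final output but does not say how that share is measured. For text, one rough internal heuristic -- ours, not the law's -- is a word-count ratio of AI-origin passages to the finished copy:

```python
def ai_fraction(ai_segments: list[str], final_copy: str) -> float:
    """Rough word-count ratio of AI-origin text to the final copy.

    A heuristic only: the statute does not define how the 50% threshold
    is measured, and word counts ignore images, audio, and meaning.
    """
    ai_words = sum(len(seg.split()) for seg in ai_segments)
    total_words = len(final_copy.split())
    return ai_words / total_words if total_words else 0.0

def likely_needs_disclosure(ai_segments: list[str], final_copy: str) -> bool:
    # "More than 50%" per the statute's definition; when near the line,
    # the safer call is to disclose anyway.
    return ai_fraction(ai_segments, final_copy) > 0.5
```

Keep in mind that "materially manipulated" is a separate trigger, so a below-threshold result does not by itself mean no disclosure is needed.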

"Materially Manipulated" Content

Originally human-created content that has been significantly altered by AI in ways that change its meaning, appearance, or impact. This includes:

  • AI face-swapping or body modification in advertising photos or videos.
  • AI-generated background replacement that changes the context of an image.
  • Significant AI rewriting of human-authored text that alters meaning or tone.
  • AI-generated voice cloning or lip-syncing applied to real people.

What Does NOT Trigger Disclosure

The law explicitly excludes several categories of AI use from disclosure requirements:

  • Grammar and spell checking: Tools like Grammarly that correct errors without generating new content.
  • Formatting and layout: AI-powered design tools that arrange human-created content.
  • Scheduling and distribution: AI tools that optimize posting times or audience targeting.
  • Analytics and measurement: AI-powered analytics that measure content performance.
  • Minor adjustments: Standard photo editing (brightness, contrast, cropping) even when AI-assisted.

The Gray Areas

Several situations fall into gray areas that the law does not definitively address:

  • AI brainstorming: Using AI to generate ideas but writing the final content yourself. Likely exempt, but no explicit ruling.
  • AI-assisted editing: Using AI to tighten or restructure human-written text without changing meaning. Falls below the "material manipulation" threshold, but the line is subjective.
  • Template-based AI: Using AI to fill in a human-designed template with variable content. Depends on how much of the final output is AI-generated.
  • AI-enhanced photography: Using AI upscaling, noise reduction, or style transfer on real photos. The law's "minor adjustments" exception likely covers basic enhancements, but dramatic style changes may cross the line.

When in doubt, disclose. The law's penalties for non-disclosure are significantly higher than any reputational cost of over-disclosing. The NY Attorney General's office has indicated it will publish implementation guidance before the June effective date, which should clarify some of these gray areas.

Penalties for Non-Compliance

The law establishes a tiered penalty structure enforced by the New York Attorney General's office:

First Offenses

  • $1,000 to $5,000 per violation: Each piece of non-compliant content counts as a separate violation. A campaign with 20 non-compliant posts could result in $20,000 to $100,000 in fines.
  • Mandatory corrective action: The AG can require the advertiser to add proper disclosures to existing content and implement compliance procedures going forward.

Repeat Offenses

  • Up to $10,000 per violation: Subsequent violations after a first enforcement action carry doubled penalties.
  • Injunctive relief: The AG can seek court orders prohibiting the advertiser from running AI-generated advertising in New York until compliance is demonstrated.

Enforcement Mechanism

Unlike FTC enforcement, which requires a federal proceeding, the New York law is enforced by the state Attorney General through the state court system. This means faster enforcement actions and lower barriers to prosecution. The AG's office has already announced that it is establishing a dedicated AI advertising compliance unit to handle enforcement once the law takes effect.

The law also includes a provision allowing the AG to investigate based on consumer complaints. This means competitors or consumers who notice non-compliant AI-generated advertising can file complaints that trigger investigations.

There is no private right of action -- meaning individual consumers cannot sue advertisers directly under this law. Enforcement is exclusively through the AG's office.

How It Differs from FTC Guidelines

The New York law operates alongside, not instead of, federal FTC guidelines. Understanding the differences is important because you may need to comply with both simultaneously. For a full breakdown of FTC requirements, see our FTC AI Content Disclosure guide.

  • Legal authority -- FTC: federal agency guidance and enforcement discretion. New York: state statute (binding law).
  • Penalties -- FTC: up to $53,088 per violation. New York: $1,000-$5,000 first offense; up to $10,000 per repeat violation.
  • Scope -- FTC: all advertising nationally. New York: advertising accessible to NY consumers.
  • AI definition -- FTC: broad, case-by-case interpretation. New York: specific -- "substantially generated" (>50%) or "materially manipulated."
  • Record-keeping -- FTC: recommended, not mandated. New York: mandatory three-year retention.
  • Enforcement speed -- FTC: slower (federal process). New York: faster (state court system).
  • Sponsorship disclosure -- FTC: required separately ("double disclosure"). New York: AI disclosure only; sponsorship still governed by the FTC.

The key practical difference: the FTC guidelines give enforcement discretion to a federal agency that handles thousands of cases. The New York law creates a dedicated state-level enforcement channel that is likely to be more responsive and aggressive on AI-specific advertising issues. The FTC's per-violation penalty is technically higher, but the New York AG can act faster and with lower procedural hurdles.

For creators and brands, this means you need both FTC-compliant sponsorship disclosures AND New York-compliant AI disclosures. A sponsored post created with AI that reaches New York consumers needs: (1) a sponsorship disclosure like #ad or a paid partnership label (see our FTC Sponsored Post Disclosure guide and X Paid Partnership Label guide), and (2) an AI disclosure like "Created with AI" in proximity to the content.

Other States Following New York

New York is the first, but it will not be the last. Multiple states have introduced or are drafting similar AI disclosure legislation for advertising content. Here is where things stand as of March 2026:

California (AB-2839 and Beyond)

California passed AB-2839 in 2024, which addressed AI-generated content in election-related communications. Building on that foundation, a broader AI advertising disclosure bill is currently in committee in the California State Legislature. The proposed California bill closely mirrors New York's law but includes additional provisions for AI-generated content on social media platforms, potentially requiring platforms themselves to detect and label AI content. Given California's track record as a regulatory leader, a comprehensive AI advertising disclosure law is expected by late 2026 or early 2027.

Illinois

Illinois, which already has the nation's strictest biometric privacy law (BIPA), introduced an AI advertising transparency bill in early 2026. The Illinois proposal goes further than New York by including a private right of action -- meaning individual consumers could sue advertisers directly for non-compliance. The bill is in committee as of this writing.

Texas

Texas introduced an AI content disclosure bill focused on political advertising and commercial advertising that uses synthetic media (deepfakes, AI-generated likenesses). The Texas bill includes criminal penalties for using AI-generated likenesses of real people in advertising without consent, in addition to civil penalties for non-disclosure.

Washington State

Washington's proposed AI transparency legislation takes a broader approach, requiring disclosure of AI involvement across all consumer-facing communications, not just advertising. The bill is in the early stages of the legislative process.

The Federal Picture

Multiple federal AI disclosure bills have been introduced in Congress, but none have advanced significantly. The current patchwork of state laws is creating pressure for federal legislation that would establish a uniform national standard, but political dynamics make passage uncertain in the near term. For now, advertisers need to comply with each state's individual requirements.

The practical implication: if you build your compliance processes around New York's law (currently the most specific and stringent), you will likely be compliant with most other state laws as they emerge. New York's framework is becoming the de facto template.

Compliance Checklist for Creators and Brands

With the June 2026 effective date approaching, here is what you should do now to prepare. This checklist applies whether you are an individual creator, a brand, or an agency.

Before June 2026: Preparation

  1. Audit your current AI use: Document every AI tool you use in content creation. Include text generation (ChatGPT, Claude), image generation (Midjourney, DALL-E), video tools, translation services, and any other AI systems that contribute to final advertising content.
  2. Establish a record-keeping system: Set up a process to log which AI tools were used for which content, when, and what disclosures were applied. This does not need to be complex -- a spreadsheet or project management tool with the right fields works. The law requires three years of record retention.
  3. Create disclosure templates: Prepare standard disclosure language that you can add to content consistently. Examples: "Created with AI assistance," "AI-generated content," "This post includes AI-generated elements." Have templates ready for different platforms and content types.
  4. Update your content creation workflow: Add a disclosure checkpoint to your content creation and approval process. Before any content goes live, someone should verify whether AI was involved and whether the appropriate disclosure is included.
  5. Brief your team: Ensure everyone involved in content creation understands what triggers disclosure requirements and how to apply disclosures correctly. This includes freelance creators, agency partners, and internal marketing teams.
  6. Review existing content: Audit content published before the law takes effect. While the law does not apply retroactively, content that remains live and accessible to New York consumers after June 1 may need to be updated with disclosures.
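The disclosure checkpoint in step 4 can be partly automated as a pre-publish gate. In this sketch, the accepted phrases are examples drawn from this article, not an official list -- the statute itself only requires "clear and conspicuous" plain language:

```python
# Example disclosure phrases from this article -- not an official list.
ACCEPTED_DISCLOSURES = (
    "created with ai",
    "ai-generated content",
    "includes ai-generated content",
    "produced using artificial intelligence",
)

def passes_checkpoint(post_text: str, used_ai: bool) -> bool:
    """Pre-publish gate: AI-involved content must carry a disclosure
    in the post text itself (proximate, not a bio or footer link)."""
    if not used_ai:
        return True
    text = post_text.lower()
    return any(phrase in text for phrase in ACCEPTED_DISCLOSURES)
```

A human reviewer should still confirm placement and prominence: string matching can catch a missing disclosure, but it cannot judge whether one is "clear and conspicuous."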

Ongoing: After June 2026

  1. Disclose consistently: Every piece of AI-generated or AI-manipulated advertising content needs a disclosure. No exceptions. When in doubt, disclose.
  2. Keep disclosures proximate: Place the AI disclosure near the content itself. In social media posts, include it in the post text -- not just in a link or bio. In video content, include it in the video itself or in the first visible line of the description.
  3. Maintain records: Log every AI-involved content piece and its associated disclosures. Update records if content is edited or republished.
  4. Monitor regulatory updates: The NY AG's office will publish implementation guidance and enforcement actions. Follow these developments to adjust your compliance practices as interpretations solidify.
  5. Coordinate with FTC compliance: Remember that AI disclosure under New York law does not replace FTC sponsorship disclosure requirements. Sponsored content needs both disclosures.

For Creators Working with Brands

If you are a creator producing sponsored content for brands, make sure your contracts and briefs address AI disclosure. Specifically:

  • Clarify whether the brand permits AI tool use in content creation.
  • Establish who is responsible for adding AI disclosures (creator or brand).
  • Include AI disclosure requirements in your standard terms.
  • Keep your own records of AI tool use, even if the brand is managing disclosures.

For a broader guide to writing compliant sponsored content, see How to Write a Sponsored Post.

How HumanAds Handles AI Disclosure

HumanAds is an AI-powered advertising platform where AI agents create campaign briefs and human creators produce the content. Because AI is central to how the platform operates, disclosure compliance is built into the system rather than left to individual memory or judgment.

Built-In Disclosure Requirements

Every mission brief on HumanAds includes disclosure requirements as part of the content specifications. When a mission involves AI-generated elements -- whether the AI agent created the brief, generated suggested copy, or produced visual assets -- the platform automatically includes the appropriate disclosure requirements in the creator's instructions.

Compliance Verification

Before payment is released through the escrow system, submitted content is reviewed against the mission's disclosure requirements. Content that lacks required disclosures is flagged for correction before approval. This ensures compliance at the workflow level, not as an afterthought.

Record-Keeping

The platform maintains a complete audit trail of every mission: what AI tools were involved, what content was produced, what disclosures were applied, and when everything was published. This audit trail satisfies New York's three-year record-keeping requirement automatically.

Transparency by Design

HumanAds operates on the principle that transparency builds trust. The platform's public Advertiser Guidelines and mission structure are designed around the assumption that disclosure requirements will continue to expand. By building compliance into the platform architecture, individual creators and brands do not need to track every new state law individually.

FAQ

Does the New York law apply to me if I am not based in New York?

Yes, if your advertising content is accessible to New York consumers. The law applies based on where the audience is, not where the advertiser is located. Since virtually all digital advertising with a US audience reaches New York consumers, the practical answer is: if you advertise online in the US, assume the law applies to you.

What if I use AI for part of the content but write most of it myself?

The law uses a "substantially generated" threshold of more than 50% AI involvement. If you used AI to generate a few sentences in an otherwise human-written post, you likely fall below the threshold. However, "materially manipulated" is a separate trigger -- if the AI-generated portions change the meaning or impact of the content, disclosure may still be required even if AI produced less than 50% of the total text. When in doubt, disclose.

Is using AI for brainstorming covered by the law?

No. Using AI to generate ideas, outlines, or research that you then use to write original content yourself does not trigger the disclosure requirement. The law targets AI-generated output that appears in the final published content, not AI use in the creative process. However, if you copy or closely paraphrase AI-generated text into your final content, that crosses the line.

How should I word the AI disclosure?

The law requires "clear and conspicuous" disclosure in "plain language." Acceptable examples include: "Created with AI," "Includes AI-generated content," "AI-assisted content," or "This [post/image/video] was produced using artificial intelligence tools." The exact wording is flexible as long as it clearly communicates AI involvement. Avoid vague terms like "digitally enhanced" or "technology-powered."

Do I need separate disclosures for AI and for sponsorship?

Yes. The New York AI disclosure law covers AI involvement. FTC rules cover sponsorship/material connections. These are separate legal requirements. A sponsored post created with AI needs both: a sponsorship disclosure (#ad, paid partnership, etc.) and an AI disclosure (Created with AI, etc.). They can appear in the same post but must both be present. See our FTC AI Content Disclosure guide for details on the federal requirements.

What about AI-generated images used in advertising?

AI-generated images are covered. If you use Midjourney, DALL-E, or any AI image generator to create advertising visuals, those images require an AI disclosure. This applies to fully AI-generated images and to real images that have been "materially manipulated" by AI (e.g., AI face-swapping, significant background changes, body modifications). Standard photo filters and basic editing are excluded.

Can I be penalized for content published before June 2026?

The law does not apply retroactively to content published before the effective date. However, if AI-generated content published before June 2026 remains live and accessible to New York consumers after the law takes effect, there is a legal gray area. The safest approach is to update existing AI-generated advertising content with appropriate disclosures before June 1, or remove it.

What if I use an AI advertising platform like HumanAds?

Using a platform that builds compliance into its workflow is one of the most effective ways to ensure compliance. HumanAds includes disclosure requirements in mission briefs and verifies compliance before releasing payment. However, even when using a compliant platform, you should understand the underlying requirements. The advertiser remains ultimately responsible for the compliance of their advertising content.

About the Author: @paji_a is the founder and developer of HumanAds. Full-stack engineer based in Tokyo, Japan, building at the intersection of AI agents, blockchain payments, and the creator economy.

Related Articles

FTC AI Content Disclosure Rules: What Brands Must Know in 2026 FTC requirements for AI-generated content disclosure, including the "double disclosure" rule and penalties up to $53K per violation.
FTC Sponsored Post Disclosure Rules: 2026 Guide Complete guide to FTC requirements for sponsored content disclosure on social media.
How to Add the Paid Partnership Label on X (2026) Step-by-step instructions for X's new native paid partnership label.
How to Write a Sponsored Post That Converts Practical guide to writing sponsored content that performs well and stays compliant.