Every claim on this page is sourced. Every date is verified. Every law is cited with its official reference. This is not a marketing document — it is a research brief on the regulatory landscape that shapes AI identity rights globally.
Two tracks, one tension: federal preemption vs. state-level protections
January 2025: President Trump revoked Biden's entire AI executive order. The stated purpose: “removing every barrier to American AI dominance.”
Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence” (signed December 2025), is the most consequential AI policy action in U.S. history. Key mechanisms:
Full legislative blueprint urging Congress to adopt a “light-touch” federal regime. Seven pillars: child safety, communities, creators, censorship, competitiveness, workforce, and preemption of state AI laws.
The Framework explicitly recommends “establishing safeguards against unauthorized digital replicas of individuals' voice, likeness, or other attributes.” This is the administration acknowledging that identity rights need federal protection — even while dismantling everything else.
New York A8887-B, signed December 11, 2025. Any person who produces an advertisement must conspicuously disclose use of a “synthetic performer” (a digitally created human not recognizable as any real person). Penalties: $1,000 for a first violation, $5,000 for each subsequent violation. No private cause of action. Carveouts for audio-only ads, language translation, and promotions of expressive works.
California AB 2602. Contract provisions allowing AI digital replicas of living performers in place of work they would otherwise perform are unenforceable unless specific requirements are met. A direct labor protection: studios cannot bury consent in boilerplate contracts.
California AB 1836. Protects deceased personalities from unauthorized AI-generated digital replicas in audiovisual works and sound recordings.
The TAKE IT DOWN Act (signed May 2025) is the first U.S. federal deepfake law. It criminalizes non-consensual intimate imagery, including AI-generated fakes. Platforms must remove flagged content within 48 hours. By May 2026, all platforms hosting user content must have notice-and-takedown systems.
Per Dynamis LLP: 68% of consumers frequently wonder if content is real. 50% prefer brands that avoid generative AI in consumer-facing content. 63% say brands have a duty to disclose AI use. Disclosure has become a brand signal, not just a legal requirement.
Source: Dynamis LLP — AI Disclosure in 2026
The August 2 deadline — the world's most consequential AI law enters full enforcement
Penalties for non-compliance: up to €15 million or 3% of global annual turnover, whichever is higher.
Any brand or creator selling into or advertising in the EU — regardless of where they are based — is within the law's reach as a deployer. A Hollywood studio running a campaign in Germany must comply.
The EU's Code of Practice explicitly points toward Content Credentials (C2PA) as the technical standard for provenance. Google, Meta, and TikTok have already integrated C2PA functionality.
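To make the provenance requirement concrete, here is a minimal sketch of checking a C2PA-style Content Credential for an AI-generation claim. This is illustrative only: real C2PA manifests are cryptographically signed binary (JUMBF) structures embedded in the asset and verified with the official c2pa SDKs, and the simplified JSON stand-in below only mimics the shape of the data (`c2pa.actions` and the IPTC `trainedAlgorithmicMedia` source type are real C2PA concepts; everything else is an assumption).

```python
import json

# Simplified, illustrative stand-in for a C2PA manifest. Real manifests are
# signed JUMBF structures; this JSON shape is an assumption for the sketch.
SAMPLE_MANIFEST = json.dumps({
    "claim_generator": "example-render-engine/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    # IPTC digital source type marking algorithmic media
                    {"action": "c2pa.created",
                     "digitalSourceType": "trainedAlgorithmicMedia"}
                ]
            },
        }
    ],
})

def is_ai_generated(manifest_json: str) -> bool:
    """Return True if the manifest declares the asset as AI-generated.

    Looks for a c2pa.actions assertion whose digitalSourceType marks
    algorithmic media -- the machine-readable disclosure pattern the
    EU's Code of Practice points toward.
    """
    manifest = json.loads(manifest_json)
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                return True
    return False
```

A platform integrating C2PA (as Google, Meta, and TikTok have) performs the cryptographic equivalent of this check at upload or display time.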
Originally scheduled for 2026, now delayed to 2027 per European Commission proposal. Biometric identification, emotion recognition, and AI systems that manipulate human behavior fall under the highest-risk tier with mandatory conformity assessments, human oversight requirements, and registration in an EU database.
The world's most radical identity law — personal identity as intellectual property
Denmark has done something no other country has done: treated personal identity as intellectual property. Culture Minister Jakob Engel-Schmidt announced an amendment to Danish copyright law giving every citizen the right to their own body, facial features, and voice. The bill passed with the support of nine in ten MPs, the broadest cross-party consensus on any tech legislation in European history.
Denmark held the EU Council Presidency in late 2025 and used it to push this model to France, Ireland, and other EU members. The Good Lobby reports that Denmark explicitly framed this as a European blueprint — not just a domestic law.
The opposite direction — and why it still matters
Japan's first comprehensive AI law is explicitly non-binding. No fines. No mandatory compliance. The government issues guidance; companies are expected to follow it voluntarily. The enforcement mechanism is “name and shame” — public disclosure of non-compliance.
Japan's Cabinet approved amendments to the Personal Information Protection Act that remove the requirement for opt-in consent before sharing personal data for AI development. Japan's Digital Transformation Minister explicitly called existing consent requirements “a very big obstacle to AI development.”
Japan's Copyright Act Article 30-4 (introduced by the 2018 amendment, effective 2019) permits non-expressive uses of copyrighted works for AI training without authorization — the most permissive AI training regime in any major economy. However, if models are fine-tuned to imitate specific styles (LoRA training), the exemption no longer applies.
Japan is betting that being the “easiest country to develop AI” will attract global AI investment. This creates a regulatory arbitrage problem — companies can train on Japanese data with minimal consent requirements, then deploy globally. Provenance infrastructure becomes the answer: even if training happened in Japan without consent, a render-time authorization layer ensures that deployment requires valid authorization regardless of where the model was trained.
China — the most technically rigorous AI labeling regime in the world
China's AI labeling rules, in force since September 2025, mandate dual watermarking of AI-generated content at national scale.
South Korea, United Kingdom, France
South Korea rolled out measures to curb deepfake pornography, including harsher punishments and stepped-up platform regulation. Criminal penalties apply to both creation and distribution.
The Online Safety Act 2023 implementation continued through 2025. 2025 amendments target creators directly — intentionally crafting sexually explicit deepfake images without consent, with intent to cause distress, carries up to two years in prison. Age verification for adult sites mandatory since July 25, 2025.
In France, Bill No. 675 is pending: mandatory labeling of AI-generated images on social networks, with fines up to €3,750 for users and €50,000 per offense for platforms. Article 226-8-1 (2024) already criminalizes non-consensual sexual deepfakes: up to two years' imprisonment and a €60,000 fine.
Where the global regulatory landscape is splitting
EU + Denmark + UK + France + South Korea
Identity = property right. Consent required before use. Provenance mandatory. Violations carry criminal or severe civil penalties. The direction of travel: identity rights become as fundamental as copyright.
USA + Japan
Consent requirements loosened or preempted at the federal level. State-level protections survive but are under legal attack. Even these regimes recognize that identity rights for performers and public figures need some protection — just not the kind that slows down AI development.
China
Mandatory watermarking and provenance — but controlled by the state, not by individuals. The infrastructure is technically rigorous. The governance model is the opposite of individual rights. It proves that technical provenance infrastructure is achievable at scale.
Despite the three speeds, every major jurisdiction is converging on one technical requirement: machine-readable provenance. The EU calls it Article 50 compliance. China calls it mandatory watermarking. Denmark calls it the right to demand removal. The NO FAKES Act calls it consent documentation. The C2PA standard is the technical layer underneath all of them.
Shared identity infrastructure that operates across all three speeds would generate: a render receipt that satisfies EU Article 50's machine-readable provenance requirement, a watermark that satisfies China's dual-watermarking mandate, an authorization token that satisfies Denmark's consent requirement, a contract reference that satisfies California AB 2602's performer protection, and a compliance bundle that satisfies New York's synthetic performer disclosure law. One infrastructure. Every jurisdiction.
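As an illustration of what such a cross-jurisdiction compliance bundle could look like, here is a hedged sketch in Python. The field names, jurisdiction codes, and mapping are assumptions for this sketch, not part of C2PA or any statute's required format; the point is only that one record can carry the artifact each regime asks for.

```python
from dataclasses import dataclass, asdict

# Hypothetical compliance bundle. Field names are illustrative assumptions,
# not a real standard; each comment names the regime the field addresses.
@dataclass
class RenderReceipt:
    asset_id: str
    provenance_manifest: str   # machine-readable provenance (EU AI Act Article 50)
    watermark_id: str          # embedded watermark reference (China's labeling rules)
    authorization_token: str   # consent token from the depicted person (Danish model)
    contract_ref: str          # performer contract reference (California AB 2602)
    disclosure_text: str       # synthetic performer disclosure (New York A8887-B)

    def jurisdictions_satisfied(self) -> list[str]:
        """Map each populated field to the regime it addresses."""
        mapping = {
            "provenance_manifest": "EU",
            "watermark_id": "CN",
            "authorization_token": "DK",
            "contract_ref": "US-CA",
            "disclosure_text": "US-NY",
        }
        record = asdict(self)
        return [juris for name, juris in mapping.items() if record[name]]

receipt = RenderReceipt(
    asset_id="asset-001",
    provenance_manifest="c2pa-manifest-ref",     # placeholder reference
    watermark_id="wm-42",
    authorization_token="authz-7",
    contract_ref="contract-2026-0013",
    disclosure_text="This advertisement features a synthetic performer.",
)
```

The design choice the sketch highlights: because the receipt is generated once at render time, a missing field immediately shows which jurisdiction's requirement is unmet, rather than discovering the gap per-market after deployment.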
| Date | Event | Impact |
|---|---|---|
| Jan 2025 | Trump revokes Biden AI EO | US deregulation begins |
| May 2025 | TAKE IT DOWN Act signed | First US federal deepfake law |
| May 2025 | Japan AI Promotion Act | Non-binding but signals direction |
| Jun 2025 | Denmark copyright amendment announced | Identity = IP, European first |
| Sep 2025 | China AI labeling rules in force | Mandatory watermarking at scale |
| Dec 2025 | Trump EO 14365 | Federal preemption strategy launched |
| Dec 2025 | New York A8887-B signed | Synthetic performer disclosure law |
| Apr 2026 | Japan privacy law relaxed | Authorization-free AI training data |
| Jun 9, 2026 | New York law effective | Disclosure mandatory for ads |
| Aug 2, 2026 | EU AI Act Article 50 in force | Machine-readable provenance mandatory |
| 2026–2027 | NO FAKES Act likely passage | Federal US identity rights law |
| 2027 | EU high-risk AI obligations | Biometric AI fully regulated |