
“The voice is the identity — independent of the image.”

Manifesto

AI doesn't copy your work.
It learns to become you.

TrustMark is the infrastructure that fills the gap between what law protects and what AI can do.

The situation

In February 2026, ByteDance launched Seedance. Within hours, people were generating hyper-realistic Tom Cruise and Brad Pitt footage from public data alone. Five studios sent cease-and-desist letters within days. Nobody had a technical mechanism to prevent it.

That's not a copyright problem. Copyright protects what you made. That's not a trademark problem. Trademark protects your name. This is something new — AI learning to become you from everything you've ever made public.

TrustMark is the infrastructure that fills that gap. Not a replacement for law. The record that makes law enforceable.

Copyright — Law
Protects: The work
Reality: Reactive — after the fact

Trademark — Law
Protects: The brand name
Reality: Narrow — commercial use only

TrustMark — Infrastructure
Protects: The identity record
Reality: Proactive — before harm occurs

Why now

Studios and brands need provable authorization to reduce liability. Agents and talent need visibility and control over how identity is used. Platforms need scalable verification infrastructure for AI-generated content. Regulators are moving on biometrics, authorization, and AI disclosures. AI-assisted writing workflows are accelerating — and there's no shared standard for script-level consent.

There is no "wait and see." The window to set the standard is now — before the defaults become "everything is fair game."

What TrustMark is

TrustMark — built on the Digital Identity Authorization Protocol (DIAP) — is the infrastructure layer that establishes identity records at the machine level. Not a marketplace. Not a talent agency. Not another rights management dashboard. It's the infrastructure underneath all of them.

One protocol that lets agents, studios, creators, and guilds manage exactly how AI uses identity — voice, face, expression, motion, and scripts — across every app and pipeline. Authorization-first. Revocable by default. Every output auditable.

Human-rooted authority

The identity owner — or their agent, their estate, or their authorized representative — is the source of truth. Not the platform. Not the model. Agents manage it day-to-day. The protocol ensures it.

Least exposure

Identity assets and scripts stay in the vault. They don't get shipped to untrusted apps. Access is scoped, time-limited, and revocable.

Training is a separate right

Using someone's face to render a poster is not the same as using it to train a model. TrustMark verifies that distinction at the protocol level. Always.

Two layers of consent

First: can this app even see you? Second: can this project use you? Visibility and usage are separate decisions. Both require explicit authorization.
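The two layers can be sketched as independent checks. This is a minimal illustration, assuming a simple allow-list policy shape; the field names are illustrative, not the DIAP schema:

```python
from dataclasses import dataclass, field

@dataclass
class IdentityPolicy:
    # Layer 1: which apps may even see this identity (explicit allow-list)
    visible_to: set = field(default_factory=set)
    # Layer 2: per-project usage grants (project_id -> set of granted rights)
    usage_grants: dict = field(default_factory=dict)

def can_see(policy: IdentityPolicy, app_id: str) -> bool:
    """Layer 1: visibility is explicitly granted, never implied."""
    return app_id in policy.visible_to

def can_use(policy: IdentityPolicy, app_id: str, project_id: str, right: str) -> bool:
    """Layer 2: usage requires visibility AND an explicit project grant."""
    if not can_see(policy, app_id):
        return False
    return right in policy.usage_grants.get(project_id, set())

policy = IdentityPolicy(visible_to={"app-studio-x"})
policy.usage_grants["poster-2026"] = {"render:face"}

assert can_use(policy, "app-studio-x", "poster-2026", "render:face")
assert not can_use(policy, "app-studio-x", "poster-2026", "train:model")
```

Note the default: an app that isn't on the visibility list fails Layer 1 before any usage question is even asked.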

Every frame is auditable

Receipts, watermarks, and provenance chains. If it was generated with a registered identity, there's a cryptographic trail back to the authorization that allowed it — visible to agents, studios, and talent.
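A signed receipt that binds an output to its authorization can be sketched with a keyed hash. This is a toy using HMAC in place of whatever signature scheme the protocol actually specifies; the key, field names, and identifiers are assumptions:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; real keys would live in the vault

def issue_receipt(output_hash: str, authorization_id: str) -> dict:
    """Bind a generated output to the authorization that allowed it."""
    body = {"output": output_hash, "authorization": authorization_id}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {**body, "signature": sig}

def verify_receipt(receipt: dict) -> bool:
    """Anyone holding the key can confirm the receipt was not tampered with."""
    body = {k: receipt[k] for k in ("output", "authorization")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

r = issue_receipt("sha256:ab12cd", "auth-0042")
assert verify_receipt(r)
```

Changing either field after issuance invalidates the signature, which is what makes the trail auditable rather than merely logged.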

Emergency revocation for unauthorized use

Emergency revocation blocks unauthorized access — it never breaks valid licenses. If you licensed your identity for two years, that license is honored. Anyone in the authorized chain — talent, agents, studios, or guild reps — can trigger it when they detect external threats. It's shared infrastructure for everyone in the pipeline.
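The revocation rule above can be made concrete. In this sketch, ad-hoc grants are revocable immediately while term licenses run to expiry; the data shapes are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

licenses = {"studio-a": now + timedelta(days=730)}   # 2-year term license
grants = {"app-preview": now + timedelta(hours=1)}   # ad-hoc, scoped grant

def emergency_revoke() -> None:
    """Triggered by anyone in the authorized chain: clears revocable
    grants but leaves in-term licenses untouched."""
    grants.clear()

def access_allowed(app_id: str, at: datetime) -> bool:
    """Access requires an unexpired license or grant; nothing is implied."""
    exp = licenses.get(app_id) or grants.get(app_id)
    return exp is not None and exp > at

assert access_allowed("studio-a", now) and access_allowed("app-preview", now)
emergency_revoke()
assert access_allowed("studio-a", now)         # valid license still honored
assert not access_allowed("app-preview", now)  # revocable access blocked
```

The design choice this illustrates: revocation is a change to what is authorized going forward, not a retroactive breach of agreements already in term.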

How it works, briefly

Identity modules — voice, face, expression, motion, scripts — are registered in a secure vault. Agents set visibility policies per app: who can even see the identity, who can't. That's Layer 1.

When a studio or app wants to use an identity for a project, it sends a license request. The request specifies exactly what: which rights, which campaign, which territory, how many renders, for how long. The agent or authorized representative approves or denies. That's Layer 2.

On approval, a short-lived, cryptographically bound token is issued. The app can only do what the token allows. Every output gets a signed receipt and a watermark. Platforms downstream can verify: "Was this authorized?" The answer is cryptographic, not contractual.
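The approval-to-token step can be sketched as follows. This is a simplified model assuming HMAC-signed JSON claims; the claim names (`rights`, `territory`, `max_renders`) and key handling are illustrative, not the DIAP specification:

```python
import hashlib
import hmac
import json
import time

VAULT_KEY = b"vault-signing-key"  # illustrative; real keys stay in the vault

def issue_token(identity_id: str, rights: set, territory: str,
                max_renders: int, ttl_seconds: int) -> dict:
    """On approval: a short-lived token scoped to exactly what was requested."""
    claims = {
        "identity": identity_id,
        "rights": sorted(rights),      # e.g. ["render:face"]
        "territory": territory,
        "max_renders": max_renders,
        "expires": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(VAULT_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def token_permits(token: dict, right: str) -> bool:
    """The app can only do what the token allows, while the token is in term."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(VAULT_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # tampered claims fail closed
    c = token["claims"]
    return right in c["rights"] and time.time() < c["expires"]

tok = issue_token("id-talent-7", {"render:face"}, "US", 50, ttl_seconds=3600)
assert token_permits(tok, "render:face")
assert not token_permits(tok, "train:model")  # training is a separate right
```

The last assertion is the point: a render grant says nothing about training, so a training check against the same token fails.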

Scripts work the same way. A writer registers their screenplay. They control which apps can see it, which can generate summaries or breakdowns from it, and training is always a separate, explicit permission that is never implied.

Who this is for

Actors and creators who want to participate in AI-powered production without giving up control. TrustMark gives you granular, revocable authority over every use of your digital identity.

Writers whose scripts are being ingested by AI systems with no authorization trail. ScriptModule treats your work as a protected asset — readable, derivable, or trainable only with your explicit permission.

Studios that run professional pipelines and need shared infrastructure to prove it. TrustMark turns identity and script usage into controlled, auditable workflows with provenance that travels with the asset. Marketing, localization, dubbing, previs — all covered.

AI developers who want to build tools that studios and talent will actually trust. Integrate once via the SDK. Get DIAP-Certified. Access a growing registry of authorized identities and scripts.

Platforms that need to answer: "Was this content authorized?" TrustMark's verification API and watermark scanning give you a machine-checkable answer at scale.
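A platform-side check can be sketched as a registry lookup. The receipt and registry shapes here are assumptions for illustration, not the actual verification API:

```python
def was_authorized(receipt: dict, registry: dict) -> bool:
    """Machine-checkable answer: does the receipt point to a real
    authorization in the trust registry that covers this output?"""
    auth = registry.get(receipt.get("authorization"))
    return auth is not None and receipt.get("output") in auth["outputs"]

# Hypothetical registry entry mapping an authorization to the outputs it covers
registry = {"auth-0042": {"outputs": {"wm:frame-19"}}}

assert was_authorized(
    {"authorization": "auth-0042", "output": "wm:frame-19"}, registry)
assert not was_authorized(
    {"authorization": "auth-9999", "output": "wm:frame-19"}, registry)
```

In practice the lookup would be preceded by watermark extraction and signature verification, but the shape of the answer is the same: yes or no, by lookup, not by lawsuit.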

Where we're heading

We start centralized and focused. One Central Authority. One Trust Registry. We publish the spec, schemas, and conformance tests publicly, launch the DIAP-Certified program, and pilot with an anchor talent and a studio marketing workflow — real posters, real approvals, real receipts.

Then we expand. Multiple certified issuers — studios, unions, agencies — listed in the registry. Localization and dubbing workflows. Writing pipelines with ScriptModule. Distribution verification partners who can check authorization at the platform level.

Eventually, federation. Multi-party governance. Studios, unions, and platforms steering the standard together. Transparent audits. Standardized key ceremonies. TrustMark becomes the authorization layer that every AI application checks before it renders a human.

The endgame is not a product. It's infrastructure. The way HTTPS made “is this connection secure?” a solved question, TrustMark makes “was this person's identity used with authorization?” a solved question.

What we don't do

TrustMark is not a replacement for copyright registration, chain-of-title, or legal guild processes. It's the technical infrastructure and audit layer that makes those agreements machine-readable.

We don't take a percentage of talent pay. Pricing is subscription-based — annual fees and usage-based billing for studios and platforms. Talent access to the vault and visibility controls is free. We want maximum participation on the supply side, not a toll booth.

We don't generate content. We don't represent talent. We don't replace agents or managers. We build the protocol that everyone else plugs into.

AI is not going to slow down. The question is whether the people it depicts get a say in how it happens. TrustMark is the infrastructure that establishes the record — so law, contracts, and guilds have something to enforce.

Governance & Legal Foundation

TrustMark IP is being built alongside founding leadership from the AI Trust Foundation.

The AI Trust Foundation is the leading U.S. voice for guiding AI technology to safe and beneficial uses. Founded by AI industry leaders and former U.S. lawmakers, the Foundation works directly with policymakers — including engagement with the White House on national AI governance standards — to develop the frameworks that will define how AI operates responsibly at scale.

TrustMark IP's registration and protection protocol is designed in alignment with these emerging standards — built not just for today's creators, but for the regulatory landscape being written right now.

TrustMark IP · Draft v1.3
Digital Identity Authorization Protocol