Human identity belongs to humans. Not to the machines that process it.

Every AI system that generates a face, clones a voice, or synthesizes a human expression should be required to prove authorization before it renders. DIAP is building the infrastructure to make that requirement universal.

The Problem At Scale

The authorization gap is not an entertainment problem. It is a civilizational one.

Right now, a person's face can be uploaded to any of hundreds of AI video generation tools — and there is no infrastructure to record that it happened, verify whether it was authorized, or trace the output back to an authorization decision. A voice can be cloned from a 30-second sample. A likeness can be used across advertising, political content, or synthetic media — and no technical mechanism exists to connect that usage to a permission.

The entertainment industry is where this became visible first. But the problem is not about actors. It is about every human being whose face has ever appeared on the internet.

A photo posted to Instagram in 2019 is now training data. A voice memo shared in a WhatsApp group is now a voice clone source. A LinkedIn headshot is now a deepfake input. None of these people authorized any of it. None of them have a technical mechanism to stop it. None of them can prove it happened.

  • 5.3B: people with an internet presence
  • 120B+: images used for AI training with no authorization infrastructure
  • 0: universal authorization standards for AI identity processing

Every AI system that touches a human identity should have to ask first.

DIAP's long-term vision is not a product. It is a protocol layer — the same way HTTPS is not a product but a requirement. The goal is a world where:

🔒 Before any AI tool accepts a face upload, it checks for a DIAP authorization signal. If the face matches a registered identity, the system must request authorization before processing. If no authorization exists, the render does not happen.

🎙️ Before any AI system clones a voice, it must hold a valid Voice Module token. Voice is not a byproduct of fame. It is a biometric. Every voice synthesis pipeline — dubbing, generation, real-time cloning — must prove authorization before it renders a single phoneme.

📸 When a face is shared to social media, the person controls whether it enters AI pipelines. A photo posted publicly is not an authorization for AI training. DIAP's authorization signal travels with the image — a cryptographic flag that tells any DIAP-integrated pipeline: "This identity has not authorized AI processing."

🌐 When a face is uploaded to any generative tool, the tool checks the DIAP registry. Not to block creativity. To route the request through consent. If the identity is registered, the tool requests a token. If the token is granted, the render proceeds with a receipt. If not, it stops.

⚖️ Authorization is not a checkbox. It is a living, revocable, auditable state. It can be granted for one project and denied for another. It can be revoked mid-production. It can be scoped to a territory, a medium, a duration. It is not a signature on a contract from 2019. It is a real-time signal.

The authorization signal must travel with the identity — not stay behind in a database.

The current model is broken because authorization is stored separately from the identity it governs. A studio has a face scan in one database. An authorization form sits in a legal filing cabinet. An AI pipeline has no connection between the two. DIAP's architecture inverts this. The authorization signal is embedded in the identity itself — in the watermark, in the embedding, in the cryptographic reference that travels with every authorized output.

Today

Broken

  • Face scan → Studio database
  • Consent form → Legal filing
  • AI pipeline → No connection

Unauthorized use is undetectable

DIAP Now

Entertainment Layer

  • Face scan → Identity Vault (math only)
  • Authorization → Layer 1 + Layer 2 tokens
  • AI pipeline → Must request token

Every render is authorized or blocked

DIAP Vision

Universal Layer

  • Any face → Checked against DIAP registry
  • Any upload → Triggers authorization signal check
  • Any AI tool → Must hold valid token

Authorization is infrastructure, not paperwork

Consent must be verified at the moment of processing — not after.

There are four moments where a human identity enters an AI pipeline. DIAP is building infrastructure at all four.

The Upload Moment

When a face is uploaded to a generative AI tool

This is the most common entry point. A user uploads a photo to a video generation tool, a face-swap app, or an image editor. If the face matches a registered DIAP identity, the tool must request a token before processing. If no token is granted, the upload is accepted but the identity-specific processing is blocked.

How DIAP handles it

Embedding match at upload → token request → authorization check → receipt issued if authorized.

Current status

Requires DIAP-Certified tool integration. The vision is mandatory certification for any tool processing human faces.
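The upload-moment flow described above (embedding match, then token request, then authorization check, then receipt) might look roughly like this. The registry, token table, and fingerprint-based "embedding match" are all illustrative assumptions standing in for the real matching and token infrastructure.

```python
import hashlib
import uuid

def _fingerprint(face_bytes: bytes) -> str:
    """Stand-in for a real face-embedding match: a content hash lookup."""
    return hashlib.sha256(face_bytes).hexdigest()

# Hypothetical registry and token store, not the DIAP API.
REGISTRY = {_fingerprint(b"alice-face"): "identity-042"}
GRANTED_TOKENS = {("identity-042", "tool-x")}   # (identity, tool) pairs

def handle_upload(face_bytes: bytes, tool_id: str) -> dict:
    """Upload moment: match -> token check -> receipt or block."""
    identity = REGISTRY.get(_fingerprint(face_bytes))
    if identity is None:
        # No registered identity: upload is accepted, no identity
        # processing is gated.
        return {"status": "unregistered", "receipt": None}
    if (identity, tool_id) in GRANTED_TOKENS:
        # Authorized: the render proceeds and a receipt is issued.
        return {"status": "authorized", "receipt": str(uuid.uuid4())}
    # Registered but no token: identity-specific processing is blocked.
    return {"status": "blocked", "receipt": None}
```

Note the asymmetry the text calls for: the upload itself is accepted either way; only the identity-specific processing is gated on the token.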

The Social Media Moment

When a face is shared publicly on a social platform

A photo posted to Instagram, TikTok, or LinkedIn is not authorization for AI processing. DIAP's vision includes an authorization signal that travels with the image at the point of posting — a cryptographic flag embedded in the image metadata or pixel layer that tells any DIAP-integrated pipeline the identity's authorization status.

How DIAP handles it

Watermark embedded at registration → travels with any image containing the identity → DIAP-integrated platforms check the signal before ingesting for training or generation.

Current status

Watermark technology exists and is live. Platform integration requires partnership agreements. The vision is platform-level adoption as a regulatory requirement.
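A platform-side check on the traveling signal could be as simple as the sketch below. The "diap_signal" metadata key and its fields are illustrative assumptions; per the text, the real signal is a cryptographic watermark in the metadata or pixel layer, not a plain metadata field.

```python
def may_ingest_for_training(metadata: dict) -> bool:
    """Hypothetical platform check before ingesting an image for
    training or generation."""
    # The absence of an explicit grant defaults to False: a photo
    # posted publicly is not an authorization for AI training.
    signal = metadata.get("diap_signal", {})
    return bool(signal.get("training_authorized", False))
```

The design choice worth noting is the default: silence is refusal, so a missing or stripped signal never becomes implicit consent.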

The Training Moment

When identity data enters an AI training pipeline

This is the most consequential moment and the hardest to verify. Training use is a separate, explicit right in DIAP's authorization model — TRAINING_USE must be granted independently of render rights. No DIAP-authorized identity can enter a training pipeline without an explicit TRAINING_USE token.

How DIAP handles it

TRAINING_USE flag is never implied by any other right. Every training pipeline must request it separately. Audit trail logs every training use. Emergency revocation can revoke training authorization retroactively where technically possible.

Current status

Enforced for DIAP-Certified pipelines. The vision is regulatory requirement for all AI training on human identity data.
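The rule that TRAINING_USE is never implied by any other right can be modeled with independent permission flags. TRAINING_USE is named in the text; the other flag names and the audit-log shape are illustrative assumptions.

```python
from enum import Flag, auto

class Right(Flag):
    """Hypothetical rights model: each right is granted independently."""
    RENDER = auto()
    DISTRIBUTION = auto()
    TRAINING_USE = auto()   # named in DIAP's model; never implied

AUDIT_LOG: list[tuple[str, str]] = []

def may_train(identity_id: str, granted: Right) -> bool:
    """Training requires an explicit TRAINING_USE grant; holding every
    other right is not enough."""
    allowed = Right.TRAINING_USE in granted
    if allowed:
        # The audit trail logs every training use.
        AUDIT_LOG.append((identity_id, "TRAINING_USE"))
    return allowed
```

Because the flags are independent bits, there is no combination of render or distribution rights that accidentally sets the training bit — the "never implied" property falls out of the representation.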

The Distribution Moment

When AI-generated content containing a human identity is distributed

The final verification point. Before content reaches a streaming platform, social network, or broadcaster, DIAP's verification API checks the watermark. Valid watermark + active token = authorized. No watermark or expired token = flagged. This is the point where distributors become verification partners.

How DIAP handles it

Dual-layer watermarks (pixel + ultrasonic audio) on every authorized output. Verification API available to any distributor. Audio verification works without the original file.

Current status

Watermark technology live. Verification API available. The vision is distributor-level verification as an industry standard — the way content ID works for copyright, but for consent.

The endgame is not a product. It is infrastructure.

HTTPS did not ask websites to opt in. It became the default because the alternative — unencrypted connections — became unacceptable. DIAP's trajectory follows the same arc.

Phase 1: Now

Entertainment Layer

Actors, studios, guilds, voice artists.

The industry where the problem is most visible and the economic incentive to solve it is highest.

Token infrastructure live. Watermarks live. Audit trail live.

Phase 2: Near

Platform Integration

Social platforms embed DIAP authorization signals at the point of upload. Generative AI tools integrate DIAP token checks before processing human faces. Regulators reference DIAP as a compliance standard.

Phase 3: Emerging

Universal Registry

Every person can register their identity — not just talent. Any face, any voice, any person. DIAP becomes the registry that any AI system checks before processing a human identity. The way DNS resolves a domain, DIAP resolves consent.

Phase 4: Vision

Foundational Layer

No AI system processes a human identity without checking DIAP first. Authorization is infrastructure. Not paperwork. Not a checkbox. A cryptographic requirement baked into every pipeline that touches a human face.

This does not happen by itself. Here is what it takes.

Technical

  • Embedding match at upload speed (sub-200ms)
  • Watermark survival across social media compression
  • Blind verification without original file
  • Open API that any tool can integrate
  • Resilience layer for screen capture and re-encoding
  • Watermark-based tamper detection system

Regulatory

  • Biometric consent laws that reference machine-readable standards
  • Training data disclosure requirements
  • Distributor liability for unauthorized identity content
  • Platform obligations to check authorization signals before ingestion
  • International harmonization of identity rights

Industry

  • Studio adoption of DIAP Certification as a production standard
  • Union contracts that reference DIAP tokens as technical infrastructure
  • Platform-level verification partnerships
  • AI tool makers integrating DIAP as a default, not an option
  • Open governance — studios, unions, platforms steering the standard together

Human identity must remain with humans. Not with the machines that process it.

A face is not a dataset. A voice is not a sample. A person's likeness is not a training input.

These are human things. They belong to the people they came from. The AI companies that trained on them, the platforms that indexed them, the models that learned from them — none of that changes who they belong to. What's missing is the infrastructure to make that ownership machine-readable.

The question is not whether AI will use human identity. It will. It already does. At a scale that is difficult to comprehend.

The question is whether the humans whose identities are being used have a technical mechanism to know about it, consent to it, and stop it.

DIAP is that mechanism. Not for actors. Not for celebrities. Not for the famous. For every person whose face has ever appeared on the internet. Which is most of us.

The infrastructure does not exist yet. We are building it.