The Problem: The "Mercurial Core"

For the last six months, I have been arguing that the current approach to Agentic AI is fundamentally unsafe for institutional capital. We are attempting to govern probabilistic models with deterministic policies (RBAC, IAM), and the guarantees simply do not transfer.

If an Agent’s memory is mutable, its audit trail is fiction. If an Admin can clone an Agent’s API key, its identity is a suggestion.

I call this the "Mercurial Core" problem: You cannot build a solid structure (Finance/Law) on top of a liquid foundation (LLM Context).

The Solution: Trusting Physics over Policy

Today, I am releasing The Citadel Protocol as an open standard.

This is not a software framework. It is a reference architecture for binding Agent Identity to a Hardware Root of Trust (HRoT). It proposes that for high-stakes execution, we must move from "Software Permissions" to "Hardware Airlocks."

The protocol defines three mandatory architectural components for Sovereign Agents:

  1. The Hardware Mutex: Binding the Agent’s context window to a specific silicon enclave (TPM/HSM), ensuring that "Identity" cannot be cloned or snapshotted by a root user.

  2. The Witness Chain: Replacing the "Log File" with a cryptographic, append-only Merkle structure that acts as a legal witness to execution.

  3. The Sovereign Airlock: A network topology that enforces "Terminal Refusal"—the ability of the infrastructure to physically sever connections when an Agent’s intent diverges from its signed mandate.
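To make the Witness Chain idea concrete, here is a minimal sketch of an append-only, hash-chained log in Python. This is a simplification for illustration only: it uses a linear hash chain rather than the full Merkle structure the protocol specifies (a Merkle tree would additionally support efficient inclusion proofs), and every name here (`WitnessChain`, `GENESIS`) is hypothetical, not drawn from the spec. The property it demonstrates is the essential one: each entry commits to the entire history before it, so any retroactive edit breaks verification.

```python
import hashlib
import json

# Hypothetical genesis value for an empty chain (not from the spec).
GENESIS = "0" * 64


class WitnessChain:
    """Append-only hash chain: each entry's hash covers the previous
    entry's hash, so rewriting any past entry invalidates the chain."""

    def __init__(self):
        self.entries = []  # list of (payload_json, entry_hash)

    def append(self, payload: dict) -> str:
        prev_hash = self.entries[-1][1] if self.entries else GENESIS
        payload_json = json.dumps(payload, sort_keys=True)
        entry_hash = hashlib.sha256(
            (prev_hash + payload_json).encode()
        ).hexdigest()
        self.entries.append((payload_json, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash from genesis; False on any tampering."""
        prev_hash = GENESIS
        for payload_json, entry_hash in self.entries:
            expected = hashlib.sha256(
                (prev_hash + payload_json).encode()
            ).hexdigest()
            if expected != entry_hash:
                return False
            prev_hash = entry_hash
        return True


chain = WitnessChain()
chain.append({"action": "transfer", "amount": 100})
chain.append({"action": "transfer", "amount": 250})
print(chain.verify())  # True: history is intact

# A root user edits the first entry in place; verification now fails,
# which is exactly why a mutable log file cannot serve as a witness.
chain.entries[0] = ('{"action": "transfer", "amount": 999}',
                    chain.entries[0][1])
print(chain.verify())  # False
```

Note that this sketch only makes tampering *detectable*; the protocol's other two components (the Hardware Mutex and the Sovereign Airlock) exist because detection alone does not prevent a compromised host from acting before anyone checks.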

The Standard

I have published the full specification as an open standard to ensure that the "Evidence Layer" of the AI economy remains a public utility, free from vendor lock-in.

It is now available for review and implementation.
