DCL Evaluator
Startup
Launched Apr 2026
The Story
AI agents make thousands of decisions autonomously — but how do you prove what they actually did? DCL Evaluator cryptographically seals every agent decision with SHA-256 hash chains before execution. Tamper-evident, deterministic, 100% local. Built because AI accountability needs proof, not promises.
AI Overview
AI-generated
Regulatory pressure on AI deployments is mounting, but most organizations lack a way to prove what their systems actually output or detect tampering with audit records. DCL Evaluator addresses this gap by layering cryptographic verification on top of any LLM pipeline, converting probabilistic AI outputs into deterministic, tamper-evident decisions that pass compliance scrutiny.
The product targets engineering teams deploying AI agents in regulated environments—financial services, healthcare, EU-regulated markets—where policy compliance and audit trails are non-negotiable. The integration approach is notably frictionless: developers add three lines of code to pipe LLM responses through the verification engine, receiving back a cryptographic proof tied to a chain of prior decisions.
What distinguishes DCL Evaluator from conventional LLM safety filters is its commitment to determinism. While most guardrails rely on secondary models that can drift or contradict themselves, this tool applies bit-for-bit reproducible policy checks and uses SHA-256 hash chaining to make tampering with historical records computationally infeasible: alter one decision and every later hash in the chain fails verification. The claimed track record of zero false positives across 1,000+ EU AI Act evaluations reflects this deterministic design philosophy.
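The hash-chain mechanism can be sketched in a few lines of Python. This is an illustrative model, not DCL Evaluator's actual implementation: each record's hash covers both its payload and the previous record's hash, so editing any historical decision breaks every link that follows it.

```python
import hashlib
import json

def seal(decision: dict, prev_hash: str) -> dict:
    """Seal a decision record by hashing its payload together with the previous hash."""
    payload = json.dumps(decision, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"decision": decision, "prev_hash": prev_hash, "hash": digest}

def verify_chain(records: list) -> bool:
    """Recompute every hash; any altered record invalidates the rest of the chain."""
    prev = "0" * 64  # genesis value
    for rec in records:
        payload = json.dumps(rec["decision"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

# Build a small chain, then tamper with one decision.
chain = []
prev = "0" * 64
for i in range(3):
    rec = seal({"id": i, "verdict": "allow"}, prev)
    chain.append(rec)
    prev = rec["hash"]

assert verify_chain(chain)
chain[1]["decision"]["verdict"] = "block"  # tamper with one historical record
assert not verify_chain(chain)             # the chain no longer verifies
```

Because each digest depends on the one before it, an attacker who edits a past record would have to recompute every subsequent hash, which is exactly what an independent verifier detects.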
The product includes built-in policy templates for major compliance regimes (EU AI Act, GDPR, finance, medical) plus custom YAML support for bespoke requirements. A drift monitor using statistical testing provides early warning of behavioral anomalies before they escalate to violations, with four configurable modes: normal, warning, escalation, and block. The system supports outputs from any major model (Claude, GPT-4, Grok, DeepSeek, Gemini) as well as local deployments via Ollama.
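A custom policy in the style described might look like the following sketch. Every key name here is an assumption for illustration; the actual YAML schema is not shown in the source material.

```yaml
# Hypothetical policy file; key names are illustrative, not the documented schema.
policy:
  name: loan-decisions
  base_template: eu_ai_act      # built-in templates: eu_ai_act, gdpr, finance, medical
  rules:
    - id: no-pii
      check: output_contains_no_personal_data
      on_violation: block       # modes: normal | warning | escalation | block
  drift_monitor:
    method: statistical_test    # early warning before anomalies become violations
    mode: warning
```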
On the technical side, the webhook API design sidesteps installation overhead—teams can evaluate outputs without touching their infrastructure. Export functionality covers JSON, PDF, and CEF formats for downstream compliance workflows and auditor reviews.
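A webhook-based evaluation call might be assembled as below. The field names and export values are assumptions for illustration, since the API surface is not documented in this material; the example only constructs the request payload so it stays runnable offline.

```python
import json

def build_eval_request(model: str, output_text: str, policy: str) -> str:
    """Build a hypothetical webhook payload for evaluating one LLM output."""
    payload = {
        "model": model,                 # e.g. "claude", "gpt-4", "gemini", or a local Ollama model
        "output": output_text,          # the LLM response to be verified
        "policy_template": policy,      # e.g. "eu_ai_act", "gdpr", "finance", "medical"
        "export_formats": ["json", "pdf", "cef"],
    }
    return json.dumps(payload)

body = build_eval_request("gpt-4", "Loan approved for applicant.", "finance")
print(body)
```

In a real deployment this body would be POSTed to the evaluation endpoint, which is what lets teams verify outputs without installing anything in their own infrastructure.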
The business model remains unclear from the available material. The site emphasizes free availability and 30-second trial access, though the distinction between free and paid tiers is not articulated. For organizations already shipping AI into regulated markets, the deterministic audit capability may justify pricing that isn't yet public. For those still evaluating risk, the zero-friction onboarding makes experimentation cost-free.