# Documentation
Find what you need by what you’re trying to do.
## Start Here
- Concepts Overview — understand the core ideas in five minutes
- Getting Started — install, bootstrap, and run your first evidence report
## I Want To…
### Set up Veritas in a new repo
- Getting Started for installation and first run
- Start Your Next Project for greenfield repos
- Brownfield Adoption for existing repos with custom guidance or verification
- CLI Reference for exact commands and flags
### Wire Veritas into my AI agent
- Agent Runtime Integrations for Claude Code, Cursor, and Codex hooks
- Agent Activation for how the framework reaches agents
### Write rules for my repo
- Policy Packs for classification and staging model
- Tune For Your Team for rollout strategy
### Understand what reviewers see
- Artifacts and Schemas for the JSON contract surface
- Example Fixtures for sample evidence and eval payloads
- Telemetry and Read Models for derived artifacts
### Measure whether Veritas is helping
- Live Evals for the feedback model
- Benchmarking for deterministic marker scoring
- Live Eval Roadmap for what’s coming
### Run Veritas in CI
- Operational Check-ins for CI workflow setup
- CLI Reference for command flags and output format
### Contribute to the framework
- Framework Core vs Adapter for architecture decisions
- Surface-Veritas Boundary for where the trust substrate ends and the product layer begins
- Schema Evolution for schema change policy
- CONTRIBUTING.md for development workflow
## All Pages
### Guides
- Getting Started — install the framework and run your first evidence report
- Agent Runtime Integrations — connect Veritas to Claude Code, Cursor, and Codex
- Start Your Next Project — bootstrap a greenfield repo with Veritas from day one
- Tune For Your Team — adapt policy and rollout without forking the framework
- Operational Check-ins — run the check-in flow in CI and interpret the output
- Publish And Release — what gets published, what gets versioned, and how
### Reference
- CLI Reference — every command, flag, and JSON output shape
- Artifacts and Schemas — the JSON contract surface the framework ships
- Example Fixtures — canonical sample evidence and eval payloads used by tests
- Telemetry and Read Models — derived artifacts and how to read trends over time
- Benchmarking — deterministic scoring against marker fixtures
### Design
- Framework Core vs Adapter — what stays generic and what lives in the repo adapter
- Surface-Veritas Boundary — how Veritas maps repo proof into Surface trust input
- Agent Activation — how the framework reaches whatever agent is touching the codebase
- Policy Packs — classification and staging model for repo-specific rules
- Proof Family Results — native family-level evidence for decomposing broad proof lanes
- Live Evals — how the framework measures whether its guidance is actually helping
- Live Eval Roadmap — the build plan for live eval, phase by phase
- Schema Evolution — how framework contracts change without breaking consumers