Concepts
In-depth explainers on the foundational ideas that VERITAS is built around. Each page is a standalone resource — useful even if you never integrate the API.
LLM grounding
What it means to ground a language model's output in verified sources, the three patterns that work, and where VERITAS fits.
LLM hallucination
Five categories of LLM hallucination, the six root causes, measured rates by query type, and the mitigation ladder from prompt engineering to signed-claim verification.
RAG vs signed-claim verification
RAG retrieves prose chunks; VERITAS retrieves typed atomic claims with signatures. Comparison table, when to use each, the hybrid pattern most production systems converge on.
Citation chains
The auditable trail from an LLM's emitted claim back to primary sources. Three building blocks: stable identifier · cryptographic signature · re-fetchable canonical URL. Local-verification walkthrough + how chains fit into agentic responses.
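The three building blocks can be sketched in a few lines. This is a minimal illustration, not the VERITAS schema: the field names (`claim_id`, `canonical_url`) are hypothetical, and SHA-256 over canonical bytes stands in for a real public-key signature (e.g. Ed25519) purely to demonstrate the local-verification flow.

```python
import hashlib
import json

def canonical_bytes(claim: dict) -> bytes:
    # Deterministic serialization so signer and verifier hash identical bytes.
    return json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()

def sign_stub(claim: dict) -> str:
    # Stand-in for a real signature: a production chain would use a
    # public-key scheme; a plain hash only demonstrates the mechanics.
    return hashlib.sha256(canonical_bytes(claim)).hexdigest()

def verify_stub(claim: dict, signature: str) -> bool:
    return sign_stub(claim) == signature

claim = {
    "claim_id": "clm_0001",                      # stable identifier
    "text": "Water boils at 100 C at 1 atm.",    # the atomic claim
    "canonical_url": "https://example.org/src",  # re-fetchable source
}
sig = sign_stub(claim)

assert verify_stub(claim, sig)         # intact claim verifies locally
tampered = dict(claim, text="Water boils at 90 C at 1 atm.")
assert not verify_stub(tampered, sig)  # any edit breaks verification
```

The point of the pattern is that verification needs only the claim, its signature, and a key (or here, the hash function): no network call, which is what makes the chain auditable offline.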
Evaluation harnesses
Why the same model scores differently on the same benchmark across LM Eval Harness vs HELM vs lab-internal evals. Six axes of variation, how to read benchmark claims honestly, and why VERITAS excludes performance-comparison claims.
Embeddings
Dense numerical vectors that represent text/images/audio such that similar inputs produce similar vectors. The retrieval backbone of RAG, semantic search, classification, and most LLM-era infrastructure. History, model selection, common pitfalls, and where embeddings stop and verification starts.
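The core property ("similar inputs produce similar vectors") reduces retrieval to nearest-neighbor search under a similarity metric, most often cosine similarity. A toy sketch with hand-written 3-d vectors standing in for real model output (real embeddings have hundreds to thousands of dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical vectors: imagine an embedding model mapped these texts here.
query = [0.9, 0.1, 0.0]
docs = {
    "boiling point of water":       [0.8, 0.2, 0.1],
    "history of the Roman Empire":  [0.1, 0.1, 0.9],
}

# Retrieval = rank documents by similarity to the query vector.
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

This is the whole retrieval backbone in miniature: embeddings find *plausibly relevant* text, but nothing in the geometry checks whether that text is *true*, which is exactly where verification has to take over.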