Blog
Posts on AI-citation quality, LLM grounding, and the SourceScore methodology.
2026-05-16 · methodology · trust · benchmarks · veritas
Why VERITAS doesn't ship performance-comparison claims (and what we ship instead)
Benchmark numbers vary by prompt format, model version, shot count, and evaluation harness. Shipping them as "verified claims" is the surest way to make the catalog wrong by Thursday. Here's the alternative.
2026-05-16 · tutorial · python · veritas · hallucination
Verifying AI-generated facts in 5 lines of Python
Drop SourceScore VERITAS into your LLM pipeline as a post-generation check. Every claim the model emits gets a confidence score + canonical citation before the user sees it.
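The shape of that post-generation check can be sketched as a small gate between the model and the user. Everything here is illustrative: `verify_claim`, `VerifiedClaim`, and the threshold are hypothetical stand-ins, not the actual VERITAS client API.

```python
# Hypothetical sketch of a post-generation check. The names below
# (verify_claim, VerifiedClaim) are illustrative stand-ins, NOT the
# real VERITAS client; a real version would call the API here.
import dataclasses


@dataclasses.dataclass
class VerifiedClaim:
    text: str
    confidence: float  # verifier-assigned score in [0.0, 1.0]
    citation: str      # ready-to-paste canonical citation


def verify_claim(claim: str) -> VerifiedClaim:
    # Stand-in for the network call: echoes the claim back with a
    # placeholder score and citation so the sketch is runnable.
    return VerifiedClaim(text=claim, confidence=0.5, citation="[source pending]")


def check_response(claims: list[str], threshold: float = 0.8) -> list[VerifiedClaim]:
    """Keep only claims whose verification score clears the threshold."""
    return [v for c in claims if (v := verify_claim(c)).confidence >= threshold]
```

The design point is that the gate sits after generation and before display: claims that fail the threshold never reach the user, and those that pass arrive with their citation already attached.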
2026-05-16 · launch · veritas · api · llm-grounding
Stop hallucinating: a developer API for grounding LLM responses with signed, sourced claims
VERITAS is a free-tier-friendly API that returns hand-verified AI/ML claims with their primary sources, an HMAC-SHA256 signature, and a ready-to-paste citation.
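Checking an HMAC-SHA256 signature on a returned claim needs only the standard library. The payload layout and shared-key handling below are assumptions for the sketch; only the `hmac`/`hashlib` usage is standard-library fact.

```python
# Minimal sketch of HMAC-SHA256 signature verification on a claim
# payload. How the key is provisioned and how the payload bytes are
# serialized are assumptions, not documented VERITAS behavior.
import hashlib
import hmac


def sign(payload: bytes, key: bytes) -> str:
    """Return the hex HMAC-SHA256 digest of payload under key."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify(payload: bytes, signature: str, key: bytes) -> bool:
    # compare_digest does a constant-time comparison, avoiding
    # timing side channels on the signature check.
    return hmac.compare_digest(sign(payload, key), signature)
```

A consumer would recompute the digest over the claim bytes it received and compare it to the signature shipped alongside; any tampering with the claim text makes the comparison fail.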