SourceScore

Verified claim · AI-ML · 100% confidence

Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference.

Last verified 2026-05-16 · Methodology veritas-v0.1 · 789ddc9bc9c3d688

Structured fields

Subject
Chatbot Arena
Predicate
introduced_in
Object
Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference
Confidence
100%
Tags
chatbot-arena · lmsys · uc-berkeley · evaluation · human-preference · leaderboard · 2023 · introduced_in

Sources (2)

  [1] preprint · arXiv (Chiang, Zheng, Sheng, Angelopoulos, Li, Li, Zhang, Zhu, Jordan, Gonzalez, Stoica / LMSYS, UC Berkeley) · 2024-03-07

    Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference
    We introduce Chatbot Arena, an open platform for evaluating LLMs based on human preferences. Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing.
  [2] official blog · LMSYS · 2023-05-03

    LMSYS Chatbot Arena Leaderboard

Cite this claim

Ready-to-paste citation (Markdown / plain text):

Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference. — SourceScore Claim 789ddc9bc9c3d688 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/789ddc9bc9c3d688.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.

<iframe src="https://sourcescore.org/embed/claim/789ddc9bc9c3d688/" width="100%" height="360" frameborder="0" loading="lazy" title="Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/789ddc9bc9c3d688.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/789ddc9bc9c3d688.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/789ddc9bc9c3d688.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference."
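The envelope's HMAC-SHA256 signature can be checked locally. A minimal sketch of that verification, assuming the envelope exposes a `signature` hex digest computed over the claim object serialized with sorted keys, and that you hold the shared verification key (the field names and canonicalization here are illustrative assumptions, not the documented wire format):

```python
import hashlib
import hmac
import json

def verify_envelope(envelope: dict, key: bytes) -> bool:
    """Recompute the HMAC-SHA256 over the claim payload and compare it
    to the envelope's signature with a constant-time comparison."""
    # Assumption: the signature covers the claim object serialized as
    # JSON with sorted keys. Adjust to the documented canonical form.
    payload = json.dumps(envelope["claim"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope.get("signature", ""))

# Mock envelope for illustration; a real one comes from the API.
claim = {"statement": "Chatbot Arena introduced in: Zheng et al. 2023"}
key = b"example-shared-key"
sig = hmac.new(key, json.dumps(claim, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
envelope = {"claim": claim, "signature": sig}
print(verify_envelope(envelope, key))  # True
```

Using `hmac.compare_digest` instead of `==` avoids timing side channels when comparing digests.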

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_chatbot_arena_fact() -> dict:
    """Fetch the verified SourceScore claim for Chatbot Arena."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/789ddc9bc9c3d688.json")
    return r.json()