Assistive AI • Citation-first • Human-reviewed
Last updated: January 8, 2026

AI Policy

Evidence Atlas uses AI like a flashlight, not a crystal ball. AI can help surface patterns, summarize dense material, and reduce busywork—but it cannot replace provenance, peer critique, or human judgment.

Our default posture is simple: show sources, label uncertainty, keep humans in the loop, and protect people + heritage.

Non-Negotiables

These guardrails shape every AI feature we ship (and every AI feature we refuse to ship).


Citation-First

AI outputs must point back to evidence items and sources. No “trust me bro” summaries.


Transparent Labels

AI content is clearly marked, with uncertainty labels and a path to the underlying inputs.


Safety & Heritage

We don’t enable trespass, looting, harassment, or sensitive-location disclosure—ever.


Privacy by Default

We minimize data collection and avoid invasive tracking. Private submissions stay private.

Mantra

Wonder is welcome. Evidence is required. AI can help you navigate wonder—but evidence is what earns conclusions.

Policy, Explained Like You’re Smart and Busy

A practical guide to what AI can do here, what it cannot do, and how we keep it honest.

1) Scope: What AI Does Here

AI features in Evidence Atlas are built to support search, synthesis, and structure. The goal is to make it easier to move from scattered material to testable claims; a short sketch of how this can look in practice follows the items below.

Pattern surfacing
Cluster similar features across sites (e.g., tool marks, joinery, stone handling).
Cited summaries
Summaries must link to evidence items—no uncited assertions.
Debate scaffolding
Help draft claim threads with clear alternatives and “what would change my mind?” prompts.
Hard boundary

AI is not allowed to present itself as authoritative. It may propose hypotheses, but must surface sources and invite counter-evidence.
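To make pattern surfacing concrete, here is a minimal sketch assuming evidence items carry feature tags and a site reference. The names (EvidenceItem, surfacePatterns) are illustrative, not the platform's actual API; the point is that clustering operates on tagged, sourced items rather than on free-floating claims.

```ts
// Illustrative sketch only: group evidence items by shared feature tags so
// similar features (tool marks, joinery, stone handling) surface together.
interface EvidenceItem {
  id: string;
  siteId: string;
  featureTags: string[]; // e.g., ["tool-marks", "granite"]
  sourceUrl: string;     // provenance stays attached to every item
}

function surfacePatterns(items: EvidenceItem[]): Map<string, EvidenceItem[]> {
  const clusters = new Map<string, EvidenceItem[]>();
  for (const item of items) {
    for (const tag of item.featureTags) {
      const bucket = clusters.get(tag) ?? [];
      bucket.push(item);
      clusters.set(tag, bucket);
    }
  }
  // Only tags seen at more than one site count as a cross-site pattern.
  for (const [tag, bucket] of clusters) {
    if (new Set(bucket.map((i) => i.siteId)).size < 2) clusters.delete(tag);
  }
  return clusters;
}
```

The output is a map from feature tag to the evidence items that share it, each still carrying its source; a cluster is never more authoritative than the items behind it.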

2) Labels & Uncertainty

We separate observation, inference, and hypothesis. AI content follows the same labeling system.

Observation

“What is visible/measurable” (e.g., dimensions, materials, photographed tool marks).

Inference

“What likely follows” from observations—but still contestable.

Hypothesis

A proposed explanation. Must list alternatives and what evidence would confirm/refute it.

Label rule of thumb:
If an AI sentence cannot be traced to an evidence item or clearly framed as a hypothesis, it does not ship.
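A minimal sketch of how that rule of thumb could be enforced, assuming each AI-written sentence carries a label and a list of evidence citations; the names (AiStatement, canShip) are illustrative, not the platform's real schema:

```ts
// Illustrative sketch only: encode the labeling rule as a shipping check.
type EvidenceLabel = "observation" | "inference" | "hypothesis";

interface AiStatement {
  text: string;
  label: EvidenceLabel;
  evidenceItemIds: string[]; // citations back to specific evidence items
  alternatives?: string[];   // for hypotheses: competing explanations
}

// If it cannot be traced to an evidence item or clearly framed as a
// hypothesis (with alternatives), it does not ship.
function canShip(s: AiStatement): boolean {
  if (s.label === "hypothesis") {
    return (s.alternatives?.length ?? 0) > 0;
  }
  return s.evidenceItemIds.length > 0;
}
```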

3) Guardrails: What We Refuse to Do

Some features are “cool” and also deeply harmful. We’re not building those.

No hallucinated citations

AI must not invent sources. If the model can’t cite it, it must say “I don’t know.” (See the sketch after these guardrails for what this check can look like.)

No deepfake evidence

We don’t generate “evidence media.” If we ever use synthetic imagery (e.g., diagrams), it is labeled as such and never mixed with primary documentation.

No sensitive location enablement

We don’t publish or infer access routes to protected areas. Some coordinates are intentionally imprecise.

No automated “truth verdicts”

AI may propose competing explanations—but humans and evidence decide what’s supported.
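The “no hallucinated citations” guardrail could be enforced with a check like the sketch below, assuming outputs are validated against the set of evidence items that actually exist. AiOutput and rejectHallucinatedCitations are illustrative names, not the real moderation pipeline.

```ts
// Illustrative sketch only: an AI output ships only if every citation
// resolves to a known evidence item.
interface AiOutput {
  text: string;
  citedEvidenceIds: string[];
}

function rejectHallucinatedCitations(
  output: AiOutput,
  knownEvidenceIds: Set<string>
): { ok: true } | { ok: false; unknownIds: string[] } {
  const unknownIds = output.citedEvidenceIds.filter(
    (id) => !knownEvidenceIds.has(id)
  );
  return unknownIds.length === 0 ? { ok: true } : { ok: false, unknownIds };
}
```

Anything flagged here goes back with an “I don’t know” rather than an invented source.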

4) Human Oversight & Appeals

AI assists. Humans own outcomes. Any AI-assisted claim, label change, or moderation action must be reviewable and reversible.

Appeals in one sentence

If you think an AI output is wrong, misleading, or unsafe, you can flag it—and we’ll review it with receipts.

Review trail

Who approved it, when, and why—visible in revision history.

Reason codes

Moderation and relabeling actions include plain-language reasons.

Corrections log

Corrections are public and normal—signal, not shame.
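Pulled together, a single review-trail entry might hold something like the sketch below; the shape (ReviewRecord, ReasonCode) is an illustrative assumption, not the platform's actual data model.

```ts
// Illustrative sketch only: one reviewable, reversible record per
// AI-assisted claim, label change, or moderation action.
type ReasonCode =
  | "relabeled-as-hypothesis"
  | "missing-citation"
  | "sensitive-location"
  | "corrected-after-appeal";

interface ReviewRecord {
  targetId: string;      // the claim, label change, or moderation action
  aiAssisted: boolean;
  approvedBy: string;    // a human reviewer, never the model
  approvedAt: string;    // ISO 8601 timestamp
  reasonCode: ReasonCode;
  reasonText: string;    // plain-language explanation shown to users
  correction?: {         // corrections are public and normal
    correctedAt: string;
    note: string;
  };
}
```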

5) Data, Privacy, and Model Training

We minimize collection and avoid creepy tracking. The product is built to prefer quality signals (evidence completeness, provenance quality, review status) over invasive surveillance metrics.

Defaults

  • We do not sell personal data.
  • We do not use private user uploads to train foundation models by default.
  • If we ever introduce optional training or research programs, they will be explicit opt-in and documented here.
  • We log only what we need to run and secure the platform (abuse prevention, reliability, safety).
Read the full policy: Privacy.

6) Standards Alignment (the grown-up part)

We draw on established trustworthy-AI principles—especially around transparency, accountability, robustness, and human oversight. Useful references include:

NIST AI Risk Management Framework

Risk-based governance across the AI lifecycle (a living framework).

OECD AI Principles

Human-centered values, transparency, robustness, and accountability.

UNESCO Recommendation on the Ethics of AI

Human rights, transparency, fairness, and the importance of human oversight.

EU AI Act transparency obligations

Disclosure rules for certain AI interactions and generated content.

7) FAQ

Does AI decide what’s true?

No. AI can propose hypotheses and summaries, but the platform is designed so evidence and counter-evidence remain primary. Claims should be falsifiable and sourced.

Can AI generate images or “new evidence”?

Not as evidence. We may use AI to assist with non-evidentiary materials (like UI helper text or diagrams), but anything synthetic will be labeled and separated from primary documentation.

How do you handle sensitive sites?

Some coordinates are intentionally imprecise, and we remove content that enables trespass or looting. Safety and heritage protection win.

How do I report an AI problem?

Use the “Report” action on the relevant page when it is available; otherwise, email safety@evidenceatlas.com with the link and a short description.

Copy-safe policy line

“AI assists our research workflows, but sources, provenance, and human review govern what’s published.”

Get Founding Access

Want the AI tools when they go live? Join the beta list. Early users help shape standards, labels, and guardrails.

No spam. Just launches, dossier drops, and Field Dispatch updates.