Transparency
AI Disclosure
Unresolved Signals is an AI-native production. AI tools are integral to how the show researches, writes, produces, and distributes content. This page documents every AI system in the production stack, what it does, and where human oversight applies.
We publish this disclosure because the audience deserves to know how the show is made. Transparency about AI use is a prerequisite for trust, and trust is the foundation of investigative journalism.
Research Layer
Google NotebookLM
Deep research and conversational analysis engine. Processes primary-source documents across multiple notebooks organized by source collection. Used to query the corpus, cross-reference findings between documents, and surface patterns across thousands of pages.
Claude (Anthropic)
Orchestration layer for editorial planning, script generation, cross-reference query design, source verification, and quality control. Synthesizes NotebookLM research outputs into episode structures. Also used for adversarial fact-checking: a second instance attempts to disprove each finding before publication.
Production Layer
ElevenLabs
AI voice generation for narration and bumper audio. The show uses synthetic voices with deliberate delivery control (pacing, pauses, emphasis); the voices do not impersonate real people. AI-generated narration is disclosed in every episode's closing credits.
Human Editorial Layer
AI processes the corpus. Humans decide what matters, what can responsibly be published, and how to frame uncertainty. Every script passes through human editorial review before production. The human editorial functions include:
Editorial judgment — deciding which findings are significant, which require more corroboration, and which are ready for publication.
Ethical review — evaluating the impact of publishing specific findings, protecting source identities, and treating living persons with appropriate care.
Framing uncertainty — ensuring the show clearly distinguishes between confirmed findings, probable connections, and open questions.
Source tiering — assigning provenance ratings to all cited sources (Tier 1: government/military records, Tier 2: verified historical documents and credible journalism, Tier 3: contested provenance or limited corroboration).
Final approval — no episode publishes without human sign-off on the complete script and source bibliography.
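To make the tiering scheme above concrete, here is a minimal sketch of how it could be modeled in code. The class and field names are hypothetical illustrations, not the show's actual tooling; only the three-tier scheme itself comes from this page.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    # Tier numbers follow the scheme above: lower number, stronger provenance.
    GOVERNMENT_RECORD = 1     # government/military records
    VERIFIED_HISTORICAL = 2   # verified historical documents, credible journalism
    CONTESTED = 3             # contested provenance or limited corroboration

@dataclass
class Source:
    title: str
    tier: Tier

def needs_more_corroboration(sources: list[Source]) -> bool:
    """A claim supported only by Tier 3 sources is not ready to publish."""
    return all(s.tier == Tier.CONTESTED for s in sources)

claim_sources = [
    Source("Declassified cable", Tier.GOVERNMENT_RECORD),
    Source("Forum transcript", Tier.CONTESTED),
]
print(needs_more_corroboration(claim_sources))  # False: a Tier 1 source anchors the claim
```

The point of an ordered scale like this is that corroboration rules become mechanical checks, while the judgment calls (which tier a given document deserves) stay with human editors.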
What AI Cannot Do
AI systems hallucinate. They generate plausible-sounding text that may contain fabricated details, misattributed quotes, or invented source references. The show's methodology is designed to catch this: every factual claim must trace to a specific primary source document, and the adversarial verification layer specifically targets hallucination risk.
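The trace-every-claim rule above can be sketched as a simple gate. The document IDs, claim texts, and helper names below are illustrative assumptions, not the show's actual pipeline; the only thing taken from this page is the rule that a claim with no primary-source reference is blocked.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_ids: list[str] = field(default_factory=list)  # primary-source document IDs

def unverifiable_claims(claims: list[Claim]) -> list[Claim]:
    """Flag any claim that does not trace to at least one source document."""
    return [c for c in claims if not c.source_ids]

script = [
    Claim("Station logs stop in March 1962.", ["example-archive/box-12"]),
    Claim("The operator was reassigned the next week."),  # no source: blocked
]
for claim in unverifiable_claims(script):
    print("BLOCKED:", claim.text)
```

A check like this catches the most common hallucination failure (a confident claim with no citation); it cannot, by itself, catch a fabricated citation, which is why the adversarial verification layer exists.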
AI systems also lack judgment about what matters. A cross-reference between two documents is a data point. Whether that data point is meaningful, coincidental, or misleading requires human interpretation. The show presents its evidence chains and lets the audience evaluate them, but the editorial decision about what to include and how to frame it remains human.
Corrections
If an error in AI-generated content slips past human review, the correction is published on the relevant episode page and logged in the correction record. The show takes accuracy seriously regardless of whether an error originated with an AI system or a human decision.
Report errors to tips@unresolvedsignals.com.