Article

The Human Is the Model: Prior Knowledge as the Determinant of LLM-Augmented Reasoning

Bayesian inference begins with a prior. The quality of the posterior is bounded by the quality of the prior. So is the quality of every answer you accept from a language model.

  • Published 2026-04-25
  • Author: Yves Ketemwabi Shamavu
  • Topics: LLM, Bayesian, Cognitive Science, AI Safety, Epistemics

Overview

A short essay arguing that human-LLM interaction is structurally analogous to Bayesian inference: the user supplies the prior, the model supplies likelihood-like text, and the quality of the posterior is bounded by the quality of the prior. Synthesizes findings from cognitive psychology, human-computer interaction, and AI safety.
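The analogy can be made concrete with a toy Bayes update. This is an illustrative sketch, not code from the article: the hypothesis space, the specific prior and likelihood numbers, and the `posterior` helper are all hypothetical, chosen only to show that the same model output combined with different user priors yields different posteriors.

```python
# Sketch of the essay's framing: the user supplies the prior, the model's
# output acts like a likelihood, and Bayes' rule combines them. All numbers
# and names here are illustrative assumptions.

def posterior(prior: dict, likelihood: dict) -> dict:
    """Bayes' rule over a discrete hypothesis space."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# The same fluent model answer, read as evidence for the claim.
likelihood = {"claim_true": 0.8, "claim_false": 0.2}

# Two users, two priors: one with domain knowledge, one with none.
informed = posterior({"claim_true": 0.3, "claim_false": 0.7}, likelihood)
naive = posterior({"claim_true": 0.5, "claim_false": 0.5}, likelihood)

print(informed["claim_true"])  # ~0.63: the informed prior tempers the model
print(naive["claim_true"])     # 0.80: the flat prior just echoes the model
```

The naive user's posterior is simply the model's confidence, while the informed user's prior materially reshapes it, which is the sense in which the posterior's quality is bounded by the prior's.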

The prerendered article page is intentionally descriptive. It gives crawlers and link preview systems enough plain HTML to understand the topic, the technical scope, and the reason the article exists before the full Flutter reading experience loads.

Key themes

The article focuses on LLMs, Bayesian reasoning, cognitive science, AI safety, and epistemics. The complete in-app essay expands on the engineering tradeoffs, implementation details, and practical lessons behind the topic.

  • LLM
  • Bayesian
  • Cognitive Science
  • AI Safety
  • Epistemics
  • Human-AI Interaction