Beta. Content is under active construction and has not been peer-reviewed. Report errors on GitHub.


Editorial Principles

How TheoremPath treats knowledge, uncertainty, fairness, and systems. Six intellectual lenses with scope conditions: Simon for bounded intelligence, Pearl for causality, Meadows for systems, Ostrom for governance, Rawls for fairness, Taleb for uncertainty.


Core Commitments

  • We prefer source-backed reasoning over overconfident storytelling.
  • We care about uncertainty, tail risk, and optionality.
  • We care about fairness, institutional design, and what rules would look like behind a veil of ignorance.
  • We distinguish canonical concepts from editorial interpretation.
  • We treat systems as feedback structures, not just isolated components.
Proposition

Editorial Commitments

Statement

TheoremPath commits to five editorial standards:

  1. Source-backed reasoning. Every claim cites a specific result, chapter, or dataset. Narrative without evidence is excluded.
  2. Uncertainty awareness. Distributional assumptions are stated. Results that depend on thin tails are labeled. Claims about generalization beyond training data are flagged as empirical observations, not theorems.
  3. Fairness and institutional design. When discussing systems that allocate resources or opportunities, we ask what rules would be chosen behind a veil of ignorance.
  4. Canonical vs. editorial separation. The canonical meaning of a concept is stated first. If the site offers an interpretation, it is labeled "editorial" and never presented as the standard reading.
  5. Systems thinking. Feedback loops, leverage points, and unintended consequences are analyzed. Single-variable explanations are treated with suspicion.

Intuition

These commitments constrain what the site says and how it says it. They are not aspirational. They are filters applied during content review.

Proof Sketch

These are axioms, not theorems. They are chosen, not derived. The justification is practical: content that violates these standards has historically been less useful, less accurate, and harder to maintain.

Why It Matters

Explicit editorial commitments make review decisions reproducible. When two editors disagree about whether a claim belongs on the site, these commitments provide a shared standard.

Failure Mode

Commitments become performative when they are stated but not enforced. The risk is that these principles become decoration rather than active constraints. The mitigation is the content linting pipeline: specific rules (no em dashes, no filler phrases, theorem assumptions required) enforce a subset of these commitments mechanically.
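A rule like this can be enforced with a few lines of code. The sketch below is illustrative, not the site's actual pipeline; the rule names and the filler-phrase list are assumptions chosen for the demo.

```python
# Minimal sketch of a line-based content linter. Rule names and the
# filler-phrase list are hypothetical examples, not the real pipeline.
FILLER_PHRASES = ["needless to say", "it goes without saying", "at the end of the day"]

def lint(text: str) -> list:
    """Return (line_number, rule, detail) tuples for each violation."""
    violations = []
    for i, line in enumerate(text.splitlines(), start=1):
        if "\u2014" in line:  # em dash (U+2014)
            violations.append((i, "no-em-dash", line.strip()))
        for phrase in FILLER_PHRASES:
            if phrase in line.lower():
                violations.append((i, "no-filler", phrase))
    return violations
```

Because the checks are mechanical, a reviewer cannot quietly waive them; a page either passes the linter or it does not.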

Six Intellectual Lenses

Each lens below is a tool with a defined scope. None is the site's identity.

Herbert Simon: Bounded Intelligence

Scope. Simon is used for understanding agents that optimize under constraints: limited compute, limited memory, limited time. Satisficing, heuristic search, bounded rationality, and the design of AI agents that must act without full information.

Not used for. Simon is not used as a blanket excuse for irrationality. Bounded rationality is a precise concept about computational constraints, not a claim that humans are stupid.

Judea Pearl: Causality

Scope. Pearl is used for distinguishing correlation from causation: do-calculus, structural causal models, counterfactuals, intervention vs. observation. Applied when evaluating whether a claimed effect is causal or merely associational.

Not used for. Pearl is not used to dismiss all observational studies. Observational evidence under identified causal assumptions is valid. The framework clarifies when causal claims are justified, not that they never are.
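The intervention-vs-observation distinction can be made concrete with a small simulation. This sketch is illustrative (it is not from the text above): a confounder Z drives both X and Y, so X and Y correlate under observation, but setting X by intervention, the do-operation, removes the association.

```python
import numpy as np

# Confounded structure: Z -> X and Z -> Y, with no arrow X -> Y.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)            # confounder
x_obs = z + rng.normal(size=n)    # X caused by Z
y_obs = z + rng.normal(size=n)    # Y caused by Z, not by X

obs_corr = np.corrcoef(x_obs, y_obs)[0, 1]   # strong association

# do(X): X is set independently of Z, as in a randomized experiment.
x_do = rng.normal(size=n)
y_do = z + rng.normal(size=n)     # Y's mechanism is unchanged
do_corr = np.corrcoef(x_do, y_do)[0, 1]      # near zero
```

The observational correlation is large while the interventional one vanishes, which is exactly the gap a causal claim must close before it is justified.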

Donella Meadows: Systems

Scope. Meadows is used for analyzing feedback loops, leverage points, and intervention depth. Applied when a system's behavior cannot be explained by any single component: training dynamics, platform ecosystems, research incentive structures.

Not used for. Meadows is not used to claim that "everything is connected" as a substitute for precise analysis. Systems thinking supplements formal models; it does not replace them.

Elinor Ostrom: Governance

Scope. Ostrom is used for analyzing shared resources and institutional design: open-source ecosystems, benchmark governance, data commons, compute sharing. The design principles provide a diagnostic checklist for governance arrangements.

Not used for. Ostrom is not used to argue that all commons succeed or that privatization is always wrong. The principles describe conditions for success, not guarantees.

John Rawls: Fairness

Scope. Rawls is used for evaluating rules and institutions from behind a veil of ignorance: if you did not know your position in the system, what rules would you choose? Applied to ML fairness, resource allocation, and evaluation methodology.

Not used for. Rawls is not used as a policy prescription engine. The veil of ignorance is a thought experiment for testing rules, not a mechanism for deriving specific policies.

Nassim Taleb: Uncertainty

Scope. Taleb is used for reasoning about fat tails, optionality, bounded downside, and ruin avoidance. Applied when standard statistical tools assume thin tails and the assumption may not hold. See fat tails and convex tinkering.

Not used for. Taleb is not used to dismiss all formal models or to claim that prediction is always impossible. Many domains are well-modeled by thin-tailed distributions. The question is always whether the tail assumption holds for the specific problem.
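Whether the tail assumption holds is an empirical question one can probe directly. The sketch below is a demo under assumed parameters (Pareto with tail index 1.2 versus a Gaussian, both chosen for illustration): under fat tails a single observation can carry a visible share of the sample sum, which essentially never happens under thin tails.

```python
import numpy as np

# Compare the share of the sample sum contributed by the single largest
# observation. Distributions and parameters are assumptions for the demo.
rng = np.random.default_rng(42)
n = 10_000

gauss = np.abs(rng.normal(size=n))        # thin-tailed
pareto = rng.pareto(1.2, size=n) + 1.0    # fat-tailed: alpha < 2, infinite variance

gauss_share = gauss.max() / gauss.sum()   # tiny: no single point dominates
pareto_share = pareto.max() / pareto.sum()  # one point can dominate
```

When the maximum's share refuses to shrink as the sample grows, sample means and variances are unreliable, and thin-tailed tools quietly give wrong answers.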

Content Rules

  • Every Taleb-related page must include a Confusion block addressing common misreadings.
  • Every thinker reference includes scope conditions: where the idea applies and where it does not.
  • Canonical meaning is stated first. Common misuse is stated second.
  • Editorial interpretation is always labeled as such, never presented as fact.
  • No thinker is the site's personality. Each is a tool with a defined scope.

Common Failure Modes

Watch Out

Hero worship disguised as methodology

Citing a thinker's name is not an argument. "Taleb says X" or "Ostrom showed Y" is only valid when followed by the specific claim, its assumptions, and its scope conditions. Intellectual hero worship replaces analysis with authority. This site cites ideas, not people.

Watch Out

Ideology cosplay

Adopting a thinker's vocabulary without their rigor produces content that sounds sophisticated but says nothing. Using "antifragile" without specifying the convexity condition, or "veil of ignorance" without specifying the choice set, is ideology cosplay. Every borrowed term must be defined precisely on first use.

Watch Out

Vague systems language as a substitute for models

Saying "it is a complex system with many feedback loops" is not an explanation. It is a description of ignorance. Systems thinking is useful when it identifies specific loops, specific leverage points, and specific predictions. When it produces only vague gestures at interconnection, it has failed.

Exercises

ExerciseCore

Problem

An ML system deployed in production shows unexpected behavior: classification accuracy is high on the test set but complaints from users in a specific demographic are increasing. Which editorial lens would you apply first? For each of the six lenses, write one specific question it would generate about this situation.

References

Canonical:

  • Simon, The Sciences of the Artificial (1996), Chapters 2-4
  • Pearl, Causality: Models, Reasoning, and Inference (2009), Chapters 1, 3, 7
  • Meadows, Thinking in Systems: A Primer (2008), Chapters 1-3, 6

Current:

  • Ostrom, Governing the Commons (1990), Chapters 1-3
  • Rawls, A Theory of Justice (1971, revised 1999), Chapters 1-3
  • Taleb, Statistical Consequences of Fat Tails (2020), Chapters 1-5
  • Taleb, Antifragile (2012), Chapters 12, 15

Last reviewed: April 2026
