Prerequisite chain
Prerequisites for Maximum A Posteriori (MAP) Estimation
Topics you need before working through Maximum A Posteriori (MAP) Estimation. Direct prerequisites are listed first; transitive prerequisites (the chain reachable through them) follow.
Direct prerequisites (4)
- Maximum Likelihood Estimation: Theory, Information Identity, and Asymptotic Efficiency (layer 0B, tier 1)
- Bayesian Estimation (layer 0B, tier 2)
- Common Probability Distributions (layer 0A, tier 1)
- Convex Optimization Basics (layer 1, tier 1)
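To see why these four topics are the direct prerequisites, here is a minimal sketch of MAP estimation for a Gaussian mean with a Gaussian prior (an illustrative example, not part of the chain above; all names and values are hypothetical). The MAP estimate maximizes log-likelihood plus log-prior, which here is a concave problem with a closed form blending the MLE with the prior mean:

```python
import numpy as np

# Model: x_i ~ N(mu, sigma^2) with sigma known; prior: mu ~ N(mu0, tau^2).
# The log-posterior is concave in mu, so the MAP estimate is the
# precision-weighted average of the sample mean (MLE) and the prior mean:
#   mu_MAP = (n/sigma^2 * xbar + 1/tau^2 * mu0) / (n/sigma^2 + 1/tau^2)

def map_gaussian_mean(x, sigma=1.0, mu0=0.0, tau=1.0):
    n = len(x)
    xbar = float(np.mean(x))
    precision_lik = n / sigma**2      # information from the data (MLE side)
    precision_prior = 1.0 / tau**2    # information from the prior (Bayesian side)
    return (precision_lik * xbar + precision_prior * mu0) / (precision_lik + precision_prior)

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)

mle = float(np.mean(x))          # plain maximum likelihood estimate
map_est = map_gaussian_mean(x)   # shrunk toward the prior mean mu0 = 0
```

The MLE, Bayesian prior, and convexity of the negative log-posterior each show up explicitly here, which is why those topics sit immediately upstream of MAP estimation.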
Reachable through the chain (100)
These topics are not directly cited as prerequisites but are reached transitively by following the chain upward. Working through the direct prerequisites pulls these in.
- Sets, Functions, and Relations (layer 0A, tier 1)
- Basic Logic and Proof Techniques (layer 0A, tier 2)
- Exponential Function Properties (layer 0A, tier 1)
- Integration and Change of Variables (layer 0A, tier 2)
- Measure-Theoretic Probability (layer 0B, tier 1)
- Cardinality and Countability (layer 0A, tier 2)
- Kolmogorov Probability Axioms (layer 0A, tier 1)
- Random Variables (layer 0A, tier 1)
- Zermelo-Fraenkel Set Theory (layer 0A, tier 2)
- Differentiation in Rⁿ (layer 0A, tier 1)
- Vectors, Matrices, and Linear Maps (layer 0A, tier 1)
- Continuity in Rⁿ (layer 0A, tier 1)
- Metric Spaces, Convergence, and Completeness (layer 0A, tier 1)
- Central Limit Theorem (layer 0B, tier 1)
- Law of Large Numbers (layer 0B, tier 1)
- Expectation, Variance, Covariance, and Moments (layer 0A, tier 1)
- Joint, Marginal, and Conditional Distributions (layer 0A, tier 1)
- Triangular Distribution (layer 0A, tier 2)
- Borel-Cantelli Lemmas (layer 0B, tier 1)
- Modes of Convergence of Random Variables (layer 0B, tier 1)
- Characteristic Functions (layer 1, tier 1)
- Moment Generating Functions (layer 0A, tier 2)
- KL Divergence (layer 1, tier 1)
- Information Theory Foundations (layer 0B, tier 2)
- Distance Metrics Compared (layer 1, tier 2)
- Non-Euclidean and Hyperbolic Geometry (layer 1, tier 2)
- Total Variation Distance (layer 1, tier 1)
- Method of Moments (layer 0B, tier 2)
- Radon-Nikodym and Conditional Expectation (layer 0B, tier 1)
- Shrinkage Estimation and the James-Stein Estimator: Inadmissibility, SURE, and Brown's Characterization (layer 0B, tier 1)
- Cramér-Rao Bound: Information Inequality, Achievability, and Sharper Variants (layer 0B, tier 1)
- Fisher Information: Curvature, KL Geometry, and the Natural Gradient (layer 0B, tier 1)
- Basu's Theorem (layer 0B, tier 3)
- Sufficient Statistics and Exponential Families (layer 0B, tier 2)
- Positive Semidefinite Matrices (layer 0A, tier 1)
- Eigenvalues and Eigenvectors (layer 0A, tier 1)
- Matrix Operations and Properties (layer 0A, tier 1)
- Linear Independence (layer 0A, tier 1)
- Inner Product Spaces and Orthogonality (layer 0A, tier 1)
- Matrix Norms (layer 0A, tier 1)
- Minimax Lower Bounds: Le Cam, Fano, Assouad, and the Reduction to Testing (layer 3, tier 1)
- Concentration Inequalities (layer 1, tier 1)
- Common Inequalities (layer 0A, tier 1)
- Martingale Theory (layer 0B, tier 2)
- Skewness, Kurtosis, and Higher Moments (layer 1, tier 1)
- Empirical Processes and Chaining (layer 3, tier 2)
- Rademacher Complexity (layer 3, tier 1)
- Empirical Risk Minimization (layer 2, tier 1)
- High-Dimensional Probability (Vershynin) (layer 2, tier 1)
- Cramér-Wold Theorem (layer 1, tier 2)
- Loss Functions Catalog (layer 1, tier 1)
- Logistic Regression (layer 1, tier 1)
- Dynamic Programming (layer 0A, tier 1)
- Graph Algorithms Essentials (layer 0A, tier 2)
- Greedy Algorithms (layer 0A, tier 2)
- GraphSLAM and Factor Graphs (layer 3, tier 2)
- Inverse and Implicit Function Theorem (layer 0A, tier 2)
- The Jacobian Matrix (layer 0A, tier 1)
- Submodular Optimization (layer 3, tier 3)
- Taylor Expansion (layer 0A, tier 1)
- The Hessian Matrix (layer 0A, tier 1)
- Vector Calculus Chain Rule (layer 0A, tier 1)
- Data Preprocessing and Feature Engineering (layer 1, tier 1)
- Linear Regression (layer 1, tier 1)
- The Elements of Statistical Learning (Hastie, Tibshirani, Friedman) (layer 0B, tier 1)
- Naive Bayes (layer 1, tier 2)
- Robust Statistics and M-Estimators (layer 3, tier 2)
- Minimax and Saddle Points (layer 2, tier 2)
- Convex Duality (layer 2, tier 1)
- Subgradients and Subdifferentials (layer 1, tier 1)
- Winsorization (layer 1, tier 3)
- Order Statistics (layer 1, tier 2)
- Sequences and Series of Functions (layer 0A, tier 2)
- Understanding Machine Learning (Shalev-Shwartz, Ben-David) (layer 1, tier 1)
- VC Dimension (layer 2, tier 1)
- Counting and Combinatorics (layer 0A, tier 2)
- Hypothesis Classes and Function Spaces (layer 2, tier 1)
- PAC Learning Framework (layer 1, tier 1)
- Uniform Convergence (layer 2, tier 1)
- Adaptive Learning Is Not IID (layer 3, tier 2)
- Bernstein Inequality (layer 2, tier 1)
- Bennett's Inequality (layer 2, tier 1)
- Chernoff Bounds (layer 1, tier 1)
- Hoeffding's Lemma (layer 1, tier 1)
- Realizability Assumption (layer 2, tier 1)
- Loss Functions (layer 1, tier 2)
- Slud's Inequality (layer 2, tier 2)
- Bias-Complexity Tradeoff (layer 2, tier 2)
- No-Free-Lunch Theorem (layer 2, tier 2)
- Glivenko-Cantelli Theorem (layer 2, tier 2)
- McDiarmid's Inequality (layer 3, tier 1)
- Sub-Gaussian Random Variables (layer 2, tier 1)
- Epsilon-Nets and Covering Numbers (layer 3, tier 1)
- Contraction Inequality (layer 3, tier 2)
- Sub-Exponential Random Variables (layer 2, tier 1)
- Chi-Squared Concentration (layer 2, tier 1)
- Symmetrization Inequality (layer 3, tier 1)
- Asymptotic Statistics: M-Estimators, Delta Method, LAN (layer 0B, tier 1)
- Measure Concentration and Geometric Functional Analysis (layer 3, tier 1)
- Stochastic Processes for ML (layer 2, tier 2)