Unlock: Bayesian Linear Regression
Gaussian prior, Gaussian likelihood, Gaussian posterior. The full posterior is derived by completing the square in the exponent: the posterior mean equals the ridge estimator, the predictive distribution decomposes into irreducible (noise) plus epistemic variance, and the marginal likelihood gives a closed-form criterion for hyperparameter selection. A worked numeric example with three data points carries the algebra end to end.
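The quantities named above can be sketched numerically. This is a minimal illustration, not the page's own worked example: the prior w ~ N(0, α⁻¹I), noise precision β, and the three data points below are assumptions chosen for demonstration.

```python
import numpy as np

alpha, beta = 1.0, 4.0                      # assumed prior precision and noise precision
X = np.array([[1.0, 0.0],                   # three illustrative data points, two features
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([0.1, 0.9, 2.1])

# Posterior from completing the square in the exponent:
#   S_N = (alpha*I + beta*X^T X)^{-1},  m_N = beta * S_N X^T y
S_N = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)
m_N = beta * S_N @ X.T @ y

# Posterior mean coincides with the ridge estimator at lambda = alpha/beta
lam = alpha / beta
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
assert np.allclose(m_N, w_ridge)

# Predictive variance at a new input x*:
# irreducible noise 1/beta plus epistemic term x*^T S_N x*
x_star = np.array([1.0, 3.0])
pred_var = 1.0 / beta + x_star @ S_N @ x_star
```

The predictive variance here always exceeds the noise floor 1/β, and the epistemic term shrinks as more data accumulate in `X`.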
Prerequisites
- Conjugate Priors (Infrastructure)
- Linear Regression (Foundations)
- Maximum A Posteriori (MAP) Estimation (Infrastructure)
- Maximum Likelihood Estimation: Theory, Information Identity, and Asymptotic Efficiency (Infrastructure)
- Ridge Regression (Foundations)
- The Multivariate Normal Distribution (Infrastructure)
- Bayesian Estimation (Infrastructure)