Probabilistic Model Selection

Probabilistic model selection compares candidate models by asking how well each model represents the observed data under an explicit probabilistic framework. The focus is not only on fit, but also on whether the model strikes a good balance among explanatory power, predictive usefulness, and complexity.

Common tools

Common criteria include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Minimum Description Length (MDL). Lower values indicate a better trade-off between model fit and complexity under the chosen criterion.
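As a minimal sketch of how such criteria score a fitted model, the snippet below computes AIC (2k − 2 ln L) and BIC (k ln n − 2 ln L) from a model's maximized log-likelihood. The two candidate models, their parameter counts, and log-likelihood values are invented for illustration.

```python
import math

def aic(log_likelihood, k):
    # Akaike Information Criterion: 2k - 2 ln L
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian Information Criterion: k ln n - 2 ln L
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical example: two candidate models fit to n = 100 points.
# Model A: 3 parameters, maximized log-likelihood -210.0
# Model B: 5 parameters, maximized log-likelihood -208.5
n = 100
scores = {
    "A": aic(-210.0, 3),  # 426.0
    "B": aic(-208.5, 5),  # 427.0
}
best = min(scores, key=scores.get)  # lower is better -> "A"
```

Note that model B fits slightly better (higher log-likelihood) but is penalized for its two extra parameters, so the simpler model A wins under AIC here.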

Why it differs from hypothesis testing

Hypothesis testing typically asks whether a null model is sufficiently incompatible with the observed data. Probabilistic model selection asks a different question: among several plausible models, which one is better supported or more useful under the chosen criterion?

In this garden

This note sits between broad inferential concepts and specific criteria. It helps connect statistical inference to practical model-comparison tools.

See also: Statistical inference, information criteria, Hypothesis test, MOC Statistics and Inference, MOC Projects and Research Threads