
Publication date: 11/10/2021

Whole Model Test

The Whole Model Test report shows whether the model fits better than constant response probabilities. This report is analogous to the Analysis of Variance report for a continuous response model. It is a likelihood ratio Chi-square test that evaluates how well the categorical model fits the data.

The negative sum of the natural logs of the observed probabilities is called the negative log-likelihood (–LogLikelihood). For categorical data, the negative log-likelihood plays the same role that sums of squares play for continuous data: twice the difference between the negative log-likelihoods of the model with equal probabilities and the model fitted to the data is a Chi-square statistic. This test statistic examines the hypothesis that the x variable has no effect on the responses.
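As a rough sketch (not JMP's implementation), the negative log-likelihood for a binary response can be computed directly from the probabilities assigned to the outcomes that actually occurred. The response vector and fitted rate below are hypothetical.

```python
# Minimal sketch of -LogLikelihood for a binary response.
# The data are hypothetical, not from any JMP example.
import math

def neg_log_likelihood(y, p):
    """Negative sum of natural logs of the probabilities
    assigned to the outcomes that actually occurred."""
    return -sum(math.log(pi if yi == 1 else 1.0 - pi)
                for yi, pi in zip(y, p))

y = [1, 0, 1, 1, 0, 1]           # hypothetical observed responses
p_hat = sum(y) / len(y)          # reduced model: one fixed background rate
reduced = neg_log_likelihood(y, [p_hat] * len(y))
```

A fitted full model would supply a different probability for each row; its negative log-likelihood is never larger than the reduced model's, because the fitting process minimizes it.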

Values of RSquare (U), sometimes denoted R², range from 0 to 1. High values indicate a good model fit but are rare in categorical models.

The Whole Model Test report contains the following columns:


Model

Sometimes called Source.

The Reduced model contains only an intercept.

The Full model contains all of the effects as well as the intercept.

The Difference is the difference of the log-likelihoods of the full and reduced models.


DF

Records the degrees of freedom associated with the model.


–LogLikelihood

Measures variation, sometimes called uncertainty, in the sample.

Full (the full model) is the negative log-likelihood (or uncertainty) calculated after fitting the model. The fitting process involves predicting response rates with a linear model and a logistic response function. This value is minimized by the fitting process.

Reduced (the reduced model) is the negative log-likelihood (or uncertainty) for the case when the probabilities are estimated by fixed background rates. This is the background uncertainty when the model has no effects.

The difference between these two negative log-likelihoods is the reduction due to fitting the model. Two times this value is the likelihood ratio Chi-square test statistic.
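The arithmetic above can be sketched in a few lines. The negative log-likelihood values and the degrees of freedom below are hypothetical, and the p-value computation is a stand-in for JMP's, valid only for the assumed single degree of freedom.

```python
# Minimal sketch of the likelihood ratio Chi-square and its p-value.
# All numeric values are hypothetical.
import math

neg_ll_reduced = 55.45   # -LogLikelihood, reduced model (hypothetical)
neg_ll_full = 48.02      # -LogLikelihood, full model (hypothetical)

difference = neg_ll_reduced - neg_ll_full   # reduction due to the model
chi_square = 2.0 * difference               # likelihood ratio test statistic

# Upper-tail p-value of a chi-square with 1 degree of freedom (assumed);
# for df = 1 the survival function reduces to erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi_square / 2.0))
```

For more than one model degree of freedom, a general chi-square survival function (for example, `scipy.stats.chi2.sf`) replaces the `erfc` shortcut.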

See Likelihood, AICc, and BIC in Fitting Linear Models.


ChiSquare

The likelihood ratio Chi-square test of the hypothesis that the model fits no better than fixed response rates across the whole sample. It is twice the –LogLikelihood for the Difference model: two times the difference between the negative log-likelihood computed with whole-population response rates and the one computed with the fitted response probabilities. See Statistical Details for the Logistic Platform.


Prob>ChiSq

The observed significance probability, often called the p-value, for the Chi-square test. It is the probability, under the null hypothesis, of obtaining a Chi-square value greater than the one computed. Models are often judged significant if this probability is below 0.05.

RSquare (U)

The proportion of the total uncertainty that is attributed to the model fit, defined as the Difference negative log-likelihood value divided by the Reduced negative log-likelihood value. An RSquare (U) value of 1 indicates that the predicted probabilities for events that occur are equal to one: There is no uncertainty in predicted probabilities. Because certainty in the predicted probabilities is rare for logistic models, RSquare (U) tends to be small. See Statistical Details for the Logistic Platform.

Note: RSquare (U) is also known as McFadden’s pseudo R-square.
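A minimal sketch of this ratio, using the same hypothetical negative log-likelihood values as above (not output from JMP):

```python
# RSquare (U), McFadden's pseudo R-square: the Difference
# negative log-likelihood divided by the Reduced negative
# log-likelihood. Values are hypothetical.
neg_ll_reduced = 55.45
neg_ll_full = 48.02

rsquare_u = (neg_ll_reduced - neg_ll_full) / neg_ll_reduced
# Equivalently: 1.0 - neg_ll_full / neg_ll_reduced
```

A value of 0 means the model explains none of the uncertainty; a value of 1 would mean every observed event was predicted with probability one.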


AICc

The corrected Akaike Information Criterion. See Likelihood, AICc, and BIC in Fitting Linear Models.


BIC

The Bayesian Information Criterion. See Likelihood, AICc, and BIC in Fitting Linear Models.
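Both criteria can be sketched from the full-model negative log-likelihood using the standard textbook formulas; this is a hedged illustration, not JMP's code, and the parameter count and sample size below are assumptions.

```python
# Sketch of AICc and BIC from the full-model negative log-likelihood.
# k and n are hypothetical; neg_ll_full matches the earlier sketch.
import math

neg_ll_full = 48.02
k = 2     # estimated parameters: intercept plus one effect (assumed)
n = 100   # number of observations (assumed)

aic = 2.0 * neg_ll_full + 2.0 * k
aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)   # small-sample correction
bic = 2.0 * neg_ll_full + k * math.log(n)
```

Smaller values of either criterion indicate a better trade-off between fit and model complexity when comparing candidate models on the same data.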


Observations

Sometimes called Sum Wgts. The total sample size used in computations. If you specified a Weight variable, this is the sum of the weights.
