The factor side of the model concerns how the x-variables (factors) are used to predict an expected value or probability.
The factors enter the prediction equation as a linear combination of x values and the parameters to be estimated. For a continuous response model, where i indexes the observations and j indexes the parameters, the assumed model for a typical observation, yi, is written

yi = Σj bj xij + ei

where:
yi is the response
xij are functions of the data
ei is an unobservable realization of the random error
bj are unknown parameters to be estimated.
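The model above can be sketched numerically: given a design matrix X whose columns hold the xij and a response vector y, the least-squares estimates of the bj minimize the sum of squared errors. A minimal sketch with NumPy (the data are invented for illustration):

```python
import numpy as np

# Invented example data: an intercept column plus one continuous regressor.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([2.1, 4.0, 5.9, 8.1])  # roughly y = 2x

# Least-squares estimates of the parameters bj:
b, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)

# Fitted values and estimates of the random errors ei:
y_hat = X @ b
e_hat = y - y_hat
print(b)  # intercept and slope, approximately [0.05, 1.99]
```

With an intercept in the model, the residuals e_hat sum to zero, mirroring the usual zero-mean assumption on the random error.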
The way the x’s in the linear model are formed from the factor terms is different for each modeling type. The linear model x’s can also be complex effects such as interactions or nested effects. Complex effects are discussed in detail later.
Continuous factors are placed directly into the design matrix as regressors. If a column is a linear function of other columns, then the parameter for this column is marked zeroed or nonestimable. Continuous factors are centered by their mean when they are crossed with other factors (interactions and polynomial terms). Centering is suppressed if the factor has a Column Property of Mixture or Coding, or if the centered polynomials option is turned off when specifying the model. If there is a coding column property, the factor is coded before fitting.
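The centering rule can be illustrated as follows: the main-effect column for a continuous factor enters the design matrix unchanged, but interaction and polynomial columns are built from mean-centered copies of the factor. A sketch of that construction (the data are invented):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([10.0, 10.0, 20.0, 20.0, 30.0])

# Main-effect columns enter the design matrix as-is:
main_x1, main_x2 = x1, x2

# Crossed (interaction) and polynomial columns are formed from
# mean-centered copies of the continuous factors:
cross = (x1 - x1.mean()) * (x2 - x2.mean())
quad = (x1 - x1.mean()) ** 2

print(cross)  # interaction column
print(quad)   # quadratic column
```

Centering the crossed terms reduces the collinearity between an interaction (or polynomial) column and its parent main-effect columns; suppressing it (as with a Mixture or Coding column property) leaves the raw products in the design matrix.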
Using the JMP Fit Model command and requesting a factorial model for columns A and B produces the following design matrix. Note that A13 in this matrix corresponds to A1–A3 in the previous matrix, and that A13B13 is the product A13*B13 in the current matrix.
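A factorial design matrix of this kind can be sketched with sum-to-zero (effect) coding, one common nominal parameterization: a k-level factor becomes k − 1 columns, and interaction columns are element-wise products of the main-effect columns. The function name and data below are invented for illustration:

```python
import numpy as np

def effect_code(levels, k):
    """Sum-to-zero (effect) coding: a k-level nominal factor becomes
    k-1 columns; level j gets a 1 in column j, and the last level
    gets -1 in every column."""
    X = np.zeros((len(levels), k - 1))
    for row, lev in enumerate(levels):
        if lev == k:                 # last level: -1 across all columns
            X[row, :] = -1.0
        else:                        # level j (1-based): indicator column
            X[row, lev - 1] = 1.0
    return X

# Two three-level factors A and B, one observation per cell:
A = [1, 1, 1, 2, 2, 2, 3, 3, 3]
B = [1, 2, 3, 1, 2, 3, 1, 2, 3]
XA, XB = effect_code(A, 3), effect_code(B, 3)

# Interaction columns are products of the main-effect columns:
XAB = np.hstack([XA[:, [i]] * XB[:, [j]]
                 for i in range(2) for j in range(2)])
intercept = np.ones((9, 1))
X = np.hstack([intercept, XA, XB, XAB])
print(X.shape)  # (9, 9)
```

With one observation per cell, the intercept, two columns per main effect, and four interaction columns give a full-rank 9 x 9 design matrix.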
JMP implements the effective hypothesis tests described by Hocking (1985, 80–89, 163–166), although JMP uses structural rather than cell-means parameterization. Effective hypothesis tests start with the hypothesis desired for the effect and include “as much as possible” of that test. Of course, if there are containing effects with missing cells, then this test will have to drop part of the hypothesis because the complete hypothesis would not be estimable. The effective hypothesis drops as little of the complete hypothesis as possible.
Note that this shows that a test on the β1 parameter is equivalent to testing that the least squares means are the same. But because β1 is not estimable, the hypothesis is not testable, meaning there are no degrees of freedom for it.
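Estimability itself can be checked mechanically: a linear function L·b is estimable exactly when L lies in the row space of the design matrix, that is, when appending L as a row does not raise the rank. A sketch of this standard check, with an invented rank-deficient design:

```python
import numpy as np

def is_estimable(X, L):
    """L.b is estimable iff L lies in the row space of X, i.e.
    appending L as a row of X does not increase the rank."""
    return (np.linalg.matrix_rank(np.vstack([X, L]))
            == np.linalg.matrix_rank(X))

# Rank-deficient design: the third column is the sum of the first two,
# so its parameter cannot be tested on its own.
X = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

print(is_estimable(X, np.array([0.0, 0.0, 1.0])))  # b3 alone: not estimable
print(is_estimable(X, np.array([1.0, 0.0, 1.0])))  # b1 + b3: estimable
```

This is the situation behind a "zeroed" or nonestimable parameter: the hypothesis on that parameter alone carries no degrees of freedom, while certain combinations of parameters remain testable.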
The tests are whole marginal tests, meaning they always go completely across other effects in interactions.
Comparison of GLM and JMP Hypotheses shows the test of the main effect A in terms of the GLM parameters. The first set of columns is the test done by JMP. The second set of columns is the test done by GLM Type IV. The third set of columns is the test equivalent to that by JMP; it is the first two columns that have been multiplied by a matrix:
noting that μ is the expected response at A = 1, μ + α2 is the expected response at A = 2, and μ + α2 + α3 is the expected response at A = 3. Thus, α2 estimates the effect moving from A = 1 to A = 2 and α3 estimates the effect moving from A = 2 to A = 3.
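This parameterization corresponds to a cumulative "step" coding of the ordinal factor: the column for αj is 1 whenever the level is at least j, so each parameter measures the step between adjacent levels. A sketch of that coding (the function name is invented):

```python
import numpy as np

def ordinal_code(levels, k):
    """Cumulative ('step') coding: the column for parameter alpha_j
    is 1 when the level is at least j, so expected responses are
    mu, mu + a2, mu + a2 + a3, ... as the level increases."""
    return np.array([[1.0 if lev >= j else 0.0
                      for j in range(2, k + 1)]
                     for lev in levels])

A = [1, 2, 3]
X = np.hstack([np.ones((3, 1)), ordinal_code(A, 3)])
print(X)
# Rows: A=1 -> [1, 0, 0]; A=2 -> [1, 1, 0]; A=3 -> [1, 1, 1]
# With b = [mu, a2, a3], X @ b = [mu, mu + a2, mu + a2 + a3]
```

Multiplying this matrix by the parameter vector reproduces the expected responses stated above, which is why α2 and α3 read as adjacent-level effects rather than comparisons against a single baseline.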
Summary Information for Nominal Fits (Left) and Ordinal Fits (Right)
Parameter Estimates for Nominal Fits (Left) and Ordinal Fits (Right)
Singularity Details for Nominal Fits (Left) and Ordinal Fits (Right)
Effects Tests for Nominal Fits (Left) and Ordinal Fits (Right)
Least Squares Means for Nominal Fits (Left) and Ordinal Fits (Right)