Shows a report containing the parameter estimates and t tests for the hypothesis that each parameter is zero. See Parameter Estimates.
 • Parameter estimates in the Parameter Estimates report
 • Least squares means in the Least Squares Means Table
JMP Learning Library – Basic Inference - Proportions and Means
Watch a brief video on how to perform a one-way analysis of variance and interpret the F-statistic.
 • This value is the same as the number of rows in the data table under the following conditions: there are no missing values, no excluded rows, and no column assigned to the role of Weight or Freq.
 • This value is the sum of the positive values in the Weight column if there is a column assigned to the role of Weight.
 • This value is the sum of the positive values in the Freq column if there is a column assigned to the role of Freq.
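The three rules above can be sketched as a small function. This is an illustrative sketch only, not JMP internals; the function name and row structure are hypothetical.

```python
# Sketch of how the observations value is determined, assuming a simple
# row structure with optional Weight and Freq values (None = missing).
# The helper and its signature are illustrative, not JMP internals.

def observations(rows, weight=None, freq=None):
    """rows: list of response values. weight/freq: optional parallel lists."""
    if weight is not None:
        # Sum of the positive values in the Weight column
        return sum(w for w in weight if w is not None and w > 0)
    if freq is not None:
        # Sum of the positive values in the Freq column
        return sum(f for f in freq if f is not None and f > 0)
    # Otherwise: the number of rows with no missing value
    return sum(1 for r in rows if r is not None)

print(observations([5.0, 7.1, None, 6.2]))            # missing value dropped → 3
print(observations([5.0, 7.1, 6.2], freq=[2, 3, 1]))  # sum of Freq → 6
```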
Gives the associated degrees of freedom (DF) for each source of variation.
 • The C. Total DF is always one less than the number of observations.
 • The C. Total DF is partitioned into degrees of freedom for the Model and Error:
 ‒ The Model degrees of freedom is the number of parameters (other than the intercept) used to fit the model.
 ‒ The Error DF is the difference between the C. Total DF and the Model DF.
 • The total (C. Total) SS is the sum of the squared differences between the response values and the sample mean. It represents the total variation in the response values.
 • The Error SS is the sum of the squared differences between the fitted values and the actual values. It represents the variability that remains unexplained by the fitted model.
 • The Model SS is the difference between C. Total SS and Error SS. It represents the variability explained by the model.
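The sum-of-squares decomposition above can be verified numerically. This sketch uses made-up data and a one-factor model whose fitted values are the group means.

```python
import numpy as np

# Numerical sketch of the decomposition C. Total SS = Model SS + Error SS
# for a one-factor model. Data are made up.
y = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 10.0])
groups = np.array([0, 0, 0, 1, 1, 1])

fitted = np.array([y[groups == g].mean() for g in groups])  # group means

c_total_ss = ((y - y.mean()) ** 2).sum()   # total variation about the mean
error_ss = ((y - fitted) ** 2).sum()       # variation unexplained by the model
model_ss = c_total_ss - error_ss           # variation explained by the model

print(c_total_ss, error_ss, model_ss)      # 28.0 4.0 24.0
```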
Gives the p-value for the test. The Prob > F value measures the probability of obtaining an F Ratio as large as what is observed, given that all parameters except the intercept are zero. Small values of Prob > F indicate that the observed F Ratio is unlikely. Such values are considered evidence that there is at least one significant effect in the model.
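The F Ratio and Prob > F can be computed directly from the Analysis of Variance quantities. This sketch uses illustrative values; the Model mean square is the Model SS divided by its DF, and likewise for Error.

```python
from scipy import stats

# Sketch of the whole-model F test from Analysis of Variance quantities.
# The numbers are illustrative.
n = 6                       # observations
p = 1                       # model parameters other than the intercept
model_ss, error_ss = 24.0, 4.0

model_df, error_df = p, (n - 1) - p
f_ratio = (model_ss / model_df) / (error_ss / error_df)
prob_f = stats.f.sf(f_ratio, model_df, error_df)   # Prob > F (upper tail)

print(f_ratio, prob_f)
```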
 • Term: Lists the model terms.
 • Estimate: Gives the parameter estimates.
 • Std Error: Gives estimates of the standard errors for each of the estimated parameters.
 • t Ratio: Tests whether the true value of the parameter is zero. The t Ratio is the ratio of the estimate to its standard error. Given the usual assumptions about the model, the t Ratio has a Student’s t distribution under the null hypothesis.
 • Prob>|t|: Lists the p-value for the test that the true parameter value is zero, against the two-sided alternative that it is not.
 • Lower 95%: Shows the lower 95% confidence limit for the parameter estimate.
 • Upper 95%: Shows the upper 95% confidence limit for the parameter estimate.
 • Std Beta: Shows parameter estimates for a regression model where all of the terms have been standardized to a mean of 0 and a variance of 1.
 • VIF: Shows the variance inflation factor for each term in the model. High VIFs indicate a collinearity issue among the terms in the model. The VIF for the ith term, xi, is defined as VIFi = 1/(1 − Ri²), where Ri² is the RSquare, or coefficient of multiple determination, for the regression of xi as a function of the other explanatory variables.
 • Design Std Error: Shows the square roots of the relative variances of the parameter estimates (Goos and Jones, 2011, p. 25). These are the standard errors divided by RMSE.
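The Parameter Estimates quantities can be reproduced for a small regression with ordinary least squares. This is a sketch on made-up data, not JMP's implementation; the variable names are illustrative.

```python
import numpy as np
from scipy import stats

# Sketch of Std Error, t Ratio, Prob>|t|, 95% limits, and VIF for an OLS
# fit with two explanatory variables. Data are simulated.
rng = np.random.default_rng(1)
x1 = rng.normal(size=30)
x2 = rng.normal(size=30)
y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(scale=0.3, size=30)

X = np.column_stack([np.ones(30), x1, x2])      # intercept, x1, x2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_error = 30 - X.shape[1]
mse = resid @ resid / df_error
cov = mse * np.linalg.inv(X.T @ X)

std_error = np.sqrt(np.diag(cov))
t_ratio = beta / std_error                       # estimate / its standard error
prob_t = 2 * stats.t.sf(np.abs(t_ratio), df_error)
tcrit = stats.t.ppf(0.975, df_error)
lower95, upper95 = beta - tcrit * std_error, beta + tcrit * std_error

# VIF for x1: regress x1 on the other explanatory variables
Z = np.column_stack([np.ones(30), x2])
g, *_ = np.linalg.lstsq(Z, x1, rcond=None)
r2 = 1 - ((x1 - Z @ g) ** 2).sum() / ((x1 - x1.mean()) ** 2).sum()
vif_x1 = 1 / (1 - r2)                            # VIF = 1 / (1 - R^2)

print(t_ratio, prob_t, vif_x1)
```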
The Effect Tests report only appears when there are fixed effects in the model. The effect test for a given effect tests the null hypothesis that all parameters associated with that effect are zero. An effect might have only one parameter as for a single continuous explanatory variable. In this case, the test is equivalent to the t test for that term in the Parameter Estimates report. A nominal or ordinal effect can have several associated parameters, based on its number of levels. The effect test for such an effect tests whether all of the associated parameters are zero.
 • Source: Lists the effects in the model.
 • Nparm: Shows the number of parameters associated with the effect. A continuous effect has one parameter. The number of parameters for a nominal or ordinal effect is one less than its number of levels. The number of parameters for a crossed effect is the product of the number of parameters for each individual effect.
 • DF: Gives the degrees of freedom for the effect test.
 • Sum of Squares: Gives the sum of squares for the hypothesis that the effect is zero.
 • F Ratio: Gives the F statistic for testing that the effect is zero. The F Ratio is the mean square for the effect divided by the mean square for error. The mean square for the effect is the sum of squares for the effect divided by its degrees of freedom.
 • Prob > F: Gives the p-value for the effect test.
 • Mean Square: Shows the mean square for the effect, which is the sum of squares for the effect divided by its DF.
 • Effect Leverage emphasis: Each effect has its own report at the top of the Fit Least Squares report window to the right of the Whole Model report. In this case, the report includes a Leverage Plot for the effect.
 • Effect Screening or Minimal Report emphases: The Effect Details report is provided but is initially closed. Click the disclosure icon to show the report.
The red triangle options next to an effect name are described in Description of Effect Options. For certain modeling types, some of these options might not be appropriate and are therefore not available.
 • LSMeans Table: Shows the least squares means for each level of the effect. This option is not enabled for continuous effects.
 • LSMeans Plot: Constructs least squares means plots for nominal and ordinal main effects and their interactions.
 • LSMeans Contrast: Opens the Contrast Specification window, where you can specify and jointly test contrasts of the least squares means.
 • LSMeans Student’s t: Gives tests and confidence intervals for pairwise comparisons of least squares means using Student’s t tests. Note: The significance level applies to individual comparisons and not to all comparisons collectively. The error rate for the collection of comparisons is greater than the error rate for individual tests.
 • LSMeans Tukey HSD: Gives tests and confidence intervals for pairwise comparisons of least squares means using the Tukey-Kramer HSD (Honestly Significant Difference) test (Tukey 1953; Kramer 1956). See LSMeans Student’s t and LSMeans Tukey HSD.
 • LSMeans Dunnett: Gives tests and confidence intervals for comparisons of each level’s least squares mean against the least squares mean for a specified control level.
 • Test Slices
 • Power Analysis: Opens the Power Details window, where you can explore power for values of alpha, sigma, delta, and study size.
Least squares means are values predicted by the model for the levels of a categorical effect where the other model factors are set to neutral values. The neutral value for a continuous effect is defined to be its sample mean. The neutral value for a nominal effect that is not involved in the effect of interest is the average of the coefficients for that effect. The neutral value for an uninvolved ordinal effect is defined to be the first level of the effect in the value ordering.
Least squares means are also called adjusted means or population marginal means. Least squares means can differ from simple means when there are other effects in the model. In fact, it is common for the least squares means to be closer together than the sample means. This situation occurs because of the nature of the neutral values where these predictions are made.
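A small worked example shows both the definition and the "closer together" behavior. This sketch fits a model with one categorical and one continuous effect on made-up, unbalanced data; the least squares mean for each group is the model prediction at the covariate's sample mean.

```python
import numpy as np

# Sketch of least squares means: predictions for each group with the
# continuous covariate set to its sample mean. Data are made up and the
# covariate is unbalanced across groups.
group = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([3.0, 4.0, 5.0, 8.0, 9.0, 10.0])

# Fit y = b0 + b1*I(group=1) + b2*x by least squares
X = np.column_stack([np.ones(6), (group == 1).astype(float), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

x_bar = x.mean()                         # neutral value for the covariate
lsmean_g0 = b[0] + b[2] * x_bar          # prediction for group 0 at x-bar
lsmean_g1 = b[0] + b[1] + b[2] * x_bar   # prediction for group 1 at x-bar

simple_means = [y[group == 0].mean(), y[group == 1].mean()]
# LS means (5.5, 7.5) are closer together than the sample means (4.0, 9.0)
print([lsmean_g0, lsmean_g1], simple_means)
```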
 1 Select Help > Sample Data Library and open Big Class.jmp.
 2 Select Analyze > Fit Model.
 3 Select weight and click Y.
 4 Select age, sex, and height and click Add.
 5 From the Emphasis list, select Effect Screening.
 6 Click Run.
 7 The Effect Details report appears near the bottom of the Fit Least Squares report and is initially closed. Click the disclosure icon next to the Effect Details report title to show the report.
The Effect Details report, shown in Least Squares Mean Table, shows reports for each of the three effects. Least Squares Means tables are given for age and sex, but not for the continuous effect height. Notice how the least squares means differ from the sample means.
Least Squares Mean Table
 • Level: Lists the categorical levels or combinations of levels.
 • Least Sq Mean: Gives an estimate of the least squares mean for each level.
 • Estimability: Displays a warning if a least squares mean is not estimable.
 • Std Error: Gives the standard error of the least squares mean for each level.
 • Lower 95%: Shows the lower 95% confidence limit for the least squares mean.
 • Upper 95%: Shows the upper 95% confidence limit for the least squares mean.
 • Mean: Gives the response sample mean for the given level. This mean differs from the least squares mean if the values for other effects in the model do not balance out across this effect.
This option constructs least squares means (LS Means) plots for nominal and ordinal main effects and their interactions. The Popcorn.jmp sample data table illustrates an interaction between two categorical effects. Least Squares Means Tables and Plots for Two Effects shows the Least Squares Means tables and the corresponding LS Means plots for two categorical effects in the Popcorn.jmp sample data table.
 • Deselect the LSMeans Plot option.
 • Hold the SHIFT key and select the LSMeans Plot option again.
Least Squares Means Tables and Plots for Two Effects
 1 Select Help > Sample Data Library and open Popcorn.jmp.
 2 Select Analyze > Fit Model.
 3 Select yield and click Y.
 4 Select popcorn, oil amt, and batch and click Macros > Full Factorial. Note that the Emphasis changes to Effect Screening.
 5 Click Run.
 6 Click the Effect Details disclosure icon to show the details for the seven model effects.
 7 To transpose the factors in the plot for popcorn*batch, deselect the LSMeans Plot option. Then hold the SHIFT key while you select the LSMeans Plot option again.
LSMeans Plot for Interaction with Factors Transposed shows the popcorn*batch interaction plot with the factors transposed. Compare it with the plot in Least Squares Means Tables and Plots for Two Effects. These plots depict the same information but, depending on your interest, one might be more intuitive than the other.
LSMeans Plot for Interaction with Factors Transposed
A contrast is a linear combination of parameter values. In the Contrast Specification window, you can specify multiple contrasts and jointly test whether they are zero (LSMeans Contrast Specification for age).
Each time you click the + or - button, the contrast coefficients are normalized to make their sum zero and their absolute sum equal to two, if possible. To compare additional levels, click the New Column button. A new column appears in which you define a new contrast. After you are finished, click Done. The Contrast report appears (LSMeans Contrast Report). The overall test is a joint F test for all contrasts.
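The normalization described above can be sketched for the simple case where each "+" level and each "-" level receives equal weight. This is a simplified illustration, not JMP's general algorithm; the function name is hypothetical.

```python
# Sketch of contrast coefficient normalization for the equal-weight case:
# coefficients sum to zero and their absolute values sum to two.

def normalize_contrast(plus_levels, minus_levels, n_levels):
    c = [0.0] * n_levels
    for i in plus_levels:
        c[i] = 1.0 / len(plus_levels)    # "+" levels share weight +1
    for i in minus_levels:
        c[i] = -1.0 / len(minus_levels)  # "-" levels share weight -1
    return c

# Ages 12 and 13 get "+", ages 14 and 15 get "-" (levels indexed 0..3)
c = normalize_contrast([0, 1], [2, 3], 4)
print(c, sum(c), sum(abs(v) for v in c))   # [0.5, 0.5, -0.5, -0.5] 0.0 2.0
```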
The Test Detail report (LSMeans Contrast Report) shows a column for each contrast that you tested. For each contrast, the report gives its estimated value, its standard error, a t ratio for a test of that single contrast, the corresponding p-value, and its sum of squares.
The Parameter Function report (LSMeans Contrast Report) shows the contrasts that you specified expressed as linear combinations of the terms of the model.
 1 Select Help > Sample Data Library and open Big Class.jmp.
 2 Select Analyze > Fit Model.
 3 Select weight and click Y.
 4 Select age, sex, and height, and click Add.
 5 Select age in the Select Columns list, select height in the Construct Model Effects list, and click Cross.
 6 Click Run.
 7 From the red triangle menu next to age, select LSMeans Contrast.
LSMeans Contrast Specification for age
 8 Click “+” for the ages 12 and 13.
 9 Click “-” for ages 14 and 15.
 10 Note that there is a text box next to the continuous effect height. The default value is the mean of the continuous effect.
 11 Click Done.
 12 Open the Test Detail and Parameter Function reports.
The Contrast report is shown in LSMeans Contrast Report. The test for the contrast is significant at the 0.05 level. You conclude that the predicted weight for age 12 and 13 children differs statistically from the predicted weight for age 14 and 15 children at the mean height of 62.55.
LSMeans Contrast Report
The LSMeans Student’s t and LSMeans Tukey HSD (honestly significant difference) options test pairwise comparisons of model effects.
 • The LSMeans Student’s t option is based on the usual independent samples, equal variance t test. Each comparison is based on the specified significance level. The overall error rate resulting from conducting multiple comparisons exceeds that specified significance level.
 • The LSMeans Tukey HSD option is based on the Tukey-Kramer HSD test, which adjusts for multiple comparisons so that the error rate for the collection of pairwise comparisons does not exceed the specified significance level.
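The distinction can be illustrated for one pairwise comparison of group means: an ordinary Student's t test gives a per-comparison p-value, while the Tukey-Kramer adjustment uses the studentized range distribution and yields a larger (family-wise) p-value. This is a sketch on made-up balanced data, not JMP's implementation.

```python
import numpy as np
from scipy import stats

# Sketch: compare two of three group means with a Student's t test and
# with a Tukey-Kramer (studentized range) adjustment. Data are made up.
groups = [np.array([5.1, 4.8, 5.5, 5.0]),
          np.array([6.4, 6.1, 6.8, 6.3]),
          np.array([5.2, 5.6, 5.1, 5.4])]

k = len(groups)
n_total = sum(len(g) for g in groups)
df_error = n_total - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error

# Compare group 0 and group 1
diff = groups[1].mean() - groups[0].mean()
se = np.sqrt(mse * (1 / len(groups[0]) + 1 / len(groups[1])))

t_ratio = diff / se
p_student = 2 * stats.t.sf(abs(t_ratio), df_error)    # per-comparison rate
q = abs(diff) / (se / np.sqrt(2))                     # studentized range stat
p_tukey = stats.studentized_range.sf(q, k, df_error)  # family-wise rate

print(p_student, p_tukey)   # the Tukey p-value is never smaller
```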
LSMeans Tukey HSD Report shows the LSMeans Tukey report for the effect age in the Big Class.jmp sample data table. (You can obtain this report by running the Fit Model data table script and selecting LSMeans Tukey HSD from the red triangle menu for age.) By default, the report shows the Crosstab Report and the Connecting Letters Report.
LSMeans Tukey HSD Report
In LSMeans Tukey HSD Report, levels 17, 12, 16, 13, and 15 are connected by the letter A. The connection indicates that these levels do not differ at the 0.05 significance level. Also, levels 16, 13, 15, and 14 are connected by the letter B, indicating that they do not differ statistically. However, ages 17 and 14, and ages 12 and 14, are not connected by a common letter, indicating that these two pairs of levels are statistically different.
Bar Chart from LSMeans Differences HSD Connecting Letters Table shows the bar chart for an example based on Big Class.jmp. Run the Fit Model data table script and select LSMeans Tukey HSD from the red triangle menu for age. Then select Save Connecting Letters Table from the LSMeans Differences Tukey HSD report, and run the Bar Chart script in the data table that appears.
Ranks the differences from largest to smallest, giving standard errors, confidence limits, and p-values. Also plots the differences on a bar chart with overlaid confidence intervals.
Gives individual detailed reports for each comparison. For a given comparison, the report shows the estimated difference, standard error, confidence interval, t ratio, degrees of freedom, and p-values for one- and two-sided tests. Also shown is a plot of the t distribution, which illustrates the significance test for the comparison. The area of the shaded portion is the p-value for a two-sided test.
Uses the Two One-Sided Tests (TOST) method to test for a practical difference between the means (Schuirmann, 1987). You must select a threshold difference for which smaller differences are considered practically equivalent. Two one-sided t tests are constructed for the null hypotheses that the true difference exceeds the threshold values. If both tests reject, this indicates that the difference in the means does not exceed either threshold value. Therefore, the groups are considered practically equivalent.
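The TOST procedure can be sketched for two independent samples with a pooled variance. This is an illustration on made-up data with a hypothetical threshold, not JMP's implementation.

```python
import numpy as np
from scipy import stats

# Sketch of the Two One-Sided Tests (TOST) equivalence procedure: both
# one-sided tests must reject for the groups to be declared practically
# equivalent. Data and threshold are made up.
a = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])
b = np.array([10.0, 10.2, 9.9, 10.1, 10.0, 10.3])
threshold = 0.5   # smaller differences are considered practically equivalent

diff = a.mean() - b.mean()
sp2 = (((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()) / (len(a) + len(b) - 2)
se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
df = len(a) + len(b) - 2

# H0: true difference <= -threshold, tested against Ha: difference > -threshold
p_lower = stats.t.sf((diff + threshold) / se, df)
# H0: true difference >= +threshold, tested against Ha: difference < +threshold
p_upper = stats.t.cdf((diff - threshold) / se, df)

equivalent = max(p_lower, p_upper) < 0.05   # both tests reject
print(p_lower, p_upper, equivalent)
```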
Bar Chart from LSMeans Differences HSD Connecting Letters Table
A report for the LSMeans Dunnett option for effect treatment in the Cholesterol.jmp sample data table is shown in LSMeans Dunnett Report. Here, the response is June PM and the level of treatment called Control is specified as the control level.
LSMeans Dunnett Report
Note: To ensure that your study includes sufficiently many observations to detect the required differences, use information about power when you design your experiment. Such an analysis is called a prospective power analysis. Consider using the DOE platform to design your study. Both DOE > Sample Size and Power and DOE > Evaluate Design are useful for prospective power analysis. For an example of a prospective power analysis using standard least squares, see Prospective Power Analysis.
Power Details Window shows an example of the Power Details window for the Big Class.jmp sample data table. Using the Power Details window, you can explore power for values of alpha (α), sigma (σ), delta (δ), and Number (study size). Enter a single value (From only), two values (From and To), or the start (From), stop (To), and increment (By) for a sequence of values. Power calculations are reported for all possible combinations of the values that you specify.
Power Details Window
 • Alpha (α): The significance level of the test. This value is between 0 and 1 and is often 0.05, 0.01, or 0.10. The initial value for Alpha, shown in the first row, is 0.05, unless you have selected Set Alpha Level and set a different value in the Fit Model launch window.
 • Sigma (σ): An estimate of the residual error in the model. The initial value shown in the first row, provided for guidance, is the RMSE (the square root of the mean square error).
 • Delta (δ): The effect size of interest. See Effect Size for details. The initial value, shown in the first row, is the square root of the sum of squares for the hypothesis divided by the number of observations in the study.
 • Number (n): The sample size. The initial value, shown in the first row, is the number of observations in the current study.
 • Solve for Power: Solves for the power as a function of α, σ, δ, and n. The power is the probability of detecting a difference of size δ by seeing a test result that is significant at level α, for the specified σ and n. For more details, see Computations for the Power in Statistical Details.
 • Solve for Least Significant Number: Solves for the smallest number of observations required to obtain a test result that is significant at level α, for the specified δ and σ. For more details, see Computations for the LSN in Statistical Details.
 • Solve for Least Significant Value: Solves for the smallest positive value of a parameter or linear function of the parameters that produces a p-value of α. The least significant value is a function of α, σ, and n. This option is available only for one-degree-of-freedom tests. For more details, see Computations for the LSV in Statistical Details.
 • Adjusted Power and Confidence Interval: Retrospective power calculations use estimates of the standard error and the test parameters in estimating the F distribution’s noncentrality parameter.
Adjusted power is a retrospective power calculation based on an estimate of the noncentrality parameter from which positive bias has been removed (Wright and O’Brien, 1988). The confidence interval for the adjusted power is based on the confidence interval for the noncentrality estimate. The adjusted power deals with a sample estimate, so it and its confidence limits are computed only for the δ estimated in the current study. For more details, see Computations for the Adjusted Power in Statistical Details.
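A power calculation of this kind can be sketched with the noncentral F distribution, assuming the standard formulation in which the noncentrality parameter is λ = n(δ/σ)². The values below are illustrative.

```python
from scipy import stats

# Sketch of an F-test power computation: power = P(F' > F_crit), where F'
# is noncentral F with noncentrality lambda = n * (delta / sigma)**2.
# All values are illustrative.
alpha, sigma, delta, n = 0.05, 2.0, 1.0, 20
df_model, df_error = 1, n - 2

nc = n * (delta / sigma) ** 2                       # noncentrality parameter
f_crit = stats.f.ppf(1 - alpha, df_model, df_error) # critical value at alpha
power = stats.ncf.sf(f_crit, df_model, df_error, nc)

print(round(power, 3))
```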
 • There are no replicated points with respect to the X variables, so it is impossible to calculate a pure error sum of squares.
 • The model is saturated, meaning that there are as many estimated parameters as there are observations. Such a model fits perfectly, so it is impossible to assess lack of fit.
The difference between the error sum of squares from the model and the pure error sum of squares is called the lack of fit sum of squares. The lack of fit variation can be significantly greater than pure error variation if the model is not adequate. For example, you might have the wrong functional form for a predictor, or your model might be missing interaction effects or include the wrong ones.
 • The DF for Total Error is the same as the DF value found on the Error line of the Analysis of Variance table. Based on the sum of squares decomposition, the Total Error DF is partitioned into degrees of freedom for Lack of Fit and for Pure Error.
 • The Pure Error DF is pooled from each replicated group of observations. In general, if there are g groups, each with identical settings for each effect, the pure error DF, denoted DFPE, is given by:
DFPE = (n1 − 1) + (n2 − 1) + … + (ng − 1)
where ni is the number of replicates in the ith group.
 • The Lack of Fit DF is the difference between the Total Error and Pure Error DFs.
 • The Total Error SS is the sum of squares found on the Error line of the corresponding Analysis of Variance table.
 • The Pure Error SS is the total of the sum of squares values for each replicated group of observations. The Pure Error SS divided by its DF estimates the variance of the response at a given predictor setting. This estimate is unaffected by the model. In general, if there are g groups, each with identical settings for each effect, the Pure Error SS, denoted SSPE, is given by:
SSPE = SS1 + SS2 + … + SSg
where SSi is the sum of the squared differences between each observed response and the mean response for the ith group.
 • The Lack of Fit SS is the difference between the Total Error and Pure Error sum of squares.
Shows the ratio of the Mean Square for Lack of Fit to the Mean Square for Pure Error. The F Ratio tests the hypothesis that the variances estimated by the Lack of Fit and Pure Error mean squares are equal, which is interpreted as representing “no lack of fit”.
Lists the p-value for the Lack of Fit test. A small p-value indicates a significant lack of fit.
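The full lack-of-fit decomposition can be checked numerically. This sketch fits a straight line to made-up data with replicated x settings and a curved response, so the test detects lack of fit.

```python
import numpy as np
from scipy import stats

# Sketch of the lack-of-fit test for a straight-line fit with replicated
# x settings. Data are made up; the response is deliberately curved.
x = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
y = np.array([1.1, 0.9, 3.9, 4.1, 9.2, 8.8, 15.9, 16.1])

# Straight-line least squares fit
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
total_error_ss = ((y - (b0 + b1 * x)) ** 2).sum()

# Pure error: pooled within-group variation at each replicated x setting
pure_error_ss = sum(((y[x == v] - y[x == v].mean()) ** 2).sum()
                    for v in np.unique(x))
pure_error_df = sum(len(y[x == v]) - 1 for v in np.unique(x))

lof_ss = total_error_ss - pure_error_ss      # lack of fit sum of squares
lof_df = (len(y) - 2) - pure_error_df        # total error DF minus pure error DF

f_ratio = (lof_ss / lof_df) / (pure_error_ss / pure_error_df)
prob_f = stats.f.sf(f_ratio, lof_df, pure_error_df)
print(f_ratio, prob_f)   # small p-value: the straight line shows lack of fit
```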