• If your factors are all two-level and orthogonal, then all of the statistics in the Screening platform should work well.
 • For highly supersaturated main-effect designs, the Screening platform is effective at selecting factors, but less effective at estimating error or significance. The Monte Carlo simulation used to produce p-values relies on assumptions that are not valid in this case.
 • The Screening platform is not appropriate for mixture designs.
Consider the Half Reactor.jmp sample data table. The data are derived from a design discussed in Box, Hunter, and Hunter (1978). We are interested in a model with main effects and two-way interactions. This example uses a model with fifteen parameters for a design with sixteen runs.
For this example, select all continuous factors, except the response, Percent Reacted, as the screening effects, X. Select Percent Reacted as the response Y. The Screening platform constructs interactions automatically. This is in contrast to Fit Model, where you manually specify the interactions to include in your model.
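The parameter count is easy to verify: five two-level factors give five main effects plus C(5,2) = 10 two-way interactions, for fifteen parameters (sixteen with the intercept, which saturates a sixteen-run design). A minimal sketch; four of these factor names appear in this example, while Feed Rate is a hypothetical fifth:

```python
from itertools import combinations

# Factor names: four appear in the example; "Feed Rate" is a hypothetical fifth.
factors = ["Feed Rate", "Catalyst", "Stir Rate", "Temperature", "Concentration"]

main_effects = list(factors)
two_way = [f"{a}*{b}" for a, b in combinations(factors, 2)]

# 5 main effects + 10 two-way interactions = 15 parameters;
# the intercept makes 16, saturating a 16-run design.
print(len(main_effects), len(two_way), len(main_effects) + len(two_way))
```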
Traditional Saturated Half Reactor.jmp Design Output shows the result of using the Fit Model platform, where a factorial to degree 2 model is specified. This result illustrates why the Screening platform is needed.
Traditional Saturated Half Reactor.jmp Design Output
JMP can calculate parameter estimates, but degrees of freedom for error, standard errors, t-ratios, and p-values are all missing. Rather than use Fit Model, it is better to use the Screening platform, which specializes in getting the most information out of these situations, leading to a better model. The output from the Screening platform for the same data is shown in Half Reactor.jmp Screening Design Report.
Half Reactor.jmp Screening Design Report
 • Estimates, labeled Contrast, are shown for each effect. Effects whose individual p-value is less than 0.1 are highlighted.
 • A t-ratio is calculated using Lenth’s PSE (pseudo-standard error). The PSE is shown below the Half Normal Plot.
 • Both individual and simultaneous p-values are shown. Those that are less than 0.05 are shown with an asterisk.
 • A Half Normal plot enables you to quickly examine the effects. Effects initially highlighted in the effects list are also labeled in this plot.
 • Buttons at the bottom of the report also operate on the highlighted variables. The Make Model button opens the Fit Model window using the current highlighted factors. The Run Model button runs the model immediately.
For this example, Catalyst, Temperature, and Concentration, along with two of their two-factor interactions, are selected.
Open the data table called Plackett-Burman.jmp, found in the Design Experiment folder in the Sample Data installed with JMP. This table contains the design runs and the Percent Reacted experimental results for the 12-run Plackett-Burman design created in the previous section.
The data table has two scripts, called Screening and Model, shown in the upper-left panel of the table, that were created by the DOE Screening designer. You can use these scripts to analyze the data; however, it is simple to run the analyses yourself.
 1 Select Analyze > Modeling > Screening to see the completed launch dialog shown in Launch Dialog for the Screening Platform. When you create a DOE design table, the variable roles are saved with the data table and used by the launch platform to complete the dialog.
Launch Dialog for the Screening Platform
 2 Click OK to see the Screening platform result shown in Results of the Screening Analysis.
Results of the Screening Analysis
 3 Examine the Contrasts section of the Screening report. It lists all possible model effects, a contrast value for each effect, Lenth t-ratios (calculated as the contrast value divided by Lenth's PSE, or pseudo-standard error), individual and simultaneous p-values, and aliases if there are any. Significant and marginally significant effects are highlighted.
 • Term: Name of the factor.
 • Contrast: Estimate for the factor. For orthogonal designs, this number is the same as the regression parameter estimate. This is not the case for non-orthogonal designs. An asterisk might appear next to the contrast, indicating a lack of orthogonality.
 • Bar Chart: Shows the t-ratios, with blue lines marking a critical value at 0.05 significance.
 • Lenth t-Ratio: Lenth's t-ratio, calculated as the contrast divided by the PSE, where PSE is Lenth's Pseudo-Standard Error. See Lenth's Pseudo-Standard Error for details.
 • Individual p-Value: Analogous to the standard p-value for a linear model. Small values indicate a significant effect. Refer to Statistical Details for details. Do not expect the p-values to be exactly the same if the analysis is re-run; the Monte Carlo method gives similar, but not identical, values when the same analysis is repeated.
 • Simultaneous p-Value: Similar to the individual p-value, but adjusted for multiple comparisons.
 • Aliases: Appears only if later effects are exact aliases of earlier effects.
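The Contrast, Lenth t-Ratio, and Monte Carlo p-value columns can be illustrated with a short sketch. This shows the general technique (Lenth's PSE plus a simulated null reference distribution), not JMP's exact implementation, and the contrast values here are hypothetical:

```python
import numpy as np

def lenth_pse(contrasts):
    """Lenth's pseudo-standard error: a robust scale estimate built
    from the contrasts themselves, assuming most effects are inert."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)                   # initial scale estimate
    return 1.5 * np.median(c[c < 2.5 * s0])   # re-estimate after trimming large effects

contrasts = np.array([10.1, 5.4, -0.6, 0.3, 4.2, -0.4, 0.7])  # hypothetical values
t_obs = contrasts / lenth_pse(contrasts)      # Lenth t-ratios

# Monte Carlo null reference: simulate inert contrasts, form their Lenth
# t-ratios the same way, then compare the observed ratios against them.
rng = np.random.default_rng(1)
m, reps = len(contrasts), 10_000
null_t = np.empty((reps, m))
for i in range(reps):
    c = rng.normal(0.0, 1.0, m)
    null_t[i] = c / lenth_pse(c)

pooled = np.abs(null_t).ravel()               # reference for individual p-values
max_abs = np.abs(null_t).max(axis=1)          # reference for simultaneous p-values
indiv_p = np.array([(pooled >= abs(t)).mean() for t in t_obs])
simul_p = np.array([(max_abs >= abs(t)).mean() for t in t_obs])
```

Because the reference distribution is simulated, rerunning this with a different seed gives similar but not identical p-values, which is why the report's p-values vary slightly from run to run.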
The Make Model button beneath the Half Normal Plot creates a Fit Model dialog that includes all the highlighted effects. However, note that the Catalyst*Stir Rate interaction is highlighted, but the Stir Rate main effect is not. To preserve effect heredity, you should add the Stir Rate main effect to the model.
 4 Click the Make Model button beneath the Half Normal Plot.
 5 Select Stir Rate and click Add on the Fit Model dialog.
 6 The Emphasis might change to Effect Screening when you add Stir Rate. Change it back to Effect Leverage. The dialog is shown in Create Fit Model Dialog and Remove Unwanted Effect.
 7 Then click Run to see the analysis results.
Create Fit Model Dialog and Remove Unwanted Effect
The Whole Model actual-by-predicted plot, shown in An Actual-by-Predicted Plot, appears at the top of the Fit Model report. You see at a glance that this model fits well. The horizontal blue line (the response mean) falls outside the bounds of the 95% confidence curves (red dotted lines), which tells you the model is significant. The model p-value (p = 0.0208), R2, and RMSE appear below the plot. The RMSE is an estimate of the standard deviation of the process noise, assuming that the unestimated effects are negligible.
An Actual-by-Predicted Plot
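The R2, RMSE, and error degrees of freedom reported under the plot come from an ordinary least-squares fit. A numpy sketch with simulated data (the design and response values below are hypothetical, not the Reactor data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                                  # runs, as in this example
X = np.column_stack([np.ones(n),                        # intercept
                     rng.choice([-1.0, 1.0], (n, 3))])  # three +/-1 coded effects
true_beta = np.array([65.0, 10.0, 6.0, 4.0])            # hypothetical coefficients
y = X @ true_beta + rng.normal(0.0, 3.0, n)             # hypothetical noise sd = 3

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dfe = n - X.shape[1]                                    # error degrees of freedom
rmse = np.sqrt(resid @ resid / dfe)                     # estimates the process noise sd
r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
```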
To see a scaled estimates report, use Effect Screening > Scaled Estimates found in the red triangle menu on the Response Percent Reacted title bar. When there are quadratic or polynomial effects, the coefficients and the tests for them are more meaningful if effects are scaled and coded. The Scaled Estimates report includes a bar chart of the individual effects embedded in a table of parameter estimates. The last column of the table has the p-values for each effect.
Example of a Scaled Estimates Report
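Coding maps each continuous factor from its low-high range onto [-1, 1], which is what makes the scaled coefficients directly comparable across factors. A sketch of the usual coding transform (the 140 to 180 temperature range is hypothetical):

```python
def code_factor(x, low, high):
    """Map a raw factor value onto the coded [-1, 1] scale."""
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (x - center) / half_range

# Hypothetical Temperature range: low = 140, high = 180
coded = [code_factor(t, 140.0, 180.0) for t in (140.0, 160.0, 180.0)]
print(coded)   # [-1.0, 0.0, 1.0]
```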
The Fit Model report has outline nodes for the Catalyst and Temperature effects. To run a power analysis for an effect, click the red triangle icon on its title bar and select Power Analysis.
This example shows a power analysis for the Catalyst variable, using the default value of α (0.05), the root mean square error, and the parameter estimate for Catalyst, with a sample size of 12. The resulting power is 0.8926, which means that in similar experiments, you can expect about an 89% chance of detecting a significant effect for Catalyst.
Example of a Power Analysis
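The power number can be read as the probability that the effect's test rejects at α = 0.05 when the true effect equals its estimate. A Monte Carlo sketch of that calculation for a balanced two-level design; the effect size, noise sd, and error degrees of freedom below are hypothetical, not the Catalyst values, so the simulated power will not match 0.8926:

```python
import numpy as np

rng = np.random.default_rng(2)
n, dfe, alpha = 12, 6, 0.05     # runs, error df (hypothetical), significance level
beta, sigma = 4.0, 3.0          # hypothetical effect estimate and RMSE

reps = 200_000
se = sigma / np.sqrt(n)         # sd of a +/-1 coded effect estimate: sigma/sqrt(n)

# t statistics under the null (effect = 0) and the alternative (effect = beta),
# each with an independent RMSE estimate carrying dfe degrees of freedom
s0 = sigma * np.sqrt(rng.chisquare(dfe, reps) / dfe)
s1 = sigma * np.sqrt(rng.chisquare(dfe, reps) / dfe)
t_null = rng.normal(0.0, se, reps) / (s0 / np.sqrt(n))
t_alt = rng.normal(beta, se, reps) / (s1 / np.sqrt(n))

t_crit = np.quantile(np.abs(t_null), 1.0 - alpha)   # two-sided critical value
power = np.mean(np.abs(t_alt) > t_crit)             # chance of detecting the effect
```

In practice the power is computed analytically from the noncentral distribution of the test statistic; the simulation above is only meant to show what the reported number measures.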
Refer to the Fitting Linear Models book for details.