
Mixed Model Power
The Mixed Model Power process assists you in planning your experiments. Starting with an exemplary experimental design data set and parameter settings for a relevant mixed model, it enables you to calculate power curves for a range of Type 1 error probabilities (alpha). In other words, this process helps you decide how big an experiment you need to run in order to be reasonably assured that the true effects in the study (a change in gene expression, for example) are deemed statistically significant. Conversely, this process also enables you to calculate the statistical power of an experiment, given a specified sample size. This process is typically run before you perform your experiment, and it helps to have conducted a pilot study in order to determine reasonable values for the variance components.
Mixed Model Power computes the statistical power of a set of one-degree-of-freedom hypothesis tests arising from a mixed linear model. You specify an experimental design file, parameters for the relevant PROC MIXED statements (including fixed values for the variance components and ESTIMATE statements), and ranges of values for alpha and effect sizes. The process outputs a table of power values calculated using a noncentral t-distribution.
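To see where these power values come from, the following DATA step is a minimal sketch (not the process itself) of the underlying calculation for a single two-sided test: the effect size divided by its standard error gives the noncentrality parameter, and power is the probability that a noncentral t random variable falls in the rejection region. The degrees of freedom and standard error below are hypothetical placeholders; in the actual process they are determined by your design and the variance component values you supply.

   /* Sketch of a two-sided power calculation from the noncentral t-distribution. */
   /* df and se are assumed values; replace them with ones from your pilot study. */
   data power_sketch;
      df    = 12;                             /* assumed error degrees of freedom        */
      se    = 0.35;                           /* assumed standard error of the estimate  */
      alpha = 0.05;                           /* Type 1 error probability                */
      tcrit = tinv(1 - alpha/2, df);          /* two-sided critical value                */
      do effect = 0.25 to 2 by 0.25;          /* effect sizes (log2 differences)         */
         nc    = effect / se;                 /* noncentrality parameter                 */
         power = 1 - probt(tcrit, df, nc)     /* upper rejection region                  */
                   + probt(-tcrit, df, nc);   /* lower rejection region                  */
         output;
      end;
   run;

   proc print data=power_sketch noobs;
   run;

Because the noncentrality parameter grows with the effect size and shrinks with the standard error, larger effects and larger experiments (which yield smaller standard errors) both increase power.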
What do I need?
Two data sets are required for Mixed Model Power:
The Experimental Design Data Set (EDDS). It must include all relevant design variables of the experiment for which you want to compute power. The sample size equals the number of rows in this data set.
The file containing PROC MIXED ESTIMATE statements. ESTIMATE statements specify the linear hypotheses of interest and must be valid for the specified fixed-effects model. Distinct power values are computed for each hypothesis test. See Estimate Builder for more details; a hypothetical example of each input is sketched after this list.
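To make these two inputs concrete, the following is a hypothetical sketch of a minimal EDDS and a matching ESTIMATE statements file for a design with one Treatment fixed effect at three levels. The variable names, levels, and contrasts shown here are illustrative assumptions, not values required by the process.

   /* Hypothetical EDDS: six rows, so the sample size is 6. */
   data edds_sketch;
      input ArrayID $ Treatment $;
      datalines;
   A1 Trt1
   A2 Trt1
   A3 Trt2
   A4 Trt2
   A5 Trt3
   A6 Trt3
   ;
   run;

   /* Hypothetical ESTIMATE statements file: each statement defines one          */
   /* one-degree-of-freedom contrast, and a separate power value is computed for */
   /* each. Coefficients follow the sorted order of the Treatment levels.        */
   estimate 'Trt1 vs Trt2' Treatment 1 -1  0;
   estimate 'Trt1 vs Trt3' Treatment 1  0 -1;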
For detailed information about the files and data sets used or created by JMP Life Sciences software, see Files and Data Sets.
Output/Results
The output of the Mixed Model Power process includes one output data set listing the t-statistics and associated power values for each level of alpha (not shown) and the power curves shown below.
Effect sizes (log2 differences) are plotted along the x-axes. Power is plotted along the y-axis of each plot. The greater the power, the higher the probability of rejecting the null hypothesis when the observed difference is real. Note that, as expected, power increases for all effects as the effect size increases. In other words, the greater the difference due to the effect, the more likely you are to successfully conclude that the observed difference is real.
You might need to adjust the experimental design, depending on the results of this analysis. You might find that the power of the proposed design is not sufficient for you to reject the null hypothesis with the desired confidence. One way to increase power is to increase the size of your experiment, adding technical replicates, for example, until the power is sufficient. Alternatively, if the predicted power is more than sufficient for your experimental conditions, you might be able to reduce the size of the experiment, saving valuable resources.
To compute power for a new design, use DOE > Custom Design to generate the design of interest, save the table as a SAS data set, and rerun Mixed Model Power using the new design as the EDDS.
With the results of this process, you can now choose the size of your experiment to ensure there is sufficient statistical power to draw meaningful conclusions.