Christine M. Anderson-Cook

Los Alamos National Laboratory

About the Author

Christine Anderson-Cook is a retired Guest Scientist at Los Alamos National Laboratory, where she worked from 2004 until 2021, leading projects on complex system reliability, nonproliferation, malware detection, and statistical process control. Her specialties include Design of Experiments, Response Surface Methodology, Reliability, Statistical Engineering, and Multiple Criteria Optimization.

Anderson-Cook has authored more than 200 articles in statistics and quality journals and has been a longtime contributor to the Statistics Roundtable column in Quality Progress. She also co-authored a popular book on response surface methodology with Raymond Myers and Douglas Montgomery, has served on numerous editorial boards, and co-edited, with Lu Lu, a special issue of Quality Engineering on statistical engineering.

Anderson-Cook is a George Box Medal winner and an elected Fellow of the American Statistical Association and the American Society for Quality. She also received the 2012 William G. Hunter Award and is a two-time recipient of the ASQ Shewell Award. In 2011, she received the 26th Annual Governor's Award for Outstanding New Mexico Women.

Deciding what data to collect for an experiment is critical: it determines how resources are spent, what information can be gleaned from the results, and what conclusions can be drawn.

Knowing what you want from an experiment, and what will constitute success, is key to making good decisions, since there are often multiple important goals. Likewise, successful experimentation is about managing your budget and your time well, so performance should be weighed explicitly against the cost of the experiment across a range of possible budgets, as the sketch below illustrates.
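As a minimal sketch of that budget-versus-information tradeoff (written in standalone Python, not in JMP or JSL), the snippet below scores a few illustrative two-factor candidate designs by per-run D-efficiency, so that a larger design must justify its additional runs. The designs, the main-effects-plus-interaction model, and the criterion are assumptions chosen for illustration, not taken from the white paper.

    # A sketch of weighing design quality against budget: compare the per-run
    # D-efficiency of illustrative two-factor candidate designs of different sizes.
    # The designs and model here are assumptions for illustration, not JMP output.
    import numpy as np
    from itertools import product

    def model_matrix(design):
        """Expand a two-factor design into a main-effects-plus-interaction model matrix."""
        x1, x2 = design[:, 0], design[:, 1]
        return np.column_stack([np.ones(len(design)), x1, x2, x1 * x2])

    def d_efficiency_per_run(design):
        """Per-run D-criterion |X'X|^(1/p) / n; larger means more information per run."""
        X = model_matrix(design)
        n, p = X.shape
        return np.linalg.det(X.T @ X) ** (1.0 / p) / n

    # Candidate designs at different budgets (cost assumed proportional to run count).
    factorial = np.array(list(product([-1.0, 1.0], repeat=2)))  # 2^2 factorial
    candidates = {
        "2^2 factorial (4 runs)": factorial,
        "2^2 + 2 center runs (6 runs)": np.vstack([factorial, np.zeros((2, 2))]),
        "replicated 2^2 (8 runs)": np.vstack([factorial, factorial]),
    }
    for name, design in candidates.items():
        print(f"{name}: per-run D-efficiency = {d_efficiency_per_run(design):.3f}")

Here the center runs buy a check for curvature at the price of per-run D-efficiency; making that kind of tradeoff explicit before spending the budget is exactly the point.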

In designing hundreds of experiments throughout her career as a consultant and collaborator at Virginia Tech and Los Alamos National Laboratory, Christine Anderson-Cook has learned (sometimes the hard way) the importance of clarifying the goal(s) of an experiment with the scientists and engineers involved, and then comparing alternatives to find the designed experiment that best matches those objectives.

The good news is that this has become dramatically easier with JMP® 17 and the introduction of Design Explorer, which provides a simple way to construct and contrast multiple designs. JMP has long been a world leader in generating excellent designs for a wide variety of purposes, but it is now even easier to generate, compare, and evaluate multiple desirable choices from a single series of dialog boxes.
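As a rough analogy to the side-by-side contrasts Design Explorer produces inside JMP (the standalone Python sketch below does not use JMP's implementation or API), one can score each candidate design on several criteria at once, since no single design typically wins on all of them. All designs and criteria here are assumptions chosen for illustration.

    # A sketch of contrasting candidate designs on several criteria at once:
    # per-run D-efficiency, the A-criterion, and worst-case scaled prediction
    # variance over the design region. An analogy only, not JMP's implementation.
    import numpy as np
    from itertools import product

    def model_matrix(design):
        """Main-effects-plus-interaction model matrix for a two-factor design."""
        x1, x2 = design[:, 0], design[:, 1]
        return np.column_stack([np.ones(len(design)), x1, x2, x1 * x2])

    def criteria(design):
        X = model_matrix(design)
        n, p = X.shape
        info_inv = np.linalg.inv(X.T @ X)
        # Scaled prediction variance n * f(x)' (X'X)^{-1} f(x) on a grid over [-1, 1]^2.
        grid = model_matrix(np.array(list(product(np.linspace(-1, 1, 11), repeat=2))))
        spv = n * np.einsum("ij,jk,ik->i", grid, info_inv, grid)
        return {
            "per-run D": np.linalg.det(X.T @ X) ** (1.0 / p) / n,
            "A (trace of (X'X)^-1)": np.trace(info_inv),
            "max scaled pred. var.": spv.max(),
        }

    factorial = np.array(list(product([-1.0, 1.0], repeat=2)))
    candidates = {
        "2^2 + 2 center runs": np.vstack([factorial, np.zeros((2, 2))]),
        "replicated 2^2": np.vstack([factorial, factorial]),
    }
    for name, design in candidates.items():
        scores = ", ".join(f"{k} = {v:.3f}" for k, v in criteria(design).items())
        print(f"{name}: {scores}")

Laying the criteria out side by side, rather than optimizing a single number, is what turns the final choice into a deliberate tradeoff instead of a default.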

In this white paper, you'll learn best practices for looking more broadly at multiple options for your experiments and how to use Design Explorer, a new feature in JMP 17, to generate a suitable suite of candidate designs to choose from.
