The Profiler at 30
by Brad Jones, JMP Distinguished Research Fellow at SAS
“The purpose of industrial experiments is to find the optimal settings of the factors, but the statistical fit alone is not enough; it is important to understand the response surface. When that response surface is in many dimensions, this is a daunting challenge. Brad Jones’ development of the interactive Prediction Profiler was the huge breakthrough – having separate cross-sections of the surface for each factor, conditioned on the current values of the other factors, and being able to interactively move any factor. Just looking at the Profiler gives you an immediate understanding of which factors are important and the directions of their effects. Interactively changing a factor’s value reveals any strong interaction effects.” —John Sall, SAS co-founder and chief architect of JMP
Thirty years ago, I came up with the idea of the Prediction Profiler while working for a long-defunct company named Catalyst. JMP developed the first commercially viable implementation. As John Sall points out above, the original purpose of the Prediction Profiler was to explore the fitted response surface of a designed experiment with the goal of finding optimal process settings. Nowadays, however, the Prediction Profiler in JMP is much more versatile.
While the Profiler works best when the region containing all the data is a cube or a sphere, as in designed experiments, it can also be useful for exploring predictions made from an analysis of observational data. Most JMP analysis platforms offer the user the choice of viewing a Profiler. Especially useful are the Profilers connected to the Neural, Partition (with many splits), Bootstrap Forest and Boosted Tree platforms, because the models these platforms create are hard to visualize. A common feature of observational data is that model terms are often highly correlated, which makes model selection difficult.
However, even once an adequate model is found, the Profiler can provide questionable predictions in regions where there is little or no data. This dilemma leads me to introduce you to one of my favorite new features in JMP 16: Extrapolation Control in the Profiler. The figure shows the prediction profile for the Longley data, with y modeled as a linear function of the factors x2, x3 and x4.
With Extrapolation Control set to Warning On, the Profiler warns that the setting [x2 x3 x4] = [430800 1870 2750] is a possible extrapolation. If you change Extrapolation Control to On, the Profiler shows only factor settings that are not extrapolations; this is the same way the Profiler has handled inequality constraints in previous releases. New in JMP 16, the Profiler can also avoid Disallowed Combinations in designed experiments using the same Extrapolation Control technology.
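Why can a setting be flagged even when every factor is within its observed range? When factors are correlated, the data form a narrow band, and a combination of individually ordinary values can still sit far outside that band. One standard way to quantify this is the regression leverage of a candidate point relative to the model matrix. The sketch below (Python with NumPy, using synthetic correlated data rather than the Longley table, and a conventional 3p/n cutoff that is my assumption, not necessarily JMP's exact criterion) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for correlated observational factors (NOT the Longley
# data): x2 and x3 move together, so the data cloud is a narrow diagonal band.
n = 50
x2 = rng.normal(0.0, 1.0, n)
x3 = 0.9 * x2 + rng.normal(0.0, 0.2, n)
X = np.column_stack([np.ones(n), x2, x3])  # model matrix with intercept

XtX_inv = np.linalg.inv(X.T @ X)

def leverage(point):
    """Regression leverage h = x' (X'X)^-1 x for a candidate factor setting."""
    x = np.concatenate(([1.0], point))
    return float(x @ XtX_inv @ x)

# Rule of thumb (a common choice, not confirmed to be JMP's rule): flag a
# setting as extrapolation when its leverage exceeds three times the average
# leverage of the training points, 3 * p / n.
p = X.shape[1]
threshold = 3 * p / n

on_band = leverage([0.5, 0.45])    # lies along the correlation band: ordinary
off_band = leverage([1.5, -1.5])   # each coordinate is within its marginal
                                   # range, but the combination is far from
                                   # the data cloud
print(on_band < threshold, off_band > threshold)
```

The off-band point passes every single-factor range check yet has a leverage far above the cutoff, which is exactly the kind of hidden extrapolation a multivariate check catches and per-factor limits miss.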
You can find this feature on the red triangle menu of the Prediction Profiler platform. Check it out!
See the Prediction Profiler in action: jmp.com/profiler
Dive deeper into this topic
- Statistical Thinking for Industrial Problem Solving: This online statistics course is available, for free, to anyone interested in learning foundational statistical concepts and building practical skills in using data to solve problems better.
- Lead your organization to analytic excellence: How P&G, Intel Corporation and Seagate Technology embed analytics throughout the organization.