Developer Tutorial: Using JMP Pro Generalized Regression to Better Understand Observational Data
Statistics, Predictive Modeling and Data Mining
This session is for JMP users who understand basic predictive modeling principles and have experience building predictive models in JMP.
Observational data is often gathered without the involvement of either the research subjects or the data analyst. Such data can present analysis problems such as missing key factors, selection bias, multicollinearity, and outliers.
Analyzing and building predictive models for correlated, high-dimensional data requires variable selection techniques that choose a subset of predictors for modeling a response variable. Shrinkage techniques like the Lasso and the Elastic Net are especially useful for avoiding overfitting with observational data. Variable selection also plays a key role in analyzing designed experiments, though the strategies and techniques used can differ somewhat.
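In JMP Pro, the Lasso is fit point-and-click in the Generalized Regression platform, but its shrinkage mechanics can be sketched in a few lines. The following is a minimal numpy-only illustration (not JMP code, and not the session's material): coordinate descent with soft thresholding, which drives the coefficients of irrelevant predictors to exactly zero and thereby performs variable selection.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding: the shrinkage operator behind the Lasso."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal Lasso via coordinate descent.
    Minimizes (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.
    (The Elastic Net adds a ridge penalty on top, which further
    stabilizes estimates when predictors are collinear.)"""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with predictor j's contribution added back.
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b

# Toy data: 10 candidate predictors, only the first two matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

b_hat = lasso_cd(X, y, lam=0.2)
# The L1 penalty sets the eight irrelevant coefficients to exactly zero,
# while the two true coefficients are shrunk slightly toward zero.
```

The shrinkage-versus-selection trade-off visible here (true coefficients are biased toward zero by roughly the penalty amount) is part of the rationale the session covers.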
JMP Pro Generalized Regression is useful in many modeling situations that include and go beyond variable selection: when you suspect collinearity among your predictors, when your response calls for a non-normal distribution (continuous, binomial, count, or zero-inflated responses), or when you want to fit models to compare against models obtained using other techniques.
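As an illustration of what fitting a non-normal response involves, here is a hypothetical numpy-only sketch (again, not JMP code) of Poisson regression for count data, fit with the standard Newton-Raphson / IRLS iteration that underlies generalized linear models:

```python
import numpy as np

def poisson_glm(X, y, n_iter=25):
    """Poisson regression with a log link, fit by Newton-Raphson (IRLS).
    A sketch of modeling a count response directly rather than forcing
    a normal distribution onto the data."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)           # fitted means under the log link
        # Poisson variance equals the mean, so the IRLS weights are mu.
        H = X.T @ (X * mu[:, None])  # Fisher information matrix
        g = X.T @ (y - mu)           # score vector
        b += np.linalg.solve(H, g)   # Newton step
    return b

# Simulate counts whose log-mean depends linearly on one predictor.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))

b_hat = poisson_glm(X, y)  # should recover roughly (0.5, 0.8)
```

Generalized Regression wraps this kind of fitting, plus penalization and many more distributions (gamma, zero-inflated, and others), behind a single interface.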
In this session, you will learn the rationale and underpinnings of the penalized regression techniques available in JMP Pro's Generalized Regression. You will see how and why to use Generalized Regression to find the best set of predictors in a sea of possibilities, model a categorical response with more than two levels, and handle non-normal responses such as count data or yield percentages.
This JMP Developer Tutorial covers: normally and non-normally distributed responses; variable selection, the Lasso, and the Elastic Net; gamma and skewed distributions; count data; logistic and quantile regression; outliers; and proportional hazards.