Assessing performance of machine learning algorithms
There are many ways to measure the performance of machine learning (ML) models. You need quick turnaround and the ability to handle large data sets, and you have to cope with incomplete data and classification errors. The measure you adopt matters, so define your quality metric before you begin modeling.
In his keynote address, David Hand shares his insights on what matters most when assessing model performance.
Hand explains that:
Different measures are appropriate for different questions.
Performance is not an intrinsic property of a classifier.
Comparative evaluations on diverse past data sets may not be relevant to your problem.
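To make the first of these points concrete, here is a minimal sketch (an illustrative example, not taken from Hand's keynote) that assumes scikit-learn is available. It evaluates two classifiers on an imbalanced binary problem with three common metrics; depending on the data and the question you care about, the model that looks best on accuracy may not be the one that looks best on recall or AUC.

```python
# Illustrative sketch: different performance measures can favor different models.
# The dataset, models, and metric choices here are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Imbalanced binary problem: roughly 10% positive cases.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)            # hard class labels
    score = model.predict_proba(X_test)[:, 1]  # scores for the positive class
    print(f"{name:>8}  accuracy={accuracy_score(y_test, pred):.3f}  "
          f"recall={recall_score(y_test, pred):.3f}  "
          f"AUC={roc_auc_score(y_test, score):.3f}")
```

The point of the sketch is not which model wins, but that "performance" is only defined once you have chosen a measure, and that choice should reflect the question the model is meant to answer.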
Panel discussion
Hear from machine learning experts at Brewer Science, Abt Associates, and SAS.
In this discussion, panelists explain the importance of collecting high-quality data and why data preparation is vital in ML. Hear why they think design of experiments (DOE) is transformative for the ML innovation process, and what motivates them to look at the bigger picture of statistical problem solving.
Hear from:
Diana Ballard, Brewer Science
Jason Brinkley, Abt Associates
Jim Georges, SAS
Register to view the keynote and panel discussion.