Beyond the Black Box: Interpreting ML Models with SHAP

30-Minute Talk
Saturday at 2:15 pm in Ballroom A

ML models often behave as black boxes: a model produces an output, but it is difficult to derive actionable insights from it directly, largely because we have little visibility into which features contribute most to the model's behavior internally. SHAP (SHapley Additive exPlanations) offers a principled way to explain individual model predictions, and it is an important tool in a data scientist's toolbox.

In this talk, we will begin by motivating the need for explainable ML models and why it is important to look beyond what a model outputs. We will then briefly cover the mathematical intuition behind Shapley values and their origins in game theory. After that, we will walk through a couple of case studies involving tree-based and neural-network-based models, focusing on interpreting SHAP values through the various plots offered by the shap library in Python. Finally, we will discuss best practices for interpreting SHAP visualizations, handling large datasets, and common pitfalls to avoid.
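
For a flavor of the hands-on portion: the Shapley value of a feature i is its average marginal contribution across all feature subsets, φ_i = Σ_{S ⊆ N∖{i}} |S|! (n − |S| − 1)! / n! · (v(S ∪ {i}) − v(S)). The sketch below shows the kind of workflow the case studies build on. It is illustrative only; the dataset and model (scikit-learn's California housing data with a random forest) are stand-ins, not necessarily the ones used in the talk.

# A minimal sketch of a SHAP workflow for a tree-based model.
# Assumption: dataset and model are placeholders for illustration.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Load a small tabular dataset and fit a tree-based model.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# A summary (beeswarm) plot shows each feature's global importance and
# the direction of its effect on the prediction.
shap.summary_plot(shap_values, X.iloc[:200])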

Presented by

Avik Basu