Why Your AI Model Lies: Debugging Healthcare Predictions with SHAP and Python

30-Minute Talk

Machine learning models are increasingly used in healthcare, but what happens when they fail silently?

In this talk, we walk through a real Python-based cardiovascular disease (CVD) risk prediction system and uncover how models can produce misleading and counterintuitive results when faced with unexpected or invalid inputs. Using tools like scikit-learn and SHAP, we demonstrate how to interpret model predictions, identify hidden issues, and understand why explanations are not always straightforward—especially in multi-class settings.
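To give a flavor of what a SHAP explanation represents: for a plain linear model, the SHAP value of each feature reduces exactly to the model weight times the feature's deviation from its dataset mean, and the values sum to the gap between the model's prediction and its average prediction. A minimal sketch with NumPy (the features and weights below are invented for illustration, not taken from the talk's CVD system):

```python
import numpy as np

# Invented toy data: three CVD-style features (age, systolic bp, cholesterol).
X = np.array([[50.0, 120.0, 180.0],
              [65.0, 150.0, 240.0],
              [40.0, 110.0, 160.0]])
w = np.array([0.03, 0.02, 0.01])  # hypothetical linear-model weights
b = -5.0                          # hypothetical intercept

def f(x):
    """A plain linear model f(x) = w @ x + b."""
    return w @ x + b

def linear_shap(x):
    """For a linear model, the exact SHAP value of feature i is
    w[i] * (x[i] - E[X[:, i]])."""
    return w * (x - X.mean(axis=0))

x = X[1]
phi = linear_shap(x)
# The 'efficiency' property: SHAP values sum to f(x) - E[f(X)].
assert np.isclose(phi.sum(), f(x) - f(X.mean(axis=0)))
```

For tree ensembles or multi-class classifiers, the same additivity holds per class, which is part of why multi-class explanations can be harder to read than this linear case.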

You’ll learn how to:

  • Build and interpret ML models in Python
  • Use SHAP to explain predictions
  • Detect unreliable or out-of-distribution inputs
  • Improve trust in real-world AI systems
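As one hedged sketch of the out-of-distribution point above: a crude but useful first check is to flag any input whose features fall outside the range seen in training (the data and margin below are invented; production systems would typically use density- or distance-based detectors instead):

```python
import numpy as np

# Toy training data: (age, systolic bp, cholesterol); values invented.
X_train = np.array([[50.0, 120.0, 180.0],
                    [65.0, 150.0, 240.0],
                    [40.0, 110.0, 160.0]])

lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def is_out_of_distribution(x, margin=0.1):
    """Flag inputs outside the training range, padded by a relative margin.
    A simple range check -- models asked to extrapolate beyond this
    range can fail silently."""
    span = hi - lo
    return bool(np.any((x < lo - margin * span) | (x > hi + margin * span)))

print(is_out_of_distribution(np.array([55.0, 130.0, 200.0])))   # within range
print(is_out_of_distribution(np.array([200.0, 400.0, 900.0])))  # clearly not
```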

The session includes a live demo of the prediction system.

This session is beginner-friendly and ideal for Python developers interested in machine learning, data science, or building reliable AI systems.

Presented by