Interpreting Machine Learning Models
I found this short 8-minute video from H2O World about Machine Learning Interpretability (MLI). It's given by Patrick Hall, who leads the development of these capabilities in Driverless AI.
My notes from the video are below:
- ML models no longer have to be opaque black boxes
- Cracking the black box with LIME and Shapley Values
- Lloyd Shapley, whose work underpins Shapley Values, won the Nobel Prize in Economics in 2012
- After a Driverless AI model runs, a dashboard is created
- It shows both the complex engineered features and the original features
- Global Shapley Values are like feature importance, but include both negative and positive contributions (see the first sketch after this list)
- They let you quickly identify the important features in the dataset
- Then go to the Partial Dependence Plots, which show the model's average prediction across different values of a feature (sketched below)
- Row-by-row analysis of each feature can be done to understand interactions and generate reason codes (see the reason-code sketch below)
- Shapley Values are exact for feature contributions, while LIME is a local approximation (see the LIME sketch below)
- This is done via a stacked ensemble model
- The model can be deployed via the Python scoring pipeline (a hedged sketch closes this post)
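Driverless AI's MLI dashboard does all of this automatically, but the same ideas can be tried with open-source tools. Here is a minimal sketch of global Shapley Values as a signed feature-importance measure, using the `shap` package with a scikit-learn model; the dataset and model are my own choices for illustration, not what the video uses:

```python
# Global Shapley Values as signed feature importance, using the
# open-source `shap` package (an illustration, not Driverless AI itself).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_rows, n_features)

# Global importance: mean |contribution| per feature; the signed mean
# shows whether a feature mostly pushes predictions up or down.
importance = np.abs(shap_values).mean(axis=0)
signed_mean = shap_values.mean(axis=0)
for idx in np.argsort(importance)[::-1][:5]:
    print(f"{X.columns[idx]}: |mean| = {importance[idx]:.4f}, "
          f"signed mean = {signed_mean[idx]:+.4f}")
```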
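Partial dependence, the next dashboard view in the notes, can be reproduced with scikit-learn's inspection module. A sketch, continuing from the model above (the `grid_values` key assumes scikit-learn 1.3 or later):

```python
# Partial dependence: the model's average prediction as one feature is
# swept over a grid while the other features keep their observed values.
# Continues from the previous sketch (reuses `model` and `X`).
from sklearn.inspection import partial_dependence

result = partial_dependence(model, X, features=["mean radius"], kind="average")
for grid_value, avg_pred in zip(result["grid_values"][0], result["average"][0]):
    print(f"mean radius = {grid_value:.2f} -> average prediction = {avg_pred:.3f}")
```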
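Per-row reason codes can be sketched by ranking one row's Shapley contributions by magnitude; picking the top five is my own arbitrary cutoff:

```python
# Reason codes for a single row: the features pushing this one prediction
# up or down the most. Continues from the first sketch (reuses
# `np`, `X`, `explainer`, and `shap_values`).
row = 0
contributions = shap_values[row]
base = float(np.atleast_1d(explainer.expected_value)[0])
top = np.argsort(np.abs(contributions))[::-1][:5]

print(f"Reason codes for row {row} (base value {base:+.3f}):")
for idx in top:
    direction = "raises" if contributions[idx] > 0 else "lowers"
    print(f"  {X.columns[idx]} = {X.iloc[row, idx]:.3f} {direction} "
          f"the prediction by {abs(contributions[idx]):.4f}")
```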
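LIME, by contrast, fits an interpretable surrogate locally around a single prediction, so its weights are an approximation rather than exact contributions. A sketch with the open-source `lime` package, again continuing with the same model and data:

```python
# LIME: fit a local linear surrogate around one prediction. Unlike
# Shapley Values, the resulting weights are an approximation.
# Continues from the first sketch (reuses `model` and `X`).
from lime.lime_tabular import LimeTabularExplainer

lime_explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["malignant", "benign"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5
)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.4f}")
```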
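Finally, on deployment: Driverless AI exports the Python scoring pipeline as an experiment-specific package. The module, class, and column names below are hypothetical placeholders to show the shape of the workflow, since the real package name is generated per experiment:

```python
# Hypothetical sketch of calling a downloaded Python scoring pipeline.
# `scoring_experiment`, `Scorer`, and the column names are placeholders,
# not the actual generated API.
import pandas as pd
from scoring_experiment import Scorer  # placeholder module name

scorer = Scorer()
new_rows = pd.DataFrame({"feature_a": [1.2], "feature_b": [0.7]})  # placeholder schema
print(scorer.score_batch(new_rows))  # placeholder method name
```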