Introduction: One of the key challenges of using machine learning (ML) is the lack of explainability, which worsens as model complexity increases, moving from simple supervised learning models up to deep learning and neural networks. At the same time, demands for transparency are growing as the use of ML algorithms becomes more prevalent in law, education, surveillance and finance, to name just a few areas.
Quantitative Input Influence: In 2016, a team of researchers at Carnegie Mellon University (CMU) proposed a method called Quantitative Input Influence (QII) to measure the relative weight of each feature (variable) used in an ML model, providing transparency into which features matter most for the model's output. For example, in the case of a rejected loan application, QII values could show how much weight the applicant's education (or race) carried versus the applicant's income in the decision to reject the loan. QII accounts for correlations among features by randomly adjusting one feature at a time and measuring the expected marginal contribution of the changed feature to the model's output, averaged over the other features used in building the model. A further advantage of QII is that it does not require access to the code inside the 'black box' ML model; the QII values can be computed with access only to the system and to the dataset used to train the model.
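As a rough illustration of the intervention idea (and not the CMU authors' implementation), the sketch below estimates a unary influence score for one feature: it breaks the feature's link with the other inputs by resampling it from its own marginal distribution and measures how often the model's prediction flips. The `model` and `X` names are generic placeholders for any fitted classifier and its input matrix.

```python
import numpy as np

def unary_influence(model, X, feature, n_samples=1000, rng=None):
    """Estimate the influence of one feature by intervening on it.

    Replaces the feature's column with values resampled from its own
    marginal distribution and measures how often the model's predicted
    class changes. `model` is any fitted classifier with a .predict()
    method; `X` is a 2-D numpy array of inputs.
    """
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(X), size=n_samples)
    X_sample = X[idx]
    baseline = model.predict(X_sample)

    # Intervene: draw this feature independently from its marginal
    # distribution, severing its correlation with the other features.
    X_intervened = X_sample.copy()
    X_intervened[:, feature] = rng.choice(X[:, feature], size=n_samples)
    intervened = model.predict(X_intervened)

    # Influence = probability that the intervention flips the prediction.
    return np.mean(baseline != intervened)
```

Ranking features by this score gives a first view of which inputs the model is most sensitive to; the full QII framework extends this to joint (set) interventions and aggregates marginal contributions using Shapley values, as discussed next.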
Shapley Values and Game Theory: QII is built on the principle of Shapley values, which originates in game theory and was developed in the 1950s to calculate the relative contribution of each player to the outcome of a cooperative game, and hence the payoff each player should receive from the total prize money. By the same analogy, in ML models Shapley values can be used to measure the expected marginal contribution of each feature/variable to the output of the model, with the sum of the Shapley values equal to the model's outcome. The Shapley values for features used in an ML model should have the following three properties (the standard formula is given after the list):
- If a feature does not change the performance of a model when it is added to the training data, then it should be given zero value.
- If two different features, when individually added to the training data, always produce exactly the same change in the model's output, then by symmetry they should be given the same value.
- When the overall prediction score is the sum of K separate predictions, the value of a feature should be the sum of its values for each prediction.
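For reference, the classical game-theoretic definition of the Shapley value of feature i averages its marginal contribution over every possible coalition S of the remaining features, where N is the full feature set and v(S) is the payoff (model output) obtained using only the features in S. This is the textbook formula rather than notation taken from the QII paper:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
            \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
            \left[ v(S \cup \{i\}) - v(S) \right]
```

The three properties above (null player, symmetry and additivity), together with efficiency (the values summing to the total payoff), are exactly the axioms that make this attribution scheme unique.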
Using QII for predicting mortgage defaults: As a practical implementation of QII, the Bank of England has published a paper which applied the QII method to an ML model built to predict defaults in a population of UK mortgages. The method measured feature influences (unary and joint) of the ML model in order to assess the key drivers of mortgage defaults. The study also compared the results of the ML model (built using gradient tree boosting, GTB) with a simple, regression-based (Logit) model as a benchmark. The results show that (a sketch of this kind of comparison follows the list):
- For both the ML and the regression model, the year of origination and CLTV are the most significant variables for predicting default, as measured by their Shapley values.
- The unary and Shapley values are the same for the Logit regression model, which does not take correlations into account. However, the Shapley values for the GTB ML model are consistently lower than their unary scores, showing that once the ML model accounts for correlations between features, the contribution of each individual feature decreases.
- When defaults were simulated with both types of models, the ML model showed a wider dispersion than the regression model, illustrating a general shortcoming of ML models: they can produce very different results when applied to different datasets/time periods.
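This is not the Bank of England's code, but a minimal sketch, using the open-source shap library on synthetic placeholder data, of how such a comparison between a GTB model and a Logit benchmark might be set up:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a mortgage-default dataset.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# ML model (gradient tree boosting) and the simple Logit benchmark.
gtb = GradientBoostingClassifier(random_state=0).fit(X, y)
logit = LogisticRegression(max_iter=1000).fit(X, y)

# Shapley-style attributions for each model.
gtb_shap = shap.TreeExplainer(gtb).shap_values(X)
logit_shap = shap.LinearExplainer(logit, X).shap_values(X)

# Rank features by mean absolute Shapley value, a rough proxy for the
# "key drivers" comparison in the paper.
for name, values in [("GTB", gtb_shap), ("Logit", logit_shap)]:
    importance = np.abs(values).mean(axis=0)
    print(name, np.argsort(importance)[::-1])
```

TreeExplainer computes Shapley values efficiently for tree ensembles, while LinearExplainer handles the Logit benchmark; in both cases a feature's overall importance is summarized as its mean absolute attribution.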
Data Shapley: In another related application of Shapley values to ML models, a separate paper has introduced the concept of Data Shapley, i.e. measuring the influence of data sets/points on the output of the model. A useful application of Data Shapley is identifying which data points have high value (more weight in influencing the model's output during training) and which have low value, which is very useful at the model development stage. Computing an exact Data Shapley value would require training and scoring a model on every possible subset of the training data (exponentially many subsets), and doing this for every point in the data set, so in practice it is approximated by Monte Carlo sampling (see the sketch after the list below). Credit for bringing this paper to my attention goes to a stimulating blog by Adrian Colyer, who publishes posts summarizing thought-provoking research papers every week. The original paper reports results for an ML model trained on the UK Biobank dataset to predict breast and skin cancer, which show the following:
- Removing the most valuable data points from the training set, as measured by Data Shapley, and retraining the model leads to a drop in model performance. In the opposite direction, removing the data points with the lowest Data Shapley values improves the model's performance.
- Adding noisy data causes model performance to drop, and these noisy data points receive lower Data Shapley values.
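The following is a minimal sketch (not the authors' TMC-Shapley implementation) of a Monte Carlo estimate of Data Shapley values: sample random permutations of the training points, add the points one at a time, and credit each point with the change in validation performance it causes. The `train_and_score` helper, `baseline_score` and the array arguments are hypothetical placeholders.

```python
import numpy as np

def monte_carlo_data_shapley(X_train, y_train, X_val, y_val,
                             train_and_score, baseline_score=0.5,
                             n_permutations=50, rng=None):
    """Rough Monte Carlo estimate of Data Shapley values.

    `train_and_score(X, y, X_val, y_val)` is a user-supplied helper
    (hypothetical here) that fits a model on (X, y) and returns its
    validation performance; `baseline_score` is the performance with
    no training data (e.g. random guessing). Inputs are numpy arrays.
    """
    rng = np.random.default_rng(rng)
    n = len(X_train)
    values = np.zeros(n)

    for _ in range(n_permutations):
        order = rng.permutation(n)
        prev_score = baseline_score
        for k in range(1, n + 1):
            subset = order[:k]
            score = train_and_score(X_train[subset], y_train[subset],
                                    X_val, y_val)
            # Marginal contribution of the k-th point in this permutation.
            values[order[k - 1]] += score - prev_score
            prev_score = score

    # Average marginal contribution over the sampled permutations.
    return values / n_permutations
```

The paper's TMC-Shapley algorithm additionally truncates each permutation once further points stop changing the validation score appreciably, which is what keeps the estimate affordable on real datasets.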
Summary: Applying QII and Data Shapley to machine learning models are ways of explaining which features and which data points, respectively, are most essential to a model's predictions. Adopting these approaches would go a long way towards addressing the growing concern over the lack of transparency and explainability of ML models, a concern that needs to be tackled in order to increase trust in these models and their reliability.