Technical report | A Power Series Expansion of Feature Importance

Executive Summary

Statistics-based machine learning and artificial intelligence can enhance the capabilities of complex and critical systems, but they can also introduce new risks: statistical models may fail to generalize to novel data or situations and cause the overall system to malfunction. Failure to generalize can have severe reputational, financial, or safety implications. Cross-validation is the de facto standard for assessing a model's generality and performance. However, as we argued in an earlier technical report (DST-Group-TR-3576), its limitations are often underappreciated: it does not guard against algorithmic bias, drift in the sampling distribution, adversarial inputs, or a number of other issues.

A more fundamental understanding of statistical models can promote greater trust from users and improve model robustness to novel data and situations. We have developed a power series formulation of feature importance that explicitly identifies individual and interaction-type contributions. The decomposition quantifies the impact of each feature's information and reveals whether features are complementary, independent, or redundant. Our method complements alternative approaches, such as clustering and subset selection, and provides a unique measure of feature importance.
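The exact expansion is developed in the body of the report. As a rough illustration only, the Python sketch below assumes the decomposition can be read as an inclusion-exclusion (Möbius) transform of a subset-level value function v(S), the performance achieved when only the features in S are available; the function names and toy values are ours, not the report's.

```python
# Minimal sketch (not the report's exact formulation): treat importance as a
# set function v(S) -- the performance obtained when only the features in S
# are available -- and recover individual and interaction terms by
# inclusion-exclusion over feature subsets.
from itertools import combinations

def subsets(features):
    """Yield every subset of an iterable of feature names as a frozenset."""
    feats = list(features)
    for r in range(len(feats) + 1):
        yield from (frozenset(c) for c in combinations(feats, r))

def interaction_terms(v, features):
    """Decompose the set function v into per-subset contributions.

    m(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T), so that
    v(S) = sum over T subset of S of m(T).  Singleton terms m({i}) are
    individual contributions; larger subsets capture interactions.
    """
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(features)}

# Toy value function: feature "a" contributes 0.3 on its own, "b" contributes
# 0.2, and they add a further 0.1 only when present together (an interaction).
def v(S):
    score = 0.0
    if "a" in S:
        score += 0.3
    if "b" in S:
        score += 0.2
    if {"a", "b"} <= S:
        score += 0.1
    return score

terms = interaction_terms(v, ["a", "b"])
# -> m({a}) = 0.3, m({b}) = 0.2, m({a, b}) = 0.1, m({}) = 0.0
```

In this toy example the pairwise term isolates the extra 0.1 of performance that is only available when both features are present, which is exactly the complementary-versus-redundant distinction described above.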

Measures of feature importance should be able to accommodate different contexts and topics of interest. Removing a feature can affect a model in a number of ways: it can change the model's structure, performance, memory footprint, or computation time. Our framework handles these attributes by substituting an appropriate scalar metric into its calculation. Likewise, feature importance can refer to the expected importance of a feature prior to data collection, or to the impact that observing a particular feature value actually had on the model output. Again, our framework naturally handles both situations by changing the sampling distribution used to calculate the loss or fidelity. This flexibility allows meaningful comparisons between users who may have different contexts or aims for their feature importance calculations.
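To illustrate this flexibility (the helper and argument names below are ours, not the report's API), a subset value function can be parameterised by both the scalar metric and the background sampling distribution, so that swapping either one changes what "importance" measures without changing the decomposition built on top of it.

```python
# Illustrative only: a value function v(S) parameterised by the metric and
# the sampling distribution used to fill in the unknown features.
import numpy as np

def make_value_function(predict, metric, background, x):
    """Return v(S): the metric of the model output when only the features in S
    are known for the instance x.

    Features outside S are filled in with draws from `background`; choosing a
    different background distribution, or a different scalar metric, changes
    the feature-importance question being asked.
    """
    def v(S):
        X = background.copy()
        for j in S:
            X[:, j] = x[j]          # pin the known feature values
        return metric(predict(X))
    return v

# Toy example: a linear "model" scored by the mean prediction.
rng = np.random.default_rng(0)
background = rng.normal(size=(1000, 2))          # sampling distribution
predict = lambda X: X[:, 0] + 2.0 * X[:, 1]
v = make_value_function(predict, np.mean, background, x=np.array([1.0, -0.5]))
print(v(frozenset()), v(frozenset({0})), v(frozenset({0, 1})))
# Swapping `metric` (e.g. a loss or a computation-time estimate) or
# `background` (e.g. a distribution conditioned on the observed values)
# asks a different question with the same machinery.
```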

Our final contribution is to show that the power series can be mapped onto the well-known Shapley values. Shapley values provide a method of fairly distributing the total payout among players in a coalitional game, and are the only attribution scheme that satisfies a set of intuitively desirable properties (in machine learning these properties are local accuracy, missingness, and consistency, described elsewhere). However, Shapley values are ambiguous about how features interact with each other. Equal Shapley values for two features could indicate that the features are completely dependent on each other, as in the exclusive-OR problem, or that the features have the same impact on the model but are completely independent. Our method provides a fine-grained view of how features interact, resolves these kinds of ambiguities, and can be used in situations where Shapley values may be inappropriate. Our approach also motivates efficient calculation schemes that reduce the number of computations required from exponential to polynomial, allowing feature importance calculations to scale to large numbers of features. The power series formulation is versatile and theoretically grounded; it extends Defence's ability to interpret data and will support the effective generation and maintenance of sophisticated statistical models.
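To make the ambiguity concrete, the sketch below computes exact Shapley values by subset enumeration for an exclusive-OR style value function: both features receive identical Shapley values even though neither is informative on its own, whereas the subset decomposition sketched earlier assigns all of the value to the pairwise interaction term. This is an illustrative brute-force calculation, not the report's efficient scheme.

```python
# Exact Shapley values by enumerating subsets (exponential cost), used here
# only to illustrate the ambiguity discussed above.
from itertools import combinations
from math import factorial

def shapley_values(v, features):
    """Shapley value of each feature for the set function v."""
    n = len(features)
    values = {}
    for i in features:
        others = [f for f in features if f != i]
        phi = 0.0
        for r in range(len(others) + 1):
            for combo in combinations(others, r):
                S = frozenset(combo)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (v(S | {i}) - v(S))   # weighted marginal contribution
        values[i] = phi
    return values

# XOR-style value function: each feature alone is worthless; together they
# explain everything.
def v_xor(S):
    return 1.0 if {"a", "b"} <= S else 0.0

print(shapley_values(v_xor, ["a", "b"]))   # {'a': 0.5, 'b': 0.5}
# The interaction decomposition instead gives m({a}) = m({b}) = 0 and
# m({a, b}) = 1, making the joint dependence explicit.
```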

Key information

Author

Thomas L. Keevers

Publication number

DST-Group-TR-3743

Publication type

Technical report

Publication date

July 2020

Classification

Unclassified - public release

Keywords

Machine learning, Interpretability, Statistics