Interpreting Random Forest Classification Models Using a Feature Contribution Method
Publication date
2014-02-18
Peer-Reviewed
Yes
Open Access status
closedAccess
Abstract
Model interpretation is one of the key aspects of the model evaluation process. Explaining the relationship between model variables and outputs is relatively easy for statistical models, such as linear regression, thanks to the availability of model parameters and their statistical significance. For “black box” models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows the influence of each variable on the model prediction for an individual instance to be determined. By analysing feature contributions for a training dataset, the most significant variables can be determined, and their typical contributions towards predictions made for individual classes, i.e., class-specific feature contribution “patterns”, can be discovered. These patterns represent the standard behaviour of the model and allow for an additional assessment of the model's reliability on new data. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of the results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models.
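The abstract describes the method only at a high level. As a rough illustration of the general idea (a treeinterpreter-style sketch, not necessarily the authors' exact algorithm), the Python snippet below walks each tree's decision path and credits the change in class probabilities at every split to the splitting feature, so that a forest prediction decomposes into a bias term plus per-feature contributions. The scikit-learn usage and the Iris example data are assumptions added for illustration.

```python
# Sketch: per-instance feature contributions for a random forest classifier,
# assuming a treeinterpreter-style decomposition (not the authors' exact code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def tree_contributions(tree, x):
    """Bias and per-feature contributions of one fitted tree for a single instance x."""
    t = tree.tree_
    # Class distribution at each node, normalised to probabilities.
    values = t.value[:, 0, :]
    probs = values / values.sum(axis=1, keepdims=True)
    contrib = np.zeros((x.shape[0], probs.shape[1]))
    node = 0
    while t.children_left[node] != -1:  # follow the decision path until a leaf
        feature = t.feature[node]
        child = (t.children_left[node] if x[feature] <= t.threshold[node]
                 else t.children_right[node])
        # Credit the change in class probabilities at this split to the split feature.
        contrib[feature] += probs[child] - probs[node]
        node = child
    return probs[0], contrib  # root distribution (bias), contribution matrix

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]
biases, contribs = zip(*(tree_contributions(est, x) for est in forest.estimators_))
bias = np.mean(biases, axis=0)
contributions = np.mean(contribs, axis=0)  # shape: (n_features, n_classes)

# The decomposition telescopes along each path, so bias + summed contributions
# reproduce the forest's predicted class probabilities for this instance.
print(np.allclose(bias + contributions.sum(axis=0),
                  forest.predict_proba(x.reshape(1, -1))[0]))
```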
Version
No full-text in the repository
Citation
Palczewska A, Palczewski J, Marchese-Robinson RM et al (2014) Interpreting Random Forest Classification Models Using a Feature Contribution Method. In: Bouabana-Tebibel T and Rubin SH (Eds.) Integration of Reusable Systems. Volume 263 of the series Advances in Intelligent Systems and Computing: 193-218.
Link to Version of Record
https://doi.org/10.1007/978-3-319-04717-1_9
Type
Book chapter