survex: Explainable Machine Learning in Survival Analysis


Overview

Survival analysis deals with time-to-event prediction. Aside from well-understood models such as the Cox Proportional Hazards (CPH) model, many more complex models have emerged recently, but most of them lack interpretability. Because their predictions are functional, taking the form of a survival function or a cumulative hazard function, standard model-agnostic explanations cannot be applied directly.

The survex package provides model-agnostic explanations for machine learning survival models. It is based on the DALEX package. If you are unfamiliar with explainable machine learning, consider referring to the Explanatory Model Analysis book; most of the methods included in survex extend those described in EMA and implemented in DALEX to models with functional output.

The main explain() function uses a model and data to create a standardized explainer object, which then serves as an interface for calculating predictions (see the Simple demo below). We automate creating explainers for models from the following packages: mlr3proba, censored, ranger, randomForestSRC, and survival. Raise an Issue on GitHub if you find models from other packages that we could incorporate into the explain() interface.

Note that an explainer can be created for any survival model with the explain_survival() function by passing the model, data, y, and predict_survival_function arguments.
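
For example, a Cox model can be wrapped manually. A minimal sketch, assuming predict_survival_function is called with (model, newdata, times) and should return a matrix of survival probabilities with one row per observation and one column per time point; the wrapper and the label are illustrative, not part of survex:

library("survex")
library("survival")

# fit a Cox model; model = TRUE and x = TRUE keep the data needed by survfit()
cph <- coxph(Surv(time, status) ~ ., data = veteran, model = TRUE, x = TRUE)

# create an explainer by supplying the prediction function explicitly
manual_explainer <- explain_survival(
    model = cph,
    data = veteran[, -c(3, 4)],
    y = Surv(veteran$time, veteran$status),
    predict_survival_function = function(model, newdata, times) {
        # survival probabilities for each row of newdata at the requested times
        t(summary(survfit(model, newdata = newdata), times = times, extend = TRUE)$surv)
    },
    label = "coxph (manual)"
)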

Installation

The package is available on CRAN:

install.packages("survex")

The latest development version can be installed from GitHub using devtools::install_github():

devtools::install_github("https://github.com/ModelOriented/survex")

Simple demo

library("survex")
library("survival")
library("ranger")

# create a model
model <- ranger(Surv(time, status) ~ ., data = veteran)

# create an explainer
explainer <- explain(model, 
                     data = veteran[, -c(3, 4)],
                     y = Surv(veteran$time, veteran$status))

# evaluate the model
model_performance(explainer)

# visualize permutation-based feature importance
plot(model_parts(explainer))

# explain one prediction with SurvSHAP(t)
plot(predict_parts(explainer, veteran[1, -c(3, 4)]))
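
The explainer also acts as a unified prediction interface. A minimal sketch continuing the demo above; the output_type and times argument names are assumptions based on the package documentation:

# survival function values for the first two observations at selected time points
predict(explainer, veteran[1:2, -c(3, 4)], output_type = "survival", times = c(100, 200))

# risk scores for the same observations
predict(explainer, veteran[1:2, -c(3, 4)], output_type = "risk")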

Functionalities and roadmap

Existing functionalities:
- [x] unified prediction interface using the explainer object - predict()
- [x] calculation of performance metrics (Brier Score, Time-dependent C/D AUC, metrics from mlr3proba) - model_performance()
- [x] calculation of feature importance (Permutation Feature Importance - PFI) - model_parts()
- [x] calculation of partial dependence (Partial Dependence Profiles - PDP, Accumulated Local Effects - ALE) - model_profile()
- [x] calculation of 2-dimensional partial dependence (2D PDP, 2D ALE) - model_profile_2d()
- [x] calculation of local feature attributions (SurvSHAP(t), SurvLIME) - predict_parts()
- [x] calculation of local ceteris paribus explanations (Ceteris Paribus profiles - CP / Individual Conditional Expectations - ICE) - predict_profile()
- [x] calculation of global feature attributions using SurvSHAP(t) - model_survshap()
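
The functions not shown in the demo follow the same pattern, taking the explainer as the first argument. A minimal sketch continuing the demo above; the variables and new_observation arguments are assumptions based on the package documentation:

# global explanations: partial dependence profiles for selected variables
plot(model_profile(explainer, variables = c("karno", "age")))

# 2-dimensional partial dependence for a pair of variables
plot(model_profile_2d(explainer, variables = list(c("karno", "age"))))

# local explanations: ceteris paribus profiles for a single observation
plot(predict_profile(explainer, veteran[1, -c(3, 4)]))

# global SurvSHAP(t) aggregated over a set of observations
plot(model_survshap(explainer, veteran[1:20, -c(3, 4)]))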

Currently in development:
- [ ] …

Future plans:
- [ ] … (raise an Issue on GitHub if you have any suggestions)

Usage

survex usage cheatsheet

Citation

If you use survex, please cite our preprint:

M. Spytek, M. Krzyziński, S. H. Langbein, H. Baniecki, M. N. Wright, P. Biecek. survex: an R package for explaining machine learning survival models. arXiv preprint arXiv:2308.16113, 2023.

@article{spytek2023survex,
    title   = {{survex: an R package for explaining machine learning survival models}},
    author  = {Mikołaj Spytek and Mateusz Krzyziński and Sophie Hanna Langbein and
               Hubert Baniecki and Marvin N. Wright and Przemysław Biecek},
    journal = {arXiv preprint arXiv:2308.16113},
    year    = {2023}
}

Applications of survex