Plot variable importance

# S3 method for variable_importance
plot(x, title = "model", max_char = 40,
  caption = NULL, font_size = 11, point_size = 3, print = TRUE, ...)

Arguments

x

A data frame from get_variable_importance

title

Either "model", "none", or a string to be used as the plot title. "model" puts the name of the best-performing model, on which variable importances are based, in the title.

max_char

Maximum length of variable names to leave untruncated. Default = 40; use Inf to prevent truncation. Variable names longer than this are truncated, keeping the beginning and end of each name, bridged by " ... ".

caption

Plot caption

font_size

Relative size for all fonts, default = 11

point_size

Size of dots, default = 3

print

Print the plot?

...

Unused

Value

A ggplot object, invisibly.

Examples

machine_learn(pima_diabetes[1:50, ], patient_id, outcome = diabetes, tune = FALSE) %>%
  get_variable_importance() %>%
  plot()
#> Training new data prep recipe...
#> Variable(s) ignored in prep_data won't be used to tune models: patient_id
#> 
#> diabetes looks categorical, so training classification algorithms.
#> 
#> After data processing, models are being trained on 12 features with 50 observations.
#> Based on n_folds = 5 and hyperparameter settings, the following number of models will be trained: 5 rf's, 5 xgb's, and 50 glm's
#> Training at fixed values: Random Forest
#> Training at fixed values: eXtreme Gradient Boosting
#> Training at fixed values: glmnet
#> 
#> *** Models successfully trained. The model object contains the training data minus ignored ID columns. ***
#> *** If there was PHI in training data, normal PHI protocols apply to the model object. ***
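
The arguments above can also be set explicitly. A minimal sketch, assuming `m` is a model list from machine_learn (as in the example above), showing non-default values:

```r
# Assumes `m` was returned by machine_learn(), as in the example above
vi <- get_variable_importance(m)

# Customize the plot: suppress the model-name title, add a caption,
# truncate long variable names, and return the ggplot without printing it
p <- plot(vi,
          title = "none",
          caption = "Importance from the best cross-validated model",
          max_char = 20,
          font_size = 14,
          point_size = 4,
          print = FALSE)

# Because the return value is a ggplot object, it can be modified further
p + ggplot2::theme_minimal()
```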