SHAPInsight

class baybe.insights.shap.SHAPInsight[source]

Bases: object

Class for SHAP-based feature importance insights.

Also supports LIME and MAPLE explainers via the shap package.

Public methods

__init__(explainer, background_data)

Method generated by attrs for class SHAPInsight.

explain([data])

Compute a Shapley explanation for a given data set.

from_campaign(campaign[, explainer_cls, ...])

Create a SHAP insight from a campaign.

from_recommender(recommender, searchspace, ...)

Create a SHAP insight from a recommender.

from_surrogate(surrogate, data[, ...])

Create a SHAP insight from a surrogate.

plot(plot_type[, data, show, explanation_index])

Plot the Shapley values using the provided plot type.

Public attributes and properties

explainer

The explainer instance.

background_data

The background data set used by the explainer.

uses_shap_explainer

Indicates whether a SHAP explainer is used, as opposed to a non-SHAP explainer such as MAPLE or LIME.

__init__(explainer: Explainer, background_data: DataFrame)

Method generated by attrs for class SHAPInsight.

For details on the parameters, see Public attributes and properties.

explain(data: DataFrame | None = None, /)[source]

Compute a Shapley explanation for a given data set.

Parameters:

data (Optional[DataFrame]) – The dataframe for which the Shapley values are to be computed. By default, the background data set of the explainer is used.

Return type:

Explanation

Returns:

The computed Shapley explanation.

Raises:

ValueError – If the columns of the given dataframe cannot be aligned with the columns of the explainer background dataframe.
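The alignment requirement can be illustrated with a small, hypothetical helper (a sketch of the documented behavior, not the library's actual implementation): the incoming dataframe must contain exactly the background columns, though possibly in a different order.

```python
import pandas as pd

def align_to_background(data: pd.DataFrame, background: pd.DataFrame) -> pd.DataFrame:
    """Reorder the columns of `data` to match `background`, or raise."""
    if set(data.columns) != set(background.columns):
        raise ValueError(
            "The columns of the given dataframe cannot be aligned "
            "with the columns of the explainer background dataframe."
        )
    return data[background.columns]

background = pd.DataFrame({"temperature": [10.0, 20.0], "concentration": [0.1, 0.2]})

# A permuted column order aligns fine:
aligned = align_to_background(
    pd.DataFrame({"concentration": [0.3], "temperature": [30.0]}), background
)

# A dataframe with missing (or extra) columns raises ValueError:
try:
    align_to_background(pd.DataFrame({"temperature": [30.0]}), background)
except ValueError:
    pass
```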

classmethod from_campaign(campaign: Campaign, explainer_cls: type[Explainer] | str = 'KernelExplainer', *, use_comp_rep: bool = False)[source]

Create a SHAP insight from a campaign.

Uses the measurements of the campaign as background data.

Parameters:

  • campaign (Campaign) – The campaign whose measurements are used as the explainer background data.

  • explainer_cls (type[Explainer] | str) – The SHAP explainer class (or the name of such a class) to be used.

  • use_comp_rep (bool) – Whether to analyze the computational instead of the experimental representation.

Return type:

SHAPInsight

Returns:

The SHAP insight object.

Raises:

ValueError – If the campaign does not contain any measurements.
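The documented precondition amounts to a simple guard clause, sketched here with a hypothetical helper (not the actual source):

```python
import pandas as pd

def require_measurements(measurements: pd.DataFrame) -> pd.DataFrame:
    """Guard clause mirroring the documented ValueError for empty campaigns."""
    if measurements.empty:
        raise ValueError("The campaign does not contain any measurements.")
    return measurements

# Non-empty measurement data passes through unchanged:
ok = require_measurements(pd.DataFrame({"yield": [0.7, 0.8]}))

# An empty dataframe triggers the error:
try:
    require_measurements(pd.DataFrame())
except ValueError:
    pass
```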

classmethod from_recommender(recommender: RecommenderProtocol, searchspace: SearchSpace, objective: Objective, measurements: DataFrame, explainer_cls: type[Explainer] | str = 'KernelExplainer', *, use_comp_rep: bool = False)[source]

Create a SHAP insight from a recommender.

Uses the provided measurements to train the surrogate and as background data for the explainer.

Parameters:

  • recommender (RecommenderProtocol) – The recommender providing the surrogate model.

  • searchspace (SearchSpace) – The search space in which the recommendations are made.

  • objective (Objective) – The objective of the optimization.

  • measurements (DataFrame) – The measurements used for training the surrogate and as background data.

  • explainer_cls (type[Explainer] | str) – The SHAP explainer class (or the name of such a class) to be used.

  • use_comp_rep (bool) – Whether to analyze the computational instead of the experimental representation.

Return type:

SHAPInsight

Returns:

The SHAP insight object.

Raises:

TypeError – If the recommender has no get_surrogate method.
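The TypeError reflects a duck-typed capability check: the recommender must expose a surrogate to be explained. A minimal sketch of such a check, with hypothetical stand-in classes (not the library's actual types):

```python
def get_surrogate_or_raise(recommender):
    """Duck-typed check mirroring the documented TypeError."""
    if not hasattr(recommender, "get_surrogate"):
        raise TypeError("The recommender has no get_surrogate method.")
    return recommender.get_surrogate()

class WithSurrogate:
    def get_surrogate(self):
        return "surrogate"

class WithoutSurrogate:
    pass

surrogate = get_surrogate_or_raise(WithSurrogate())

# A recommender lacking the method is rejected:
try:
    get_surrogate_or_raise(WithoutSurrogate())
except TypeError:
    pass
```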

classmethod from_surrogate(surrogate: SurrogateProtocol, data: DataFrame, explainer_cls: type[Explainer] | str = 'KernelExplainer', *, use_comp_rep: bool = False)[source]

Create a SHAP insight from a surrogate.

For details, see make_explainer_for_surrogate().

plot(plot_type: Literal['bar', 'beeswarm', 'force', 'heatmap', 'scatter'], data: DataFrame | None = None, /, *, show: bool = True, explanation_index: int | None = None, **kwargs: Any)[source]

Plot the Shapley values using the provided plot type.

Parameters:
  • plot_type (Literal['bar', 'beeswarm', 'force', 'heatmap', 'scatter']) – The type of plot to be created.

  • data (Optional[DataFrame]) – See explain().

  • show (bool) – Flag controlling whether the plot is rendered (i.e. displayed on screen).

  • explanation_index (Optional[int]) – Positional index of the data point that should be explained. Only relevant for plot types that can only handle a single data point.

  • **kwargs (Any) – Additional keyword arguments passed to the plot function.

Return type:

Axes

Returns:

The plot object.

Raises:

ValueError – If the provided plot type is not supported.
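The supported plot types are exactly those in the Literal annotation above, so the ValueError can be sketched as a simple membership check (a hypothetical helper, not the actual source; treating "force" as the single-data-point type is an assumption based on how force plots are typically used):

```python
VALID_PLOT_TYPES = {"bar", "beeswarm", "force", "heatmap", "scatter"}
SINGLE_POINT_PLOTS = {"force"}  # assumption: force plots explain one data point

def validate_plot_type(plot_type: str) -> str:
    """Reject anything outside the documented set of plot types."""
    if plot_type not in VALID_PLOT_TYPES:
        raise ValueError(f"Unsupported plot type: {plot_type!r}")
    return plot_type

checked = validate_plot_type("bar")

# An unknown type raises ValueError:
try:
    validate_plot_type("waterfall")
except ValueError:
    pass
```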

background_data: DataFrame

The background data set used by the explainer.

explainer: Explainer

The explainer instance.

property uses_shap_explainer: bool

Indicates whether a SHAP explainer is used, as opposed to a non-SHAP explainer such as MAPLE or LIME.
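A plausible sketch of how such a property could be implemented (an assumption, not the actual source): the shap package ships its wrappers for non-SHAP algorithms such as LIME and MAPLE under the shap.explainers.other submodule, so the explainer's module path distinguishes the two cases. The classes below are fakes that only mimic the relevant module paths.

```python
def is_shap_explainer(explainer: object) -> bool:
    """Return False for wrappers of non-SHAP algorithms (e.g. LIME, MAPLE)."""
    return not type(explainer).__module__.startswith("shap.explainers.other")

class FakeMaple:  # stand-in mimicking a shap-wrapped non-SHAP explainer
    pass

FakeMaple.__module__ = "shap.explainers.other._maple"

class FakeKernel:  # locally defined, so not under shap.explainers.other
    pass
```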