
KernelShap

pnpxai.explainers.kernel_shap

KernelShap

Bases: Explainer

KernelSHAP explainer.

Supported Modules: Linear, Convolution, LSTM, RNN, Attention

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | The PyTorch model for which attribution is to be computed. | *required* |
| `n_samples` | `int` | The number of perturbed samples used to estimate attributions. | `25` |
| `baseline_fn` | `Union[BaselineMethodOrFunction, Tuple[BaselineMethodOrFunction]]` | The baseline function, accepting the attribution input and returning the corresponding baseline. | `'zeros'` |
| `feature_mask_fn` | `Union[FeatureMaskMethodOrFunction, Tuple[FeatureMaskMethodOrFunction]]` | The feature mask function, accepting the attribution input and returning the corresponding feature mask. | `'felzenszwalb'` |
| `mask_token_id` | `Optional[int]` | The token id of the mask token, used for modalities that rely on tokenization. | `None` |
| `forward_arg_extractor` | `Optional[ForwardArgumentExtractor]` | A function that extracts, from the input batch(es), the forward arguments to which attribution scores are assigned. | `None` |
| `additional_forward_arg_extractor` | `Optional[ForwardArgumentExtractor]` | A secondary function that extracts additional forward arguments from the input batch(es). | `None` |
| `**kwargs` | | Keyword arguments forwarded to the base `Explainer` implementation. | *required* |
Reference

Scott M. Lundberg, Su-In Lee. A Unified Approach to Interpreting Model Predictions. NeurIPS 2017.
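The weighted-regression view of SHAP from Lundberg & Lee can be sketched in plain NumPy. This is a conceptual illustration only, not pnpxai's implementation; the function `kernel_shap` and its arguments are hypothetical names:

```python
import numpy as np
from math import comb

def kernel_shap(f, x, baseline, n_samples=2048, seed=0):
    """Estimate SHAP values of f at x via KernelSHAP's weighted linear regression."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Draw random coalitions; drop the empty and full coalitions, whose infinite
    # kernel weight is enforced exactly through the intercept and the
    # efficiency constraint below.
    Z = rng.integers(0, 2, size=(n_samples, d))
    sizes = Z.sum(axis=1)
    Z = Z[(sizes > 0) & (sizes < d)]
    sizes = Z.sum(axis=1)
    # Shapley kernel: (d - 1) / (C(d, s) * s * (d - s)) for coalition size s.
    w = (d - 1) / np.array([comb(d, s) * s * (d - s) for s in sizes])
    # Coalition members keep their value from x; the rest take the baseline.
    y = np.array([f(z * x + (1 - z) * baseline) for z in Z])
    y0, y1 = f(baseline), f(x)
    # Enforce efficiency, sum(phi) = f(x) - f(baseline), by eliminating
    # the last coefficient, then solve the weighted least-squares problem.
    A = Z[:, :-1] - Z[:, [-1]]
    t = y - y0 - Z[:, -1] * (y1 - y0)
    phi = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * t))
    return np.append(phi, (y1 - y0) - phi.sum())

# For a linear model, KernelSHAP recovers the exact SHAP values w_i * x_i.
w_lin = np.array([1.0, -2.0, 0.5, 3.0])
x = np.array([1.0, 2.0, -1.0, 0.5])
phi = kernel_shap(lambda v: float(v @ w_lin), x, np.zeros(4))
```

The `n_samples` parameter above plays the same role as the explainer's `n_samples`: more coalitions reduce the variance of the regression estimate.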

- `SUPPORTED_MODULES = [Linear, Convolution, LSTM, RNN, Attention]` *(class attribute)*
- `n_samples = n_samples` *(instance attribute)*
- `baseline_fn = baseline_fn` *(instance attribute)*
- `feature_mask_fn = feature_mask_fn` *(instance attribute)*
- `mask_token_id = mask_token_id` *(instance attribute)*
- `EXPLANATION_TYPE: ExplanationType = 'attribution'` *(class attribute)*
- `TUNABLES = {}` *(class attribute)*
- `model = model.eval()` *(instance attribute)*
- `forward_arg_extractor = forward_arg_extractor` *(instance attribute)*
- `additional_forward_arg_extractor = additional_forward_arg_extractor` *(instance attribute)*
- `device` *(property)*
```python
__init__(
    model: Module,
    n_samples: int = 25,
    baseline_fn: Union[BaselineMethodOrFunction, Tuple[BaselineMethodOrFunction]] = 'zeros',
    feature_mask_fn: Union[FeatureMaskMethodOrFunction, Tuple[FeatureMaskMethodOrFunction]] = 'felzenszwalb',
    forward_arg_extractor: Optional[ForwardArgumentExtractor] = None,
    additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None,
    mask_token_id: Optional[int] = None,
) -> None
```

```python
attribute(inputs: Tensor, targets: Optional[Tensor] = None) -> Union[Tensor, Tuple[Tensor]]
```

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `inputs` | `Tensor` | The input data. | *required* |
| `targets` | `Optional[Tensor]` | The target labels for the inputs. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Union[Tensor, Tuple[Tensor]]` | The computed attribution scores. |
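Because perturbing every input element independently is infeasible for images, the feature mask produced by `feature_mask_fn` (e.g. `'felzenszwalb'` superpixels) groups elements so that each coalition toggles whole groups between the input and the baseline. A minimal NumPy sketch of that masking step, with hypothetical helper names rather than pnpxai's internal code:

```python
import numpy as np

def apply_coalition(x, feature_mask, coalition, baseline):
    """Perturbed input: groups in the coalition keep x, all others take the baseline."""
    keep = np.isin(feature_mask, coalition)  # boolean map of retained elements
    return np.where(keep, x, baseline)

x = np.arange(9, dtype=float).reshape(3, 3)  # toy "image"
feature_mask = np.array([[0, 0, 1],          # three groups of elements,
                         [0, 2, 1],          # analogous to superpixel ids
                         [2, 2, 1]])
# Keep groups 0 and 1; group 2 is replaced by the baseline value.
perturbed = apply_coalition(x, feature_mask, coalition=[0, 1], baseline=0.0)
```

The model is then evaluated on such perturbed inputs, and each group's attribution is shared by all elements carrying that group's id.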

```python
get_tunables() -> Dict[str, Tuple[type, Dict]]
```

Provides tunable parameters for the optimizer.

Tunable parameters:

- `n_samples` (`int`): a value selected from `range(10, 50, 10)`
- `baseline_fn` (callable): a `BaselineFunction` that selects suitable values according to the modality
- `feature_mask_fn` (callable): a `FeatureMaskFunction` that selects suitable values according to the modality
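For reference, `range(10, 50, 10)` yields the four candidate values below. The dictionary is only a hypothetical illustration of the documented `Dict[str, Tuple[type, Dict]]` return shape; the concrete objects pnpxai returns (including the baseline and feature-mask selectors) are not reproduced here:

```python
# Hypothetical sketch of one Dict[str, Tuple[type, Dict]] search-space entry.
tunables = {
    "n_samples": (int, {"choices": list(range(10, 50, 10))}),
}

candidates = tunables["n_samples"][1]["choices"]  # [10, 20, 30, 40]
```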

`__repr__()`
`copy()`
`set_kwargs(**kwargs)`