Lime

`pnpxai.explainers.lime.Lime`

Bases: `Explainer`

Lime explainer.

Supported Modules: `Linear`, `Convolution`, `LSTM`, `RNN`, `Attention`
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `Module` | The PyTorch model for which attribution is to be computed. | required |
`n_samples` | `int` | Number of samples. | `25` |
`baseline_fn` | `Union[BaselineMethodOrFunction, Tuple[BaselineMethodOrFunction]]` | The baseline function, accepting the attribution input and returning the baseline accordingly. | `'zeros'` |
`feature_mask_fn` | `Union[FeatureMaskMethodOrFunction, Tuple[FeatureMaskMethodOrFunction]]` | The feature mask function, accepting the attribution input and returning the feature mask accordingly. | `'felzenszwalb'` |
`perturb_fn` | `Optional[Callable[[Tensor], Tensor]]` | The perturbation function, accepting the attribution input and returning the perturbed value. | `None` |
`forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]]` | A function that extracts the forward arguments from the input batch(es) to which the attribution scores are assigned. | `None` |
`additional_forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Tuple[Tensor]]]` | A secondary function that extracts additional forward arguments from the input batch(es). | `None` |
`**kwargs` | | Keyword arguments that are forwarded to the base implementation of the `Explainer`. | required |
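Per the types above, `baseline_fn` and `feature_mask_fn` accept either a method name string or a callable that maps the attribution input to a baseline or feature mask. Below is a minimal sketch of such callables, assuming image inputs of shape `(N, C, H, W)`; the shapes and the patch-grid masking scheme are illustrative assumptions, not part of the documented API.

```python
import torch
from torch import Tensor

def mean_baseline(inputs: Tensor) -> Tensor:
    # Hypothetical baseline_fn: replace every value with the per-channel mean,
    # keeping the same shape as the input.
    return inputs.mean(dim=(0, 2, 3), keepdim=True).expand_as(inputs)

def grid_feature_mask(inputs: Tensor, patch: int = 8) -> Tensor:
    # Hypothetical feature_mask_fn: group pixels into non-overlapping
    # patch x patch superpixels, one integer feature id per patch.
    # The (N, H, W) output layout is an assumption, not documented behavior.
    n, _, h, w = inputs.shape
    rows = torch.arange(h, device=inputs.device) // patch
    cols = torch.arange(w, device=inputs.device) // patch
    n_cols = (w + patch - 1) // patch
    mask = rows[:, None] * n_cols + cols[None, :]   # (H, W) integer labels
    return mask.expand(n, h, w)
```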
Reference
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
Attributes:

- `SUPPORTED_MODULES = [Linear, Convolution, LSTM, RNN, Attention]` (class-attribute)
- `baseline_fn = baseline_fn or torch.zeros_like` (instance-attribute)
- `feature_mask_fn = feature_mask_fn` (instance-attribute)
- `perturb_fn = perturb_fn` (instance-attribute)
- `n_samples = n_samples` (instance-attribute)
- `EXPLANATION_TYPE: ExplanationType = 'attribution'` (class-attribute)
- `TUNABLES = {}` (class-attribute)
- `model = model.eval()` (instance-attribute)
- `forward_arg_extractor = forward_arg_extractor` (instance-attribute)
- `additional_forward_arg_extractor = additional_forward_arg_extractor` (instance-attribute)
- `device` (property)
__init__(model: Module, n_samples: int = 25, baseline_fn: Union[BaselineMethodOrFunction, Tuple[BaselineMethodOrFunction]] = 'zeros', feature_mask_fn: Union[FeatureMaskMethodOrFunction, Tuple[FeatureMaskMethodOrFunction]] = 'felzenszwalb', perturb_fn: Optional[Callable[[Tensor], Tensor]] = None, forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]] = None, additional_forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Tuple[Tensor]]] = None) -> None
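A minimal construction sketch, assuming a torchvision classifier; the model choice and keyword values are illustrative, and only the parameters documented above are used.

```python
from torchvision.models import resnet18
from pnpxai.explainers.lime import Lime

model = resnet18(weights=None)  # illustrative model (assumption)

# 'zeros' and 'felzenszwalb' are the documented string defaults; the custom
# callables sketched earlier could be passed in their place.
explainer = Lime(
    model,
    n_samples=25,
    baseline_fn='zeros',
    feature_mask_fn='felzenszwalb',
)
```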
attribute(inputs: Tensor, targets: Optional[Tensor] = None) -> Union[Tensor, Tuple[Tensor]]
Computes attributions for the given inputs and targets.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`inputs` | `Tensor` | The input data. | required |
`targets` | `Tensor` | The target labels for the inputs. | `None` |
Returns:

Type | Description |
---|---|
`Union[Tensor, Tuple[Tensor]]` | The result of the explanation. |
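Continuing the construction sketch above, a hedged example of calling `attribute`; the input shape and the use of predicted classes as targets are assumptions.

```python
import torch

inputs = torch.randn(4, 3, 224, 224)         # illustrative image batch (assumption)
with torch.no_grad():
    targets = model(inputs).argmax(dim=-1)   # predicted classes as targets (assumption)

# Returns a Tensor (or tuple of Tensors) of attribution scores per the signature above.
attrs = explainer.attribute(inputs, targets)
```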
get_tunables() -> Dict[str, Tuple[type, Dict]]
Provides tunable parameters for the optimizer.

Tunable parameters:

- `n_samples` (int): Value can be selected from `range(10, 100, 10)`.
- `baseline_fn` (callable): `BaselineFunction` selects suitable values in accordance with the modality.
- `feature_mask_fn` (callable): `FeatureMaskFunction` selects suitable values in accordance with the modality.
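A short sketch of inspecting the search space exposed to the optimizer, using only the documented return type `Dict[str, Tuple[type, Dict]]`:

```python
tunables = explainer.get_tunables()
for name, (param_type, space) in tunables.items():
    # e.g. 'n_samples', <class 'int'>, and a dict describing its value range
    print(name, param_type, space)
```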