# VarGrad

`pnpxai.explainers.var_grad`

## `VarGrad`

Bases: `SmoothGrad`

VarGrad explainer.

Supported Modules: `Linear`, `Convolution`, `LSTM`, `RNN`, `Attention`
Parameters:

Name | Type | Description | Default
---|---|---|---
`model` | `Module` | The PyTorch model for which attribution is to be computed. | required
`noise_level` | `float` | The noise level added during attribution. | `0.1`
`n_iter` | `int` | The number of iterations for which the input is modified. | `20`
`layer` | `Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]]` | The target module to be explained. | `None`
`forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]]` | A function that extracts forward arguments from the input batch(es) where the attribution scores are assigned. | `None`
`additional_forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]]` | A secondary function that extracts additional forward arguments from the input batch(es). | `None`
`**kwargs` | | Keyword arguments forwarded to the base `Explainer` implementation. | required
Reference
Lorenz Richter, Ayman Boustati, Nikolas Nüsken, Francisco J. R. Ruiz, Ömer Deniz Akyildiz. VarGrad: A Low-Variance Gradient Estimator for Variational Inference.
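To make the estimator concrete, here is a minimal pure-Python sketch of the idea behind VarGrad, assuming a 1-D input and an analytically known gradient: where SmoothGrad returns the mean of the gradients at noise-perturbed inputs, VarGrad returns their sample variance. The real explainer operates on PyTorch tensors through the `SmoothGrad` machinery; the function below is illustrative only.

```python
import random

def var_grad(x, grad_fn, noise_level=0.1, n_iter=20, seed=0):
    """Sketch of the VarGrad estimator for a scalar input:
    perturb the input with Gaussian noise n_iter times, evaluate the
    gradient at each noisy point, and return the sample variance of
    those gradients (SmoothGrad would return their mean instead)."""
    rng = random.Random(seed)
    grads = [grad_fn(x + rng.gauss(0.0, noise_level)) for _ in range(n_iter)]
    mean = sum(grads) / n_iter
    return sum((g - mean) ** 2 for g in grads) / n_iter

# Example: f(x) = x**2, so the gradient is 2 * x. With Gaussian noise of
# standard deviation 0.1, the gradients 2 * (x + noise) have variance
# close to 4 * 0.1**2 = 0.04.
attribution = var_grad(1.0, lambda x: 2.0 * x, noise_level=0.1, n_iter=50)
```

Note the attribution here measures how *sensitive* the gradient is to input perturbations, which is why it is non-negative by construction.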
Attributes:

- `SUPPORTED_MODULES = [Linear, Convolution, LSTM, RNN, Attention]` (class attribute)
- `EXPLANATION_TYPE: ExplanationType = 'attribution'` (class attribute)
- `TUNABLES = {}` (class attribute)
- `model = model.eval()` (instance attribute)
- `forward_arg_extractor = forward_arg_extractor` (instance attribute)
- `additional_forward_arg_extractor = additional_forward_arg_extractor` (instance attribute)
- `device` (property)
- `n_classes = n_classes` (instance attribute)
- `noise_level = noise_level` (instance attribute)
- `n_iter = n_iter` (instance attribute)
- `layer = layer` (instance attribute)
`__init__(model: Module, noise_level: float = 0.1, n_iter: int = 20, forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]] = None, additional_forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]] = None, layer: Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] = None, n_classes: Optional[int] = None) -> None`
`attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]`

Computes attributions for the given inputs and targets.

Parameters:

Name | Type | Description | Default
---|---|---|---
`inputs` | `Tensor` | The input data. | required
`targets` | `Tensor` | The target labels for the inputs. | required

Returns:

Type | Description
---|---
`Union[Tensor, Tuple[Tensor]]` | The result of the explanation.
`__repr__()`

`copy()`

`set_kwargs(**kwargs)`

`get_tunables() -> Dict[str, Tuple[type, dict]]`
Provides tunable parameters for the optimizer.

Tunable parameters:

- `noise_level` (float): selectable in [0, 0.95) with step 0.05
- `n_iter` (int): selectable in [10, 100) with step 10
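The two search spaces above can be written out as plain Python grids. This is only a sketch of the candidate values the optimizer would draw from; the actual tunable objects returned by `get_tunables()` may represent them differently. Note that Python's built-in `range()` does not accept float steps, so the `noise_level` grid needs a small helper (the `frange` name is ours, not part of pnpxai):

```python
def frange(start, stop, step):
    """Yield evenly spaced floats in [start, stop), like range() for floats."""
    n = 0
    while True:
        value = round(start + n * step, 10)  # round away accumulated float drift
        if value >= stop:
            break
        yield value
        n += 1

# Candidate grids matching the documented search spaces.
noise_levels = list(frange(0.0, 0.95, 0.05))  # 0.0, 0.05, ..., 0.9
n_iters = list(range(10, 100, 10))            # 10, 20, ..., 90
```

As with any half-open range, the upper bounds (0.95 and 100) are excluded, so the largest candidates are 0.9 and 90.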