
SmoothGrad

pnpxai.explainers.smooth_grad

SmoothGrad

Bases: ZennitExplainer

SmoothGrad explainer.

Supported Modules: Linear, Convolution, LSTM, RNN, Attention

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | The PyTorch model for which attribution is to be computed. | *required* |
| `noise_level` | `float` | The level of noise added to the inputs. | `0.1` |
| `n_iter` | `int` | The number of noisy samples over which gradients are averaged. | `20` |
| `layer` | `Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]]` | The target module to be explained. | `None` |
| `n_classes` | `Optional[int]` | The number of classes. | `None` |
| `forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]]` | A function that extracts, from the input batch(es), the forward arguments to which attribution scores are assigned. | `None` |
| `additional_forward_arg_extractor` | `Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]]` | A secondary function that extracts additional forward arguments from the input batch(es). | `None` |
| `**kwargs` | | Keyword arguments forwarded to the base `Explainer` implementation. | *required* |
Reference

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg. SmoothGrad: removing noise by adding noise.
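The core idea of the referenced paper can be sketched independently of pnpxai: average the gradient over several noisy copies of the input, so that local fluctuations cancel out. Below is a minimal NumPy sketch (not the library's implementation) using a toy function f(x) = Σx², whose exact gradient is 2x:

```python
import numpy as np

def smooth_grad(grad_fn, x, noise_level=0.1, n_iter=2000, seed=0):
    """Average grad_fn over n_iter Gaussian-noised copies of x (SmoothGrad)."""
    rng = np.random.default_rng(seed)
    # Noise scale is proportional to the input's value range.
    sigma = noise_level * (x.max() - x.min())
    acc = np.zeros_like(x)
    for _ in range(n_iter):
        acc += grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
    return acc / n_iter

# Toy example: f(x) = sum(x**2), so the exact gradient is 2*x.
x = np.array([1.0, -2.0, 3.0])
sg = smooth_grad(lambda v: 2.0 * v, x)
# With enough samples the noise averages out and sg approaches 2*x.
```

Because the toy gradient is linear, the noisy average converges to the true gradient; for a real network, `n_iter` of around 20 (the default above) already smooths out most high-frequency gradient noise.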

SUPPORTED_MODULES = [Linear, Convolution, LSTM, RNN, Attention] class-attribute instance-attribute
noise_level = noise_level instance-attribute
n_iter = n_iter instance-attribute
layer = layer instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
__init__(model: Module, noise_level: float = 0.1, n_iter: int = 20, forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]] = None, additional_forward_arg_extractor: Optional[Callable[[Tuple[Tensor]], Union[Tensor, Tuple[Tensor]]]] = None, layer: Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] = None, n_classes: Optional[int] = None) -> None
attributor() -> Union[SmoothGradAttributor, LayerSmoothGradAttributor]
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `inputs` | `Tensor` | The input data. | *required* |
| `targets` | `Tensor` | The target labels for the inputs. | *required* |

Returns:

| Type | Description |
|---|---|
| `Union[Tensor, Tuple[Tensor]]` | The result of the explanation. |
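The contract of `attribute` (inputs plus target labels in, attributions of the same shape out) can be illustrated in plain PyTorch. This is a hypothetical re-implementation sketch, not pnpxai's code; the helper name `smooth_grad_attribute` is invented for the example:

```python
import torch

def smooth_grad_attribute(model, inputs, targets, noise_level=0.1, n_iter=20):
    # Hypothetical sketch of attribute(): average the input gradient of the
    # target-class score over n_iter noisy copies of the inputs.
    sigma = noise_level * (inputs.max() - inputs.min())
    total = torch.zeros_like(inputs)
    for _ in range(n_iter):
        noisy = (inputs + sigma * torch.randn_like(inputs)).requires_grad_(True)
        # Sum of each sample's score for its own target label.
        scores = model(noisy).gather(1, targets.unsqueeze(1)).sum()
        total += torch.autograd.grad(scores, noisy)[0]
    return total / n_iter

model = torch.nn.Linear(4, 3).eval()
inputs = torch.randn(2, 4)
targets = torch.tensor([0, 2])
attrs = smooth_grad_attribute(model, inputs, targets)  # same shape as inputs
```

As in the documented signature, the attributions share the shape of `inputs`; with a `layer` argument, pnpxai instead returns attributions at that module's output.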

get_tunables() -> Dict[str, Tuple[type, dict]]

Provides the tunable parameters for the optimizer.

Tunable parameters

noise_level (float): selectable from 0.0 to 0.9 in steps of 0.05

n_iter (int): selectable from 10 to 90 in steps of 10
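Since Python's built-in `range()` does not accept floats, a search grid matching these tunables has to be built explicitly. A sketch of the grid an optimizer might iterate over (the grid construction here is illustrative, mirroring the documented steps):

```python
# Grids mirroring the documented tunable ranges (illustrative only).
noise_levels = [round(i * 0.05, 2) for i in range(19)]  # 0.0, 0.05, ..., 0.9
n_iters = list(range(10, 100, 10))                      # 10, 20, ..., 90

# Full Cartesian search space over both tunables.
search_space = [(nl, ni) for nl in noise_levels for ni in n_iters]
```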

__repr__()
copy()
set_kwargs(**kwargs)
__init_subclass__() -> None