
LRP

pnpxai.explainers.lrp

LRPBase

Bases: ZennitExplainer

Base class for LRPUniformEpsilon, LRPEpsilonGammaBox, LRPEpsilonPlus, and LRPEpsilonAlpha2Beta1 explainers.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | The PyTorch model for which attribution is to be computed. | required |
| zennit_composite | Composite | The Composite object that applies canonizers and registers hooks to modules. One Composite instance may only be applied to a single module at a time. | required |
| layer | Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] | The target module(s) to be explained. | None |
| n_classes | Optional[int] | Number of classes. | None |
| forward_arg_extractor | Optional[ForwardArgumentExtractor] | A function that extracts, from the input batch(es), the forward arguments to which attribution scores are assigned. | None |
| additional_forward_arg_extractor | Optional[ForwardArgumentExtractor] | A secondary function that extracts additional forward arguments from the input batch(es). | None |
| **kwargs | | Keyword arguments forwarded to the base Explainer implementation. | required |
Reference

Bach S., Binder A., Montavon G., Klauschen F., Müller K.-R., and Samek W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), 2015.
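
Example

A minimal sketch of constructing LRPBase directly with a zennit Composite. The subclasses below supply a preconfigured composite, so direct use is rarely needed; `EpsilonPlus` and the toy model here are illustrative choices, not part of this API:

```python
import torch
import torch.nn as nn
from zennit.composites import EpsilonPlus
from pnpxai.explainers.lrp import LRPBase

# An illustrative toy CNN; any PyTorch module works.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 30 * 30, 10),
)

# LRPBase wires the given composite into the attribution machinery.
explainer = LRPBase(model, zennit_composite=EpsilonPlus())

inputs = torch.randn(4, 3, 32, 32)
targets = torch.randint(0, 10, (4,))
attrs = explainer.attribute(inputs, targets)  # typically input-shaped relevance scores
```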

zennit_composite = zennit_composite instance-attribute
layer = layer instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
SUPPORTED_MODULES = [] class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
__init__(model: Module, zennit_composite: Composite, forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, layer: Optional[TargetLayerOrListOfTargetLayers] = None, n_classes: Optional[int] = None) -> None
explainer(model) -> Union[Gradient, LayerGradient]
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |
| targets | Tensor | The target labels for the inputs. | required |

Returns:

| Type | Description |
| --- | --- |
| Union[Tensor, Tuple[Tensor]] | The result of the explanation. |

__repr__()
copy()
set_kwargs(**kwargs)
get_tunables() -> Dict[str, Tuple[type, dict]]

Returns a dictionary of tunable parameters for the explainer.

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Tuple[type, dict]] | Dictionary of tunable parameters. |
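
A sketch of how the tuning hooks fit together, using LRPUniformEpsilon (documented below) as the concrete explainer; the printed search spaces depend on the library's TUNABLES definitions:

```python
import torch.nn as nn
from pnpxai.explainers.lrp import LRPUniformEpsilon

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
explainer = LRPUniformEpsilon(model)

# get_tunables() maps each parameter name to a (type, search-space dict) pair.
for name, (param_type, space) in explainer.get_tunables().items():
    print(name, param_type, space)

# copy() yields a fresh instance; set_kwargs() overrides hyperparameters on it.
candidate = explainer.copy()
candidate.set_kwargs(epsilon=0.5)
```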

__init_subclass__() -> None
LRPUniformEpsilon

Bases: LRPBase

LRPUniformEpsilon explainer.

Supported Modules: Linear, Convolution, LSTM, RNN, Attention

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | The PyTorch model for which attribution is to be computed. | required |
| epsilon | Union[float, Callable[[Tensor], Tensor]] | The epsilon value. | 0.25 |
| stabilizer | Union[float, Callable[[Tensor], Tensor]] | The stabilizer value. | 1e-06 |
| zennit_canonizers | Optional[List[Canonizer]] | An optional list of canonizers. Canonizers temporarily modify modules so that certain attribution rules can be applied properly. | None |
| layer | Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] | The target module(s) to be explained. | None |
| n_classes | Optional[int] | Number of classes. | None |
| **kwargs | | Keyword arguments forwarded to the base Explainer implementation. | required |
SUPPORTED_MODULES = [Linear, Convolution, LSTM, RNN, Attention] class-attribute instance-attribute
epsilon = epsilon instance-attribute
stabilizer = stabilizer instance-attribute
zennit_canonizers = zennit_canonizers instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
zennit_composite = zennit_composite instance-attribute
layer = layer instance-attribute
__init__(model: Module, epsilon: Union[float, Callable[[Tensor], Tensor]] = 0.25, stabilizer: Union[float, Callable[[Tensor], Tensor]] = 1e-06, zennit_canonizers: Optional[List[Canonizer]] = None, forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, layer: Optional[TargetLayerOrListOfTargetLayers] = None, n_classes: Optional[int] = None) -> None
get_tunables() -> Dict[str, Tuple[type, dict]]

Provides tunable parameters for the optimizer.

Tunable parameters:

- epsilon (float): value can be selected in the range (1e-6, 1).

__repr__()
copy()
set_kwargs(**kwargs)
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |
| targets | Tensor | The target labels for the inputs. | required |

Returns:

| Type | Description |
| --- | --- |
| Union[Tensor, Tuple[Tensor]] | The result of the explanation. |

__init_subclass__() -> None
explainer(model) -> Union[Gradient, LayerGradient]
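
Example

A minimal sketch; the ResNet and dummy batch are illustrative, and any classifier built from the supported modules works:

```python
import torch
from torchvision.models import resnet18
from pnpxai.explainers.lrp import LRPUniformEpsilon

model = resnet18(weights=None).eval()
inputs = torch.randn(4, 3, 224, 224)
targets = torch.tensor([0, 1, 2, 3])  # class indices to explain

# A uniform epsilon rule is applied across all supported layers.
explainer = LRPUniformEpsilon(model, epsilon=0.25)
attrs = explainer.attribute(inputs, targets)  # typically input-shaped relevance
```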
LRPEpsilonGammaBox

Bases: LRPBase

LRPEpsilonGammaBox explainer.

Supported Modules: Convolution

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | The PyTorch model for which attribution is to be computed. | required |
| low | float | The lowest possible input value, used when computing the gamma box. | -3.0 |
| high | float | The highest possible input value, used when computing the gamma box. | 3.0 |
| gamma | float | The gamma value for computing the gamma box. | 0.25 |
| epsilon | Union[float, Callable[[Tensor], Tensor]] | The epsilon value. | 1e-06 |
| stabilizer | Union[float, Callable[[Tensor], Tensor]] | The stabilizer value. | 1e-06 |
| zennit_canonizers | Optional[List[Canonizer]] | An optional list of canonizers. Canonizers temporarily modify modules so that certain attribution rules can be applied properly. | None |
| layer | Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] | The target module(s) to be explained. | None |
| n_classes | Optional[int] | Number of classes. | None |
| **kwargs | | Keyword arguments forwarded to the base Explainer implementation. | required |
SUPPORTED_MODULES = [Convolution] class-attribute instance-attribute
low = low instance-attribute
high = high instance-attribute
epsilon = epsilon instance-attribute
gamma = gamma instance-attribute
stabilizer = stabilizer instance-attribute
zennit_canonizers = zennit_canonizers instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
zennit_composite = zennit_composite instance-attribute
layer = layer instance-attribute
__init__(model: Module, low: float = -3.0, high: float = 3.0, epsilon: Union[float, Callable[[Tensor], Tensor]] = 1e-06, gamma: float = 0.25, stabilizer: Union[float, Callable[[Tensor], Tensor]] = 1e-06, zennit_canonizers: Optional[List[Canonizer]] = None, forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, layer: Optional[TargetLayerOrListOfTargetLayers] = None, n_classes: Optional[int] = None) -> None
get_tunables() -> Dict[str, Tuple[type, dict]]

Provides tunable parameters for the optimizer.

Tunable parameters:

- epsilon (float): value can be selected in the range (1e-6, 1).
- gamma (float): value can be selected in the range (1e-6, 1).

__repr__()
copy()
set_kwargs(**kwargs)
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |
| targets | Tensor | The target labels for the inputs. | required |

Returns:

| Type | Description |
| --- | --- |
| Union[Tensor, Tuple[Tensor]] | The result of the explanation. |

__init_subclass__() -> None
explainer(model) -> Union[Gradient, LayerGradient]
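
Example

A minimal sketch. low and high should bound the (normalized) input domain, since the gamma box rule treats them as the smallest and largest attainable input values; the VGG model and bounds here are illustrative:

```python
import torch
from torchvision.models import vgg16
from pnpxai.explainers.lrp import LRPEpsilonGammaBox

model = vgg16(weights=None).eval()
inputs = torch.randn(2, 3, 224, 224)
targets = torch.tensor([7, 42])

# For ImageNet-style normalized inputs, values rarely leave (-3, 3),
# which is why the defaults low=-3.0 and high=3.0 are a common choice.
explainer = LRPEpsilonGammaBox(model, low=-3.0, high=3.0, gamma=0.25)
attrs = explainer.attribute(inputs, targets)
```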
LRPEpsilonPlus

Bases: LRPBase

LRPEpsilonPlus explainer.

Supported Modules: Convolution

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | The PyTorch model for which attribution is to be computed. | required |
| epsilon | Union[float, Callable[[Tensor], Tensor]] | The epsilon value. | 1e-06 |
| stabilizer | Union[float, Callable[[Tensor], Tensor]] | The stabilizer value. | 1e-06 |
| zennit_canonizers | Optional[List[Canonizer]] | An optional list of canonizers. Canonizers temporarily modify modules so that certain attribution rules can be applied properly. | None |
| layer | Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] | The target module(s) to be explained. | None |
| n_classes | Optional[int] | Number of classes. | None |
| **kwargs | | Keyword arguments forwarded to the base Explainer implementation. | required |
SUPPORTED_MODULES = [Convolution] class-attribute instance-attribute
epsilon = epsilon instance-attribute
stabilizer = stabilizer instance-attribute
zennit_canonizers = zennit_canonizers instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
zennit_composite = zennit_composite instance-attribute
layer = layer instance-attribute
__init__(model: Module, epsilon: Union[float, Callable[[Tensor], Tensor]] = 1e-06, stabilizer: Union[float, Callable[[Tensor], Tensor]] = 1e-06, zennit_canonizers: Optional[List[Canonizer]] = None, forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, layer: Optional[TargetLayerOrListOfTargetLayers] = None, n_classes: Optional[int] = None) -> None
get_tunables() -> Dict[str, Tuple[type, dict]]

Provides tunable parameters for the optimizer.

Tunable parameters:

- epsilon (float): value can be selected in the range (1e-6, 1).

__repr__()
copy()
set_kwargs(**kwargs)
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |
| targets | Tensor | The target labels for the inputs. | required |

Returns:

| Type | Description |
| --- | --- |
| Union[Tensor, Tuple[Tensor]] | The result of the explanation. |

__init_subclass__() -> None
explainer(model) -> Union[Gradient, LayerGradient]
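
Example

A minimal sketch with an illustrative toy CNN. This variant mirrors zennit's EpsilonPlus composite, in which convolutional layers receive the ZPlus rule (only positively contributing activations propagate relevance) while dense layers use the epsilon rule:

```python
import torch
import torch.nn as nn
from pnpxai.explainers.lrp import LRPEpsilonPlus

model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
explainer = LRPEpsilonPlus(model, epsilon=1e-6)

inputs = torch.randn(2, 3, 32, 32)
targets = torch.tensor([0, 3])
attrs = explainer.attribute(inputs, targets)
```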
LRPEpsilonAlpha2Beta1

Bases: LRPBase

LRPEpsilonAlpha2Beta1 explainer.

Supported Modules: Convolution

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | The PyTorch model for which attribution is to be computed. | required |
| epsilon | Union[float, Callable[[Tensor], Tensor]] | The epsilon value. | 1e-06 |
| stabilizer | Union[float, Callable[[Tensor], Tensor]] | The stabilizer value. | 1e-06 |
| zennit_canonizers | Optional[List[Canonizer]] | An optional list of canonizers. Canonizers temporarily modify modules so that certain attribution rules can be applied properly. | None |
| layer | Optional[Union[Union[str, Module], Sequence[Union[str, Module]]]] | The target module(s) to be explained. | None |
| n_classes | Optional[int] | Number of classes. | None |
| **kwargs | | Keyword arguments forwarded to the base Explainer implementation. | required |
SUPPORTED_MODULES = [Convolution] class-attribute instance-attribute
epsilon = epsilon instance-attribute
stabilizer = stabilizer instance-attribute
zennit_canonizers = zennit_canonizers instance-attribute
EXPLANATION_TYPE: ExplanationType = 'attribution' class-attribute instance-attribute
TUNABLES = {} class-attribute instance-attribute
model = model.eval() instance-attribute
forward_arg_extractor = forward_arg_extractor instance-attribute
additional_forward_arg_extractor = additional_forward_arg_extractor instance-attribute
device property
n_classes = n_classes instance-attribute
zennit_composite = zennit_composite instance-attribute
layer = layer instance-attribute
__init__(model: Module, epsilon: Union[float, Callable[[Tensor], Tensor]] = 1e-06, stabilizer: Union[float, Callable[[Tensor], Tensor]] = 1e-06, zennit_canonizers: Optional[List[Canonizer]] = None, forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, additional_forward_arg_extractor: Optional[ForwardArgumentExtractor] = None, layer: Optional[TargetLayerOrListOfTargetLayers] = None, n_classes: Optional[int] = None) -> None
get_tunables() -> Dict[str, Tuple[type, dict]]

Provides tunable parameters for the optimizer.

Tunable parameters:

- epsilon (float): value can be selected in the range (1e-6, 1).

__repr__()
copy()
set_kwargs(**kwargs)
attribute(inputs: Union[Tensor, Tuple[Tensor]], targets: Tensor) -> Union[Tensor, Tuple[Tensor]]

Computes attributions for the given inputs and targets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |
| targets | Tensor | The target labels for the inputs. | required |

Returns:

| Type | Description |
| --- | --- |
| Union[Tensor, Tuple[Tensor]] | The result of the explanation. |

__init_subclass__() -> None
explainer(model) -> Union[Gradient, LayerGradient]
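
Example

A minimal sketch with an illustrative toy CNN. The alpha2-beta1 rule weights positive contributions with alpha=2 and negative ones with beta=1, so the resulting attributions can be signed:

```python
import torch
import torch.nn as nn
from pnpxai.explainers.lrp import LRPEpsilonAlpha2Beta1

model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
explainer = LRPEpsilonAlpha2Beta1(model, stabilizer=1e-6)

inputs = torch.randn(2, 3, 32, 32)
targets = torch.tensor([1, 7])
attrs = explainer.attribute(inputs, targets)
```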
transformer_layer_map(stabilizer: float = 1e-06)
canonizers_base()
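
The two module-level helpers above are only sketched here from their signatures: transformer_layer_map(stabilizer) presumably builds a zennit layer map for transformer blocks, and canonizers_base() the module's default canonizers. Combining them with zennit's LayerMapComposite, as below, is an assumption rather than a documented recipe:

```python
from zennit.composites import LayerMapComposite
from pnpxai.explainers.lrp import canonizers_base, transformer_layer_map

# Assumption: transformer_layer_map returns (module type, rule) pairs usable as
# a zennit layer_map, and canonizers_base returns a list of zennit canonizers.
composite = LayerMapComposite(
    layer_map=transformer_layer_map(stabilizer=1e-6),
    canonizers=canonizers_base(),
)
```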