GradCam
`pnpxai.explainers.grad_cam.GradCam`
Bases: `Explainer`

GradCAM explainer.

Supported Modules: `Convolution`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | The PyTorch model for which attribution is to be computed. | required |
| `interpolate_mode` | `Optional[str]` | The interpolation mode used by the explainer. Available modes are `'bilinear'` and `'bicubic'`. | `'bilinear'` |
| `**kwargs` | | Keyword arguments that are forwarded to the base implementation of the `Explainer`. | `{}` |
Reference
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization.
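A minimal construction sketch follows. The torchvision ResNet-18 backbone is an illustrative assumption; any model containing convolutional modules works.

```python
# Minimal sketch: constructing a GradCam explainer for a CNN.
# The ResNet-18 backbone is an illustrative choice, not a requirement.
import torch
from torchvision.models import resnet18
from pnpxai.explainers.grad_cam import GradCam

model = resnet18(weights=None)                            # any model with Convolution modules
explainer = GradCam(model, interpolate_mode='bilinear')
```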
Attributes and properties:

- `SUPPORTED_MODULES = [Convolution]` (class attribute)
- `EXPLANATION_TYPE: ExplanationType = 'attribution'` (class attribute)
- `TUNABLES = {}` (class attribute)
- `model` (instance attribute, stored in eval mode via `model.eval()`)
- `interpolate_mode` (instance attribute)
- `forward_arg_extractor` (instance attribute)
- `additional_forward_arg_extractor` (instance attribute)
- `layer` (property)
- `device` (property)
__init__(model: nn.Module, interpolate_mode: str = 'bilinear', **kwargs) -> None
set_target_layer(layer: nn.Module)
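The `layer` property exposes the convolutional layer Grad-CAM attaches to; `set_target_layer` lets you pin it to a specific module. A brief sketch, continuing the construction example above (the chosen block is illustrative for a ResNet-style model):

```python
# Continuing the sketch above: pin Grad-CAM to a specific convolutional block.
# model.layer4 is an illustrative choice for ResNet-18.
explainer.set_target_layer(model.layer4)
```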
attribute(inputs: Tensor, targets: Tensor) -> Tensor
Computes attributions for the given inputs and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `inputs` | `Tensor` | The input data. | required |
| `targets` | `Tensor` | The target labels for the inputs. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The result of the explanation. |
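A short sketch of calling `attribute`, continuing the examples above. The input shape and target indices are illustrative; use data that matches your model.

```python
# Illustrative call: one target class index per sample in the batch.
inputs = torch.randn(4, 3, 224, 224)      # dummy batch of 4 RGB images
targets = torch.tensor([0, 1, 2, 3])      # target class per input
attributions = explainer.attribute(inputs, targets)
print(attributions.shape)                 # one attribution map per input
```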
get_tunables() -> Dict[str, Tuple[type, dict]]

Provides tunable parameters for the optimizer.

Tunable parameters:

- `interpolate_mode` (str): value can be either `"bilinear"` or `"bicubic"`.