Explainer
The Explainer module serves as the base for all explainers implemented within the framework. To implement a custom explainer, extend the base class so that the framework's functionality is supported. The base class defines the crucial attributes and methods that make an explainer visible to the Recommender and Optimizer modules.
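To illustrate the extension pattern, here is a minimal sketch of subclassing a base explainer. The `Explainer` base class and `attribute` signature below are illustrative stand-ins, not the real pnpxai API; the gradient-times-input rule is computed by finite differences so the sketch stays self-contained.

```python
from abc import ABC, abstractmethod


class Explainer(ABC):
    """Illustrative stand-in for the framework's base class; the real
    pnpxai interface may differ (names here are assumptions)."""

    def __init__(self, model):
        # `model` is any callable mapping a feature list to a scalar score
        self.model = model

    @abstractmethod
    def attribute(self, inputs):
        """Return one attribution score per input feature."""


class GradientXInputExplainer(Explainer):
    """Custom explainer: attribution = gradient of the score w.r.t. each
    feature (finite differences) multiplied by the feature value."""

    def attribute(self, inputs, eps=1e-5):
        base = self.model(inputs)
        attrs = []
        for i, x in enumerate(inputs):
            bumped = list(inputs)
            bumped[i] += eps  # perturb one feature to estimate its gradient
            grad = (self.model(bumped) - base) / eps
            attrs.append(grad * x)
        return attrs
```

For `model(x) = x0**2 + x1**2` at `[1.0, 2.0]`, the gradient is `[2, 4]`, so the attributions are approximately `[2.0, 8.0]`.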
List of available explainers:
| Explainer | Supported Modules | Target Modalities |
|---|---|---|
| GradCam | Convolution | Vision, Time Series |
| GuidedGradCam | Convolution | Vision, Time Series |
| Gradient | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| GradientXInput | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| SmoothGrad | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| VarGrad | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| IntegratedGradients | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| LRPUniformEpsilon | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| LRPEpsilonPlus | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| LRPEpsilonGammaBox | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| LRPEpsilonAlpha2Beta1 | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| RAP | Linear, Convolution, Recurrent, Transformer | Vision, Language, Time Series |
| KernelShap | Linear, Convolution, Recurrent, Transformer, Decision Trees | Vision, Language, Structured Data, Time Series |
| Lime | Linear, Convolution, Recurrent, Transformer, Decision Trees | Vision, Language, Structured Data, Time Series |
| AttentionRollout | Transformer | Vision, Language |
| TransformerAttribution | Transformer | Vision, Language |
Usage
```python
import torch
from torch.utils.data import DataLoader

from pnpxai.utils import set_seed
from pnpxai.explainers import LRPUniformEpsilon
from helpers import get_imagenet_dataset, get_torchvision_model

# ensure reproducibility
set_seed(seed=0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# load model, dataset, and explainer
model, transform = get_torchvision_model("resnet18")
model = model.to(device)
explainer = LRPUniformEpsilon(model=model, epsilon=1e-6, n_classes=1000)

dataset = get_imagenet_dataset(transform=transform, subset_size=8)
loader = DataLoader(dataset, batch_size=8)
inputs, targets = next(iter(loader))
inputs, targets = inputs.to(device), targets.to(device)

# make explanation
attrs = explainer.attribute(inputs, targets)
print(attrs)
```
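To see what the `epsilon` parameter does, the uniform-epsilon LRP rule can be sketched for a single linear layer in plain Python. This is an illustrative implementation of the standard epsilon rule (relevance `R_j = Σ_k a_j·w_jk / (z_k + ε·sign(z_k)) · R_k`), not pnpxai's internal code; the function name is an assumption.

```python
def lrp_epsilon_linear(activations, weights, relevance_out, epsilon=1e-6):
    """Redistribute output relevance through one linear layer using the
    epsilon rule. `weights[j][k]` connects input j to output k; epsilon
    stabilizes the denominator when the pre-activation z_k is near zero."""
    n_in, n_out = len(weights), len(weights[0])
    # pre-activations z_k = sum_j a_j * w_jk
    z = [sum(activations[j] * weights[j][k] for j in range(n_in))
         for k in range(n_out)]
    relevance_in = [0.0] * n_in
    for k in range(n_out):
        denom = z[k] + epsilon * (1.0 if z[k] >= 0 else -1.0)
        for j in range(n_in):
            # each input receives relevance in proportion to its contribution
            relevance_in[j] += activations[j] * weights[j][k] / denom * relevance_out[k]
    return relevance_in
```

With a small epsilon the rule is (approximately) conservative: the relevance assigned to the inputs sums to the relevance of the output, which is the property LRP-style explainers rely on layer by layer.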