Optimizer

Objective

A class that encapsulates the logic for evaluating a model's performance using a specified explainer, postprocessor, and metric within a given modality. The Objective class is designed to be callable and can be used within an optimization framework like Optuna to evaluate different configurations of explainers and postprocessors.

Parameters:

explainer (Explainer, required):
    The explainer used to generate attributions for the model.
postprocessor (PostProcessor, required):
    The postprocessor applied to the attributions generated by the explainer.
metric (Metric, required):
    The metric used to evaluate the effectiveness of the postprocessed attributions.
modality (Modality, required):
    The data modality (e.g., image, text) the model and explainer are operating on.
inputs (Optional[TensorOrTupleOfTensors], optional):
    The input data to the model. Defaults to None.
targets (Optional[Tensor], optional):
    The target labels for the input data. Defaults to None.
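
For example, an Objective can be handed to an Optuna study roughly as follows. This is a
minimal sketch: the explainer, postprocessor, metric, and modality objects, as well as the
inputs and targets batch, are assumed to be constructed elsewhere, and the study direction
depends on the metric being optimized.

import optuna

# Assumed setup: `explainer`, `postprocessor`, `metric`, `modality`, `inputs`,
# and `targets` are created elsewhere. Whether to maximize or minimize the
# objective value depends on the chosen metric.
objective = Objective(
    explainer=explainer,
    postprocessor=postprocessor,
    metric=metric,
    modality=modality,
).set_data(inputs, targets)

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
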
Source code in pnpxai/evaluator/optimizer/objectives.py
class Objective:
    """
    A class that encapsulates the logic for evaluating a model's performance using a 
    specified explainer, postprocessor, and metric within a given modality. The `Objective` 
    class is designed to be callable and can be used within an optimization framework 
    like Optuna to evaluate different configurations of explainers and postprocessors.

    Parameters:
        explainer (Explainer): 
            The explainer used to generate attributions for the model.
        postprocessor (PostProcessor): 
            The postprocessor applied to the attributions generated by the explainer.
        metric (Metric): 
            The metric used to evaluate the effectiveness of the postprocessed attributions.
        modality (Modality): 
            The data modality (e.g., image, text) the model and explainer are operating on.
        inputs (Optional[TensorOrTupleOfTensors], optional): 
            The input data to the model. Defaults to None.
        targets (Optional[Tensor], optional): 
            The target labels for the input data. Defaults to None.
    """

    EXPLAINER_KEY = 'explainer'
    POSTPROCESSOR_KEY = 'postprocessor'

    def __init__(
        self,
        explainer: Explainer,
        postprocessor: PostProcessor,
        metric: Metric,
        modality: Modality,
        inputs: Optional[TensorOrTupleOfTensors] = None,
        targets: Optional[Tensor] = None,
    ):
        self.explainer = explainer
        self.postprocessor = postprocessor
        self.metric = metric
        self.modality = modality
        self.inputs = inputs
        self.targets = targets

    def set_inputs(self, inputs):
        """
        Sets the input data for the objective.

        Parameters:
            inputs (TensorOrTupleOfTensors): 
                The input data to be used by the explainer.

        Returns:
            Objective: The updated Objective instance.
        """
        self.inputs = inputs
        return self

    def set_targets(self, targets):
        """
        Sets the target labels for the objective.

        Parameters:
            targets (Tensor): 
                The target labels corresponding to the input data.

        Returns:
            Objective: The updated Objective instance.
        """
        self.targets = targets
        return self

    def set_data(self, inputs, targets):
        """
        Sets both the input data and target labels for the objective.

        Parameters:
            inputs (TensorOrTupleOfTensors): 
                The input data to be used by the explainer.
            targets (Tensor): 
                The target labels corresponding to the input data.

        Returns:
            Objective: The updated Objective instance.
        """
        self.set_inputs(inputs)
        self.set_targets(targets)
        return self

    def __call__(self, trial: Trial) -> float:
        """
        Executes the objective function within an optimization trial. This method is 
        responsible for selecting the explainer and postprocessor based on the trial 
        suggestions, performing the explanation and postprocessing, and evaluating the 
        results using the specified metric.

        Parameters:
            trial (Trial): 
                The trial object from an optimization framework like Optuna.

        Returns:
            float: The evaluation result of the metric after applying the explainer and 
            postprocessor to the model's predictions. Returns `nan` if the postprocessed 
            results contain non-countable values like `nan` or `inf`.
        """
        # suggest explainer
        explainer = suggest(trial, self.explainer, self.modality, key=self.EXPLAINER_KEY)

        # suggest postprocessor
        modalities = format_into_tuple(self.modality)
        is_multi_modal = len(modalities) > 1
        postprocessor = []
        for pp, modality in zip(
            format_into_tuple(self.postprocessor),
            modalities,
        ):
            force_params = {}
            if (
                isinstance(explainer, (Lime, KernelShap))
                and isinstance(modality, TextModality)
            ):
                force_params['pooling_fn'] = Identity()
            postprocessor.append(suggest(
                trial, pp, modality,
                key=generate_param_key(
                    self.POSTPROCESSOR_KEY,
                    modality.__class__.__name__ if is_multi_modal else None
                ),
                force_params=force_params,
            ))
        postprocessor = format_out_tuple_if_single(tuple(postprocessor))

        # Ignore duplicated samples
        states_to_consider = (TrialState.COMPLETE,)
        trials_to_consider = trial.study.get_trials(deepcopy=False, states=states_to_consider)
        for t in reversed(trials_to_consider):
            if trial.params == t.params:
                trial.set_user_attr('explainer', explainer)
                trial.set_user_attr('postprocessor', postprocessor)
                return t.value

        # Explain and postprocess
        attrs = explainer.attribute(self.inputs, self.targets)
        postprocessed = tuple(
            pp(attr) for pp, attr in zip(
                format_into_tuple(postprocessor),
                format_into_tuple(attrs),
            )
        )

        if any(pp.isnan().sum() > 0 or pp.isinf().sum() > 0 for pp in postprocessed):
            # Treat a failure as nan
            return float('nan')

        postprocessed = format_out_tuple_if_single(postprocessed)
        metric = self.metric.set_explainer(explainer)
        evals = format_into_tuple(
            metric.evaluate(self.inputs, self.targets, postprocessed)
        )

        # Keep current explainer and postprocessor on trial
        trial.set_user_attr('explainer', explainer)
        trial.set_user_attr('postprocessor', postprocessor)
        return (sum(*evals) / len(evals)).item()

__call__(trial)

Executes the objective function within an optimization trial. This method is responsible for selecting the explainer and postprocessor based on the trial suggestions, performing the explanation and postprocessing, and evaluating the results using the specified metric.

Parameters:

trial (Trial, required):
    The trial object from an optimization framework like Optuna.

Returns:

float:
    The evaluation result of the metric after applying the explainer and postprocessor
    to the model's predictions. Returns nan if the postprocessed results contain
    non-countable values like nan or inf.
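
Because __call__ stores the selected explainer and postprocessor on each completed trial via
set_user_attr, the tuned objects can be recovered from the study once optimization finishes.
A short sketch, assuming study is the Optuna study that was optimized with this objective:

# Assumed: `study` has been optimized with an Objective instance.
best = study.best_trial
tuned_explainer = best.user_attrs['explainer']
tuned_postprocessor = best.user_attrs['postprocessor']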

Source code in pnpxai/evaluator/optimizer/objectives.py
def __call__(self, trial: Trial) -> float:
    """
    Executes the objective function within an optimization trial. This method is 
    responsible for selecting the explainer and postprocessor based on the trial 
    suggestions, performing the explanation and postprocessing, and evaluating the 
    results using the specified metric.

    Parameters:
        trial (Trial): 
            The trial object from an optimization framework like Optuna.

    Returns:
        float: The evaluation result of the metric after applying the explainer and 
        postprocessor to the model's predictions. Returns `nan` if the postprocessed 
        results contain non-countable values like `nan` or `inf`.
    """
    # suggest explainer
    explainer = suggest(trial, self.explainer, self.modality, key=self.EXPLAINER_KEY)

    # suggest postprocessor
    modalities = format_into_tuple(self.modality)
    is_multi_modal = len(modalities) > 1
    postprocessor = []
    for pp, modality in zip(
        format_into_tuple(self.postprocessor),
        modalities,
    ):
        force_params = {}
        if (
            isinstance(explainer, (Lime, KernelShap))
            and isinstance(modality, TextModality)
        ):
            force_params['pooling_fn'] = Identity()
        postprocessor.append(suggest(
            trial, pp, modality,
            key=generate_param_key(
                self.POSTPROCESSOR_KEY,
                modality.__class__.__name__ if is_multi_modal else None
            ),
            force_params=force_params,
        ))
    postprocessor = format_out_tuple_if_single(tuple(postprocessor))

    # Ignore duplicated samples
    states_to_consider = (TrialState.COMPLETE,)
    trials_to_consider = trial.study.get_trials(deepcopy=False, states=states_to_consider)
    for t in reversed(trials_to_consider):
        if trial.params == t.params:
            trial.set_user_attr('explainer', explainer)
            trial.set_user_attr('postprocessor', postprocessor)
            return t.value

    # Explain and postprocess
    attrs = explainer.attribute(self.inputs, self.targets)
    postprocessed = tuple(
        pp(attr) for pp, attr in zip(
            format_into_tuple(postprocessor),
            format_into_tuple(attrs),
        )
    )

    if any(pp.isnan().sum() > 0 or pp.isinf().sum() > 0 for pp in postprocessed):
        # Treat a failure as nan
        return float('nan')

    postprocessed = format_out_tuple_if_single(postprocessed)
    metric = self.metric.set_explainer(explainer)
    evals = format_into_tuple(
        metric.evaluate(self.inputs, self.targets, postprocessed)
    )

    # Keep current explainer and postprocessor on trial
    trial.set_user_attr('explainer', explainer)
    trial.set_user_attr('postprocessor', postprocessor)
    return (sum(*evals) / len(evals)).item()

set_data(inputs, targets)

Sets both the input data and target labels for the objective.

Parameters:

inputs (TensorOrTupleOfTensors, required):
    The input data to be used by the explainer.
targets (Tensor, required):
    The target labels corresponding to the input data.

Returns:

Objective:
    The updated Objective instance.
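
Because the setters return the instance, calls can be chained, for example to point an
existing objective at a fresh batch before re-running optimization (a sketch; new_inputs,
new_targets, and study are assumed to exist):

# Assumed: `objective` and `study` were created earlier; evaluate a new batch.
study.optimize(objective.set_data(new_inputs, new_targets), n_trials=20)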

Source code in pnpxai/evaluator/optimizer/objectives.py
def set_data(self, inputs, targets):
    """
    Sets both the input data and target labels for the objective.

    Parameters:
        inputs (TensorOrTupleOfTensors): 
            The input data to be used by the explainer.
        targets (Tensor): 
            The target labels corresponding to the input data.

    Returns:
        Objective: The updated Objective instance.
    """
    self.set_inputs(inputs)
    self.set_targets(targets)
    return self

set_inputs(inputs)

Sets the input data for the objective.

Parameters:

inputs (TensorOrTupleOfTensors, required):
    The input data to be used by the explainer.

Returns:

Objective:
    The updated Objective instance.

Source code in pnpxai/evaluator/optimizer/objectives.py
def set_inputs(self, inputs):
    """
    Sets the input data for the objective.

    Parameters:
        inputs (TensorOrTupleOfTensors): 
            The input data to be used by the explainer.

    Returns:
        Objective: The updated Objective instance.
    """
    self.inputs = inputs
    return self

set_targets(targets)

Sets the target labels for the objective.

Parameters:

targets (Tensor, required):
    The target labels corresponding to the input data.

Returns:

Objective:
    The updated Objective instance.

Source code in pnpxai/evaluator/optimizer/objectives.py
def set_targets(self, targets):
    """
    Sets the target labels for the objective.

    Parameters:
        targets (Tensor): 
            The target labels corresponding to the input data.

    Returns:
        Objective: The updated Objective instance.
    """
    self.targets = targets
    return self

suggest(trial, obj, modality, key=None, force_params=None)

A utility function that suggests parameters for a given object based on an optimization trial. The function recursively tunes the parameters of the object according to the modality (or modalities) provided.

Parameters:

trial (Trial, required):
    The trial object from an optimization framework like Optuna, used to suggest parameters for tuning.
obj (Any, required):
    The object whose parameters are being tuned. This object must implement get_tunables() and set_kwargs() methods.
modality (Union[Modality, Tuple[Modality]], required):
    The modality (e.g., image, text) or tuple of modalities the object is operating on. If multiple modalities are provided, the function handles multi-modal tuning.
key (Optional[str], optional):
    An optional key to uniquely identify the set of parameters being tuned, useful for differentiating parameters in multi-modal scenarios. Defaults to None.
force_params (Optional[Dict[str, Any]], optional):
    Parameter values to apply directly, bypassing the trial's suggestion for those names. Defaults to None.

Returns:

Any:
    The object with its parameters set according to the trial suggestions.

Notes
  • The function uses map_suggest_method to map the tuning method based on the method type provided in the tunables.
  • It supports multi-modal tuning, where different modalities may require different parameters to be tuned.
  • For utility functions (UtilFunction), the function further tunes parameters based on the selected function from the modality.
Example

Assuming trial is an instance of optuna.trial.Trial, and explainer: Explainer is an object with tunable parameters, you can tune it as follows:

tuned_explainer = suggest(trial, explainer, modality)
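
The force_params argument pins a parameter to a given value instead of letting the trial
suggest it; Objective.__call__ uses it to force pooling_fn to Identity() when a Lime or
KernelShap explainer is paired with a text modality. A sketch of the same pattern:

# Pin `pooling_fn` rather than tuning it (mirrors the usage in Objective.__call__).
tuned_postprocessor = suggest(
    trial, postprocessor, modality,
    key='postprocessor',
    force_params={'pooling_fn': Identity()},
)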
Source code in pnpxai/evaluator/optimizer/suggestor.py
def suggest(
    trial: Trial,
    obj: Any,
    modality: Union[Modality, Tuple[Modality]],
    key: Optional[str] = None,
    force_params: Optional[Dict[str, Any]] = None,
):
    """
    A utility function that suggests parameters for a given object based on an optimization trial. 
    The function recursively tunes the parameters of the object according to the modality (or 
    modalities) provided.

    Parameters:
        trial (Trial):
            The trial object from an optimization framework like Optuna, used to suggest 
            parameters for tuning.
        obj (Any):
            The object whose parameters are being tuned. This object must implement 
            `get_tunables()` and `set_kwargs()` methods.
        modality (Union[Modality, Tuple[Modality]]):
            The modality (e.g., image, text) or tuple of modalities the object is operating on. 
            If multiple modalities are provided, the function handles multi-modal tuning.
        key (Optional[str], optional):
            An optional key to uniquely identify the set of parameters being tuned, 
            useful for differentiating parameters in multi-modal scenarios. Defaults to None.

    Returns:
        Any:
            The object with its parameters set according to the trial suggestions.

    Notes:
        - The function uses `map_suggest_method` to map the tuning method based on the method 
          type provided in the tunables.
        - It supports multi-modal tuning, where different modalities may require different 
          parameters to be tuned. 
        - For utility functions (`UtilFunction`), the function further tunes parameters 
          based on the selected function from the modality.

    Example:
        Assuming `trial` is an instance of `optuna.trial.Trial`, and `explainer: Explainer` is an object 
        with tunable parameters, you can tune it as follows:

        ```python
        tuned_explainer = suggest(trial, explainer, modality)
        ```
    """

    is_multi_modal = len(format_into_tuple(modality)) > 1
    force_params = force_params or {}
    for param_nm, (method_type, method_kwargs) in obj.get_tunables().items():
        if param_nm in force_params:
            param = force_params[param_nm]
        else:
            method = map_suggest_method(trial, method_type)
            if method is not None:
                param = method(
                    name=generate_param_key(key, param_nm),
                    **method_kwargs
                )
            elif issubclass(method_type, UtilFunction):
                param = []
                for mod in format_into_tuple(modality):
                    fn_selector = mod.map_fn_selector(method_type)
                    _param_nm, (_method_type, _method_kwargs) = next(
                        iter(fn_selector.get_tunables().items())
                    )
                    _param_nm = generate_param_key(
                        param_nm,
                        mod.__class__.__name__ if is_multi_modal else None,
                        _param_nm,
                    )  # update param_nm
                    _method = map_suggest_method(trial, _method_type)
                    fn_nm = _method(
                        name=generate_param_key(key, _param_nm),
                        **_method_kwargs
                    )
                    fn = fn_selector.select(fn_nm)
                    param.append(suggest(
                        trial, fn, mod,
                        key=generate_param_key(
                            key, param_nm,
                            mod.__class__.__name__ if is_multi_modal else None,
                        ),
                    ))
                param = format_out_tuple_if_single(tuple(param))
        obj = obj.set_kwargs(**{param_nm: param})
    return obj

OptimizationOutput dataclass

A dataclass used to store the results of an optimization process, including the chosen explainer, postprocessor, and the Optuna study object that was used during the optimization.

Attributes:

explainer (Explainer):
    The explainer object selected during the optimization process. This object is responsible for generating explanations or attributions for a model.
postprocessor (PostProcessor):
    The postprocessor object selected during the optimization process. This object is used to process the attributions generated by the explainer, often involving normalization or other transformations.
study (Study):
    The Optuna study object that was used to conduct the optimization. This object contains information about the optimization trials and results.
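
The fields can be read directly from the returned output, for instance to fetch the selected
components and inspect the underlying study (a sketch; output is assumed to be an
OptimizationOutput produced by an optimization run):

# Assumed: `output` is an OptimizationOutput returned by an optimization run.
best_explainer = output.explainer
best_postprocessor = output.postprocessor
print(output.study.best_trial.number, output.study.best_value)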

Source code in pnpxai/evaluator/optimizer/types.py
@dataclass
class OptimizationOutput:
    """
    A dataclass used to store the results of an optimization process, including the chosen 
    explainer, postprocessor, and the Optuna study object that was used during the optimization.

    Attributes:
        explainer (Explainer):
            The explainer object selected during the optimization process. This object is 
            responsible for generating explanations or attributions for a model.
        postprocessor (PostProcessor):
            The postprocessor object selected during the optimization process. This object 
            is used to process the attributions generated by the explainer, often involving 
            normalization or other transformations.
        study (optuna.study.Study):
            The Optuna study object that was used to conduct the optimization. This object 
            contains information about the optimization trials and results.
    """

    explainer: Explainer
    postprocessor: PostProcessor
    study: optuna.study.Study