rise
muppet.explainers.rise
RISE (Randomized Input Sampling for Explanation) explainer for black-box models.
This module implements RISE, a model-agnostic explanation method that generates importance maps by probing the model with randomly masked versions of the input. RISE is particularly effective for image classification tasks and works entirely through black-box access to the model, making it broadly applicable.
MUPPET Component Integration

- Explorer: RandomMasksExplorer – generates random binary masks with configurable sparsity
- Perturbator: SetToZeroPerturbator – applies zero-masking to occlude input regions
- Attributor: ClassScoreAttributor – extracts the model's confidence score for the target class
- Aggregator: WeightedSumAggregator – computes a weighted average of the masks using the confidence scores
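The Explorer and Perturbator stages can be pictured with a plain NumPy sketch. This is an illustrative reimplementation, not the library's code: the `random_masks` helper is hypothetical, and nearest-neighbour upsampling is an assumption of this sketch (the RISE paper uses smoothed, randomly shifted bilinear upsampling).

```python
import numpy as np

def random_masks(n_masks, mask_dim, mask_proba, input_size, seed=None):
    """Generate low-resolution binary masks and upsample them to the input size."""
    rng = np.random.default_rng(seed)
    # Each cell of the mask_dim x mask_dim grid is zeroed with probability mask_proba.
    grids = (rng.random((n_masks, mask_dim, mask_dim)) >= mask_proba).astype(float)
    # Nearest-neighbour upsampling to cover the full input, then crop.
    cell = -(-input_size // mask_dim)  # ceil division
    full = np.kron(grids, np.ones((1, cell, cell)))
    return full[:, :input_size, :input_size]

masks = random_masks(n_masks=8, mask_dim=7, mask_proba=0.1, input_size=28, seed=0)
x = np.ones((28, 28))
occluded = masks * x  # zero-masking, in the spirit of SetToZeroPerturbator
```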
Classes:

- RISEExplainer – Implementation of the RISE method for black-box model explanation.
References
Petsiuk, Vitali, Abir Das, and Kate Saenko. "RISE: Randomized input sampling for explanation of black-box models." arXiv preprint arXiv:1806.07421 (2018). https://ui.adsabs.harvard.edu/abs/2018arXiv180607421P/abstract
Classes
RISEExplainer
RISEExplainer(
model,
nmasks=800,
mask_dim=7,
mask_proba=0.1,
seed=None,
convention="destructive",
)
Bases: MuppetExplainer
RISE (Randomized Input Sampling for Explanation) explainer implementation.
Implements the RISE method that generates importance maps through random masking and statistical aggregation of model responses. The core principle of RISE is to generate a large number of random masks, apply them to the input, evaluate the masked inputs with the model, and then compute a weighted average of the masks where the weights are the model's confidence scores.
Key advantages of RISE:

- Model-agnostic: works with any black-box model
- Simple and interpretable: directly measures prediction changes under occlusion
- Flexible: supports both constructive and destructive explanation modes
This approach provides a statistical estimation of pixel importance without requiring any knowledge of the model's internal structure. The method generates smooth, intuitive heatmaps that highlight the most important regions for the model's prediction. The statistical nature of the approach means more masks generally lead to better approximations of the true importance.
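The whole pipeline reduces to a short Monte-Carlo estimate: mask, query, and average the masks weighted by the model's scores. A minimal NumPy sketch of this idea, not muppet's implementation — the `rise_saliency` helper and toy model are hypothetical, nearest-neighbour upsampling is a simplification, and a denser masking probability than the class default is used to sharpen the toy signal:

```python
import numpy as np

def rise_saliency(model, x, n_masks=1000, mask_dim=7, mask_proba=0.5, seed=None):
    """Estimate pixel importance as the mask average weighted by model confidence."""
    rng = np.random.default_rng(seed)
    h, w = x.shape
    keep = (rng.random((n_masks, mask_dim, mask_dim)) >= mask_proba).astype(float)
    cell_h, cell_w = -(-h // mask_dim), -(-w // mask_dim)  # ceil division
    masks = np.kron(keep, np.ones((1, cell_h, cell_w)))[:, :h, :w]
    # Black-box access only: query the model on each masked input.
    scores = np.array([model(m * x) for m in masks])
    # Weighted average of the masks, normalised by the total weight.
    return np.tensordot(scores, masks, axes=1) / (scores.sum() + 1e-12)

# Toy "model": confidence is the mean intensity of the top-left quadrant,
# so the saliency map should concentrate there.
toy_model = lambda img: img[:14, :14].mean()
saliency = rise_saliency(toy_model, np.ones((28, 28)), seed=0)
```

More masks tighten this estimate, which is why `nmasks` trades runtime for map quality.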
Initialize the RISE explainer for black-box model explanation.
Parameters:

- model (Module) – The black-box model whose predictions are to be explained.
- nmasks (int, default: 800) – Number of random masks to generate.
- mask_dim (int, default: 7) – Side length of the square low-resolution grid (the down-scaled mask).
- mask_proba (float, default: 0.1) – Probability of independently setting each value of the down-scaled mask to 0, meaning no perturbation is applied at that position.
- seed (int, default: None) – Random seed for reproducible results.
- convention (Union[AttributionConvention, str], default: 'destructive') – Whether the explainer finds important features by identifying those whose removal efficiently destroys the model's prediction from the input ("destructive"), or those whose insertion efficiently rebuilds the model's response from a completely perturbed input ("constructive").
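The intuition behind the two conventions can be pictured with single-feature occlusion on a toy model. This is a degenerate, non-random special case used only for intuition about the directions, not how the explainer operates internally:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
baseline = np.zeros_like(x)           # the "completely perturbed" input
model = lambda v: v[3] ** 2           # toy model: only the last feature matters

# Destructive: how much does removing each feature destroy the prediction?
full_score = model(x)
destroy = np.array([full_score - model(np.where(np.arange(4) == i, baseline, x))
                    for i in range(4)])

# Constructive: how much does revealing each feature rebuild the prediction,
# starting from the completely perturbed baseline?
base_score = model(baseline)
build = np.array([model(np.where(np.arange(4) == i, x, baseline)) - base_score
                  for i in range(4)])
# Both views single out feature 3 for this interaction-free toy model.
```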