
generators

muppet.components.perturbator.generator.base

Base generator classes for producing perturbation values in the MUPPET XAI framework.

This module defines the foundation for generators used by perturbators to create realistic replacement values for masked regions during the perturbation process. Generators are essential components that enable sophisticated perturbation strategies beyond simple replacement with zeros or noise.

In the MUPPET four-step framework (generate masks → apply perturbations → calculate attributions → aggregate results), generators support the perturbation step by providing contextually appropriate replacement values. This is crucial for maintaining data realism and producing meaningful explanations.

The module contains:

  • Generator: Abstract base class for generators that don't require training, suitable for simple statistical sampling or rule-based value generation.
  • TrainableGenerator: Extended abstract class with built-in training infrastructure for neural-network-based generators that learn data distributions.

Key Design Principles
  • Generators focus solely on producing replacement values
  • Training is handled transparently with early stopping and validation splits
  • Deterministic sampling support through optional seed parameters
  • Extensible architecture for domain-specific perturbation strategies
Note

Generators are typically not used directly but are embedded within perturbator implementations. They enable advanced explanation methods like conditional sampling, learned imputations, and distribution-aware perturbations.

Classes

Generator
Generator()

Bases: ABC

Abstract base class for data generators in perturbation methods.

Generators create synthetic data to replace masked or perturbed regions in input examples. They provide the core imputation functionality for creating meaningful perturbations.

Abstract class for generators that don't need to be trained on data.

Attributes:

  • device

    The used device. Will get updated from the main explainer after initialization.

Source code in muppet/components/perturbator/generator/base.py
def __init__(
    self,
) -> None:
    """Abstract class for generators that don't need to be trained on data.

    Attributes:
        device: The used device. Will get updated from the main explainer after initialization.

    """
    self.device = DEVICE
    super().__init__()
Functions
generate abstractmethod
generate(*args, **kwargs)

Responsible for generating the perturbed values. It is called by the Perturbator.perturbate method.

Fully customizable; must be implemented by any child generator that a perturbator requires.

For deterministic sampling at inference time, pass a seed parameter and use it to fix the random seed, e.g. torch.manual_seed(seed), as in GaussianFeatureGenerator.

Source code in muppet/components/perturbator/generator/base.py
@abstractmethod
def generate(self, *args, **kwargs) -> torch.Tensor:
    """Responsible for generating the perturbed values. It is called by the `Perturbator.perturbate` method.

    Fully customizable and must be implemented in child generator that is required by a perturbator.

    For deterministic sampling at inference time, take advantage of the passing a seed parameter
    in order to fix the seed, something like torch.manual_seed(seed), as in `GaussianFeatureGenerator`.

    """
    raise NotImplementedError
TrainableGenerator
TrainableGenerator(lr, num_epochs)

Bases: Module, Generator

Abstract base class for trainable neural network generators.

Extends the basic Generator with PyTorch neural network capabilities and built-in training infrastructure. Supports complex learned perturbation strategies through gradient-based optimization.

Subclasses of this trainable generator only need to implement the run_epoch method.

Abstract class for generators with the train method implemented.

Parameters:

  • lr (float) –

    Learning rate

  • num_epochs (int) –

    Number of epochs

Attributes:

  • device

    The used device. Will get updated from the main explainer after initialization.

Source code in muppet/components/perturbator/generator/base.py
def __init__(
    self,
    lr: float,
    num_epochs: int,
) -> None:
    """Abstract class for generators with the train method implemented.

    Args:
        lr (float): Learning rate
        num_epochs (int): Number of epochs

    Attributes:
        device: The used device. Will get updated from the main explainer after initialization.

    """
    self.lr = lr
    self.num_epochs = num_epochs
    self.device = DEVICE
    super().__init__()
Functions
train_generator
train_generator(train_loader, validation_ratio=0.2)

Train the model.

Parameters:

  • train_loader (DataLoader) –

    The train data loader

  • validation_ratio (float, default: 0.2 ) –

    The fraction of the training data held out for validation

Returns:

  • Tuple[list, list]

    The training results trends history

Source code in muppet/components/perturbator/generator/base.py
def train_generator(
    self,
    train_loader: DataLoader,
    validation_ratio: float = 0.2,
) -> Tuple[list, list]:
    """Train the model.

    Args:
        train_loader (DataLoader): The train data loader
        validation_ratio (float): The fraction of the training data held out for validation. Defaults to 0.2.

    Returns:
        The training results trends history

    """
    stime = time.time()
    self.to(self.device)

    self.optimizer = torch.optim.Adam(params=self.parameters(), lr=self.lr)

    best_loss = np.inf

    train_loss_trends = list()
    validation_loss_trends = list()

    train_size = int((1 - validation_ratio) * len(train_loader.dataset))
    trainset_dataset, validation_dataset = torch.utils.data.random_split(
        train_loader.dataset,
        [train_size, len(train_loader.dataset) - train_size],
        generator=torch.Generator(
            device=torch.get_default_device()
        ).manual_seed(42),
    )
    trainset_loader = DataLoader(
        trainset_dataset,
        batch_size=train_loader.batch_size,
        shuffle=True,
    )
    validation_loader = DataLoader(
        validation_dataset,
        batch_size=train_loader.batch_size,
        shuffle=False,
    )

    for epoch in range(self.num_epochs + 1):
        train_loss = self.run_epoch(
            dataloader=trainset_loader, in_train=True
        )
        validation_loss = self.run_epoch(
            dataloader=validation_loader,
            in_train=False,
        )

        train_loss_trends.append(train_loss)
        validation_loss_trends.append(validation_loss)

        if epoch % 10 == 0:
            logger.info(f"\nEpoch {epoch}")
            logger.info(f"Model Training Loss ===> {train_loss}")
            logger.info(f"Model Validation Loss ===> {validation_loss}")

        if validation_loss < best_loss:
            best_loss = validation_loss
        # early stopping: stop once validation loss diverges from the best seen
        elif validation_loss > 1.2 * best_loss:
            # Note: maybe save the model at this epoch later on
            break

    logger.info(
        f"Model validation loss = {best_loss:.6f}  | Exucution time: {time.time() - stime}"
    )

    return train_loss_trends, validation_loss_trends
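The validation split above relies on torch.utils.data.random_split. A standalone sketch of the same pattern (illustrative sizes, not MUPPET code); note that the DataLoaders wrap the Subset objects directly, so each loader sees only its own split:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy dataset: 100 examples with 4 features each (illustrative)
dataset = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))

validation_ratio = 0.2
train_size = int((1 - validation_ratio) * len(dataset))
train_subset, val_subset = random_split(
    dataset,
    [train_size, len(dataset) - train_size],
    generator=torch.Generator().manual_seed(42),  # reproducible split
)

# Wrap the Subsets themselves (not subset.dataset) so each loader
# iterates over its own split only.
train_loader = DataLoader(train_subset, batch_size=16, shuffle=True)
val_loader = DataLoader(val_subset, batch_size=16, shuffle=False)
```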
run_epoch abstractmethod
run_epoch(dataloader, in_train)

Run one training epoch. This is a customizable method that depends on the nature of the generator!

Source code in muppet/components/perturbator/generator/base.py
@abstractmethod
def run_epoch(
    self,
    dataloader: DataLoader,
    in_train: bool,
) -> float:
    """Run one training epoch.
    This is a customizable method that depends on the nature of the generator!
    """
    raise NotImplementedError
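A subclass only needs to supply run_epoch. The `DenoisingGenerator` below is a hypothetical sketch (not part of MUPPET) of what such an implementation typically looks like: forward pass, loss, and a backward step only in training mode.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


class DenoisingGenerator(torch.nn.Module):
    """Hypothetical trainable generator: learns to reconstruct its inputs."""

    def __init__(self, feature_size: int, lr: float = 1e-3) -> None:
        super().__init__()
        self.net = torch.nn.Linear(feature_size, feature_size)
        # In MUPPET the optimizer is created inside train_generator;
        # here it is created eagerly to keep the sketch self-contained.
        self.optimizer = torch.optim.Adam(self.parameters(), lr=lr)

    def run_epoch(self, dataloader: DataLoader, in_train: bool) -> float:
        self.train() if in_train else self.eval()
        epoch_loss = 0.0
        for (x,) in dataloader:
            if in_train:
                self.optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(self.net(x), x)
            if in_train:
                loss.backward()
                self.optimizer.step()
            epoch_loss += loss.item()
        # Average loss over batches, as train_generator expects a float
        return epoch_loss / len(dataloader)


data = TensorDataset(torch.randn(32, 4))
loader = DataLoader(data, batch_size=8)
loss = DenoisingGenerator(feature_size=4).run_epoch(loader, in_train=True)
```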

muppet.components.perturbator.generator.conditional_timestep_generator

Conditional Gaussian generator for time series perturbations using RNN-based VAE.

This module implements a sophisticated conditional generator for time series data that learns to impute missing values by modeling the conditional distribution P(X_t|X_0:t-1). The generator uses a variational autoencoder architecture with RNN encoder and Gaussian decoder to generate contextually appropriate perturbations for temporal explanations.

As part of the MUPPET perturbation framework, this generator enables advanced time series explanation methods by producing realistic substitute values that maintain temporal dependencies and feature correlations. This is essential for explaining models that depend on sequential patterns and temporal dynamics.

The module contains:

  • ConditionalGaussianFeatureGenerator: Main trainable generator combining an encoder-decoder with conditional sampling capabilities for multivariate time series.
  • GaussianRNNEncoder: RNN-based encoder that maps time series to latent Gaussian parameters.
  • GaussianDecoder: Decoder that generates likelihood distributions from latent representations.
  • check_cov_pd: Utility function ensuring positive definite covariance matrices.

Key Technical Features
  • Variational autoencoder with RNN encoder for temporal modeling
  • Conditional sampling P(X_S'|X_S) for feature subsets
  • Multivariate Gaussian distributions with learned covariances
  • Positive definite covariance correction with noise injection
  • Support for both univariate and multivariate time series
  • Deterministic sampling for reproducible explanations

The generator is designed for use with time series explanation methods like temporal LIME, SHAP for sequences, or custom perturbation-based attributions that require realistic temporal imputations rather than simple masking strategies.
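The positive-definite covariance correction mentioned above can be sketched as a jitter loop that adds scaled identity noise until a Cholesky factorization succeeds. This is an assumption about the idea behind `check_cov_pd`, not its actual source:

```python
import torch


def make_positive_definite(
    cov: torch.Tensor, noise_level: float = 1e-4, max_corrections: int = 10
) -> torch.Tensor:
    """Add scaled identity noise until the matrix is positive definite.

    Sketch of the noise-injection idea behind check_cov_pd; the real
    implementation may differ.
    """
    eye = torch.eye(cov.shape[-1], dtype=cov.dtype)
    for i in range(max_corrections):
        try:
            torch.linalg.cholesky(cov)  # raises if not positive definite
            return cov
        except RuntimeError:
            # Escalate the jitter on each failed attempt
            cov = cov + noise_level * (10**i) * eye
    raise ValueError("covariance could not be made positive definite")


# A singular symmetric matrix (eigenvalues 0 and 2) gets a small diagonal bump:
bad = torch.tensor([[1.0, 1.0], [1.0, 1.0]])
fixed = make_positive_definite(bad)
torch.linalg.cholesky(fixed)  # now succeeds
```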

Classes

ConditionalGaussianFeatureGenerator
ConditionalGaussianFeatureGenerator(
    feature_size,
    hidden_size,
    latent_size,
    mid_layer_size,
    prediction_size,
    num_samples,
    cov_noise_level,
    max_noise_correction,
    lr,
    num_epochs,
    timesteps_divide_num,
    seed=None,
)

Bases: TrainableGenerator

Conditional Gaussian generator for time series perturbations.

Implements a variational autoencoder with RNN encoder and Gaussian decoder for learning conditional distributions P(X_t|X_{0:t-1}). Enables sophisticated temporal perturbations that preserve realistic time series patterns and feature dependencies.

Conditional generator model to predict perturbed values.

Parameters:

  • feature_size (int) –

    Number of features in the input (f)

  • hidden_size (int) –

    The encoder's hidden layer size

  • latent_size (int) –

    The encoder's latent space size

  • mid_layer_size (int) –

    The mid-layer size used in Encoder and Decoder

  • prediction_size (int) –

    The number of predictions to make. The prediction window [t:t+p] (p)

  • num_samples (int) –

    Number of Zs to sample from the latent distribution (n)

  • cov_noise_level (float) –

    The noise to add to the covariance to make it positive definite (PD)

  • max_noise_correction (int) –

    Maximum number of covariance PD correction iterations

  • lr (float) –

    Training learning rate used with Adam optimizer

  • num_epochs (int) –

    Training number of epochs

  • timesteps_divide_num (int) –

    Used to divide the time series. E.g., when set to 1, predict only at time t=T using X0:T-1

  • seed (int | None, default: None ) –

    The seed used for reproducible sampling at inference time. If not provided, sampling is nondeterministic

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def __init__(
    self,
    feature_size: int,
    hidden_size: int,
    latent_size: int,
    mid_layer_size: int,
    prediction_size: int,
    num_samples: int,
    cov_noise_level: float,
    max_noise_correction: int,
    lr: float,
    num_epochs: int,
    timesteps_divide_num: int,
    seed: int | None = None,
) -> None:
    """Conditional generator model to predict perturbed values.

    Args:
        feature_size (int): Number of features in the input (f)
        hidden_size (int): The encoder's hidden layer size
        latent_size (int): The encoder's latent space size
        mid_layer_size (int): The mid-layer size used in Encoder and Decoder
        prediction_size (int): The number of predictions to make. The prediction window [t:t+p] (p)
        num_samples (int): Number of Zs to sample from the latent distribution (n)
        cov_noise_level (float): The noise to add to the covariance to make it positive definite (PD)
        max_noise_correction (int): Maximum number of covariance PD correction iterations
        lr (float): Training learning rate used with Adam optimizer
        num_epochs (int): Training number of epochs
        timesteps_divide_num (int): Used to divide the time series. E.g., when set to 1, predict only at time t=T using X0:T-1
        seed: The seed used for reproducible sampling at inference time. If not provided, sampling is nondeterministic

    """
    # general parameters
    self.seed = seed
    if self.seed:
        torch.manual_seed(seed=self.seed)
    self.device = "cuda" if torch.cuda.is_available() else "cpu"

    # init untrained generator
    self.is_trained = False

    # architectures parameters
    self.feature_size = feature_size
    self.hidden_size = hidden_size
    self.latent_size = latent_size
    self.mid_layer_size = mid_layer_size
    self.prediction_size = prediction_size
    self.output_size = self.feature_size * self.prediction_size

    # decoder's parameters
    self.num_samples = num_samples
    self.cov_noise_level = cov_noise_level
    self.max_noise_correction = max_noise_correction

    # training specific parameters (used only by this generator)
    self.timesteps_divide_num = timesteps_divide_num

    # global training parameters (used by all generators)
    super().__init__(lr=lr, num_epochs=num_epochs)

    # initiate encoder and decoder
    self.rnn_encoder = GaussianRNNEncoder(
        feature_size=self.feature_size,
        hidden_size=self.hidden_size,
        latent_size=self.latent_size,
        mid_layer_size=self.mid_layer_size,
        device=self.device,
    )

    self.decoder = GaussianDecoder(
        feature_size=self.feature_size,
        output_size=self.output_size,
        latent_size=self.latent_size,
        mid_layer_size=self.mid_layer_size,
        device=self.device,
    )
Functions
likelihood_distribution
likelihood_distribution(past)

Estimate the mean and (co)variance of the joint distribution P(X_t|X_0:t-1).

Parameters:

  • past (Tensor) –

    Batch of past data of the shape (b, f, t).

Returns: mean and (co)variance

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def likelihood_distribution(
    self,
    past: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Estimate the mean and (co)variance of the joint distribution P(X_t|X_0:t-1).

    Args:
        past (torch.Tensor): Batch of past data of the shape (b, f, t).

    Returns: mean and (co)variance

    """
    # estimate the past's Gaussian distribution
    mu, std = self.rnn_encoder.latent_distribution(X=past)
    mean, covariance = self.decoder.likelihood_distribution(
        mu=mu,
        std=std,
        num_samples=self.num_samples,
        cov_noise_level=self.cov_noise_level,
        max_noise_correction=self.max_noise_correction,
    )

    # multivar: (n, p*f), (n, p*f, p*f) | univar: (n, p), (n, p)
    return mean, covariance
joint_sample
joint_sample(past)

Generate the missing measurements at time current(=t) based on past (X0:t-1) through sampling from the joint distribution P(X_t|X_0:t-1).

Parameters:

  • past (Tensor) –

    Batch of previous data measurements (b, f, t).

Returns:

  • Tensor

    torch.Tensor: A sample from the Gaussian distribution of P(X_t|X_0:t-1).

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def joint_sample(
    self,
    past: torch.Tensor,
) -> torch.Tensor:
    """Generate the missing measurements at time current(=t) based on past (X0:t-1) through sampling from the joint distribution P(X_t|X_0:t-1).

    Args:
        past (torch.Tensor): Batch of previous data measurements (b, f, t).

    Returns:
        torch.Tensor: A sample from the Gaussian distribution of P(X_t|X_0:t-1).

    """
    # multivar: (n, p*f), (n, p*f, p*f) | univar:(n, p), (n, p)
    mean, covariance = self.likelihood_distribution(past=past)

    # univariate case
    if self.feature_size == 1:
        std = torch.sqrt(covariance).squeeze(dim=-1)
        sample = torch.normal(mean=mean, std=std)  # (n, p)
        return sample

    likelihood = MultivariateNormal(loc=mean, covariance_matrix=covariance)

    return likelihood.rsample()  # (n, p*f)
generate
generate(past, current, features_to_perturb)

Generate values for the features_to_perturb at time current(=t) based on past (historical data) through conditional sampling from P(X_{S^,t}|X_{S,t}).

Takes 'current', the measurements at time t, and returns the same 'current' with the features in S^ replaced by values estimated from the Gaussian distribution.

Parameters:

  • past (Tensor) –

    Batch of previous data measurements (b, f, t)

  • current (Tensor) –

    Batch of measurements at time t (b, f)

  • features_to_perturb (set) –

    Set of features' indices that are not known/measured. We sample on these features

Returns:

  • full_sample ( Tensor ) –

    The imputed sample at time t with the generated values for missing measurements (S^). (b, f, t)

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def generate(
    self,
    past: torch.Tensor,
    current: torch.Tensor,
    features_to_perturb: set,
) -> torch.Tensor:
    """Generate values for the features_to_perturb at time current(=t) based on past (historical data) through conditional sampling from P(X_{S^,t}|X_{S,t}).

    Takes 'current' the measurements at time t, and returns same 'current' at time t with features in S^ being replaced by values estimated from the Gaussian distribution.

    Args:
        past (torch.Tensor): Batch of previous data measurements (b, f, t)
        current (torch.Tensor): Batch of measurements at time t (b, f)
        features_to_perturb (set): Set of features' indices that are not known/measured. We sample on these features

    Returns:
        full_sample (torch.Tensor): The imputed sample at time t with the generated values for missing measurements (S^). (b, f, t)

    """
    conditioning_features = sorted(
        set(range(current.shape[-1])) - set(features_to_perturb)
    )
    # when len(S)=0
    # TODO check if it could work fine with no feature kept unchanged, only perturbed feature (ex 1channel timeseries)
    # TODO see univariate case
    assert len(conditioning_features) > 0, (
        "ConditionalGaussianFeatureGenerator should be conditioned on at least one feature."
    )

    # when len(S)=feature_size: when 'don't perturb any feature' ==> return current
    if len(conditioning_features) == self.feature_size:
        return current

    # (b, f)
    assert len(current.shape) == 2, (
        f"The passed data at time t 'current' has a different shape than expected! Expected: (batch, features), but received: {current.shape}"
    )

    # estimate mean and covariance of P(X_t|X_0:t-1) or P(X_{t}|past) if univariate case
    mean, covariance = self.likelihood_distribution(
        past
    )  # (n, p*f), multivar:(n, p*f, p*f), univar:(n, p)

    # UNIVARIATE CASE 1: f=1 => len(S)=1
    if self.feature_size == 1:
        assert len(conditioning_features) == self.feature_size, (
            "For univariate case, the features to explain must match the initialized feature size!"
        )
        # P(x_{t}|past)
        std = torch.sqrt(covariance).squeeze(dim=-1)  # (n, p)
        sample = torch.normal(mean=mean, std=std)
        return sample

    conditioning_inds = [
        list(
            range(i * self.prediction_size, (i + 1) * self.prediction_size)
        )
        for i in conditioning_features
    ]
    # e.g [0, 1, 2, 3, ..., len(S)*p-1]
    conditioning_inds = list(
        itertools.chain.from_iterable(conditioning_inds)
    )

    perturb_inds = list(
        set(range(self.output_size)) - set(conditioning_inds)
    )

    conditioning_len = len(conditioning_inds)
    perturb_len = len(perturb_inds)

    cov_1_2 = covariance[:, perturb_inds, :][:, :, conditioning_inds].view(
        -1, perturb_len, conditioning_len
    )  # (n, s^, s)
    cov_2_2 = covariance[:, conditioning_inds, :][
        :, :, conditioning_inds
    ].view(-1, conditioning_len, conditioning_len)  # (n, s, s)
    cov_1_1 = covariance[:, perturb_inds, :][:, :, perturb_inds].view(
        -1, perturb_len, perturb_len
    )  # (n, s^, s^)

    # make full sample of the same shape as the mean by repeating the batch num_samples times,
    #  and then duplicating the feature values prediction_size times (f1,f2) ==> (f1,f1,f2,f2)
    # shape (n, p*f)
    full_sample = (
        current.unsqueeze(0)
        .repeat(self.num_samples, 1, 1)
        .reshape(-1, self.feature_size)
        .float()
    )
    full_sample = (
        full_sample.unsqueeze(2)
        .repeat(1, 1, self.prediction_size)
        .reshape(full_sample.shape[0], -1)
    ).to(self.device)

    conditioning_fs_mean = mean[:, conditioning_inds].view(
        -1, conditioning_len
    )
    conditioning_fs_vals = full_sample[:, conditioning_inds].view(
        -1, conditioning_len
    )
    conditioning_fs_vals_mean_diff = (
        conditioning_fs_vals - conditioning_fs_mean
    ).view(-1, conditioning_len, 1)  # (n, s, 1)

    temp = torch.bmm(cov_1_2, torch.inverse(cov_2_2))  # (n, s^, s)
    conditioning_fs_comp_mean = mean[:, perturb_inds].view(
        -1, perturb_len
    )  # (n, s^)

    mean_conditional = conditioning_fs_comp_mean + torch.bmm(
        temp, conditioning_fs_vals_mean_diff
    ).squeeze(-1)  # (n,s^,1)=>(n,s^)

    cov_conditional = cov_1_1 - torch.bmm(
        temp, torch.transpose(cov_1_2, 2, 1)
    )  # (n, s^, s^)

    # P(x_{s^,t}|x_{s,t})
    likelihood = MultivariateNormal(
        loc=mean_conditional, covariance_matrix=cov_conditional
    )
    # (n, s^)
    sample = likelihood.rsample()
    full_sample[:, perturb_inds] = sample  # (n, p*f)

    return full_sample.detach()
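The conditioning performed in `generate` applies the standard Gaussian conditioning identities mu_{1|2} = mu_1 + S12 S22^{-1} (x_2 - mu_2) and S_{1|2} = S11 - S12 S22^{-1} S21, where block 1 holds the perturbed indices and block 2 the conditioning indices. A small standalone check with illustrative numbers (not MUPPET code):

```python
import torch

# Joint 2-D Gaussian over (x1, x2); we condition on an observed x2.
mu = torch.tensor([1.0, 2.0])
cov = torch.tensor([[2.0, 0.6],
                    [0.6, 1.0]])
x2_observed = torch.tensor(3.0)

# Same identities as in generate():
#   mu_{1|2} = mu1 + S12 * S22^{-1} * (x2 - mu2)
#   S_{1|2}  = S11 - S12 * S22^{-1} * S21
mu_cond = mu[0] + cov[0, 1] / cov[1, 1] * (x2_observed - mu[1])
var_cond = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]
# By hand: mu_cond = 1 + 0.6 * 1 = 1.6 and var_cond = 2 - 0.36 = 1.64
```

In the multivariate source above, the scalar divisions become `torch.bmm` with `torch.inverse(cov_2_2)`, batched over the `n` latent samples.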
run_epoch
run_epoch(dataloader, in_train)

Run one training epoch.

Parameters:

  • dataloader (DataLoader) –

    The train loader

  • in_train (bool) –

    Whether to run in training mode (True) or evaluation mode (False).

Returns:

  • float ( float ) –

    the epoch loss.

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def run_epoch(
    self,
    dataloader: DataLoader,
    in_train: bool,
) -> float:
    """Run one training epoch

    Args:
        dataloader (DataLoader): The train loader
        in_train (bool, optional): Either if training or evaluating. E.g set to True ==> training mode.

    Returns:
        float: the epoch loss.

    """
    if in_train:
        self.train()

    else:
        self.eval()

    # divide the timesteps
    try:
        signal_length = dataloader.dataset.shape[-1]  # (b, f, t)
    except AttributeError:
        signal_length = dataloader.dataset.dataset.features.shape[-1]
        # dataloader.dataset is a torch subset

    if self.timesteps_divide_num == 1:
        # when only predicting at time t=T
        timepoints = [signal_length - self.prediction_size]
    else:
        assert self.timesteps_divide_num < signal_length + 1, (
            f"If the time series needs to be divided, it must respect its length. Provided timesteps_divide_num exceeded the signal length: {signal_length}!"
        )
        timepoints = [
            int(tt)
            for tt in np.logspace(
                1.0,
                np.log10(signal_length - self.prediction_size),
                num=self.timesteps_divide_num,
            )
        ]
    epoch_loss = 0
    for _, (signals, true_label) in enumerate(dataloader):
        for t in timepoints:
            if in_train:
                self.optimizer.zero_grad()
            # the label is the future measures t:t+p (#Xt:t+p)
            label = signals[:, :, t : t + self.prediction_size].reshape(
                signals.shape[0], -1
            )
            # match label to number of generated samples (num_samples) ==> (n, p*f)
            label = (
                label.unsqueeze(0)
                .repeat(self.num_samples, 1, 1)
                .reshape(-1, self.feature_size * self.prediction_size)
                .to(self.device)
            )

            prediction = self.joint_sample(
                past=signals[:, :, :t]
            )  # (n, p*f), X0:t-1
            reconstruction_loss = torch.nn.MSELoss(reduction="none")(
                prediction.float(), label.float()
            )
            reconstruction_loss = reconstruction_loss.mean().float()

            epoch_loss = epoch_loss + reconstruction_loss.item()

            if in_train:
                reconstruction_loss.backward(retain_graph=True)
                self.optimizer.step()

    return float(epoch_loss) / len(dataloader)
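The logarithmic timestep division above can be reproduced standalone. Illustrative values for `signal_length`, `prediction_size`, and `timesteps_divide_num`; the log spacing concentrates training timepoints late in the series, where longer histories are available:

```python
import numpy as np

signal_length, prediction_size, timesteps_divide_num = 100, 1, 3

# Same expression as in run_epoch: log-spaced timepoints from 10
# up to the last position that still leaves room for the prediction window.
timepoints = [
    int(tt)
    for tt in np.logspace(
        1.0,
        np.log10(signal_length - prediction_size),
        num=timesteps_divide_num,
    )
]
print(timepoints)  # three increasing timepoints between 10 and 99
```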
GaussianRNNEncoder
GaussianRNNEncoder(
    feature_size,
    hidden_size,
    latent_size,
    mid_layer_size,
    device,
)

Bases: Module

RNN encoder for mapping input sequences to Gaussian latent spaces.

Encodes time series data using a GRU and maps it to latent Gaussian parameters (mean and standard deviation). Used in variational approaches for conditional time series generation.

An RNN encoder responsible for transforming the input x into an encoding space (hidden) using a one-layer GRU, and then mapping it to the latent space that represents the Gaussian parameters for every input sample.

Parameters:

  • feature_size (int) –

    The number of input features

  • hidden_size (int) –

    The RNN/GRU hidden space size.

  • latent_size (int) –

    The latent space size. It is multiplied by two as it represents the mean and covariance of the latent representation Z of x.

  • mid_layer_size (int) –

    The size of a mid layer between the hidden space (RNN encoding) and the latent space (Z).

  • device (str) –

    The device to use.

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def __init__(
    self,
    feature_size: int,
    hidden_size: int,
    latent_size: int,
    mid_layer_size: int,
    device,
) -> None:
    """An RNN encoder that is responsible of transforming the input x into an encoding space (hidden) using one-layer GRU, and then
    maps it to the latent space the represent the Gaussian parameters for every input sample.


    Args:
        feature_size (int): The number of input features
        hidden_size (int): The RNN/GRU hidden space size.
        latent_size (int): The latent space size. It is multiplied by two as it represents the mean and covariance of the latent
            representation Z of x.
        mid_layer_size (int): The size of a mid layer between the hidden space (RNN encoding) and the latent space (Z).
        device (str): The device to use.

    """
    super().__init__()
    self.device = device
    # RNN masker
    self.rnn = torch.nn.GRU(
        input_size=feature_size, hidden_size=hidden_size, num_layers=1
    ).to(self.device)  # 1-layer GRU
    for layer_p in self.rnn._all_weights:
        for p in layer_p:
            if "weight" in p:
                torch.nn.init.normal_(self.rnn.__getattr__(p), 0.0, 0.02)

    # latent space masker (Z)
    self.dist_predictor = torch.nn.Sequential(
        torch.nn.Linear(
            in_features=hidden_size, out_features=mid_layer_size
        ),
        torch.nn.Tanh(),
        torch.nn.BatchNorm1d(num_features=mid_layer_size),
        torch.nn.Linear(
            in_features=mid_layer_size, out_features=latent_size * 2
        ),
    ).to(self.device)
Functions
latent_distribution
latent_distribution(X)

Estimate mean and the std of the distribution of the latent representation Z of X.

Parameters:

  • X (Tensor) –

    Input time series (b, f, t)

Returns:

  • Tuple[Tensor, Tensor]

    A tuple of mu and std. Each of shape (b, latent_size)

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def latent_distribution(
    self,
    X: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Estimate mean and the std of the distribution of the latent representation Z of X.

    Args:
        X: Input time series (b, f, t)

    Returns:
        A tuple of mu and std. Each of shape (b, latent_size)
    """
    X = X.permute(2, 0, 1).float()  # reshape to (t, b, f)

    # _: encoding/mapping of every t to the hidden_size space (t, b, hidden_size),
    # final_h_state: the last layers hidden state (num_layers=1, b, hidden_size)

    _, final_h_state = self.rnn(
        X.to(self.device)
    )  # (num_layers=1, b, hidden_size)

    # maps the batch inputs from hidden_size space to latent_size space (latent variable Z)
    mu_std = self.dist_predictor(
        final_h_state[0, :, :]
    )  # passing only (b, hidden_size), ignore the dim=0 bcz we use 1-layer GRU (num_layers=1)

    # semantic meaning of mean and std
    mu = mu_std[:, : mu_std.shape[1] // 2]  # (b, latent_size)
    std = mu_std[:, mu_std.shape[1] // 2 :]  # (b, latent_size)

    return mu, std
GaussianDecoder
GaussianDecoder(
    feature_size,
    output_size,
    latent_size,
    mid_layer_size,
    device,
)

Bases: Module

Gaussian decoder for generating distributions from latent representations.

Decodes latent variables into likelihood distributions over the output space. Supports both univariate (with variance) and multivariate (with covariance) Gaussian distributions for flexible time series generation.

A Gaussian decoder that estimates the likelihood distribution of the latent representation Z (encoding) of X.

Parameters:

  • feature_size (int) –

    The number of input features

  • output_size (int) –

    The expected output size (something like number of predictions to make * number of input features)

  • latent_size (int) –

    The latent representation size (output size of encoder).

  • mid_layer_size (int) –

    The size of a mid-layer between the latent space and final output mapping.

  • device (str) –

    The device to use.

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def __init__(
    self,
    feature_size: int,
    output_size: int,
    latent_size: int,
    mid_layer_size: int,
    device,
) -> None:
    """A Gaussian decoder that estimate the likelihood distribution of the latent representation Z (encoding) of X.

    Args:
        feature_size (int): The number of input features
        output_size (int): The expected output size (something like number of predictions to make * number of input features)
        latent_size (int): The latent representation size (output size of encoder).
        mid_layer_size (int): The size of a mid-layer between the latent space and final output mapping.
        device (str): The device to use.

    """
    super().__init__()

    self.feature_size = feature_size
    self.output_size = output_size
    self.device = device

    # Gaussian mean generator network from the latent space Z. The output_size is proportional to the number of input features as we are estimating
    #  the mean of every feature.
    self.mean_generator = torch.nn.Sequential(
        torch.nn.Linear(
            in_features=latent_size, out_features=mid_layer_size
        ),
        torch.nn.Tanh(),
        torch.nn.BatchNorm1d(num_features=mid_layer_size),
        torch.nn.Linear(
            in_features=mid_layer_size, out_features=self.output_size
        ),
    ).to(self.device)
    # UNIVARIATE CASE
    if feature_size == 1:
        # Gaussian variance generator network from the latent space Z. This is used for the univariate time series.
        # The output_size is proportional to the number of input features as we are estimating the variance of every feature.
        self.var_generator = torch.nn.Sequential(
            torch.nn.Linear(
                in_features=latent_size, out_features=mid_layer_size
            ),
            torch.nn.Tanh(),
            torch.nn.BatchNorm1d(num_features=mid_layer_size),
            torch.nn.Linear(
                in_features=mid_layer_size, out_features=self.output_size
            ),
            torch.nn.ReLU(),
        ).to(self.device)
    # MULTIVARIATE CASE
    else:
        # Gaussian covariance generator network from Z. The output_size is proportional to the number of input features as we are estimating
        #  the covariance of every feature. Because it's the covariance matrix we generate output_size*output_size values.
        self.cov_generator = torch.nn.Sequential(
            torch.nn.Linear(
                in_features=latent_size, out_features=mid_layer_size
            ),
            torch.nn.Tanh(),
            torch.nn.BatchNorm1d(num_features=mid_layer_size),
            torch.nn.Linear(
                in_features=mid_layer_size,
                out_features=self.output_size * self.output_size,
            ),
            torch.nn.ReLU(),
        ).to(self.device)
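The multivariate branch emits `output_size * output_size` values because they are reshaped into a factor `A` and turned into a valid covariance via `A @ A^T` plus a small diagonal term. A minimal numpy sketch of that construction (sizes are illustrative, not the framework's defaults):

```python
import numpy as np

# Hypothetical sizes: a batch of n latent samples, output_size = p * f
n, output_size = 4, 6
rng = np.random.default_rng(0)

# Stand-in for cov_generator(Z): unconstrained values reshaped to (n, k, k)
A = rng.standard_normal((n, output_size, output_size))

# Batched A @ A^T is symmetric positive semi-definite by construction;
# adding cov_noise_level * I nudges it toward positive definite.
cov_noise_level = 1e-3
covariance = A @ A.transpose(0, 2, 1) + cov_noise_level * np.eye(output_size)

assert np.allclose(covariance, covariance.transpose(0, 2, 1))  # symmetric
np.linalg.cholesky(covariance)  # Cholesky succeeds, i.e. positive definite
```

This is why no positivity constraint is needed on the raw network output: any real-valued `A` yields a symmetric PSD product.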
Functions
likelihood_distribution
likelihood_distribution(
    mu,
    std,
    num_samples,
    cov_noise_level,
    max_noise_correction,
)

Estimate the likelihood Gaussian distribution of the output (proportional to number of features and needed predictions) given the latent representation Z (encoding) of X.

Parameters:

  • mu (Tensor) –

    The mean of the latent distribution. Shape = (b, latent_size)

  • std (Tensor) –

    The std of the latent distribution. Shape = (b, latent_size)

  • num_samples (int) –

    Number of Zs to sample from the latent distribution, in case multiple samples are needed.

  • cov_noise_level (float) –

    The noise to add to the covariance to make it positive definite (PD).

  • max_noise_correction (int) –

    Maximum number of covariance PD correction iterations.

Returns:

  • tuple ( Tuple[Tensor, Tensor] ) –

    estimated mean and covariance or variance if univariate case

Note: n = b*num_samples, output_size = p*f (number of predictions to make * input features), where p=prediction_size is the prediction window.

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def likelihood_distribution(
    self,
    mu: torch.Tensor,
    std: torch.Tensor,
    num_samples: int,
    cov_noise_level: float,
    max_noise_correction: int,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Estimate the likelihood Gaussian distribution of the output (proportional to number of features and needed predictions) given the latent
    representation Z (encoding) of X.

    Args:
        mu (torch.Tensor): The mean of the latent distribution. Shape = (b, latent_size)
        std (torch.Tensor): The std of the latent distribution. Shape = (b, latent_size)
        num_samples (int): Number of Zs to sample from the latent distribution, in case multiple samples are needed.
        cov_noise_level (float): The noise to add to the covariance to make it positive definite (PD).
        max_noise_correction (int): Maximum number of covariance PD correction iterations.

    Returns:
        tuple: estimated mean and covariance or variance if univariate case

    Note: n = b*num_samples, output_size = p*f (number of predictions to make * input features), where p=prediction_size is the prediction window.
    """
    # sample Z from the distribution
    if num_samples == 1:
        Z = mu + std * torch.randn_like(mu).to(
            self.device
        )  # (b, latent_size)
    else:
        rand = torch.randn((num_samples, *mu.shape))
        Z = mu.unsqueeze(0) + std.unsqueeze(0) * rand
        Z = Z.reshape(-1, Z.shape[-1]).to(self.device)  # (n, latent_size)

    # Generate the distribution P(X|H,Z)
    mean = self.mean_generator(Z)  # (n, output_size)

    # UNIVARIATE CASE:
    if self.feature_size == 1:
        variance = self.var_generator(Z)  # (n, p)
        return mean, variance

    # MULTIVARIATE CASE:
    # make len(Z)=n=b*num_samples of identity matrix of shape (p*f, p*f)
    cov_noise = (
        torch.eye(self.output_size).unsqueeze(0).repeat(len(Z), 1, 1)
        * cov_noise_level
    )
    cov_noise = cov_noise.to(self.device)

    # self.cov_generator(Z): (n, output_size*output_size) output_size=p*f
    A = self.cov_generator(Z).view(
        -1, self.output_size, self.output_size
    )  # (n, p*f, p*f)
    A_transpose = torch.transpose(A, 1, 2)  # transpose A on dim 1 and 2

    # perform batch matrix-multi
    # torch.use_deterministic_algorithms(True)
    covariance = torch.bmm(A, A_transpose) + cov_noise  # (n, p*f, p*f)

    # check if cov is positive definite and try to add noise to it max_noise_correction of times, if no success log the problem and use identity matrix for cov
    covariance = check_cov_pd(
        covariance_matrix=covariance,
        cov_noise_level=cov_noise_level,
        device=self.device,
        max_noise_correction=max_noise_correction,
    )

    return mean, covariance
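Downstream, the returned `(mean, covariance)` pair typically parameterizes the Gaussian from which replacement values are drawn. A numpy sketch of that sampling step (shapes and names are illustrative, not the framework's API):

```python
import numpy as np

rng = np.random.default_rng(42)
n, output_size = 3, 4  # n = b * num_samples, output_size = p * f

# Pretend these were returned by likelihood_distribution (multivariate case)
mean = rng.standard_normal((n, output_size))
covariance = np.stack([0.1 * np.eye(output_size)] * n)

# Draw one replacement vector per row: x = mean + L @ eps, covariance = L @ L^T
L = np.linalg.cholesky(covariance)
eps = rng.standard_normal((n, output_size, 1))
samples = mean + (L @ eps).squeeze(-1)  # (n, output_size)
```

This is also why the PD check below matters: the Cholesky factorization used for sampling only exists for positive definite matrices.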

Functions

check_cov_pd
check_cov_pd(
    covariance_matrix,
    cov_noise_level,
    device,
    max_noise_correction=20,
)

Check whether a covariance matrix is positive definite (PD); if not, keep adding noise until it becomes PD. If max_noise_correction is exceeded, return the identity matrix.

Parameters:

  • covariance_matrix (Tensor) –

    A matrix of shape (n, k, k)

  • cov_noise_level (float) –

    The noise value added to the matrix to make it PD

  • max_noise_correction (int, default: 20 ) –

    Number of tries to correct the matrix, if exceeded return I.

  • device (str) –

    The device to use.

Returns:

  • Tensor

    torch.Tensor: A PD covariance matrix with noise added to the original one or the identity matrix I of same shape.

Source code in muppet/components/perturbator/generator/conditional_timestep_generator.py
def check_cov_pd(
    covariance_matrix: torch.Tensor,
    cov_noise_level,
    device,
    max_noise_correction: int = 20,
) -> torch.Tensor:
    """Check if a covariance matrix is Positive Definite (PD) if not keep adding noise to it till it becomes PD. If max_noise_correction is exceeded,
    return the identity matrix.

    Args:
        covariance_matrix (torch.Tensor): A matrix of shape (n, k, k)
        cov_noise_level (_type_): A noise value to be added to make the cov PD
        max_noise_correction (int, optional): Number of tries to correct the matrix, if exceeded return I.
        device (str, optional): The device to use.

    Returns:
        torch.Tensor: A PD covariance matrix with noise added to the original one or the identity matrix I of same shape.

    """
    cov_noise = (
        torch.eye(*covariance_matrix[0].size())
        .unsqueeze(0)
        .repeat(len(covariance_matrix), 1, 1)
        * cov_noise_level
    )  # (n, output_size, output_size)
    cov_noise = cov_noise.to(device)

    count_loop = 0
    while True:
        valid = constraints.positive_definite.check(covariance_matrix)
        if valid.all():
            return covariance_matrix
        else:
            error_index = torch.where(~valid)[0]
            covariance_matrix[error_index, :, :] = (
                covariance_matrix[error_index, :, :]
                + cov_noise  # cov_noise already includes the cov_noise_level factor
            )
            logger.warning(
                f"Covariance matrix is not positive definite at {len(error_index)} indices."
            )
            logger.warning(
                f"Adding {cov_noise_level}I to the matrix at those indices"
            )

            count_loop += 1
            logger.info(f"count_loop={count_loop}")
            if count_loop > max_noise_correction:
                covariance_matrix[error_index, :, :] = cov_noise
                logger.warning(
                    "Attempt to add more noise failed. Setting that covariance to I"
                )
                valid_loop = constraints.positive_definite.check(
                    covariance_matrix
                )
                np.save(
                    f"debug.array.{error_index}.npy",
                    covariance_matrix[error_index, :, :].detach().cpu().numpy(),
                )
                if valid_loop.all():
                    return covariance_matrix
                else:
                    logger.warning("Should not be here.")
                    return covariance_matrix

muppet.components.perturbator.generator.tabular_generator

Tabular data generators for perturbation-based explanations.

This module provides generators specifically designed for tabular data perturbations in the MUPPET XAI framework. These generators create realistic substitute values for masked features during the perturbation process, enabling meaningful explanations for tabular machine learning models.

Tabular data presents unique challenges for perturbation-based explanations due to mixed data types (numerical and categorical), feature correlations, and distribution properties. The generators in this module address these challenges by implementing different sampling strategies tailored to tabular characteristics.

The module contains

GaussianSamplingGenerator: Simple statistical generator using Gaussian distributions estimated from historical data for time series or sequential tabular data StandardGaussianTabularGenerator: Advanced generator for mixed tabular data with separate handling of numerical and categorical features RandomSampleTabularGenerator: Frequency-based generator that samples from observed feature value distributions in training data

Key Features
  • Handles mixed numerical and categorical features appropriately
  • Preserves feature distributions and correlations from training data
  • Supports instance-centered perturbations for local explanations
  • Configurable sampling strategies (statistical vs. frequency-based)
  • Deterministic sampling for reproducible explanations

These generators are typically used with tabular perturbators and are essential for methods like LIME, SHAP, and other feature attribution techniques applied to structured data, enabling realistic counterfactual analysis and feature importance discovery.

Classes

GaussianSamplingGenerator
GaussianSamplingGenerator(seed=None)

Bases: Generator

Simple Gaussian sampling generator for tabular data imputation.

Generates replacement values for perturbed features by sampling from normal distributions. Provides basic statistical imputation without considering feature correlations or data distributions.

A simple random sampling generator. Used for imputing missing values with values sampled from a normal distribution.

Parameters:

  • seed (int, default: None ) –

    Seed to control reproducibility

Source code in muppet/components/perturbator/generator/tabular_generator.py
def __init__(self, seed: int | None = None) -> None:
    """A simple random sampling generator. Used for imputing missing values by a sampled ones from a Normal Distribution.

    Args:
        seed (int, optional): Seed to control reproducibility

    """
    self.seed = seed
    self.is_trained = True

    super().__init__()
Functions
generate
generate(past, current, features_to_perturb)

Return sampled values from a Normal Distribution.

Parameters:

  • past (Tensor) –

    Past measurements from which mean and std will be estimated (b=1, f, t)

  • current (Tensor) –

    The current time step to perturb (b=1, f, 1)

  • features_to_perturb (Tensor) –

    Features to perturb

Returns:

  • Tensor

    torch.Tensor: sampled values

Source code in muppet/components/perturbator/generator/tabular_generator.py
def generate(self, past, current, features_to_perturb) -> torch.Tensor:
    """Return sampled values from a Normal Distribution.

    past (torch.Tensor): past measurements from which Mean and Std will be estimated (b=1, f, t)
    current (torch.Tensor): current time step to perturbate (b=1, f, 1)
    features_to_perturb (torch.Tensor): features to perturb

    Returns:
        torch.Tensor: sampled values
    """
    if self.seed is not None:  # explicit None check so seed=0 is honored
        torch.manual_seed(seed=self.seed)

    # Estimate mean and std for the normal distribution
    mean = torch.mean(past, dim=-1).cpu().numpy()
    std = torch.std(past, dim=-1, unbiased=False).cpu().numpy()

    # Sample from the normal distribution
    res = torch.from_numpy(
        np.random.normal(loc=mean, scale=std, size=mean.shape)
    )

    # Keep the original values for features that are not being perturbed
    mask = torch.zeros_like(res, dtype=torch.bool)
    mask[:, list(set(range(mask.shape[-1])) - set(features_to_perturb))] = (
        True
    )
    res[mask] = current.squeeze(-1).cpu().to(res.dtype)[mask]

    return res
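Stripped of the masking logic, the imputation above is per-feature Gaussian sampling from the history window. A minimal numpy sketch (shapes mirror the docstring; the data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
past = rng.standard_normal((1, 3, 50))  # (b=1, f=3, t=50) history window

# Per-feature mean and std over the time axis, as in generate()
mean = past.mean(axis=-1)  # (1, 3)
std = past.std(axis=-1)    # (1, 3), population std (unbiased=False)

# One sampled replacement value per feature
replacement = rng.normal(loc=mean, scale=std)  # (1, 3)
```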
StandardGaussianTabularGenerator
StandardGaussianTabularGenerator(
    train_data,
    categorical_features=[],
    sample_around_instance=True,
)

Bases: Generator

Advanced generator for mixed tabular data with statistical modeling.

Handles both numerical and categorical features by computing separate statistics and frequencies. Provides instance-centered perturbations for local explanations and maintains feature distributions from training data.

Initialize the StandardGaussianTabularGenerator for mixed data types.

Sets up a generator that handles both numerical and categorical features by computing separate statistics and frequencies, enabling realistic perturbations for tabular machine learning explanations.

Parameters:

  • train_data (Tensor) –

    Training dataset tensor used to compute feature statistics and categorical frequencies. Shape: (n_samples, n_features).

  • categorical_features (list[int], default: [] ) –

    List of column indices that contain categorical data. These features will be handled using frequency-based sampling.

  • sample_around_instance (bool, default: True ) –

    If True, generates perturbations centered around the instance being explained. If False, samples from training data distribution. Useful for local vs. global explanation strategies.

Source code in muppet/components/perturbator/generator/tabular_generator.py
def __init__(
    self,
    train_data: torch.Tensor,
    categorical_features: List[int] = [],
    sample_around_instance: bool = True,
) -> None:
    """Initialize the StandardGaussianTabularGenerator for mixed data types.

    Sets up a generator that handles both numerical and categorical features by
    computing separate statistics and frequencies, enabling realistic perturbations
    for tabular machine learning explanations.

    Args:
        train_data (torch.Tensor): Training dataset tensor used to compute feature statistics
            and categorical frequencies. Shape: (n_samples, n_features).
        categorical_features (list[int]): List of column indices that contain categorical data.
            These features will be handled using frequency-based sampling.
        sample_around_instance (bool): If True, generates perturbations centered around
            the instance being explained. If False, samples from training data
            distribution. Useful for local vs. global explanation strategies.
    """
    self.means_tensor = None
    self.std_tensor = None
    self.sample_around_instance = sample_around_instance
    self.categorical_frequencies = []
    self.train_data = train_data
    self.categorical_features = categorical_features
    self.random_state = np.random.RandomState(
        seed=None
    )  # Initialization of the random state
    super().__init__()
    self.train_generator()
Functions
train_generator
train_generator()

Train the generator to compute summary statistics from the training data.

Source code in muppet/components/perturbator/generator/tabular_generator.py
def train_generator(self) -> None:
    """Train the generator to compute summary statistics from the training data."""
    b, f = (
        self.train_data.shape
    )  # Get the shape of the training data (b: batch, f: number of features)
    numerical_features = list(
        set(range(f)) - set(self.categorical_features)
    )  # Identify numerical features

    # Fit StandardScaler to numerical features
    standard_scaler = StandardScaler()
    # Handle potential GPU tensor conversion
    try:
        standard_scaler.fit(self.train_data[:, numerical_features].numpy())
    except TypeError:  # Added handling for tensors on GPU
        standard_scaler.fit(
            self.train_data[:, numerical_features].cpu().numpy()
        )

    # Save means and standard deviations as tensors and ensure they are on the same device as train_data
    self.means_tensor = torch.tensor(standard_scaler.mean_).to(
        DEVICE
    )  # Added device handling
    self.std_tensor = torch.tensor(standard_scaler.scale_).to(
        DEVICE
    )  # Added device handling
    self.numerical_features = numerical_features

    if len(self.categorical_features) == 0:
        return

    # Calculate frequencies of each element for each categorical feature
    self.categorical_frequencies = {}
    for feat_idx in self.categorical_features:
        feat_values = (
            self.train_data[:, feat_idx].cpu().numpy()
        )  # Ensure it's on CPU for numpy operations
        unique, counts = np.unique(feat_values, return_counts=True)
        total_count = len(feat_values)
        freq_dict = dict(zip(unique, counts / total_count))
        self.categorical_frequencies[feat_idx] = freq_dict
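The categorical branch reduces to a normalized value count per feature. The core computation in isolation, on a toy column:

```python
import numpy as np

# One categorical column (toy data, illustrative values)
feat_values = np.array([0, 1, 1, 2, 1, 0])
unique, counts = np.unique(feat_values, return_counts=True)
freq_dict = dict(zip(unique, counts / len(feat_values)))
# frequencies sum to 1: {0: 2/6, 1: 3/6, 2: 1/6}
```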
generate
generate(x_instance, data_scaled)

Generate a perturbed sample based on the learned statistics.

Parameters:

  • x_instance (Tensor) –

    The instance to be explained, of shape (1, f).

  • data_scaled (Tensor) –

    Pre-scaled data based on normal distribution, of shape (n, 1, f).

Returns:

  • Tensor

    torch.Tensor: Generated sample tensor with perturbations.

Source code in muppet/components/perturbator/generator/tabular_generator.py
def generate(
    self, x_instance: torch.Tensor, data_scaled: torch.Tensor
) -> torch.Tensor:
    """Generate a perturbed sample based on the learned statistics.

    Args:
        x_instance (torch.Tensor): The instance to be explained, of shape (1, f).
        data_scaled (torch.Tensor): Pre-scaled data based on normal distribution, of shape (n, 1, f).

    Returns:
        torch.Tensor: Generated sample tensor with perturbations.
    """
    # Separate numerical and categorical indices
    numerical_indices = self.numerical_features

    # Initialize sampled_values_tensor with the same shape as data_scaled
    sampled_values_tensor = data_scaled.clone()

    # Extract the instance to explain (assume it's of shape (1, f))
    instance_to_explain = x_instance[0, :]

    # Generate perturbations for numerical features
    if len(numerical_indices) > 0:
        numerical_data = data_scaled[:, :, numerical_indices]
        if self.sample_around_instance:
            # Rescale the normal data using the instance's values plus some noise
            for i, num_idx in enumerate(numerical_indices):
                sampled_values_tensor[:, :, num_idx] = (
                    instance_to_explain[num_idx]
                    + numerical_data[:, :, i] * self.std_tensor[i]
                )
        else:
            # Rescale the normal data using the learned means and standard deviations
            for i, num_idx in enumerate(numerical_indices):
                sampled_values_tensor[:, :, num_idx] = (
                    numerical_data[:, :, i] * self.std_tensor[i]
                    + self.means_tensor[i]
                )

    # If categorical frequencies are available, sample categorical features
    if len(self.categorical_frequencies) > 0:
        for feat_idx, freq_dict in self.categorical_frequencies.items():
            unique_values = list(freq_dict.keys())
            probabilities = list(freq_dict.values())

            # Use random_state.choice to sample categorical values
            sampled_values = torch.tensor(
                self.random_state.choice(
                    unique_values,
                    size=data_scaled.shape[0],
                    p=probabilities,
                )
            )

            # Replace the values in the sampled tensor with the sampled categorical values
            sampled_values_tensor[:, :, feat_idx] = sampled_values.view(
                -1, 1
            )  # Ensure correct shape
    return sampled_values_tensor
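For numerical features, the two branches differ only in the center of the affine rescaling applied to the pre-scaled standard-normal draws. A sketch with illustrative values for a single feature:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(5)   # pre-scaled N(0, 1) draws for one numerical feature
std, mean = 2.0, 10.0        # learned per-feature statistics
x_instance = 7.5             # the feature's value in the explained instance

local_samples = x_instance + z * std  # sample_around_instance=True
global_samples = mean + z * std       # sample_around_instance=False

# Same draws, same spread, different center
assert np.allclose(local_samples - global_samples, x_instance - mean)
```

Sampling around the instance keeps perturbations in the neighborhood of the explained point (local explanations), while sampling around the mean probes the global training distribution.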
RandomSampleTabularGenerator
RandomSampleTabularGenerator(
    train_data, method="freq", seed=None
)

Bases: Generator

Generate random sample vectors based on feature values and frequencies from training data.

Attributes:

  • train_data (Tensor) –

    The training data from which feature values will be sampled.

  • n_features (int) –

    The number of features in the training data.

  • feature_values (list[list]) –

    List of unique values for each feature.

  • method (str) –

    The method to generate samples, either 'freq' or 'mean'.

Methods:

  • train_generator

    Computes unique feature values, their frequencies, and per-feature means from the training data.

  • generate

    Generates a specified number of random sample vectors from the feature values in the training data based on the specified method ('freq' or 'mean').

Initializes the RandomSampleTabularGenerator with training data.

Parameters:

  • train_data (Tensor) –

    The training data used to fit samplers. Expected shape: (num_train_samples, num_features).

  • method (str, default: 'freq' ) –

    The method to generate samples, either 'freq' or 'mean'. Default is 'freq'.

  • seed (int, default: None ) –

    The seed for random number generation. Default is None, which means no fixed seed.

Source code in muppet/components/perturbator/generator/tabular_generator.py
def __init__(self, train_data, method="freq", seed=None):
    """Initializes the RandomSampleTabularGenerator with training data.

    Args:
        train_data (torch.Tensor): The training data used to fit samplers.
            Expected shape: (num_train_samples, num_features).
        method (str, optional): The method to generate samples, either
            'freq' or 'mean'. Default is 'freq'.
        seed (int, optional): The seed for random number generation.
            Default is None, which means no fixed seed.
    """
    self.train_data = train_data
    self.n_features = train_data.shape[1]
    self.method = method
    self.seed = seed

    if self.seed is not None:
        random.seed(self.seed)
        torch.manual_seed(self.seed)

    # Extract unique feature values and their abundances for each feature
    super().__init__()
    self.train_generator()
Functions
train_generator
train_generator()

Compute unique feature values, their frequencies, and per-feature means from the training data. Can be extended in the future if additional training logic is required.

Source code in muppet/components/perturbator/generator/tabular_generator.py
def train_generator(self):
    """This method can be extended or implemented in the future if additional training logic is required."""
    self.feature_values = []
    self.feature_frequencies = []

    for i in range(self.n_features):
        feature_column = self.train_data[:, i]
        unique_values, counts = torch.unique(
            feature_column, return_counts=True
        )
        self.feature_values.append(unique_values.tolist())
        self.feature_frequencies.append(counts.tolist())

    # Calculate the mean of each feature
    self.feature_means = torch.mean(self.train_data, dim=0).tolist()
generate
generate(n_samples)

Generates random samples by frequency-based sampling of feature values or by using the mean value of each feature.

Parameters:

  • n_samples (int) –

    The number of samples to generate.

Returns:

  • Tensor

    torch.Tensor: A tensor containing the generated random samples with shape (n_samples, 1, n_features).

Source code in muppet/components/perturbator/generator/tabular_generator.py
def generate(self, n_samples):
    """Generates random samples by frequency based sampling feature values
    or by using the mean values for each feature.

    Args:
        n_samples (int): The number of samples to generate.

    Returns:
        torch.Tensor: A tensor containing the generated random samples
            with shape (n_samples, 1, n_features).
    """
    if self.method == "freq":
        generated_tensor = torch.zeros((n_samples, 1, self.n_features))

        for i in range(n_samples):
            sample = []
            for j in range(self.n_features):
                values = self.feature_values[j]
                frequencies = self.feature_frequencies[j]

                # Generate a random sample based on frequencies
                sampled_value = random.choices(
                    values, weights=frequencies, k=1
                )[0]
                sample.append(sampled_value)

            # Convert the sample to a tensor and reshape
            sample_tensor = torch.tensor(sample).float().unsqueeze(0)
            # Store the sample in the generated tensor
            generated_tensor[i] = sample_tensor

    elif self.method == "mean":
        # Generate samples using the mean values of each feature
        generated_tensor = (
            torch.tensor(self.feature_means)
            .float()
            .unsqueeze(0)
            .repeat(n_samples, 1, 1)
        )

    else:
        raise ValueError("Invalid method. Choose either 'freq' or 'mean'.")

    return generated_tensor
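The 'freq' branch is weighted sampling with replacement from the observed values. The core step in isolation (toy values and counts):

```python
import random

random.seed(0)
values = [0.0, 1.0, 2.0]   # unique values of one feature
frequencies = [10, 85, 5]  # their counts in the training data

# random.choices normalizes the weights, so raw counts can be used directly
samples = [random.choices(values, weights=frequencies, k=1)[0] for _ in range(100)]
assert set(samples) <= set(values)
```

Because the weights are normalized internally, the counts stored by `train_generator` never need to be converted to probabilities.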