models package¶
Submodules¶
models.baseline_imputations module¶
Basic imputation techniques.
- class models.baseline_imputations.Identity(samples, masks, args)¶
Bases: object
Performs identity (no imputation).
- test(samples, masks)¶
- train_generator(samples, masks, args)¶
- class models.baseline_imputations.MeanImputation(samples, masks, args, values=None)¶
Bases: models.baseline_imputations.ValueImputation
Performs mean imputation.
- train(samples, masks, args)¶
- train_generator(samples, masks, args)¶
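For reference, the core computation behind mean imputation can be sketched as follows (a standalone illustration, not the class’s actual code; it assumes a mask value of 1 marks a missing entry):
import numpy as np

def mean_impute(samples, masks):
    """Fill missing entries with per-column means of the observed values."""
    observed = np.where(masks == 1, np.nan, samples)  # hide missing entries
    col_means = np.nanmean(observed, axis=0)          # means over observed values only
    return np.where(masks == 1, col_means, samples)   # fill missing entries with the means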
models.daema module¶
Model implementing the DAEMA paper.
- class models.daema.Daema(samples, masks, args)¶
Bases: object
DAEMA model as presented in the paper.
- Parameters
samples – np.ndarray(Float); samples to use for initialisation
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
- test(samples, masks)¶
Imputes the given samples using the network.
- Parameters
samples – np.ndarray(Float); samples to impute
masks – np.ndarray(Float); corresponding mask matrix
- Returns
np.ndarray(Float); imputed samples
- train_generator(samples, masks, args, **kwargs)¶
Trains the network batch after batch as a generator.
- Parameters
samples – np.ndarray(Float); samples to use for training
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
kwargs – keyword arguments to be passed to the Adam optimiser
- Returns
Integer; step number
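A typical driver loop for this generator-style API is sketched below (the data is hypothetical, and args stands in for the object produced by pipeline/argument_parser.py; a mask value of 1 is assumed to mark a missing entry):
import numpy as np
from models.daema import Daema

# Toy data for illustration only.
samples = np.random.rand(256, 10).astype(np.float32)
masks = (np.random.rand(256, 10) < 0.2).astype(np.float32)

args = ...  # parsed arguments, obtained via pipeline/argument_parser.py

model = Daema(samples, masks, args)
for step in model.train_generator(samples, masks, args):
    pass  # the generator yields after each training step; evaluate or log here

imputed = model.test(samples, masks)  # np.ndarray(Float) of imputed samples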
- class models.daema.Generator(n_cols, mask_input, feature_size, attention_mode, activation)¶
Bases: torch.nn.modules.module.Module
Architecture of the DAEMA model.
- Parameters
n_cols – Int; number of columns in the dataset
mask_input – Generator.FC, Generator.ELEMENTWISE or None; what input to use for the feature encoder:
- Generator.FC: uses masks concatenated to the corresponding samples as input of the feature encoder
- Generator.ELEMENTWISE: uses masks to impute the samples with learned values
- None: uses only the samples as input of the feature encoder
feature_size – (Int or None, Int or None) or None; (d’, d_z) from the paper, i.e., (ways, latent_dim)
attention_mode – “classic”, “full”, “sep” or “no”; type of attention to use:
- full: as done in the paper, one set of weights per feature
- classic: one set of weights for all features
- sep: same as classic, but with d’ independent networks to produce each latent vector version
- no: no attention at all (classical denoising autoencoder)
activation – Str or None; torch.nn activation function to use at the end of the network (or None for no activation)
- ELEMENTWISE = 0¶
- FC = 1¶
- MODES = {1: '_FC', 0: '_EW', None: '_NO'}¶
- forward(samples, masks)¶
Forward function
- Parameters
samples – Tensor; samples with missing values
masks – Tensor; corresponding masks
- Returns
Tensor; imputed samples
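As an illustration, the constructor can be called as in the following sketch (the concrete values are hypothetical choices, not recommended settings):
import torch
from models.daema import Generator

net = Generator(
    n_cols=10,                  # dataset with 10 features
    mask_input=Generator.FC,    # concatenate the masks to the samples
    feature_size=(None, None),  # hypothetical (d', d_z) choice
    attention_mode="full",      # one set of attention weights per feature, as in the paper
    activation=None,            # no final activation
)
samples = torch.rand(32, 10)
masks = (torch.rand(32, 10) < 0.2).float()
imputed = net(samples, masks)   # Tensor of imputed samples, shape (32, 10)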
- class models.daema.ParallelLinear(in_channels, out_channels, n_layers)¶
Bases: torch.nn.modules.module.Module
Layer composed of parallel fully-connected layers.
- Parameters
in_channels – Integer; number of inputs of each layer
out_channels – Integer; number of outputs of each layer
n_layers – Integer; number of parallel layers
- forward(input_)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
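Conceptually, the layer behaves like n_layers independent nn.Linear layers applied side by side. A rough functional equivalent, assuming an input of shape (batch, n_layers, in_channels), is the following illustrative re-implementation (not the actual code):
import torch
import torch.nn as nn

class NaiveParallelLinear(nn.Module):
    """Illustrative stand-in: n_layers independent fully-connected layers."""

    def __init__(self, in_channels, out_channels, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(in_channels, out_channels) for _ in range(n_layers)
        )

    def forward(self, input_):
        # (batch, n_layers, in_channels) -> (batch, n_layers, out_channels)
        return torch.stack(
            [layer(input_[:, i]) for i, layer in enumerate(self.layers)], dim=1
        )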
- class models.daema.View(shape)¶
Bases: torch.nn.modules.module.Module
Layer to reshape the data (keeping the first (batch) dimension as is).
- Parameters
shape – tuple(Integer); expected shape (batch_dimension excluded)
- forward(input_)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
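For instance (a minimal sketch):
import torch
from models.daema import View

reshape = View((2, 5))  # target shape, batch dimension excluded
x = torch.rand(32, 10)
y = reshape(x)          # shape (32, 2, 5); the batch dimension is kept as is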
models.holoclean module¶
Contains the implementation of AimNet, from Holoclean.
- class models.holoclean.AimNet(embedding_size, n_cols, dropout_percent=0.0)¶
Bases: torch.nn.modules.module.Module
AimNet architecture as introduced in the AimNet paper (for numerical features only).
- Parameters
embedding_size – Integer; size of the embeddings
n_cols – Integer; number of features
dropout_percent – Float; proportion of values to drop during training
- forward(samples)¶
Forward function
- Parameters
samples – Tensor; samples with missing values
- Returns
Tensor; imputed samples
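A minimal forward-pass sketch (the sizes are hypothetical, and how missing entries are pre-filled before the call is an assumption here):
import torch
from models.holoclean import AimNet

net = AimNet(embedding_size=64, n_cols=10, dropout_percent=0.1)
samples = torch.rand(32, 10)  # missing entries assumed pre-filled, e.g. with zeros
imputed = net(samples)        # Tensor of imputed samples, shape (32, 10)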
- class models.holoclean.Holoclean(samples, masks, args)¶
Bases: object
AimNet procedure as introduced in the AimNet paper (for numerical features only).
- Parameters
samples – np.ndarray(Float); samples to use for initialisation
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
- test(samples, masks)¶
Imputes the given samples using the network.
- Parameters
samples – np.ndarray(Float); samples to impute
masks – np.ndarray(Float); corresponding mask matrix
- Returns
np.ndarray(Float); imputed samples
- train_generator(samples, masks, args)¶
Trains the network epoch after epoch as a generator.
- Parameters
samples – np.ndarray(Float); samples to use for training
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
- Returns
Integer; epoch number
models.mida module¶
Model implementing the MIDA paper, with some additional possibilities.
- class models.mida.DAE(n_cols, theta=7, depth=3)¶
Bases: torch.nn.modules.module.Module
DAE architecture used in the MIDA paper.
- Parameters
n_cols – Integer; number of features
theta – Integer; hyperparameter to control the width of the network (see paper)
depth – Integer; hyperparameter to control the depth of the network (see paper)
- forward(samples)¶
Forward function
- Parameters
samples – Tensor; samples with missing values
- Returns
Tensor; imputed samples
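A minimal forward-pass sketch using the default hyperparameters:
import torch
from models.mida import DAE

net = DAE(n_cols=10)      # theta=7, depth=3 by default, as in the MIDA paper
samples = torch.rand(32, 10)
imputed = net(samples)    # Tensor of imputed samples, shape (32, 10)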
- class models.mida.MIDA(samples, masks, args)¶
Bases: object
MIDA procedure as introduced in the MIDA paper.
- Parameters
samples – np.ndarray(Float); samples to use for initialisation
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
- test(samples, masks)¶
Imputes the given samples using the network.
- Parameters
samples – np.ndarray(Float); samples to impute
masks – np.ndarray(Float); corresponding mask matrix
- Returns
np.ndarray(Float); imputed samples
- train_generator(samples, masks, args, **kwargs)¶
Trains the network batch after batch as a generator.
- Parameters
samples – np.ndarray(Float); samples to use for training
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
kwargs – keyword arguments to be passed to the Adam optimiser
- Returns
Integer; step number
models.miss_forest module¶
MissForest model. The code for the MissForest comes mainly from missingpy (https://github.com/epsilon-machine/missingpy/tree/master/missingpy), with some adjustments to make it compatible with our pipeline.
- class models.miss_forest.MissForest(max_iter=10, decreasing=False, missing_values=nan, copy=True, n_estimators=100, criterion=('mse', 'gini'), max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=-1, random_state=None, verbose=0, warm_start=False, class_weight=None)¶
Bases: object
Missing value imputation using Random Forests.
MissForest imputes missing values using Random Forests in an iterative fashion. By default, the imputer begins imputing missing values of the column (which is expected to be a variable) with the smallest number of missing values – let’s call this the candidate column. The first step involves filling any missing values of the remaining, non-candidate, columns with an initial guess, which is the column mean for columns representing numerical variables and the column mode for columns representing categorical variables. After that, the imputer fits a random forest model with the candidate column as the outcome variable and the remaining columns as the predictors over all rows where the candidate column values are not missing. After the fit, the missing rows of the candidate column are imputed using the prediction from the fitted Random Forest. The rows of the non-candidate columns act as the input data for the fitted model. Following this, the imputer moves on to the next candidate column with the second smallest number of missing values from among the non-candidate columns in the first round. The process repeats itself for each column with a missing value, possibly over multiple iterations or epochs for each column, until the stopping criterion is met. The stopping criterion is governed by the “difference” between the imputed arrays over successive iterations. For numerical variables (num_vars_), the difference is defined as follows:
sum((X_new[:, num_vars_] - X_old[:, num_vars_]) ** 2) / sum((X_new[:, num_vars_]) ** 2)
For categorical variables (cat_vars_), the difference is defined as follows:
sum(X_new[:, cat_vars_] != X_old[:, cat_vars_]) / n_cat_missing
where X_new is the newly imputed array, X_old is the array imputed in the previous round, n_cat_missing is the total number of categorical values that are missing, and the sum() is performed both across rows and columns. Following [1], the stopping criterion is considered to have been met when the difference between X_new and X_old increases for the first time for both types of variables (if available).
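In code, the two quantities can be computed roughly as follows (a sketch of the criterion described above; num_vars_ and cat_vars_ are index arrays of the numerical and categorical columns):
import numpy as np

def stopping_deltas(X_new, X_old, num_vars_, cat_vars_, n_cat_missing):
    """Per-round change of the imputed arrays, as used by the stopping criterion."""
    # Numerical columns: normalised squared change between successive rounds.
    delta_num = (np.sum((X_new[:, num_vars_] - X_old[:, num_vars_]) ** 2)
                 / np.sum(X_new[:, num_vars_] ** 2))
    # Categorical columns: fraction of imputed entries that changed category.
    delta_cat = np.sum(X_new[:, cat_vars_] != X_old[:, cat_vars_]) / n_cat_missing
    return delta_num, delta_cat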
NOTE: Most parameter definitions below are taken verbatim from the Scikit-Learn documentation at [2] and [3].
- max_iter : int, optional (default = 10)
The maximum iterations of the imputation process. Each column with a missing value is imputed exactly once in a given iteration.
- decreasing : boolean, optional (default = False)
If set to True, columns are sorted according to decreasing number of missing values. In other words, imputation will move from imputing columns with the largest number of missing values to columns with the fewest missing values.
- missing_values : np.nan or integer, optional (default = np.nan)
The placeholder for the missing values. All occurrences of missing_values will be imputed.
- copy : boolean, optional (default = True)
If True, a copy of X will be created. If False, imputation will be done in-place whenever possible.
- criterion : tuple, optional (default = (‘mse’, ‘gini’))
The function to measure the quality of a split. The first element of the tuple is for the Random Forest Regressor (for imputing numerical variables) while the second element is for the Random Forest Classifier (for imputing categorical variables).
- n_estimators : integer, optional (default = 100)
The number of trees in the forest.
- max_depth : integer or None, optional (default = None)
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
- min_samples_split : int or float, optional (default = 2)
The minimum number of samples required to split an internal node:
- If int, then consider min_samples_split as the minimum number.
- If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
- min_samples_leaf : int or float, optional (default = 1)
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
- If int, then consider min_samples_leaf as the minimum number.
- If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
- min_weight_fraction_leaf : float, optional (default = 0.0)
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- max_features : int, float, string or None, optional (default = “auto”)
The number of features to consider when looking for the best split:
- If int, then consider max_features features at each split.
- If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
- If “auto”, then max_features=sqrt(n_features).
- If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).
- If “log2”, then max_features=log2(n_features).
- If None, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
- max_leaf_nodes : int or None, optional (default = None)
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then unlimited number of leaf nodes.
- min_impurity_decrease : float, optional (default = 0.0)
A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.
- bootstrap : boolean, optional (default = True)
Whether bootstrap samples are used when building trees.
- oob_score : bool, optional (default = False)
Whether to use out-of-bag samples to estimate the generalization accuracy.
- n_jobs : int or None, optional (default = -1)
The number of jobs to run in parallel for both fit and predict.
None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- random_state : int, RandomState instance or None, optional (default = None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
- verbose : int, optional (default = 0)
Controls the verbosity when fitting and predicting.
- warm_start : bool, optional (default = False)
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See the Glossary.
- class_weight : dict, list of dicts, “balanced”, “balanced_subsample” or None, optional (default = None)
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification, weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}]. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). The “balanced_subsample” mode is the same as “balanced” except that weights are computed based on the bootstrap sample for every tree grown. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. NOTE: This parameter is only applicable for Random Forest Classifier objects (i.e., for categorical variables).
- statistics_ : Dictionary of length two
The first element is an array with the mean of each numerical feature being imputed while the second element is an array of modes of categorical features being imputed (if available, otherwise it will be None).
[1] Stekhoven, Daniel J., and Peter Bühlmann. “MissForest—non-parametric missing value imputation for mixed-type data.” Bioinformatics 28.1 (2012): 112-118.
[2] https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html#sklearn.ensemble.RandomForestRegressor
[3] https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier
>>> from missingpy import MissForest
>>> nan = float("NaN")
>>> X = [[1, 2, nan], [3, 4, 3], [nan, 6, 5], [8, 8, 7]]
>>> imputer = MissForest(random_state=1337)
>>> imputer.fit_transform(X)
Iteration: 0
Iteration: 1
Iteration: 2
array([[1.  , 2.  , 3.92],
       [3.  , 4.  , 3.  ],
       [2.71, 6.  , 5.  ],
       [8.  , 8.  , 7.  ]])
- fit(X, y=None, cat_vars=None)¶
Fit the imputer on X.
- X : {array-like}, shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features.
- cat_vars : int or array of ints, optional (default = None)
An int or an array containing column indices of categorical variable(s)/feature(s) present in the dataset X. None if there are no categorical variables in the dataset.
- self : object
Returns self.
- fit_transform(X, X_test=None, y=None, **fit_params)¶
Fit MissForest and impute all missing values in X.
- X : {array-like}, shape (n_samples, n_features)
Input data, where n_samples is the number of samples and n_features is the number of features.
- X : {array-like}, shape (n_samples, n_features)
Returns imputed dataset.
- transform(X, X_test=None)¶
Impute all missing values in X.
- X : {array-like}, shape = [n_samples, n_features]
The input data to complete.
- X : {array-like}, shape = [n_samples, n_features]
The imputed dataset.
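Beyond the fit_transform doctest above, the fit/transform split can be used as in this sketch (hypothetical data; cat_vars is omitted since all columns are numerical):
import numpy as np
from models.miss_forest import MissForest

nan = float("nan")
X_train = np.array([[1, 2, nan], [3, 4, 3], [nan, 6, 5], [8, 8, 7]], dtype=float)

imputer = MissForest(random_state=1337)
imputer.fit(X_train)                      # learns the column statistics (see statistics_ above)
X_completed = imputer.transform(X_train)  # imputed copy of the input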
- class models.miss_forest.MissForestImpute(samples, masks, args, **kwargs)¶
Bases: object
MissForest procedure as introduced in the MissForest paper.
- Parameters
samples – np.ndarray(Float); samples to use for initialisation
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
kwargs – keyword arguments to be passed to the MissForest class
- test(samples, masks)¶
Imputes the given samples using the fitted MissForest model.
- Parameters
samples – np.ndarray(Float); samples to impute
masks – np.ndarray(Float); corresponding mask matrix
- Returns
np.ndarray(Float); imputed samples
- train_generator(samples, masks, args)¶
Stores the training samples so they can be used when test samples are to be imputed.
- Parameters
samples – np.ndarray(Float); samples to use for training
masks – np.ndarray(Float); corresponding mask matrix
args – ArgumentParser; arguments of the program (see pipeline/argument_parser.py)
- Returns
Integer; step number
Module contents¶
Contains all the models that can be used to impute missing data.