AutoImputation#

This notebook demonstrates the functionality of the autoimpute module, which provides an automated approach to selecting and applying optimal imputation methods for missing data. Rather than requiring you to test different approaches manually, autoimpute evaluates multiple methods (optionally tuning their hyperparameters to the dataset), identifies which performs best for your data, and applies it to generate high-quality imputations.

import pandas as pd
import numpy as np
import plotly.graph_objects as go
from sklearn.datasets import load_diabetes
import warnings

# Set pandas display options to limit table width
pd.set_option("display.width", 600)
pd.set_option("display.max_columns", 10)
pd.set_option("display.expand_frame_repr", False)

from microimpute.comparisons.autoimpute import autoimpute
from microimpute.visualizations.plotting import method_comparison_results

Data preparation#

This demonstration uses the diabetes dataset from scikit-learn. In real-world imputation scenarios, you would typically have a “donor” dataset with complete information for both predictor and target variables, and a “receiver” dataset that lacks some target variables that need to be imputed.

# Load the diabetes dataset
diabetes = load_diabetes()
diabetes_data = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)

# Display the first few rows to understand the data structure
diabetes_data.head()
age sex bmi bp s1 s2 s3 s4 s5 s6
0 0.038076 0.050680 0.061696 0.021872 -0.044223 -0.034821 -0.043401 -0.002592 0.019907 -0.017646
1 -0.001882 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163 0.074412 -0.039493 -0.068332 -0.092204
2 0.085299 0.050680 0.044451 -0.005670 -0.045599 -0.034194 -0.032356 -0.002592 0.002861 -0.025930
3 -0.089063 -0.044642 -0.011595 -0.036656 0.012191 0.024991 -0.036038 0.034309 0.022688 -0.009362
4 0.005383 -0.044642 -0.036385 0.021872 0.003935 0.015596 0.008142 -0.002592 -0.031988 -0.046641

For this demonstration, the diabetes dataset is split into donor and receiver portions: part of the data is treated as the donor dataset with complete information, and the rest as the receiver dataset with some variables that need imputation. autoimpute handles numerical, categorical, and boolean variables alike, so there are no special constraints on the choice of datasets or variables; the hypothetical sketch below illustrates what mixed-type data could look like.
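
The diabetes data used in this notebook is entirely numeric, but a donor frame mixing variable types is equally valid. The columns below are purely illustrative and not part of the diabetes dataset; the autoimpute call itself would be identical to the one shown later.

# Hypothetical mixed-type donor data: numeric, boolean and categorical columns
mixed_donor = pd.DataFrame(
    {
        "age": np.random.normal(45, 12, 200),  # numeric predictor
        "employed": np.random.choice([True, False], 200),  # boolean predictor
        "region": np.random.choice(["north", "south", "east"], 200),  # categorical predictor
        "income": np.random.lognormal(10, 0.5, 200),  # numeric variable to impute
    }
)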

# Split the data into donor and receiver portions
donor_indices = np.random.choice(
    len(diabetes_data), size=int(0.7 * len(diabetes_data)), replace=False
)
receiver_indices = np.array(
    [i for i in range(len(diabetes_data)) if i not in donor_indices]
)

donor_data = diabetes_data.iloc[donor_indices].reset_index(drop=True)
receiver_data = diabetes_data.iloc[receiver_indices].reset_index(drop=True)

# Define which variables we'll use as predictors and which we want to impute
predictors = ["age", "sex", "bmi", "bp"]
imputed_variables = ["s1", "s4"]

# For demonstration purposes, we'll remove the variables we want to impute from the receiver dataset
receiver_data_without_targets = receiver_data.drop(columns=imputed_variables)

print(f"Donor data shape: {donor_data.shape}")
print(f"Receiver data shape: {receiver_data_without_targets.shape}")
print(f"Predictors: {predictors}")
print(f"Variables to impute: {imputed_variables}")
Donor data shape: (309, 10)
Receiver data shape: (133, 8)
Predictors: ['age', 'sex', 'bmi', 'bp']
Variables to impute: ['s1', 's4']

Running autoimpute#

Use the autoimpute function to automatically evaluate different imputation methods, select the best one, and generate imputations. The function handles all the complexity of model evaluation, selection, and application in a single call.

warnings.filterwarnings("ignore")

# Run the autoimpute process
results = autoimpute(
    donor_data=donor_data,
    receiver_data=receiver_data_without_targets,
    predictors=predictors,
    imputed_variables=imputed_variables,
    tune_hyperparameters=False,  # enable automated hyperparameter tuning if desired
    k_folds=3,  # Number of cross-validation folds
)

print(
    f"Shape of receiver data before imputation: {receiver_data_without_targets.shape} \nShape of receiver data after imputation: {results.receiver_data.shape}"
)
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    3.9s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Batch computation too fast (0.08775091171264648s.) Setting batch_size=2.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    0.1s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    0.9s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    3.9s finished
Shape of receiver data before imputation: (133, 8) 
Shape of receiver data after imputation: (133, 10)

Understanding the results#

The autoimpute function returns a results object whose four key attributes provide comprehensive information about the imputation process (a short access sketch follows the list):

  • imputations: A dictionary containing the imputed values (by default at the median, q=0.5); the imputations from the best-performing method are stored under the “best_method” key

  • receiver_data: The receiver dataset with imputed values integrated into it

  • fitted_models: A dictionary of fitted imputation models; the best-performing model, already fitted on the donor data, is available under the “best_method” key

  • cv_results: A DataFrame with detailed performance metrics for all evaluated imputation methods
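
For reference, here is a minimal sketch of pulling each piece out of the returned object; the attribute and key names simply mirror how they are accessed in the cells that follow.

# Access the four components of the autoimpute results object
best_imputations = results.imputations["best_method"]  # imputed values from the winning method
imputed_receiver = results.receiver_data  # receiver data with imputations merged in
cv_table = results.cv_results  # per-quantile loss for every evaluated method
best_model = results.fitted_models["best_method"]  # fitted best-performing model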

# Examine the comparative performance of different imputation methods
print("Cross-validation results for different imputation methods:")
results.cv_results
Cross-validation results for different imputation methods:
0.05 0.1 0.15 0.2 0.25 ... 0.8 0.85 0.9 0.95 mean_loss
QRF 0.004900 0.007691 0.010232 0.013937 0.015926 ... 0.014936 0.012559 0.009987 0.006796 0.015163
OLS 0.003956 0.006627 0.008916 0.010814 0.012404 ... 0.013066 0.011149 0.008662 0.005395 0.012560
QuantReg 0.003792 0.006408 0.008959 0.010947 0.012464 ... 0.013325 0.011449 0.008743 0.005241 0.012556
Matching 0.021751 0.021631 0.021511 0.021390 0.021270 ... 0.019948 0.019828 0.019707 0.019587 0.020669

4 rows × 20 columns

The table above provides a comprehensive view of how each imputation method performs across different quantiles. The ‘mean_loss’ column shows the average quantile loss across all quantiles for each method. Lower values indicate better performance, and autoimpute automatically selects the method with the lowest average loss.
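
For a quick ranking of all evaluated methods, the mean_loss column can be sorted directly:

# Rank methods from best (lowest mean loss) to worst
results.cv_results["mean_loss"].sort_values()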

# Identify which method was selected as the best performer
best_method = results.cv_results["mean_loss"].idxmin()
print(f"Best performing method: {best_method}")
print(f"Average loss: {results.cv_results.loc[best_method, 'mean_loss']:.4f}")
Best performing method: QuantReg
Average loss: 0.0126

Visualizing method comparison#

Visualizing how different methods perform across quantiles provides insight into which methods are most appropriate for different parts of the distribution.

# Extract the quantiles used in the evaluation
quantiles = [q for q in results.cv_results.columns if isinstance(q, float)]

comparison_viz = method_comparison_results(
    data=results.cv_results,
    metric_name="Test Quantile Loss",
    data_format="wide",
)
fig = comparison_viz.plot(
    title="Autoimpute Method Comparison",
    show_mean=True,
)
fig.show()

The plot above illustrates how each imputation method performs across different quantiles of the distribution. Methods with consistently lower lines generally perform better overall.

comparison_viz.summary()
Method Mean Test Quantile Loss Best Quantile Best Test Quantile Loss Worst Quantile Worst Test Quantile Loss
2 QuantReg 0.012556 0.05 0.003792 0.60 0.017005
1 OLS 0.012560 0.05 0.003956 0.55 0.016918
0 QRF 0.015163 0.05 0.004900 0.50 0.021469
3 Matching 0.020669 0.95 0.019587 0.05 0.021751

Calling summary on the object returned by the method_comparison_results function produces a concise overview of the results, including the mean quantile loss for each method and the quantiles at which each method performs best and worst. This summary helps compare the imputation methods at a glance.

Examining the imputed values#

Now let us assess the actual imputed values generated by the best-performing method.

# Examine imputed values (these were imputed for q=0.5 by default)
median_imputations = results.imputations["best_method"] # Extract the best imputations with the "best_method" key
print("Median imputed values:")
median_imputations.head()
Median imputed values:
s1 s4
0 -0.010114 -0.035948
1 -0.021100 -0.009957
2 0.008520 -0.001572
3 0.020999 0.017844
4 -0.017520 -0.024773
# Look at the full receiver dataset with imputed values integrated
print("Receiver dataset with imputed values:")
results.receiver_data.head()
Receiver dataset with imputed values:
age sex bmi bp s2 s3 s5 s6 s1 s4
0 -0.001882 -0.044642 -0.051474 -0.026328 -0.019163 0.074412 -0.068332 -0.092204 -0.010114 -0.035948
1 -0.070900 -0.044642 0.039062 -0.033213 -0.034508 -0.024993 0.067737 -0.013504 -0.021100 -0.009957
2 -0.005515 -0.044642 0.042296 0.049415 -0.023861 0.074412 0.052277 0.027917 0.008520 -0.001572
3 0.070769 0.050680 0.012117 0.056301 0.049416 -0.039719 0.027364 -0.001078 0.020999 0.017844
4 -0.038207 -0.044642 -0.010517 -0.036656 -0.019476 -0.028674 -0.018114 -0.017646 -0.017520 -0.024773

Evaluating imputation quality#

In this demonstration, since the receiver dataset was artificially created by removing variables from the original data, we have the rare opportunity to evaluate the quality of the imputations by comparing them to the actual values.

# Visualize comparison between actual and imputed values
for var in imputed_variables:
    fig = go.Figure()

    # Plot actual values
    fig.add_trace(
        go.Scatter(
            x=receiver_data.index,
            y=receiver_data[var],
            mode="markers",
            name="Actual values",
            marker=dict(color="blue", size=8),
        )
    )

    # Plot imputed values
    fig.add_trace(
        go.Scatter(
            x=results.receiver_data.index,
            y=results.receiver_data[var],
            mode="markers",
            name="Imputed values",
            marker=dict(color="red", size=8),
        )
    )

    # Customize the plot appearance
    fig.update_layout(
        title=f"Comparison of actual vs imputed values for {var}",
        xaxis_title="Sample Index",
        yaxis_title=f"{var} Value",
        legend_title="Type",
        hovermode="closest",
    )

    fig.show()

The plots above show how well the imputed values (red) match the actual values (blue) that were removed from the receiver dataset. This visual comparison helps assess the quality of the imputations generated by the best-performing method.
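
Beyond the visual comparison, a simple numeric check is straightforward to add. The sketch below assumes the rows of results.receiver_data are in the same order as receiver_data (both were reset to a default integer index above) and reports the root-mean-squared error and correlation between the withheld actual values and the imputed ones.

# Quantify how closely the imputed values track the withheld actual values
for var in imputed_variables:
    actual = receiver_data[var]
    imputed = results.receiver_data[var]
    rmse = np.sqrt(((actual - imputed) ** 2).mean())
    corr = actual.corr(imputed)
    print(f"{var}: RMSE = {rmse:.4f}, correlation = {corr:.3f}")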

Advanced usage#

Custom models and hyperparameters#

The autoimpute function allows for customization of both the models to evaluate and their hyperparameters. This flexibility enables adaptation to specific dataset characteristics and imputation requirements. The models that support hyperparameter specification and tuning are Matching and QRF.

from microimpute.models import *

# Specify a custom subset of models to evaluate
custom_models = [QRF, OLS, Matching]

# Specify custom hyperparameters for some models
custom_hyperparameters = {
    "QRF": {"n_estimators": 200, "max_depth": 10},
    "Matching": {"constrained": True},
}

# Then simply run autoimpute with custom models and hyperparameters
advanced_results = autoimpute(
    donor_data=donor_data,
    receiver_data=receiver_data_without_targets,
    predictors=predictors,
    imputed_variables=imputed_variables,
    models=custom_models,
    hyperparameters=custom_hyperparameters,
    k_folds=3,
)

advanced_results.imputations["best_method"]
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    2.3s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Batch computation too fast (0.07336282730102539s.) Setting batch_size=2.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    0.1s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    3.3s finished
s1 s4
0 -0.009053 -0.033622
1 -0.012201 -0.004804
2 0.013738 0.005045
3 0.024221 0.027203
4 -0.012577 -0.021264
... ... ...
128 -0.014756 -0.036982
129 0.010607 0.011019
130 0.006817 -0.009732
131 0.008772 0.012049
132 -0.028967 -0.048025

133 rows × 2 columns

Comparison of imputed values across models#

To compare not only performance (via quantile loss) but also the final imputed values themselves, autoimpute supports setting the parameter impute_all to True, so that imputation is performed not only with the model chosen as the best performing but with all evaluated models. When set to True, this parameter ensures that autoimpute’s results class contains an imputations dictionary and a fitted-models dictionary with entries for all other models in addition to “best_method”.

warnings.filterwarnings("ignore")

# Run the autoimpute process
results = autoimpute(
    donor_data=donor_data,
    receiver_data=receiver_data_without_targets,
    predictors=predictors,
    imputed_variables=imputed_variables,
    tune_hyperparameters=False, 
    impute_all=True,
    k_folds=3,
)

print(f"Imputation results available for models: {results.imputations.keys()}")
print(f"The best performing model is: {results.fitted_models['best_method'].__class__.__name__}")
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    1.3s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Batch computation too fast (0.07395458221435547s.) Setting batch_size=2.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    0.1s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    0.9s finished
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=-1)]: Done   3 out of   3 | elapsed:    1.7s finished
{0.5:            s1        s4
0   -0.033216 -0.039493
1    0.012191 -0.021412
2    0.058973  0.034309
3    0.071357  0.071210
4    0.012191  0.034309
..        ...       ...
128 -0.016704 -0.039493
129 -0.044223 -0.039493
130 -0.018080 -0.039493
131  0.038334  0.034309
132 -0.059359 -0.076395

[133 rows x 2 columns]}
{0.5:            s1        s4
0   -0.009053 -0.033622
1   -0.012201 -0.004804
2    0.013738  0.005045
3    0.024221  0.027203
4   -0.012577 -0.021264
..        ...       ...
128 -0.014756 -0.036982
129  0.010607  0.011019
130  0.006817 -0.009732
131  0.008772  0.012049
132 -0.028967 -0.048025

[133 rows x 2 columns]}
{0.5:            s1        s4
0   -0.073119 -0.069383
1    0.012191 -0.039493
2    0.047965  0.034309
3   -0.004321  0.034309
4   -0.002945 -0.039493
..        ...       ...
128 -0.044223 -0.039493
129  0.012191 -0.039493
130  0.078236  0.108111
131  0.093372 -0.002592
132 -0.037344 -0.039493

[133 rows x 2 columns]}
Imputation results available for models: dict_keys(['best_method', 'QRF', 'OLS', 'Matching'])
The best performing model is: QuantRegResults
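
With impute_all=True, the imputations from each model can also be compared side by side. The short sketch below is a rough comparison under an assumption about structure: each entry in results.imputations is taken to be either a DataFrame of imputed values (as when accessing “best_method” earlier) or a quantile-keyed dictionary (as in the printed output above).

# Compare the mean imputed value of s1 under each fitted model
for method, imputed in results.imputations.items():
    frame = imputed[0.5] if isinstance(imputed, dict) else imputed
    print(f"{method}: mean imputed s1 = {frame['s1'].mean():.4f}")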