- Uses an external model to predict the low-pass filters and minimizes the loss function in an autoencoder-like setup (see the conceptual sketch after this list).
- Download the temporary dataset using `wget` and unzip it to the `./data` directory.
- Use `visualization.py` to visualize the produced filters, etc.
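The core idea is sketched below: a small network predicts low-pass filter coefficients, a matching high-pass filter is derived from them, the signal is decomposed and reconstructed with those filters, and the reconstruction error is minimized as in an autoencoder. Everything in this sketch (`ToyFilterPredictor`, `qmf_highpass`, the filter length, the quadrature-mirror construction) is an illustrative assumption, not the repository's `FilterConv`/`DWT1d` implementation; the actual usage follows further below.

```python
# Minimal conceptual sketch, NOT the repository code: all names, shapes and
# the filter construction below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FILTER_LEN = 8  # assumed length of the predicted low-pass filter


class ToyFilterPredictor(nn.Module):
    """Hypothetical stand-in for the external filter prediction model."""

    def __init__(self, filter_len=FILTER_LEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool1d(32),      # pool each signal to a fixed length
            nn.Flatten(),
            nn.Linear(32, filter_len),     # output the low-pass coefficients
        )

    def forward(self, x):                  # x: (batch, 1, length)
        lo = self.net(x)
        return lo / lo.norm(dim=-1, keepdim=True)  # keep filters at unit norm


def qmf_highpass(lo):
    """Derive a high-pass filter via the quadrature-mirror relation (assumption)."""
    signs = torch.tensor([(-1.0) ** k for k in range(lo.shape[-1])])
    return torch.flip(lo, dims=[-1]) * signs


def reconstruction_loss(x, lo):
    """Analyse and resynthesise x with the predicted filters, return the MSE."""
    hi = qmf_highpass(lo)
    # Share one filter pair per batch by averaging the per-sample predictions.
    w = torch.stack([lo.mean(0), hi.mean(0)]).unsqueeze(1)       # (2, 1, L)
    coeffs = F.conv1d(x, w, stride=2, padding=FILTER_LEN // 2)   # analysis
    x_hat = F.conv_transpose1d(coeffs, w, stride=2, padding=FILTER_LEN // 2)
    x_hat = x_hat[..., : x.shape[-1]]                            # synthesis
    # Perfect reconstruction is not guaranteed by construction; minimizing this
    # loss pushes the predicted filters toward it, autoencoder-style.
    return F.mse_loss(x_hat, x)


# One illustrative training step on random data.
model = ToyFilterPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 1, 128)
loss = reconstruction_loss(x, model(x))
opt.zero_grad()
loss.backward()
opt.step()
```

In the repository itself this role is played by `FilterConv` (filter prediction) and `DWT1d` (transform and training loop), used as shown below.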
```python
import torch

# FilterConv and DWT1d are provided by this repository (DWT1d in transform1d.py);
# adjust the imports as needed. Upper-case names are user-defined placeholders.

# Initialization of the filter prediction model:
model = FilterConv(in_channels=IN_CHANNELS, out_channels=OUT_CHANNELS)
model.to(device=DEVICE)

# Initialization of the adaptive wavelet transform (trained like an autoencoder):
data = torch.load(DATA_PATH)
awt = DWT1d(filter_model=model)

# Training:
awt.fit(X=data, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS)

# Saving the trained transform:
name = f"{name_of_your_model}.pth"
torch.save(awt, name)
```
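After training, the saved transform can be restored with `torch.load`. A minimal sketch, assuming the `DWT1d` object is callable on a batch of 1D signals and returns the wavelet coefficients (the actual interface may differ; the file name is a placeholder):

```python
import torch

# Reload the trained adaptive wavelet transform saved above.
awt = torch.load("my_model.pth")     # placeholder file name
awt.eval()

with torch.no_grad():
    signal = torch.randn(4, 1, 256)  # illustrative shape: (batch, 1, length)
    coeffs = awt(signal)             # assumption: forward pass returns coefficients
```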
- Currently implemented only for 1D signals (see transform1d.py).
- Idea based on the paper by Recoskie & Mann (2018).
- Implementation based on Yu-Group/adaptive-wavelets
```bibtex
@article{ha2021adaptive,
  title={Adaptive wavelet distillation from neural networks through interpretations},
  author={Ha, Wooseok and Singh, Chandan and Lanusse, Francois and Upadhyayula, Srigokul and Yu, Bin},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
```