A partial reimplementation of the paper *papername*. The pipeline covers training and sampling for an image-to-image conditioned latent diffusion model (LDM).
This is the repository for the *diffusion project* of the **(PR) Laboratory Deep Learning 23ss**.
## Background
Unconditional diffusion models are ...
## Sample Examples
...
## Recreating Results
We used the following modules:
* Python/3.10.4
* CUDA/11.8.0
* cuDNN/8.6.0.163-CUDA-11.8.0
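On a cluster that uses an environment-modules system (an assumption; adjust to your local setup), these can be loaded with, for example:
```
module load Python/3.10.4
module load CUDA/11.8.0
module load cuDNN/8.6.0.163-CUDA-11.8.0
```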
The required Python libraries are listed in the requirements.txt file and can be installed with:
```pip install -r requirements.txt```
Using a virtual environment is recommended.
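For example, a minimal setup on Linux/macOS (the environment name `.venv` is arbitrary) could look like:
```
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```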
### Model Training
To train the model, follow the steps:
1. Create an experiment folder with the [experiment creator](experiment_creator.ipynb) and the desired settings for training.
2. Within the repository folder, run ```python main.py train <path to experimentfolder>/settings```
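For illustration, assuming an experiment folder named `experiments/my_experiment` was created in step 1, the call would be:
```
python main.py train experiments/my_experiment/settings
```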
### Model Sampling
1. Make sure that the checkpoint file is inside the **ldm_trained** folder within the experiment folder (see the layout sketch after this list). Alternatively, this folder can be created manually and the checkpoint file added to it.
2. Also make sure that the correct checkpoint name is given in the JSON file ```settings/sample_samplesettings.json```; otherwise, sampling will be done with randomly initialized weights.
3. Within the repository folder, run ```python main.py sample <path to experimentfolder>/settings```
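For orientation, assuming a hypothetical experiment folder named `experiments/my_experiment` (the actual folder and checkpoint file names depend on your experiment), the layout expected by the steps above would roughly be:
```
experiments/my_experiment/
├── settings/
│   └── sample_samplesettings.json   # must name the checkpoint file
└── ldm_trained/
    └── <checkpoint file>
```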
### Model Evaluation
...
## Comprehensive Description
This repository houses our comprehensive pipeline, designed to conveniently train, sample from, and evaluate our unconditional diffusion model.
The pipeline is initiated via the experiment_creator.ipynb notebook, which is run separately on a local machine. This notebook allows for the configuration of every aspect of the diffusion model, including all hyperparameters. These configurations extend to the underlying UNet backbone as well as the training parameters, such as training from a checkpoint, the Weights & Biases run name for resumption, optimizer selection, adjustment of the CosineAnnealingLR learning-rate schedule parameters, and more. Moreover, it includes parameters for evaluating and sampling images with a trained diffusion model.