From 6d0e3089b5ad4f3af0712983d1a85254bd9bfbc4 Mon Sep 17 00:00:00 2001
From: Gonzalo Martin Garcia <gonzalomartingarcia0@gmail.com>
Date: Mon, 26 Jun 2023 23:22:56 +0200
Subject: [PATCH] small update to README.md

---
 README.md | 41 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3534d8b..4248a7a 100644
--- a/README.md
+++ b/README.md
@@ -4,11 +4,48 @@
 This is the repository for the *diffusion project* for the **(PR) Laboratory Deep Learning 23ss**
 
 This repository houses our comprehensive pipeline, designed to conveniently train, sample from, and evaluate our unconditional diffusion model.
-The pipeline is initiated via the experiment_creator.ipynb notebook, which is separately run our local machine. This notebook allows for the configuration of every aspect of the diffusion model, including all hyperparameters. These configurations extend to the underlying neural backbone UNet, as well as the training parameters, such as training from checkpoint, Weights & Biases run name for resumption, optimizer selection, adjustment of the learning rate for manual learning rate scheduling, and more. Moreover, it includes parameters for evaluating a and sampling images via a trained diffusion models.
+The pipeline is initiated via the experiment_creator.ipynb notebook, which is run separately on our local machine. This notebook allows for the configuration of every aspect of the diffusion model, including all hyperparameters. These configurations extend to the underlying neural backbone UNet, as well as the training parameters, such as resuming training from a checkpoint, the Weights & Biases run name for resumption, optimizer selection, adjustment of the CosineAnnealingLR learning rate schedule parameters, and more. Moreover, it includes parameters for evaluating and sampling images via a trained diffusion model.
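+
+For reference, the CosineAnnealingLR parameters exposed here map onto PyTorch's scheduler roughly as follows (an illustrative sketch, not the pipeline's actual training code):
+
+```python
+import torch
+
+# Dummy parameter so the optimizer has something to step (illustrative only).
+params = [torch.nn.Parameter(torch.zeros(1))]
+optimizer = torch.optim.AdamW(params, lr=1e-4)
+
+# T_max and eta_min are the schedule parameters configurable via the notebook.
+scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-6)
+
+for epoch in range(1000):
+    optimizer.step()   # one (dummy) training step
+    scheduler.step()   # anneal the learning rate along a cosine curve
+```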
 
 Upon execution, the notebook generates individual JSON files, encapsulating all the hyperparameter information. When running the model on the HPC, we can choose between the operations 'train', 'sample', and 'evaluate'. These operations automatically extract the necessary hyperparameters from the JSON files and perform their respective tasks. This process is managed by the main.py file. The remaining files contain all the necessary functions optimized for HPC to perform the aforementioned tasks.
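+
+A launch on the HPC might then look like the following (the operation names come from the pipeline itself; the exact command-line form is an assumption, since argument parsing lives in main.py):
+
+```bash
+python main.py train     # extract training hyperparameters from the JSON files and train
+python main.py sample    # sample images from a trained diffusion model
+python main.py evaluate  # evaluate a trained diffusion model
+```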
 
-Every uniquely trained diffusion model has its own experiment folder, given by its WANDB run name. It holds four different directories: settings, trained_ddpm, samples, and evaluations. The settings folder holds the JSON files specifying the diffusion model's configurations as well as the arguments for the training, sampling, and evaluation functions. The trained_ddpm folder contains .pth files storing the weights and biases of this experiment's diffusion model, which have been saved at different epoch milestones while training. Upon resuming training, the pipeline automatically finds the highest epoch model in trained_ddpm and continues training from there. When sampling images from these trained diffusion models, the samples are stored in different directories named epoch_{i}. This is done so we know what epoch i version of the diffusion model was used to generate these samples.
+Every uniquely trained diffusion model has its own experiment folder, named after its WANDB run name. It holds four different directories: settings, trained_ddpm, samples, and evaluations. The settings folder holds the JSON files specifying the diffusion model's configuration as well as the arguments for the training, sampling, and evaluation functions. The trained_ddpm folder contains .pth files storing the weights and biases of this experiment's diffusion model, saved at different epoch milestones during training. Upon resuming training, the pipeline takes the specified model in trained_ddpm and continues training from there. When sampling images from these trained diffusion models, the samples are stored in separate directories named epoch_{i}, one per milestone, so we know which epoch-i version of the diffusion model was used to generate them.
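+
+The resulting layout of one experiment folder (with a hypothetical run name and epoch milestones) is therefore:
+
+```
+my-diffusion-run/
+├── settings/          # JSON files: model configuration plus train/sample/evaluate arguments
+├── trained_ddpm/      # .pth checkpoints saved at epoch milestones during training
+├── samples/
+│   ├── epoch_100/     # images sampled with the epoch-100 checkpoint
+│   └── epoch_200/
+└── evaluations/
+```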
 
  
 
-- 
GitLab