diff --git a/README.md b/README.md
index ddd6db13df79964e742ebfeaecdfa8abc3235bdc..3534d8bd5ee8850588e0c621718a27bba85f4a14 100644
--- a/README.md
+++ b/README.md
@@ -4,9 +4,83 @@
 This is the repository for the *diffusion project* for the **(PR) Laboratory Deep Learning 23ss**
 
 This repository houses our comprehensive pipeline, designed to conveniently train, sample from, and evaluate our unconditional diffusion model.
-The pipeline is initiated via the experiment_creator.ipynb notebook, which is separately run our local machine. This notebook allows for the configuration of every aspect of the diffusion model, including all hyperparameters. These configurations extend to the underlying neural backbone UNet, as well as the training parameters, such as training from checkpoint, Weights & Biases run name for resumption, optimizer selection, adjustment of the learning rate for manual learning rate scheduling, and more. Moreover, it includes parameters for evaluating a and generating images via a trained diffusion models.
+The pipeline is initiated via the experiment_creator.ipynb notebook, which is run separately on our local machine. This notebook allows for the configuration of every aspect of the diffusion model, including all hyperparameters. These configurations extend to the underlying neural backbone (a UNet) as well as the training parameters, such as training from a checkpoint, the Weights & Biases run name for resumption, optimizer selection, adjustment of the learning rate for manual learning rate scheduling, and more. Moreover, it includes parameters for evaluating and sampling images via a trained diffusion model.
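+
+As an illustration, the notebook might serialize a settings file along these lines (a minimal sketch; the actual keys, values, and file names are defined in experiment_creator.ipynb and will differ):
+
+```python
+import json
+
+# Hypothetical hyperparameters; the real keys are configured in experiment_creator.ipynb.
+config = {
+    "wandb_run_name": "my-diffusion-run",  # W&B run name, also used for resumption
+    "optimizer": "AdamW",                  # optimizer selection
+    "learning_rate": 1e-4,                 # starting point for manual LR scheduling
+    "train_from_checkpoint": False,        # resume training from a saved checkpoint
+    "unet": {"channels": 64, "depth": 4},  # illustrative UNet backbone settings
+}
+
+with open("settings/train_config.json", "w") as f:  # assumes the settings/ directory exists
+    json.dump(config, f, indent=4)
+```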
 
-Upon execution, the notebook generates individual JSON files, encapsulating all the hyperparameter information. When running the model on the HPC, we can choose between the operations 'train', 'generate', and 'evaluate'. These operations automatically extract the necessary hyperparameters from the JSON files and perform their respective tasks. This process is managed by the main.py file.
+Upon execution, the notebook generates individual JSON files encapsulating all the hyperparameter information. When running the model on the HPC, we can choose among the operations 'train', 'sample', and 'evaluate'. These operations automatically extract the necessary hyperparameters from the JSON files and perform their respective tasks. This process is managed by the main.py file, as sketched below. The remaining files contain all the necessary functions, optimized for the HPC, to perform the aforementioned tasks.
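+
+Schematically, the dispatch in main.py works roughly as follows (a simplified sketch, not the actual implementation; the argument parsing and the task function names are assumptions):
+
+```python
+import argparse
+import json
+
+# Stand-ins for the pipeline's real task functions (names are assumptions).
+def train(settings): print("training with", settings)
+def sample(settings): print("sampling with", settings)
+def evaluate(settings): print("evaluating with", settings)
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("operation", choices=["train", "sample", "evaluate"])
+    parser.add_argument("config", help="path to a JSON settings file")
+    args = parser.parse_args()
+
+    # Extract the hyperparameters written by experiment_creator.ipynb.
+    with open(args.config) as f:
+        settings = json.load(f)
+
+    # Dispatch to the requested task.
+    {"train": train, "sample": sample, "evaluate": evaluate}[args.operation](settings)
+
+if __name__ == "__main__":
+    main()
+```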
 
-The remaining files contain all the necessary functions optimized for HPC to perform the aforementioned tasks.
+Every uniquely trained diffusion model has its own experiment folder, named after its Weights & Biases run name. It holds four directories: settings, trained_ddpm, samples, and evaluations. The settings folder holds the JSON files specifying the diffusion model's configuration as well as the arguments for the training, sampling, and evaluation functions. The trained_ddpm folder contains .pth files storing the weights of this experiment's diffusion model, saved at different epoch milestones during training. Upon resuming training, the pipeline automatically finds the highest-epoch model in trained_ddpm and continues training from there. When sampling images from these trained diffusion models, the samples are stored in directories named epoch_{i}, so that we know which epoch-i version of the diffusion model generated them.
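+
+The resulting layout of an experiment folder (the run name is a placeholder):
+
+```
+<wandb_run_name>/
+├── settings/          # JSON files with the model configuration and task arguments
+├── trained_ddpm/      # .pth checkpoints saved at epoch milestones
+├── samples/
+│   └── epoch_{i}/     # images sampled with the epoch-i checkpoint
+└── evaluations/
+```
+
+Resuming training amounts to picking the checkpoint with the largest epoch number; a minimal sketch, assuming the epoch appears as the last number in the .pth filename (the actual naming scheme may differ):
+
+```python
+import re
+from pathlib import Path
+
+def latest_checkpoint(trained_ddpm_dir):
+    """Return the .pth checkpoint with the highest epoch number, or None."""
+    best = None
+    for path in Path(trained_ddpm_dir).glob("*.pth"):
+        digits = re.findall(r"\d+", path.stem)  # assume the last number is the epoch
+        if digits and (best is None or int(digits[-1]) > best[0]):
+            best = (int(digits[-1]), path)
+    return best[1] if best else None
+```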
+
+