diff --git a/Conditional_Diffusion_Model/masking.ipynb b/Conditional_Diffusion_Model/automated_masking_func.ipynb
similarity index 98%
rename from Conditional_Diffusion_Model/masking.ipynb
rename to Conditional_Diffusion_Model/automated_masking_func.ipynb
index 820fdfdeffc64f5e166478add5bedbc2ece30d17..a19afb63d2d41de5f3d076d6a79fa151fab04089 100644
--- a/Conditional_Diffusion_Model/masking.ipynb
+++ b/Conditional_Diffusion_Model/automated_masking_func.ipynb
@@ -7,14 +7,17 @@
    },
    "source": [
     "# Semi Automatic Masking\n",
+    "\n",
     "For a folder with own images in ***fpath***, find the (top left, bottom right) corner locations for each image and add them in cell ***1. Add Positions***  \n",
     "and save the csv file.\n",
     "\n",
     "Use the custom dataloader ***Addmask*** for sampling from the dataframe.\n",
     "\n",
+    "For what is this used? Our LHQ dataloader randomly draws black rectangles and applies them to the dataset images. However, as users, we would like to choose the exact mask to be inpainted for a desired image. This notebook presents a way to select the rectangular mask by giving two points.\n",
+    "\n",
+    "Another possible solution would be to add the mask with some editing tool. But when compression algorithms are involved, the border pixels between the image and mask become blurry and do not remain pure black. This causes the model not to inpaint them since it has been trained to inpaint pixels in the rectangle with the color value 0.\n",
     "\n",
-    "Why we do it that way? Saving the masked image leads to unsharp boarders\n",
-    "\n"
+    "We used this function to crop out the RWTH Clinic or a llama by inpainting landscape images as a demo in our final presentation."
    ]
   },
   {
diff --git a/README.md b/README.md
index fbd992b17720f68c50323cd77922a90da41c7602..fb4a27c205cd5ae9fa13304bd9a536b3bc193e92 100644
--- a/README.md
+++ b/README.md
@@ -1,33 +1,42 @@
 # Analysis Depot
 
-The Analysis Depot is a repository for Jupyter notebooks initially used to analyze subtasks and/or possible debugging for the actual code later run on the HPC Cluster. It now houses helpful and interesting in-depth descriptions, analyses, and visualizations of our implementations, training, sampling, experiments, and evaluation of the diffusion model and its neural backbone.
+The Analysis Depot is a repository for Jupyter notebooks we initially used to analyze subtasks and debug the code that was later run on the HPC Cluster. It now houses helpful and interesting in-depth descriptions, analyses, and visualizations of our implementations, training, sampling, experiments, and evaluation of our diffusion models and their neural backbones.
 
-# Overview
 
-Diffusion models (DMs) are a class of generative models that offer a unique approach to modeling complex data distributions by simulating a stochastic process, known as a diffusion process, that gradually transforms data from a simple initial distribution into a complex data distribution. More specifically, the simple distribution is given by Gaussian Noise which is iteratively denoised into coherent images through modeling the data distribution present in the training set.
+# Background
 
-The repo is divided into 3 sections pertaining to the type of diffusion model, namely, the unconditional, conditional, and latent diffusion models. We train them on a variety of datasets to perform various generative tasks, covering unconditional, class-conditional image generation, inpainting, and Super Resolution.
-For each of these sections, we provide notebooks with explanations and equations for the class of the DM, .... below we can give a short explanation on the notebooks they all have in common.
+Diffusion models (DMs) are a class of generative models that offer a unique approach to modeling complex data distributions: they simulate a stochastic process, known as a diffusion process, that gradually transforms samples from a simple initial distribution into the complex data distribution. More specifically, the simple distribution is Gaussian noise, which is iteratively denoised into coherent images by modeling the data distribution present in the training set.
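+
+For reference, the standard DDPM forward process (standard in the literature, sketched here for context) corrupts an image $x_0$ over $T$ steps via
+
+$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t\mathbf{I}\right),$$
+
+which admits the closed form $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$ with $\alpha_t = 1-\beta_t$, $\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$, and $\epsilon \sim \mathcal{N}(0,\mathbf{I})$. The UNet backbone is trained to predict the noise $\epsilon$, which lets sampling reverse this chain step by step.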
 
-### UNet Models 
-[Backbone Folder](Unconditional_Diffusion_Model/neural_backbone): Models for Noise prediction are stored and analysed 
 
-### Dataset Analysis and Dataloading
-[Dataloading notebook](Unconditional_Diffusion_Model/dataloading.ipynb): A presentation and quick analysis of the Landscape dataset is given, which very likely be our choice for training an unconditional diffusion model ( [source link here](https://github.com/universome/alis/blob/master/lhq.md),  store in data folder).
+# Overview
 
-### Pipeline 
-[Pipeline](Unconditional_Diffusion_Model/pipeline.ipynb):  Integrating the DDPM framework into a training loop in order to train our diffusion models on the HPC (simple/HPC version). Also show some training results and other function like a GIF creator. 
+The repository is divided into three sections pertaining to the type of diffusion model, namely unconditional, conditional, and latent DMs. We train them on a variety of datasets to perform various generative tasks: unconditional and class-conditional image generation, inpainting, and latent super-resolution.
 
+Each of these sections contains notebooks with explanations and equations for the respective DM class, the UNet backbone architecture, dataloading, sampling, evaluation, etc. They document our thought process and provide further analysis of our implementations.
 
-### Evaluation
-[Evaluation folder ](Untitled) : Containing first attempts that were done to get the IS Score from scratch. 
-[Evaluation Notebook](Unconditional_Diffusion_Model/evaluation.ipynb): Collects the analyzed evaluation functions that will be used later on. 
-[Feature Distribution](dim_reduction.ipynb): Applys UMAP mapping on encoded features to visualize feature distribution of samples
 
-### Latent Diffusion
-[LDM construction](Latent_Diffusion_Model/latent_diffusion_model.ipynb) summarizes how the LDM model class has been constructed.
+# Trained Models
 
-[Image Encoding](Latent_Diffusion_Model/image_encoding.ipynb) shows how the images are encoded into latent space and decoded back to RGB images with a vqgan and further a quick analysis of the latent image properties.
+All our models share **AdamW** as the optimizer and **CosineAnnealingLR** as the scheduler (starting at a **learning rate** of **0.0001** and decaying to an **eta_min** of **1e-10**), and each is trained jointly with an EMA model using a **decay** of **0.9999**. The diffusion process is performed on a Markov chain of length **T=1000**.
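+
+A minimal sketch of this training setup (PyTorch; the stand-in `model` and the step count are placeholders, not our actual UNet or run length):
+
+```python
+import copy
+import torch
+
+model = torch.nn.Conv2d(3, 3, 3)  # placeholder for the actual UNet backbone
+total_steps = 450_000             # run-dependent, see the Iters. row below
+
+optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
+scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
+    optimizer, T_max=total_steps, eta_min=1e-10)
+
+# EMA copy of the weights, updated after every optimizer step.
+ema_model = copy.deepcopy(model)
+
+@torch.no_grad()
+def ema_update(decay=0.9999):
+    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
+        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
+```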
 
-[Conditional Unet](Latent_Diffusion_Model/backbone/latent_unet_learned_condition.ipynb) shows how UNet was adapted for image to image conditioning. 
+| Hyperparam.  | LHQ-UDM | Bottleneck | No Attention | Celeb-UDM | Cosine Noise | True Variance | Class-CDM | No CFG | Inpaint-CDM | Small-CDM | LDM-16 | LDM-8 |
+|--------------|---------|------------|--------------|-----------|--------------|---------------|-----------|--------|-------------|-----------|--------|-------|
+| Task         | Uncond. | Uncond.    | Uncond.      | Uncond.   | Uncond.      | Uncond.       | Class Cond. | Class Cond. | Inpainting | Inpainting | SuperRes | SuperRes |
+| Dataset      | LHQ     | LHQ        | LHQ          | CelebAHQ  | CelebAHQ    | CelebAHQ      | AFHQ      | AFHQ   | LHQ        | LHQ       | LHQ    | LHQ   |
+| Split        | 80-20   | 80-20      | 80-20        | 90-10    | 90-10        | 90-10         | 90-10     | 90-10  | 90-10      | 90-10     | 90-10  | 90-10 |
+| Resolution   | 128^2px | 128^2px    | 128^2px      | 128^2px  | 128^2px      | 128^2px       | 128^2px   | 128^2px| 128^2px    | 128^2px   | 512^2px| 512^2px |
+| Noise β_t    | linear  | linear     | linear       | linear   | cosine       | linear        | linear    | linear | linear     | linear    | linear | linear |
+| Variance σ^2 | same    | same       | same         | same     | same         | true  | same     | same   | same       | same      | same   | same   |
+| CFG          | -       | -          | -            | -        | -            | -              | yes      | no     | -          | -        | -      | -     |
+| VQGAN-f     | -       | -          | -            | -        | -            | -              | -        | -      | -          | -        | 16     | 8     |
+| z-shape     | -       | -          | -            | -        | -            | -              | -        | -      | -          | -        | (256,32,32) | (256,64,64) |
+| Parameters   | 37M     | 37M        | 34M          | 37M      | 37M          | 37M            | 37M      | 37M    | 37M        | 11M      | 496M   | 137M  |
+| Channel Mults.| [1,2,4,4,8] | [1,2,4,8,10] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,4,4,8] | [1,2,2,2,4] | [1,2,4,4,8] | [1,2,2,4,0] |
+| Attention    | yes     | yes         | no           | yes      | yes          | yes            | yes      | yes    | yes        | yes      | no     | yes   |
+| RF* per Block| 7x7     | 3x3         | 7x7          | 7x7      | 7x7          | 7x7            | 7x7      | 7x7    | 7x7        | 7x7      | 7x7    | 7x7   |
+| Batch Size   | 32      | 32          | 32           | 32       | 32           | 32             | 32       | 32     | 32         | 32       | 6      | 8     |
+| Iters.       | 450K    | 225K        | 225K         | 506K     | 506K         | 506K           | 445K     | 468K   | 427K       | 225K     | 1.75M  | 1M    |
+| Epochs       | 200     | 100         | 100          | 600      | 600          | 600            | 950      | 1000   | 190        | 100      | 130    | 100   |
+| Cosine Steps^ | 2       | 1           | 1            | 3        | 1            | 1              | 2        | 2      | 2          | 1        | 1      | 1     |
 
+\***RF** pertains to the receptive field; ^**Cosine Steps** represents the number of times the learning rate undergoes gradual reduction through cosine annealing.
\ No newline at end of file