# (function name and signature assumed; the original def line was not preserved)
def make_trainable(model, trainable):
    ''' Freezes/unfreezes the weights in the given model for each layer '''
    for layer in model.layers:
        layer.trainable = trainable
    model.trainable = trainable
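As a rough illustration of the freeze/unfreeze toggle above, using minimal stand-in objects (a real Keras `Model` exposes the same `.layers` list and `trainable` flags; the name `make_trainable` is an assumption, since the original signature was not preserved):

```python
# Minimal stand-ins for a Keras model; only .layers and .trainable matter here
class _Layer:
    def __init__(self):
        self.trainable = True

class _Model:
    def __init__(self, n_layers):
        self.layers = [_Layer() for _ in range(n_layers)]
        self.trainable = True

def make_trainable(model, trainable):
    # Toggle every layer's flag, then the model-level flag itself
    for layer in model.layers:
        layer.trainable = trainable
    model.trainable = trainable

critic = _Model(3)
make_trainable(critic, False)   # e.g. freeze the critic while training the generator
assert not critic.trainable and not any(l.trainable for l in critic.layers)
```

In WGAN training this is typically called twice per step: freeze the critic before the generator update, unfreeze it again before the critic update.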
...
@@ -69,14 +43,13 @@ def rectangular_array(n=9):
def wasserstein_loss(y_true, y_pred):
    """Calculates the Wasserstein loss: the critic maximises the distance between
    its output for real and generated samples. To achieve this, generated samples
    are labelled -1 and real samples +1; multiplying the outputs by the labels
    yields the Wasserstein loss via the Kantorovich-Rubinstein duality."""
    # assumes `from keras import backend as K` earlier in the file
    return K.mean(y_true * y_pred)
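The label trick the docstring describes can be checked numerically: with labels +1 for real and -1 for generated samples, the mean of label times critic output is exactly the (scaled) difference between the mean critic scores of the two groups. A numpy sketch with hypothetical critic outputs:

```python
import numpy as np

# Hypothetical critic outputs for a mixed batch: first half real, second half generated
y_pred = np.array([0.8, 0.6, -0.5, -0.7])
y_true = np.array([1.0, 1.0, -1.0, -1.0])   # real -> +1, generated -> -1

loss = np.mean(y_true * y_pred)             # the Wasserstein loss surrogate

# Equivalent split form: (scaled) mean over real minus mean over generated outputs
split = 0.5 * (y_pred[:2].mean() - y_pred[2:].mean())
assert np.isclose(loss, split)
```

Minimising this loss over the critic therefore pushes the two score distributions apart, which is the distance the docstring refers to.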
"""Calculates the gradient penalty (for details see arXiv:1704.00028v3).
"""Calculates the gradient penalty (for details see arXiv:1704.00028v3).
The 1-Lipschitz constraint for improved WGANs is enforced by adding a term which penalizes if the gradient norm in the critic unequal to 1.
The 1-Lipschitz constraint for improved WGANs is enforced by adding a term to the loss which penalizes if the gradient norm in the critic unequal to 1"""
(With this loss we penalize gradients with respect to the input of the averaged samples.)"""
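The penalty term itself is simple once the input gradient is available; in the real code the gradient is taken through the critic at interpolated ("averaged") samples, e.g. via `K.gradients`. A numpy sketch of just the penalty arithmetic, using a toy linear critic D(x) = w.x whose input gradient is w itself (the penalty weight of 10 is the value suggested in arXiv:1704.00028, assumed here):

```python
import numpy as np

GRADIENT_PENALTY_WEIGHT = 10  # lambda from arXiv:1704.00028 (assumed value)

def penalty(grad, weight=GRADIENT_PENALTY_WEIGHT):
    # Two-sided penalty: any deviation of the gradient norm from 1 is penalized
    return weight * (np.linalg.norm(grad) - 1.0) ** 2

# Toy linear critic D(x) = w.x, whose gradient w.r.t. the input is just w
w = np.array([0.6, 0.8])            # ||w|| = 1.0, i.e. exactly 1-Lipschitz

assert np.isclose(penalty(w), 0.0)  # gradient norm 1: no penalty
assert penalty(2 * w) > 0           # gradient norm 2: penalized
```

The interpolation point x_hat = eps * x_real + (1 - eps) * x_fake, with eps drawn uniformly per sample, is where this penalty is evaluated in the improved-WGAN training loop.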