Hi,
I have a question regarding the gradient penalty. I believe the L2-norm calculation of the gradient is incorrect: the norm of each sample's gradient should be a scalar, so the sum should run across all axes except the batch axis. Your calculation, however, sums across the channel axis only (axis=1), so it returns a matrix rather than a vector of per-sample norms. Is there a special reason for that?
vagan-code/vagan/model_vagan.py, line 134 at commit 6be0b51:

slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))
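For concreteness, here is a minimal sketch of the difference (assuming grads has shape [batch, height, width, channels] and TF2-style APIs; the shapes and variable names are illustrative only, not taken from the repo):

```python
import tensorflow as tf

# Illustrative gradients of the critic w.r.t. interpolated inputs;
# the shape [batch, height, width, channels] is an assumption for this sketch.
grads = tf.random.normal([8, 64, 64, 1])

# Current code: reduces over axis=1 only, so the result still has
# shape [8, 64, 1] -- one value per (width, channel) position, not per sample.
slopes_partial = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))

# Summing over all non-batch axes yields one scalar norm per sample,
# shape [8], which is what the WGAN-GP penalty expects.
slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))

# WGAN-GP penalty: penalize deviation of each per-sample norm from 1.
gradient_penalty = tf.reduce_mean((slopes - 1.0) ** 2)
```

With axis=1 only, the penalty ends up averaging over partial norms per spatial/channel position rather than over one gradient norm per sample.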
Best Regards,
Cem