
Gradient Penalty #2

@HCA97

Hi,

I have a question regarding the gradient penalty. I believe the L2-norm calculation of the gradient is incorrect. The squared gradients should be summed across all axes except the batch axis, because the norm of each sample's gradient should be a scalar value. However, your L2-norm calculation returns a matrix rather than a scalar per sample, because you are summing across the channel axis only. Is there a special reason for that?

slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))
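For comparison, here is a minimal sketch of what I would expect, assuming grads is the gradient of the critic output with respect to the interpolated images with shape [batch, height, width, channels] (the shape and the penalty term below are my assumptions, not taken from your code):

import tensorflow as tf

# Sum the squared gradients over every axis except the batch axis,
# so each sample ends up with a single scalar norm.
slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))

# Standard WGAN-GP penalty: push each per-sample norm toward 1.
gradient_penalty = tf.reduce_mean(tf.square(slopes - 1.0))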

Best Regards,
Cem
