`sqrt_scale = torch.sqrt(scale.to(dtype))`
I know the paper mentions that this version of scaling directly parametrizes the scale instead of its exponent. However, an unintended side effect is that when the scale gets close to zero, a larger random gradient update can push it negative, which produces a NaN.
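For illustration, here is a minimal repro of that failure mode (the tensor values are made up, not taken from the repo):

```python
import torch

# A scale that a gradient step has pushed slightly below zero
scale = torch.tensor([-1e-6, 0.25])

print(torch.sqrt(scale))  # tensor([nan, 0.5000]) -- the NaN then propagates through the model
```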
The fix is simple: in our adaptation for our in-house models we changed it to `torch.sqrt(torch.abs(scale) + eps)`. The eps is added to preserve gradients (so the argument never reaches zero). A biased ReLU would probably also work, along with other nonlinear functions; a sketch of the change follows below.
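A minimal sketch of the workaround, assuming `scale` is the learnable parameter from the snippet above; the helper name and the `eps` value of 1e-8 are just illustrative choices:

```python
import torch

def safe_sqrt_scale(scale: torch.Tensor,
                    dtype: torch.dtype = torch.float32,
                    eps: float = 1e-8) -> torch.Tensor:
    # Original: torch.sqrt(scale.to(dtype)) -> NaN once scale dips below zero.
    # Workaround: take abs() and add a small eps so the sqrt argument stays
    # strictly positive and gradients keep flowing near zero.
    return torch.sqrt(torch.abs(scale).to(dtype) + eps)

# A slightly negative scale no longer produces NaN, and gradients stay finite.
scale = torch.tensor([-1e-6, 0.0, 0.25], requires_grad=True)
out = safe_sqrt_scale(scale)
out.sum().backward()
print(out)                               # all finite
print(torch.isfinite(scale.grad).all())  # tensor(True)
```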