I understand that in cross-silo horizontal federated learning settings, what needs to be encrypted is the gradients from the different data owners, so applying BatchCrypt amounts to HE + "local batch norm". This holds with an acceptable loss of precision and is unlikely to harm the convergence or performance of the model.

Do you have any idea how this could be done in a cross-silo vertical learning setting? There the intermediate results are not only gradients, but also linear computation results, as well as components used to compute the gradient. Applying "BatchNorm" to these doesn't seem to make sense. Any suggestions?
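For context on why this works naturally for gradients in the horizontal case, here is a minimal sketch of the quantize-then-batch idea behind BatchCrypt: each party clips and uniformly quantizes its gradient vector, packs the quantized values into one large integer (one HE plaintext slot layout), and the server sums ciphertexts. All function names here are hypothetical, plain integer addition stands in for the actual Paillier addition, and the clipping bound `clip` is assumed given; this is an illustration of the encoding, not the BatchCrypt implementation.

```python
import numpy as np

def quantize(grads, clip, bits=16):
    """Clip gradients to [-clip, clip] and uniformly quantize to signed ints."""
    scale = (2 ** (bits - 1) - 1) / clip
    clipped = np.clip(grads, -clip, clip)
    return np.round(clipped * scale).astype(np.int64), scale

def batch_encode(q, bits=16, pad=2):
    """Pack quantized values into one big integer (stand-in for one HE plaintext).
    Each slot gets `pad` extra bits so carries from summing parties stay inside it,
    and an offset makes every slot non-negative so subtraction can't borrow."""
    slot = bits + pad
    offset = 1 << (bits - 1)
    word = 0
    for v in q[::-1]:          # first element ends up in the lowest slot
        word = (word << slot) | int(v + offset)
    return word

def batch_decode(word, n, bits=16, pad=2, n_parties=1):
    """Unpack a summed word; each slot accumulated `n_parties` offsets."""
    slot = bits + pad
    offset = (1 << (bits - 1)) * n_parties
    mask = (1 << slot) - 1
    out = []
    for _ in range(n):
        out.append((word & mask) - offset)
        word >>= slot
    return np.array(out)

# Two parties' local gradients, one batched "ciphertext" each.
g1 = np.array([0.5, -0.3, 0.1])
g2 = np.array([-0.2, 0.4, 0.0])
q1, scale = quantize(g1, clip=1.0)
q2, _ = quantize(g2, clip=1.0)

# Integer addition models the additive homomorphic sum of ciphertexts.
summed = batch_encode(q1) + batch_encode(q2)
dec = batch_decode(summed, n=3, n_parties=2)
recovered = dec / scale      # approximately g1 + g2
```

The key property is that addition is the only operation ever applied to the packed word, which is exactly what aggregating horizontally-partitioned gradients needs. In vertical FL, the exchanged intermediates feed into further multiplications and nonlinearities, which is why this batching trick doesn't transfer directly.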