Hi, I want to train the two networks to generate samples from a dataset of a particular musical genre. How did you train the two networks for this?
As far as I understand, train.py only trains the VAE, so how do I retrain the WaveNet?
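To show roughly what I imagine the second stage to look like, here is a minimal, hypothetical sketch: freeze the trained VAE encoder and train a WaveNet-style decoder conditioned on its latent codes. I wrote it in PyTorch just to keep it short; the module names, shapes, and the MSE loss are placeholders, not this repo's actual code.

```python
import torch
import torch.nn as nn

# Everything below is a hypothetical stand-in, not this repo's actual model code.

class Encoder(nn.Module):
    """Toy stand-in for the trained VAE encoder."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=4, stride=2, padding=1)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.to_z = nn.Linear(32, latent_dim)

    def forward(self, x):                      # x: (batch, 1, samples)
        h = self.pool(torch.relu(self.conv(x))).squeeze(-1)
        return self.to_z(h)                    # one latent code per clip

class WaveNetDecoder(nn.Module):
    """Toy causal-conv decoder conditioned on the latent code z."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.cond = nn.Linear(latent_dim, 32)
        self.conv = nn.Conv1d(1 + 32, 1, kernel_size=2, padding=1)

    def forward(self, x, z):
        c = self.cond(z).unsqueeze(-1).expand(-1, -1, x.size(-1))
        y = self.conv(torch.cat([x, c], dim=1))
        return y[..., :x.size(-1)]             # keep only the causal outputs

encoder = Encoder()
decoder = WaveNetDecoder()

# Stage 2: freeze the already-trained VAE encoder, train only the WaveNet part.
for p in encoder.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)
audio = torch.randn(8, 1, 16000)               # dummy batch of waveforms

for step in range(10):
    with torch.no_grad():
        z = encoder(audio)                     # latent codes from the frozen encoder
    pred = decoder(audio, z)
    # Toy next-sample regression loss; a real WaveNet uses a categorical output.
    loss = nn.functional.mse_loss(pred[..., :-1], audio[..., 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Even a pointer to which script or option in this repo corresponds to that second stage would help.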
Did you try implementing it in TF 2.0?
Also, I'm curious about the possibility of cherry-picking points in the latent space, instead of sampling them randomly, to make experimental music; that would be great.
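To make that last point concrete, here is a tiny sketch of what I mean by cherry-picking instead of random sampling: pick two reference clips, encode them, and decode points along the line between their latent codes. `encode` and `decode` are hypothetical placeholders for whatever the trained models in this repo actually expose.

```python
import numpy as np

LATENT_DIM = 64  # hypothetical size; whatever the VAE in this repo uses

def encode(path):
    """Placeholder for the trained VAE encoder: audio file -> latent vector."""
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.standard_normal(LATENT_DIM)

def decode(z):
    """Placeholder for the WaveNet decoder: latent vector -> waveform samples."""
    return np.zeros(16000, dtype=np.float32)

# Cherry-pick two reference points in latent space instead of sampling randomly.
z_a = encode("clip_a.wav")   # hypothetical file names
z_b = encode("clip_b.wav")

# Walk the straight line between them and render audio at each step.
for t in np.linspace(0.0, 1.0, num=5):
    z = (1.0 - t) * z_a + t * z_b            # linear interpolation in latent space
    audio = decode(z)
    print(f"t={t:.2f} -> {audio.shape[0]} samples")
```

Being able to hand-pick or interpolate z like this, rather than sampling it from the prior, is what I'd like to do for experimental pieces.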
Thank you