Max,
Very cool piece of code; I am using it for dimensionality reduction in drug discovery, and it looks like it could be quite useful. I am now scaling up, and I had a question about larger datasets. In the docs you write:
To prepare such a dataset, create a new directory, e.g. '~/my_dataset', and save the training data as individual npy files per example in this directory
Should I read this as one npy file per record? In other words, could the dataset be millions of npy files? I initially thought this would allow subsets of the data to be stored as 2D arrays, but it appears you intended separate files (something like the sketch below). You also mention that data can be saved in nested subdirectories. Does this mean I can specify the train/validation sets in subdirectories?
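For concreteness, here is a minimal sketch of the layout I am assuming: one .npy file per record, with hypothetical train/val subdirectories (the split names and file naming are my own guess, not from the docs):

```python
import numpy as np
from pathlib import Path

# Hypothetical example data: each row is one record
# (e.g. one molecule's feature vector).
data = np.random.rand(1000, 128).astype(np.float32)

# Assumed layout: one .npy file per record, split across
# train/ and val/ subdirectories under the dataset root.
root = Path("~/my_dataset").expanduser()
for split, rows in [("train", data[:800]), ("val", data[800:])]:
    out_dir = root / split
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, row in enumerate(rows):
        np.save(out_dir / f"{i:07d}.npy", row)
```

Is that the intended structure, or would the loader treat the subdirectories as one flat pool of examples?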
Thank you for any help you can provide!
Dennis