Question regarding large datasets npy files #5

@dmoccia

Description

Max-

Very cool piece of code. I am using it for dimensionality reduction in drug discovery, and it looks like it could be quite useful, so I am scaling up and had a question about larger datasets. In the docs you write:

To prepare such a dataset, create a new directory, e.g. '~/my_dataset', and save the training data as individual npy files per example in this directory

Should I read this as one npy file per record? In other words, could the dataset be millions of npy files? I initially thought this would allow subsets of the data to be stored as 2d arrays, but it appears you intended separate files. You also mention that data can be saved in nested subdirectories...does this mean I can specify the train/validation sets in subdirectories?
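For reference, here is a minimal sketch of the one-file-per-record interpretation of the docs quote above, using a toy array; the directory name and filename pattern are illustrative, not taken from the project:

```python
import os
import numpy as np

# Toy dataset: 100 records with 16 features each (illustrative shapes).
data = np.random.rand(100, 16)

# Create the dataset directory and save each record as its own .npy file,
# one file per example, as the docs seem to describe.
out_dir = "my_dataset"
os.makedirs(out_dir, exist_ok=True)
for i, record in enumerate(data):
    np.save(os.path.join(out_dir, f"{i:06d}.npy"), record)
```

If nested subdirectories are supported as the docs mention, the same loop could write into `my_dataset/train/` and `my_dataset/val/` instead, though whether the loader treats those as separate splits is exactly the open question here.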

Thank you for any help you can provide!

Dennis
