Allow single/float32 precision tensor types #254

@iancze

Description

Is your feature request related to a problem or opportunity? Please describe.
In its current form (v0.2), MPoL uses float64 (or complex128) tensor types everywhere. Very early in MPoL development, I decided that core modules like BaseCube should use float64 tensors, and all of the downstream code builds on that choice. If I recall correctly, I chose float64 because I had some divergent optimisation loops with float32 and suspected that loss of precision was at fault, given the large dynamic range of astronomical images. With a few years of hindsight, it seems more likely that the optimisation simply went awry because of a bad learning rate and a finicky network architecture (e.g., no softplus or ln pixel mapping), but I never got to the bottom of the issue.

Describe the solution you'd like

  • In a test branch, create an MPoL version that runs with float32 and complex64 types.
  • Evaluate whether 'modern' MPoL can run to completion with single precision, and what speed-up this affords over float64 (if any).
  • If single precision improves performance, do not enforce float64 types in MPoL objects; a rough sketch of a dtype-configurable module follows this list.
  • Single precision would also allow MPoL to run on the Apple MPS backend, which does not support float64.
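As a starting point, here is a minimal sketch of what "do not enforce float64" could look like: a BaseCube-like module that takes a `dtype` argument at construction instead of hardcoding double precision. The class name `DtypeCube`, its constructor arguments, and the softplus pixel mapping are illustrative assumptions for this issue, not MPoL's existing API.

```python
# Sketch only: a hypothetical image-cube module whose precision is set at
# construction rather than hardcoded to torch.float64.
import torch
from torch import nn


class DtypeCube(nn.Module):
    """Image-cube parameterisation with configurable floating-point precision."""

    def __init__(self, npix: int, nchan: int = 1, dtype: torch.dtype = torch.float64):
        super().__init__()
        # Parameters are created directly in the requested precision, so every
        # downstream tensor inherits float32 or float64 from here. The matching
        # Fourier-domain tensors would then be complex64 or complex128.
        self.base_cube = nn.Parameter(torch.zeros(nchan, npix, npix, dtype=dtype))

    def forward(self) -> torch.Tensor:
        # Softplus keeps pixel values positive regardless of precision.
        return torch.nn.functional.softplus(self.base_cube)


# Single precision also opens up the Apple MPS backend, which rejects float64.
device = "mps" if torch.backends.mps.is_available() else "cpu"
cube32 = DtypeCube(npix=256, dtype=torch.float32).to(device)
print(cube32().dtype, next(cube32.parameters()).device)
```

Timing the same optimisation loop with `dtype=torch.float32` versus `dtype=torch.float64` (and on MPS versus CPU) would answer the speed-up question above.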
