ELUC: evaluate predictor uncertainty #55

@ofrancon

Description

Train a model that evaluates point predictions from a predictor model.

For a particular prediction, if the model has been trained on many similar samples, we can be more confident in the prediction than if the model has never seen features of this kind (an out-of-distribution sample).

See, for example, "Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel" for one methodology that could be used to train an uncertainty model. The associated code is available here

Other kinds of models could be used as well.
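As a rough illustration of the residual-estimation idea, here is a minimal sketch (not the paper's implementation): a Gaussian process is fit to the residuals of a point predictor, using both the inputs and the predictor's outputs as GP features, which approximates the I/O kernel by feature concatenation. The predictor, data, and kernel choices below are all hypothetical placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(200)

# Any point predictor works here; a linear model stands in for a neural net.
predictor = LinearRegression().fit(X_train, y_train)
residuals = y_train - predictor.predict(X_train)

# GP on residuals, with input and predicted output concatenated as features
# (a simplification of the paper's I/O kernel).
Z_train = np.hstack([X_train, predictor.predict(X_train).reshape(-1, 1)])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(Z_train, residuals)

# Corrected prediction plus an uncertainty estimate for new points.
X_new = np.array([[0.5], [10.0]])  # second point is out of distribution
y_hat = predictor.predict(X_new)
Z_new = np.hstack([X_new, y_hat.reshape(-1, 1)])
res_mean, res_std = gp.predict(Z_new, return_std=True)
y_corrected = y_hat + res_mean
```

Far from the training data, the GP posterior reverts toward its prior, so `res_std` grows for out-of-distribution points; that is exactly the confidence signal this issue asks for.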
