Quantifying approximation of a model and appropriateness for evaluation #216

@clegaspi

Description

This feature probably should be coupled with #214 in whatever way we choose to move forward with tagging/annotating quantities/models/symbols.

We need to rigorously vet our current models to determine their uncertainty, and to limit their use to the cases where it is appropriate.

There have been several suggestions for ways to add this kind of information:

  • Attach uncertainty values where possible, possibly with a dynamic uncertainty that depends on the properties of the material.
  • Have a confidence rating system (e.g. 0-4) to qualitatively assess the validity of a quantity. It's not clear yet how we would determine this on the fly.
    • None: uncertainty of the model is known precisely
    • 0: unable to determine confidence
    • 1: poor confidence in accuracy
    • 2: reasonable confidence in accuracy
    • 3: high confidence in accuracy
    • 4: value is exact
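As a sketch of how the scheme above might be modeled (all names here are hypothetical, not part of the existing codebase): a quantity carries either a numeric uncertainty, when the model's uncertainty is known precisely (the "None" rating above), or a qualitative confidence rating when it is not.

```python
from enum import IntEnum
from typing import Optional


class Confidence(IntEnum):
    """Proposed 0-4 qualitative confidence scale for a calculated quantity."""
    UNKNOWN = 0      # unable to determine confidence
    POOR = 1         # poor confidence in accuracy
    REASONABLE = 2   # reasonable confidence in accuracy
    HIGH = 3         # high confidence in accuracy
    EXACT = 4        # value is exact


class RatedQuantity:
    """Hypothetical wrapper carrying either a numeric uncertainty
    (confidence rating "None") or a qualitative Confidence rating,
    but never both, mirroring the scheme proposed above."""

    def __init__(self, value: float,
                 uncertainty: Optional[float] = None,
                 confidence: Optional[Confidence] = None):
        if uncertainty is not None and confidence is not None:
            raise ValueError("Specify either uncertainty or confidence, not both")
        self.value = value
        self.uncertainty = uncertainty
        # When a numeric uncertainty is attached, the qualitative rating is None
        self.confidence = confidence
```

For example, a DFT-derived band gap might get `RatedQuantity(1.1, confidence=Confidence.REASONABLE)`, while an exactly-known quantity would carry `uncertainty=0.0` and no rating.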
