
Consider use of improved double precision to reduce cumulative errors #6

Open
@monty241

Description


The code modules all currently use REAL*4, which is (if I am correct) a single-precision 32-bit floating-point type.

For instance, in ops-convec.f90 (a possible kind-based alternative is sketched after this list):

  • REAL*4 z0
  • REAL*4 zi
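
Such declarations could be converted one by one, but a common Fortran approach is to centralise the precision in a kind module, so the whole code base can be switched in one place. A minimal sketch of that idea (the module name precision_kinds and the wp parameter are my own, not from the OPS source):

    ! Hypothetical kind module, not part of OPS: defines the real kinds
    ! once so the working precision can be changed in a single place.
    module precision_kinds
       implicit none
       integer, parameter :: sp = selected_real_kind(6, 37)    ! ~ REAL*4
       integer, parameter :: dp = selected_real_kind(15, 307)  ! ~ REAL*8
       integer, parameter :: wp = dp                           ! working precision
    end module precision_kinds

The declarations above would then read real(wp) :: z0 and real(wp) :: zi, and moving back to single precision (for comparison runs, say) would only require changing wp.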

I consider OPS a weather-related calculation model. As far as I know, most such models nowadays use 64-bit double-precision numbers for their calculations. Especially over a large number of calculation steps, precision may be lost or unacceptable cumulative errors may be introduced.
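
To illustrate the concern, here is a small self-contained Fortran program (my own sketch, not OPS code) that adds 0.1 ten million times; the single-precision sum drifts visibly from the exact answer of 1.0e6, while the double-precision sum does not:

    program cumulative_error
       implicit none
       integer, parameter :: sp = selected_real_kind(6, 37)    ! single precision
       integer, parameter :: dp = selected_real_kind(15, 307)  ! double precision
       integer :: i
       real(sp) :: sum4
       real(dp) :: sum8

       sum4 = 0.0_sp
       sum8 = 0.0_dp
       do i = 1, 10000000
          ! 0.1 is not exactly representable in binary, so every
          ! addition rounds; in REAL*4 the error compounds quickly.
          sum4 = sum4 + 0.1_sp
          sum8 = sum8 + 0.1_dp
       end do

       print *, 'REAL*4 sum: ', sum4
       print *, 'REAL*8 sum: ', sum8
    end program cumulative_error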

From the historical perspective of 1989, I can imagine that an array of 100K double-precision numbers might have been hard to fit into the memory of an HP/DEC workstation or a time-sharing system, but nowadays double precision no longer has to incur significant extra memory costs.
Back then, an FPU (when present) might have emulated double precision using multiple single-precision operations, or the main processor might have emulated floating point entirely. Nowadays, from where I stand, double precision is seen more often than single precision in both hardware and software.

A related issue has been created about estimating the cumulative errors and assessing whether they affect the accuracy of the model as documented on the RIVM website.
