
Consider use of improved double precision to reduce cumulative errors #6

Open
monty241 opened this issue Mar 5, 2020 · 10 comments

@monty241
monty241 commented Mar 5, 2020

The code modules currently all use REAL*4, which is (if I am correct) a single-precision 32-bit floating-point number.

For instance in ops-convec.f90:

  • REAL*4 z0
  • REAL*4 zi

I consider OPS a weather-related calculation model. As far as I know, most such models nowadays use 64-bit double-precision numbers for their calculations. Especially over a large number of calculation steps, precision may be lost or unacceptable cumulative errors may be introduced.
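Just to illustrate the idea (this is not how the current code is organised; the module and parameter names below are made up), a single kind parameter would make the working precision configurable in one place:

```fortran
! Hypothetical sketch: a shared kind parameter instead of REAL*4.
! Switching rp between real32 and real64 changes the working precision
! everywhere the parameter is used.
module ops_precision
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none
   integer, parameter :: rp = real64   ! or real32 to keep the current behaviour
end module ops_precision

! Declarations such as the ones in ops-convec.f90 would then read:
!   real(rp) :: z0
!   real(rp) :: zi
```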

From the historical perspective of 1989, I can imagine that an array with 100K numbers might have been hard to fit into the memory of an HP/DEC workstation or time-sharing system, but nowadays double precision no longer has to incur significant extra memory costs.
Back then, an FPU (when present) might have simulated double precision using multiple single-precision calculations, or the main processor might have simulated it. Nowadays, double precision is seen more often than single precision, both in hardware and in software, from where I stand.

A related issue has been created about estimating the cumulative errors and assessing whether they influence the accuracy of the model as documented on the RIVM website.

@gjcats
gjcats commented Aug 24, 2020

I replaced all 'real*8' by 'double precision', and all 'real*4' by 'real'. To avoid conflicts with the Fortran unformatted files, I added a preprocessor option -DInputIsChars and converted the unformatted files to (ASCII) text. See the code in gjcats/OPS. Also see the testset, provided by RIVM, in gjcats/OPS-testset. To avoid library interface conflicts, I added the required routines from the math77 library to the sources.

After all this, I compiled with gfortran -fdefault-real-8.
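To illustrate what the flag does (a standalone sketch, not taken from the OPS sources): -fdefault-real-8 promotes plain `real` declarations and default-real constants to 8 bytes, but leaves an explicit `real*4` untouched, which is why the real*4 -> real replacement is needed first.

```fortran
! Standalone check, not part of OPS: compile once without and once with
! -fdefault-real-8 and compare the reported kinds.
program precision_check
   implicit none
   real   :: a   ! default real: kind 4 normally, kind 8 with -fdefault-real-8
   real*4 :: b   ! explicitly 4 bytes: not promoted by the flag
   print *, 'kind(a) =', kind(a), 'kind(b) =', kind(b)
end program precision_check
```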

It did not make a difference in the testset.

@monty241
Author

Ok, great!

What test set did you compare? I would expect at least some trailing digits to differ due to different rounding.

I had a look at master...gjcats:gfortran, but could not find the updated code.

@monty241
Author

Hi,

GitHub can be a real .... I still don't understand it completely, even though I first studied it maybe 10 or 15 years ago.

You can commit the changes. That records the changes in your working files into your local copy of the version history (typically kept in a .git folder directly below the repository root).

When you "push", you are uploading those commits to GitHub's copy of your repository.

You might want to introduce a separate branch for it: with TortoiseGit that is simple: Switch/Checkout -> create new branch. Ideally that is done before making the changes; otherwise, copy the changes somewhere, revert them, switch to a new branch "my-great-test", and paste the changes back. When pushing to GitHub's copy of your repository, don't forget to upload your local branches.

Or just add the changes as an archive.
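For reference, the equivalent plain git commands (assuming the remote is named origin; the branch name is just the example from above):

```sh
git add -A                                   # stage the changed files
git commit -m "switch to double precision"   # record them in the local .git history
git checkout -b my-great-test                # create and switch to a separate branch
git push -u origin my-great-test             # upload the branch to GitHub's copy
```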

@gjcats

gjcats commented Aug 24, 2020

Can you access the gjcats repository OPS, branch OPS8 - which should do the job? (I tried to follow your TortoiseGit way.)

You asked for the testset repository: gjcats/OPS-testset.

I compared the printer and plotter output of the two examples. They do not have enough decimals printed to see differences.

When compiling with gfortran -O0, I get one grid point value in example1 that differs in the last printed decimal from the data provided by RIVM. When compiling with -O1, the differences with that testset are too large for comfort. But in either case, -O0 or -O1, double precision did not change any of the printed decimals compared to the original code (with unformatted input, real*4 and real*8).

@monty241
Author

Thanks; I myself found it quite complex the first time.

I've found OPS8, but I seem unable to find any differences with the master branch:

gjcats/OPS@master...gjcats:OPS8

Maybe the branch was pushed, but not the changes? It is normally visible when you use the context menu in TortoiseGit:

(screenshot of the TortoiseGit log window)

In this example, the red (1) shows the local repository, whereas the red (2) shows the state of the origin (typically the site the repository came from, in this case GitHub). The red (3) is a tag (a branch without changes).

When the remote origin and the local repository have the same files, the red (1) and (2) are on the same line in the log.

Good to hear that the first tests indicate there is no significant difference in the test case output. I hope I am handy enough to get it running too.

@gjcats

gjcats commented Aug 24, 2020

I created a new repository, OPS8, directly under gjcats. I do not know how to remove the branch OPS8 under OPS, but it is useless anyway.

@gjcats

gjcats commented Aug 25, 2020

I noticed that r1mach does not behave properly under r8, so I wrote a new one. The result now is that, under gfortran on an Intel i5, the testset is reproduced to within the accuracy provided by RIVM. There is a one-digit change in the plt file in example1. Interestingly, that single-digit difference is in a different grid point when compiled for r8 than for r4. Typical rounding effects.
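For context: r1mach is the math77/SLATEC routine that returns the machine constants for the default real type; its traditional implementation assumes 4-byte reals, so under -fdefault-real-8 the returned constants presumably no longer match the working precision. A minimal sketch of a replacement built on Fortran intrinsics (using the usual SLATEC numbering; not necessarily the routine that went into gjcats/OPS8):

```fortran
! Hedged sketch of an r1mach replacement based on Fortran intrinsics.
! Because it queries the default real kind, it follows whatever precision
! the code is compiled with (4-byte by default, 8-byte with -fdefault-real-8).
real function r1mach(i)
   implicit none
   integer, intent(in) :: i
   select case (i)
   case (1); r1mach = tiny(1.0)                  ! smallest positive magnitude
   case (2); r1mach = huge(1.0)                  ! largest magnitude
   case (3); r1mach = epsilon(1.0) / radix(1.0)  ! smallest relative spacing
   case (4); r1mach = epsilon(1.0)               ! largest relative spacing
   case (5); r1mach = log10(real(radix(1.0)))    ! log10 of the number base
   case default
      stop 'r1mach: index out of range'
   end select
end function r1mach
```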

The code to run r4 or r8 depending on compiler flag settings is in repository gjcats/OPS8, and the testset is in repository gjcats/OPS-testset.

For the testset, gfortran -O3 optimisation is allowable, with one exception: as soon as I tried to optimise ops_statparexp, output values became wrong, very wrong, or even NaNs. The produced values then do depend on the calculation precision, but that is not really a relevant observation, because the values are wrong anyway.

@gjcats

gjcats commented Aug 30, 2020

I changed the coordinates of source and receptor points from integer to real. I do not see a reason why not, although the accuracy of (integer) 1 m in EPSG:28992 (RDM) is sufficient for this kind of model. Interestingly, after this change, the precision (r8 or r4) makes a difference bigger than just the last digit in the reported deposition in example 2.

@monty241
Author

Sorry, I didn't get around to working with the OPS8 repository due to business and regulatory reporting requirements that had to be met.

This is an interesting finding. Out of curiosity: what is the magnitude of the difference in the outcome? Just 2 trailing digits instead of 1, or more, and out of how many digits (15? 7?)? And is it just some values shifting between adjacent areas, or does the total distribution change, or even the grand total?

@RIVM-OPS
Collaborator

First of all, we'd like to apologise for the late reaction. We'll try to reply more quickly to future issues.
Thank you for the report and thank you gjcats for the information on the tests you have performed.
We agree that it is useful to know more about the effects of precision and plan to conduct our own tests as well.
We will update this issue once the results of those tests become available.

@RIVM-OPS RIVM-OPS added the enhancement New feature or request label Jan 25, 2022