
Update test features: add create_fails script, QC process test, and wait on baseline results #644

Merged
apcraig merged 8 commits into CICE-Consortium:main on Oct 20, 2021

Conversation

@apcraig (Contributor) commented Oct 14, 2021

PR checklist

  • Short (1 sentence) summary of your PR:
    Add create_fails script, QC process test, and batch wait on baseline results when using --diff in test suites

  • Developer(s):
    apcraig

  • Suggest PR reviewers from list in the column to the right.

  • Please copy the PR test results link or provide a summary of testing completed below.
    All tests are bit-for-bit with testing on cheyenne, https://github.com/CICE-Consortium/Test-Results/wiki/cice_by_hash_forks#adab3d33996089d66c9a69a26815409c3edc584c. Tested scripts several ways/times.

  • How much do the PR code changes differ from the unmodified code?

    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on Icepack or any other models?

    • Yes
    • No
  • Does this PR add any new test cases?

    • Yes, qc process test
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/. A test build of the technical docs will be performed as part of the PR testing.)

    • Yes
    • No, does the documentation need to be updated at a later time?
      • Yes
      • No
  • Please provide any additional information or relevant details below:

  • Add create_fails script. This looks at the results of a test suite and creates a new test suite file (fails.ts) with only the failed tests. That can be edited and passed into cice.setup (--suite fails) to continue code validation. Hopefully this addresses and closes Re-running selected cases in suites #609.

  • Add qcchk and qcchkf tests which use the QC validation process to compare to other results. qcchk verifies the QC test passes and qcchkf verifies that the QC test fails. Add gx3 qc test to prod_suite.ts and verify it works on cheyenne. Had to add the appropriate python env setup to the cheyenne env files. The current tests just validate the QC process in bfbcomp (--diff) mode.

  • Use the QC check for regression testing of qcchk tests. With the gx1 qc and gx3 qc tests in the prod_suite, the QC check will now be used as part of the regression testing.

  • Add batch checks when bfbflag (--diff) checking is done, to make sure the baseline is complete before the comparison runs. Until now, there has been an issue if the baseline test is NOT completed before a test that compares against it. With this implementation, if the baseline job is still in the queue, the comparison waits for it to complete, in a style similar to poll_queue.csh.
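The create_fails idea described above can be sketched roughly as follows. This is a bash illustration with made-up file names and log contents, not the actual CICE script (which lives in the scripts tree and parses the real suite results file): it scans a results log, keeps only the tests that did not PASS, and writes them out as a new suite file.

```shell
#!/bin/bash
# Sketch of the create_fails idea (assumed file names and log format):
# scan a results log, keep tests that did not PASS, write fails.ts.

results=results.log
outfile=fails.ts

# Fabricate a tiny results log for the demo (not real CICE output).
cat > "$results" <<'EOF'
PASS cheyenne_intel_smoke_gx3_8x2 run
FAIL cheyenne_intel_restart_gx3_4x2 run
PASS cheyenne_intel_smoke_gx3_8x2 compare
FAIL cheyenne_intel_qcchk_gx1_144x1 run
EOF

# Keep any line whose status is not PASS; print each test name once.
grep -v '^PASS' "$results" | awk '{print $2}' | sort -u > "$outfile"

cat "$outfile"
```

The resulting fails.ts can then be edited and handed back to cice.setup (--suite fails) to rerun only the failed cases.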
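The baseline-wait behaviour in the last bullet can be sketched like this. This is a bash sketch with a stand-in queue check; the real implementation polls the batch system the way poll_queue.csh does (e.g. by parsing the queue status on cheyenne), and every name below is an assumption for illustration.

```shell
#!/bin/bash
# Sketch of waiting on a baseline job before running a --diff comparison.
# job_is_queued is a stand-in for a real batch-queue query (e.g. qstat).

job_is_queued () {
    # Stand-in: pretend the job leaves the queue once a marker file exists.
    [ ! -f baseline.done ]
}

wait_for_baseline () {
    local tries=0 max_tries=10
    while job_is_queued; do
        tries=$((tries + 1))
        if [ "$tries" -ge "$max_tries" ]; then
            echo "giving up waiting for baseline"
            return 1
        fi
        sleep 1
    done
    echo "baseline complete, starting comparison"
}

# Demo: mark the baseline as done, then wait (returns immediately).
touch baseline.done
wait_for_baseline
```

The point of the loop is simply that the comparison never runs against a partial baseline: it either sees the baseline finish or gives up after a bounded number of polls.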

@apcraig (Contributor, Author) commented Oct 14, 2021

I just updated the regression testing so qcchk tests actually use the QC validation process as part of their regression validation. I tested this on the new gx3 cases and it seems to work well. This will be applied to the gx1 qc test moving forward as well. These tests are defined in the prod_suite and are currently functional only on cheyenne (because of the python env setup). The prod_suite is run weekly, on cheyenne only, for 3 compilers.

@eclare108213 (Contributor) left a comment

Nice. Approving based only on visual inspection.


rm $tmpfile
echo "$0 done"
echo "Not passed tests written to file...... $outfile"
(Member)

Maybe "Failed tests" instead ?

@apcraig (Contributor, Author)

"Failed tests" isn't quite accurate: failed != not passed. But on reflection I think you're right that "failed" is probably the better word here. I'll change it.

@apcraig (Contributor, Author)

Updated.

echo "$0 done"
echo "Not passed tests written to file...... $outfile"
echo "To setup a new test suite, try something like"
echo " ./cice.setup --suite fails.ts ..."
(Member)

Does this work? I guess you first have to move fails.ts to configuration/scripts/tests/ in the sandbox? Is this what you mean by "something like"?

@apcraig (Contributor, Author)

You're right that it wouldn't work exactly as described. You can also move fails.ts to the cice.setup directory; it works from there. But you're right, maybe I'll clarify that a bit. I don't want to get too deep into the weeds though.

@apcraig (Contributor, Author)

Updated.

@apcraig apcraig merged commit 2207d88 into CICE-Consortium:main Oct 20, 2021
@apcraig apcraig deleted the testf branch August 17, 2022 20:56

Successfully merging this pull request may close these issues.

Re-running selected cases in suites
3 participants