diff --git a/.DS_Store b/.DS_Store
deleted file mode 100644
index 05db801e8..000000000
Binary files a/.DS_Store and /dev/null differ
diff --git a/.github/PULL_REQUEST_TEMPLATE b/.github/PULL_REQUEST_TEMPLATE
index 4a914ccb0..fe5cbb6ee 100644
--- a/.github/PULL_REQUEST_TEMPLATE
+++ b/.github/PULL_REQUEST_TEMPLATE
@@ -1,9 +1,14 @@
 [Remove this and add a short summary line]:
-Developer(s):
-Are the code changes bit for bit, different at roundoff level, or more substantial?
-Is the documentation being updated with this PR? (Y/N)
-If not, does the documentation need to be updated separately? (Y/N)
-"Documentation" includes information on the wiki and .rst files in doc/source/,
-which are used to create the online technical docs at https://cice-consortium.github.io/CICE/
-Please suggest code reviewers in the column at right.
-Other Relevant Details:
+
+- Developer(s):
+
+- Please suggest code Pull Request reviewers in the column at right.
+
+- Are the code changes bit for bit, different at roundoff level, or more substantial?
+
+- Is the documentation being updated with this PR? (Y/N)
+If not, does the documentation need to be updated separately at a later time? (Y/N)
+Note: "Documentation" includes information on the wiki and .rst files in doc/source/,
+which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/.
+
+- Other Relevant Details:
diff --git a/.gitignore b/.gitignore
index 060aa4594..9574d2b2b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,6 @@
 # Ignore doc/build
 # these are stored in gh-pages orphan branch
 doc/build
+
+# Ignore macOS cache files
+.DS_Store
diff --git a/.travis.yml b/.travis.yml
new file mode 100644
index 000000000..d7e674cb3
--- /dev/null
+++ b/.travis.yml
@@ -0,0 +1,45 @@
+language: cpp
+
+sudo: false
+
+addons:
+  apt:
+    sources:
+      - ubuntu-toolchain-r-test
+    packages:
+      - tcsh
+      - pkg-config
+      - netcdf-bin libnetcdf-dev #libnetcdff-dev (only required on Debian)
+      - gfortran
+      - openmpi-bin libopenmpi-dev
+      - wget
+      #- lftp
+
+install:
+  # Fetch CICE grid files and initial conditions
+  - "wget ftp://ftp.cgd.ucar.edu/archive/Model-Data/CICE/CICE_data_ic_grid.tar.gz &&
+     tar xvfz CICE_data_ic_grid.tar.gz -C ~"
+
+  # Fetch forcing data
+  - "wget ftp://ftp.cgd.ucar.edu/archive/Model-Data/CICE/CICE_data_forcing_gx3_all.tar.gz &&
+     tar xvfz CICE_data_forcing_gx3_all.tar.gz -C ~"
+
+  # Mirror entire data folder
+  #- "lftp ftp://anonymous:travis@travis-ci.org@ftp.cgd.ucar.edu
+  #   -e 'mirror /archive/Model-Data/CICE/ ~/ICEPACK_INPUTDATA; quit'"
+
+script:
+  - "./cice.setup --suite travis_suite --testid travisCItest
+     --mach travisCI --env gnu;
+     cd testsuite.travisCItest &&
+     ./results.csh"
+
+notifications:
+  email: false
+
+after_failure:
+  - "for runlog in $TRAVIS_BUILD_DIR/travis_suite.travisCItest/*.travisCItest/logs/cice.runlog.*; do
+     echo \"### Contents of $runlog ###\" && tail -n 100 $runlog; done"
+  #- "git config --global user.email 'travis@travis-ci.org' &&
+  #   git config --global user.name 'ciceconsortium' &&
+  #   ./report_results.csh --travisCI"
diff --git a/README.md b/README.md
index d5aeff614..dea9a2258 100644
--- a/README.md
+++ b/README.md
@@ -1,39 +1,22 @@
+[![Build Status](https://travis-ci.org/CICE-Consortium/CICE.svg?branch=master)](https://travis-ci.org/CICE-Consortium/CICE)
+[![Documentation Status](https://readthedocs.org/projects/cice-consortium-cice/badge/?version=master)](http://cice-consortium-cice.readthedocs.io/en/master/?badge=master)
 ## Overview
-
-This repository contains files needed to run versions 6 and higher of the sea ice model CICE, which is now maintained by the CICE Consortium. Versions prior to v6 are found in the [CICE-svn-trunk repository](https://github.com/CICE-Consortium/CICE-svn-trunk).
+This repository contains the files and code needed to run the CICE sea ice numerical model starting with version 6. CICE is maintained by the CICE Consortium. Versions prior to v6 are found in the [CICE-svn-trunk repository](https://github.com/CICE-Consortium/CICE-svn-trunk).
 
 CICE consists of a top level driver and dynamical core plus the Icepack column physics code, which is included in CICE as a git submodule. Because Icepack is a submodule of CICE, Icepack and CICE development are handled independently with respect to the github repositories even though development and testing may be done together.
 
-## Obtaining CICE
-
-If you expect to make any changes to the code, we recommend that you first fork both the CICE and Icepack repositories. Basic instructions for working with CICE and Icepack are found in the [Git Workflow Guidance](https://github.com/CICE-Consortium/About-Us/wiki/Git-Workflow-Guidance), linked from the wikis in the primary code repositories
-https://github.com/CICE-Consortium/CICE/wiki
-https://github.com/CICE-Consortium/Icepack/wiki
-
-CICE may be obtained in several different ways: [not yet tested]
-1. clone the full repository
-See [Git Workflow Guidance](https://github.com/CICE-Consortium/About-Us/wiki/Git-Workflow-Guidance)
-2. check out only a particular branch, version or tag
-In the workflow for step 1 above, substitute
-   git clone -b branch_name --single-branch --recursive https://github.com/CICE-Consortium/CICE.git local_directory_name
-or use svn
-   svn co https://github.com/CICE-Consortium/CICE/branch_name
-where "branch name" can also be a version name
-3. download a tarball for a particular version
-[how]
+If you expect to make any changes to the code, we recommend that you first fork both the CICE and Icepack repositories. Basic instructions for working with CICE and Icepack are found in the Git Workflow Guidance, linked from the Resource Index (below).
 
-## More information
+## Useful links
+* **CICE wiki**: https://github.com/CICE-Consortium/CICE/wiki
 
-Detailed and searchable online documentation of CICE can be found at https://cice-consortium.github.io/CICE/. In this documentation, a [“Quick Start”](https://cice-consortium.github.io/CICE/cice_1_introduction.html#quick-start-guide) subsection is available with instructions for running the model. A [“Testing”](https://cice-consortium.github.io/CICE/cice_3_user_guide.html#testing-cice) subsection with instructions for setting up standard tests (e.g. regression, restart) is also available.
+   Information about the CICE model
 
+* **CICE Version Index**: https://github.com/CICE-Consortium/CICE/wiki/CICE-Version-Index
-In-progress documentation (not yet merged into the main repo): https://duvivier.github.io/CICE/
 
-The [wiki](https://github.com/CICE-Consortium/CICE/wiki) pages for each repository contain links to additional information, e.g.
-- complete documentation
-- larger files such as the gx1 grid, land mask, and forcing files
-- testing data
+   Numbered CICE releases since version 6 with associated documentation and DOIs.
 
-Test results for both CICE and Icepack can be found in the ["Test-Results" repository](https://github.com/CICE-Consortium/Test-Results).
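As a hedged aside on the `.travis.yml` install steps above: the `tar ... -C` pattern extracts a tarball into a chosen directory (the CI config targets `~`). The same mechanics can be exercised locally without the FTP server; the file and directory names below are made up for illustration:

```shell
# Build a tiny stand-in for the CICE data tarball
mkdir -p src/CICE_data
echo gx3 > src/CICE_data/grid.txt
# -C on create: archive CICE_data relative to src/
tar cfz CICE_data.tar.gz -C src CICE_data
# -C on extract: unpack into dest/ instead of the current directory
mkdir -p dest
tar xfz CICE_data.tar.gz -C dest
cat dest/CICE_data/grid.txt
```

The CI jobs rely on exactly this behavior so the grid, initial-condition, and forcing files land under the home directory where the test suite expects them.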
+* **Resource Index**: https://github.com/CICE-Consortium/About-Us/wiki/Resource-Index
 
-The ["About-Us" repository](https://github.com/CICE-Consortium/About-Us) includes background and supporting information about the CICE Consortium, including how to interact with it.
+   List of resources for information about the Consortium and its repositories as well as model documentation, testing, and development.
diff --git a/cice.setup b/cice.setup
index 7b1d695b0..b41d287b3 100755
--- a/cice.setup
+++ b/cice.setup
@@ -1,45 +1,43 @@
 #!/bin/csh -f
 
 set ICE_SANDBOX = `pwd`
-set ICE_SCRIPTS = ${ICE_SANDBOX}/configuration/scripts
+set ICE_VERSION = unknown
+if (-e cicecore/version.txt) then
+  set ICE_VERSION = `head -1 cicecore/version.txt | sed -e 's/ /_/g'`
+endif
+set ICE_SCRIPTS = "${ICE_SANDBOX}/configuration/scripts"
 
 set initargv = ( $argv[*] )
 
 set helpheader = 0
 set dash = "-"
 set spval = "UnDeFiNeD"
-set machcomp = $spval
-set machine = $spval
+set machcomp = ${spval}
+set machine = ${spval}
 set compilers = intel
-set case = $spval
-set test = $spval
+set case = ${spval}
+set test = ${spval}
 set grid = gx3
 set pesx = 4x1
 set sets = ""
-set bdir = $spval
-set testid = $spval
-set testsuite = $spval
-set acct = $spval
-set baseCom = $spval  # Baseline compare
-set baseGen = $spval  # Baseline generate
-set bfbcomp = $spval  # BFB compare
+set bdir = ${spval}
+set testid = ${spval}
+set testsuite = ${spval}
+set queue = ${spval}
+set acct = ${spval}
+set baseCom = ${spval}  # Baseline compare
+set baseGen = ${spval}  # Baseline generate
+set bfbcomp = ${spval}  # BFB compare
 set report = 0  # test results reporting
-set versno = `grep "release =" doc/source/conf.py | cut -d \' -f 2`
+set sdate = `date -u "+%y%m%d"`
+set stime = `date -u "+%H%M%S"`
+set docase = 0
+set dotest = 0
+set dosuite = 0
 
 if ($#argv < 1) then
   set helpheader = 1
 endif
 
-set argv = ( $initargv[*] )
-# check for --version
-while (1)
-  if ($#argv < 1) break;
-  if ("$argv[1]" =~ "--version" ) then
-    echo "This is Cice ${versno}"
-    exit -1
-  endif
-  shift argv
-end
-
 set argv = ( $initargv[*] )
 # check for -h
 while (1)
@@ -63,6 +61,12 @@ NAME
   cice.setup
 
 SYNOPSIS
+    -h || --help
+
+    --version
+
+    --setvers versno
+
     --case CASE -m MACH
         [-e ENV][-p MxN][-g GRID][-s SET1,SET2][--acct ACCT]
@@ -70,25 +74,27 @@ SYNOPSIS
         [-e ENV][-p MxN][-g GRID][-s SET1,SET2][--acct ACCT]
         [--diff TESTNAME][--bdir DIR][--bgen DIR][--bcmp DIR]
 
-    --suite SUITE -m MACH --testid ID
+    --suite SUITE[,SUITE2] -m MACH --testid ID
         [-e ENV1,ENV2][--acct ACCT][--bdir DIR][--bgen DIR]
         [--bcmp DIR][--report]
 
 DESCRIPTION
     --help, -h : help
    --version : generates cice version number
+   --setvers : updates cice version number in sandbox
    --case, -c : case, case directory/name (not with --test or --suite)
    --mach, -m : machine, machine name (required)
-   --env, -e : compiler(s), comma separated (default = "intel")
-   --pes, -p : tasks x threads or "m"x"n" (default is 1x1)
+   --env, -e : compiler(s), comma separated (default = $compilers)
+   --pes, -p : tasks x threads [x blocksize_x x blocksize_y [x maxblocks]] (default is ${pesx})
    --acct : account number for the batch submission
-   --grid, -g : grid, grid (default = col)
+   --grid, -g : grid, grid (default = ${grid})
    --set, -s : case option setting(s), comma separated (default = " ")
+   --queue : queue for the batch submission
 
    For testing
    --test : test, test name (not with --case or --suite)
-   --suite : test suite, pre-defined set of tests (not with --case or --test)
+   --suite : test suite, pre-defined set or sets of tests, comma separated (not with --case or --test)
    --bdir : top baseline directory, default ICE_MACHINE_BASELINE
    --bgen : baselines directory where output from current tests are copied
    --bcmp : baselines directory where output from current tests are compared
@@ -97,16 +103,18 @@ DESCRIPTION
    --report : automatically post results when tests are complete
 
 EXAMPLES
+   cice.setup --version
+   cice.setup --setvers 6.0.2
    cice.setup -c caseB -m gordon -e cray -s diag1,debug
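In the rewritten script above, the old `grep` of `doc/source/conf.py` for `versno` is replaced by reading the first line of `cicecore/version.txt` and turning spaces into underscores. A minimal stand-alone sketch of that parsing follows, in POSIX sh rather than the script's csh, with an invented version file standing in for `cicecore/version.txt`:

```shell
# Hypothetical version file in the "CICE <version>" format the script expects
printf 'CICE 6.0.0\n' > version.txt

# Same pipeline as cice.setup: first line only, spaces replaced by underscores
ICE_VERSION=$(head -1 version.txt | sed -e 's/ /_/g')
echo "${ICE_VERSION}"
```

Because spaces become underscores, the result is safe to embed in file names and in the `version_name` namelist value the script writes later.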
cice.setup --case ~/caseA --mach cheyenne --env intel --set diag24 - cice.setup --test restart -m onyx -e gnu -s debug -testid myid + cice.setup --test restart -m onyx -e gnu -s debug -testid myid -p 8x4 cice.setup --suite base_suite -m conrad --env intel,cray --testid myv1 --bgen myv1 - cice.setup --suite quick_suite -m cheyenne --testid myv1 --bgen myv1 + cice.setup --suite quick_suite,decomp_suite -m cheyenne --testid myv1 --bgen myv1 cice.setup --suite quick_suite -m cheyenne --testid myv2 --bgen myv2 --bcmp myv1 SEE ALSO cice.setup --help or - User Documentation at https://cice-consortium.github.io/Cice/ + User Documentation at https://github.com/cice-consortium/cice/ EOF1 @@ -124,8 +132,8 @@ cat << EOF1 Available --set options are in configuration/scripts/options and include: EOF1 - set soptions1 = `ls -1 configuration/scripts/options | grep set_ | sed 's/set_nml.//g' | sed 's/set_env.//g' ` - set soptions = `echo $soptions1 | fmt -1 | sort ` + set soptions1 = `ls -1 configuration/scripts/options | grep set_ | sed 's/set_nml.//g' | sed 's/set_env.//g' | sed 's/set_files.//g' ` + set soptions = `echo $soptions1 | fmt -1 | sort -u ` foreach sopt ($soptions) echo " $sopt" end @@ -134,7 +142,7 @@ cat << EOF1 Available tests are in configuration/scripts/tests and include: EOF1 - set soptions1 = `ls -1 configuration/scripts/tests | grep test_ | sed 's/test_//g' | sed 's/.script//g' ` + set soptions1 = `ls -1 configuration/scripts/tests | grep test_ | grep script | sed 's/test_//g' | sed 's/.script//g' ` set soptions = `echo $soptions1 | fmt -1 | sort ` foreach sopt ($soptions) echo " $sopt" @@ -161,7 +169,40 @@ echo " " echo "${0}:" set argv = ( $initargv[*] ) +# check for --version +while (1) + if ($#argv < 1) break; + if ("$argv[1]" =~ "--version" ) then + echo "${0}: This is ${ICE_VERSION}" + exit -1 + endif + shift argv +end +set argv = ( $initargv[*] ) +# check for --setvers +while (1) + if ($#argv < 1) break; + if ("$argv[1]" =~ "--setvers" ) then + shift 
argv + if ( $#argv < 1 ) then + echo "${0}: ERROR in --setvers argument" + exit -1 + endif + set versno = $argv[1] + ${ICE_SCRIPTS}/set_version_number.csh $versno + if ($status != 0) then + echo "${0} ERROR in set_version_numbers.csh" + exit -1 + endif + echo "Setting CICE version to ${versno} and exiting" + exit -1 + endif + shift argv +end + +set argv = ( $initargv[*] ) +# read in all options while (1) if ( $#argv < 1 ) break; set option = $argv[1]; @@ -185,14 +226,18 @@ while (1) if ("$option" == "--case" || "$option" == "-c") then set case = $argv[1] + set docase = 1 else if ("$option" =~ --mach* || "$option" == "-m") then set machine = $argv[1] else if ("$option" =~ --env* || "$option" == "-e") then set compilers = $argv[1] else if ("$option" == "--test") then set test = $argv[1] + set dotest = 1 else if ("$option" == "--grid" || "$option" == "-g") then set grid = $argv[1] + else if ("$option" == "--queue") then + set queue = $argv[1] else if ("$option" == "--pes" || "$option" == "-p") then set pesx = $argv[1] else if ("$option" == "--acct") then @@ -209,6 +254,7 @@ while (1) set bfbcomp = $argv[1] else if ("$option" == "--suite") then set testsuite = $argv[1] + set dosuite = 1 else if ("$option" == "--testid") then set testid = $argv[1] else @@ -221,34 +267,25 @@ while (1) endif end -if ($machine == $spval) then - echo "${0}: ERROR in arguments, -m required" +if (${machine} == ${spval}) then + echo "${0}: ERROR in arguments, --mach required" exit -1 endif -if ($case == $spval && $test == $spval && $testsuite == $spval) then - echo "${0}: ERROR in arguments, -c, -t, or -ts required" +@ dosum = ${docase} + ${dotest} + ${dosuite} +if (${dosum} == 0) then + echo "${0}: ERROR in arguments, --case, --test, or --suite required" exit -1 endif -if ($case != $spval && $test != $spval) then - echo "${0}: ERROR in arguments, cannot use both -c and -t" +if (${dosum} > 1) then + echo "${0}: ERROR in arguments, cannot use more than one of --case, --test, and --suite" exit 
-1 endif -if ($case != $spval && $testsuite != $spval) then - echo "${0}: ERROR in arguments, cannot use both -c and -ts" - exit -1 -endif - -if ($testsuite != $spval && $test != $spval) then - echo "${0}: ERROR in arguments, cannot use both -ts and -t" - exit -1 -endif - -if ($testsuite == $spval) then +if (${dosuite} == 0) then if ("$compilers" =~ "*,*") then - echo "${0}: ERROR in arguments, cannot set multiple compilers without -ts" + echo "${0}: ERROR in arguments, cannot set multiple compilers without --suite" exit -1 else set compiler = ${compilers} @@ -257,36 +294,64 @@ if ($testsuite == $spval) then endif # tcraig, lets find another way to validate argument -#if ($test != $spval && $test != 'smoke' && $test != '10day' && $test != 'annual' \ -# && $test != 'restart') then -# echo "${0}: ERROR in arguments. $test is not a valid test" +#if (${test} != ${spval} && ${test} != 'smoke' && ${test} != '10day' && ${test} != 'annual' \ +# && ${test} != 'restart') then +# echo "${0}: ERROR in arguments. ${test} is not a valid test" # exit -1 #endif -if (($testsuite != $spval || $test != $spval) && $testid == $spval) then - echo "${0}: ERROR in arguments. -testid must be passed if using -ts or -t" +if ((${dosuite} == 1 || ${dotest} == 1) && ${testid} == ${spval}) then + echo "${0}: ERROR in arguments. --testid must be passed if using --suite or --test" exit -1 endif -#Update version.txt -echo "CICE ${versno}" >! cicecore/version.txt - -# Check to see if this is a test-suite run. 
If so, loop through the various -# tests and create a separate folder for each -if ( $testsuite != $spval ) then - set tsdir = "${testsuite}.${testid}" - if (-e ${testsuite}) then - set tsfile = "${testsuite}" - else if (-e ${testsuite}.ts) then - set tsfile = "${testsuite}.ts" - else - if (-e ${ICE_SCRIPTS}/tests/${testsuite}.ts) then - set tsfile = "${ICE_SCRIPTS}/tests/${testsuite}.ts" +#--------------------------------------------------------------------- +# Setup tsfile and test suite support stuff + +set tsdir = "." +set tsfile = "caselist.${sdate}-${stime}" +if ( ${dosuite} == 1 ) then + set tsdir = "testsuite.${testid}" + set tsfile = "testsuite.${testid}.${sdate}-${stime}.list" +endif +if (-e $tsfile) then + echo "${0}: ERROR in tsfile, this should never happen" + exit -1 +endif + +if ( ${dosuite} == 0 ) then + set teststring = "${test} ${grid} ${pesx} ${sets}" + if ( $bfbcomp != ${spval} ) then + if ( ${sets} == "" ) then + set teststring = "${teststring} none ${bfbcomp}" else - echo "${0}: ERROR, cannot find testsuite file ${testsuite}, also checked ${ICE_SCRIPTS}/tests/${testsuite}.ts" - exit -1 + set teststring = "${teststring} ${bfbcomp}" endif endif + echo ${teststring} >! 
${tsfile} + set sets = "" + +else + set tarrays = `echo ${testsuite} | sed 's/,/ /g' | fmt -1 | sort -u` + foreach tarray ( ${tarrays} ) + if (-e ${tarray}) then + cat ${tarray} >> $tsfile + else if (-e ${tarray}.ts) then + cat ${tarray}.ts >> $tsfile + else + if (-e ${ICE_SCRIPTS}/tests/${tarray}.ts) then + cat ${ICE_SCRIPTS}/tests/${tarray}.ts >> $tsfile + else + echo "${0}: ERROR, cannot find testsuite file ${tarray}, also checked ${ICE_SCRIPTS}/tests/${tarray}.ts" + exit -1 + endif + endif + end + + if (-e ./${tsdir}) then + echo "${0}: ERROR, ${tsdir} already exists" + exit -1 + endif mkdir ./${tsdir} cp -f ${ICE_SCRIPTS}/tests/report_results.csh ./${tsdir} @@ -307,11 +372,8 @@ EOF0 set hashdate = `git log | grep -i date | head -1 | cut -d : -f 2-` set cdate = `date -u "+%Y-%m-%d"` set ctime = `date -u "+%H:%M:%S"` - set vers = "unknown" + set vers = ${ICE_VERSION} set shhash = `echo ${hash} | cut -c 1-10` - if (-e cicecore/version.txt) then - set vers = `head -1 cicecore/version.txt` - endif cat >! 
./${tsdir}/results.csh << EOF0 #!/bin/csh -f @@ -334,9 +396,31 @@ EOF0 chmod +x ./${tsdir}/suite.run chmod +x ./${tsdir}/results.csh - set ncompilers = "`echo $compilers | sed -e 's/,/ /g'`" +endif + +#------------------------------------------------------------------- +# Loop over cases/tests - foreach compiler ( $ncompilers ) +set ncompilers = "`echo $compilers | sed -e 's/,/ /g'`" + +# check that machines and compilers are valid before starting big loop +set doabort = false +foreach compiler ( $ncompilers ) + set machcomp = ${machine}_${compiler} + foreach file (env.${machcomp} Macros.${machcomp}) + if !(-e ${ICE_SCRIPTS}/machines/$file) then + echo "${0}: ERROR, ${ICE_SCRIPTS}/machines/$file not found" + set doabort = true + endif + end +end +if (${doabort} == true) then + exit -1 +endif + +set sets_base = "${sets}" +set bfbcomp_base = "$bfbcomp" +foreach compiler ( $ncompilers ) set machcomp = ${machine}_${compiler} foreach line ( "`cat $tsfile`" ) # Check if line is a comment line @@ -351,148 +435,185 @@ EOF0 set sets_tmp = `echo $line | cut -d' ' -f4` set bfbcomp_tmp = `echo $line | cut -d' ' -f5` # Create a new sets_base variable to store sets passed to cice.setup - if (! $?sets_base ) then - set sets_base = "$sets" - endif + # Append sets from .ts file to the $sets variable - if ( $sets_tmp == "none" ) then - set sets = "$sets_base" - else - if ( $sets_base == "" ) then - set sets = "$sets_tmp" - else - set sets = "$sets_base,$sets_tmp" - # Remove duplictes in the sets variable - set sets = "`echo $sets | sed -e 's/\b\([a-z]\+\)[ ,\n]\1/\1/g'`" - endif - endif + set sets = "$sets_base,$sets_tmp" + # Create a new bfbcomp_base variable to store bfbcomp passed to cice.setup - if (! 
$?bfbcomp_base ) then - set bfbcomp_base = "$bfbcomp" - endif # Use bfbcomp_base or bfbcomp_tmp if ( $bfbcomp_tmp == "" ) then set bfbcomp = "$bfbcomp_base" else set bfbcomp = "$bfbcomp_tmp" endif -endif -if ($case =~ $spval) then - if ($sets != "") then - set sarray = `echo $sets | sed 's/,/ /g' | fmt -1 | sort -n` - set soptions = "JustStartingNowOK" - foreach field ($sarray) - if (${soptions} =~ "JustStartingNowOK") then - set soptions = ${field} + set fbfbcomp = ${spval} + if ($bfbcomp != ${spval}) then + set fbfbcomp = ${machcomp}_${bfbcomp}.${testid} + endif + + set testname_noid = ${spval} + # create case for test cases + if (${docase} == 0) then + set soptions = "" + # Create sorted array and remove duplicates and "none" + set setsarray = `echo ${sets} | sed 's/,/ /g' | fmt -1 | sort -u` + if ("${setsarray}" != "") then + foreach field (${setsarray}) + if (${field} != "none") then + set soptions = ${soptions}"_"${field} + endif + end + endif + # soptions starts with _ + set testname_noid = "${machcomp}_${test}_${grid}_${pesx}${soptions}" + set testname_base = "${machcomp}_${test}_${grid}_${pesx}${soptions}.${testid}" + if (${dosuite} == 1) then + set testname = "${tsdir}/$testname_base" else - set soptions = ${soptions}"_"${field} + set testname = "$testname_base" endif - end - # Only include $testid in testname if this is not a baseline-generating run - set testname_noid = "${machcomp}_${test}_${grid}_${pesx}_${soptions}" - set testname_base = "${machcomp}_${test}_${grid}_${pesx}_${soptions}.${testid}" - else - set testname_noid = "${machcomp}_${test}_${grid}_${pesx}" - set testname_base = "${machcomp}_${test}_${grid}_${pesx}.${testid}" - endif - if ($testsuite != $spval) then - set testname = "${tsdir}/$testname_base" - else - set testname = "$testname_base" - endif - set case = ${testname} -endif + set case = ${testname} + endif -if (-d $case) then - echo "${0}: ERROR, case $case already exists" - exit -1 -endif -mkdir -p $case -echo "`date`${0} 
$initargv[*]" >> $case/README.case + if (-d ${case}) then + echo "${0}: ERROR, case ${case} already exists" +# exit -1 + continue + endif -#------------------------------------------------------------ -# Setup case directory, copy files to case directory + if (${case} =~ */*) then + set casename = $case:t + else + set casename = $case + endif -cd ${case} -set casedir = `pwd` -set casescr = "${casedir}/casescripts" -if !( -d ${casescr}) mkdir ${casescr} + #------------------------------------------------------------ + # Setup case directory, copy files to case directory -# from basic script dir to case -foreach file (cice.build cice.settings Makefile ice_in makdep.c) - if !(-e ${ICE_SCRIPTS}/$file) then - echo "${0}: ERROR, ${ICE_SCRIPTS}/$file not found" - exit -1 - endif - cp -f -p ${ICE_SCRIPTS}/$file ${casedir} -end + mkdir -p ${case} + echo "`date`${0} $initargv[*]" >> ${case}/README.case -# from machines dir to case -foreach file (env.${machcomp} Macros.${machcomp}) - if !(-e ${ICE_SCRIPTS}/machines/$file) then - echo "${0}: ERROR, ${ICE_SCRIPTS}/machines/$file not found" - exit -1 - endif - cp -f -p ${ICE_SCRIPTS}/machines/$file ${casedir} -end + cd ${case} + set casedir = `pwd` + set casescr = "${casedir}/casescripts" + if !( -d ${casescr}) mkdir ${casescr} -# from basic script dir to casescr -foreach file (parse_namelist.sh parse_settings.sh parse_namelist_from_settings.sh cice_decomp.csh cice.run.setup.csh cice.test.setup.csh) - if !(-e ${ICE_SCRIPTS}/$file) then - echo "${0}: ERROR, ${ICE_SCRIPTS}/$file not found" - exit -1 - endif - cp -f -p ${ICE_SCRIPTS}/$file ${casescr} -end + # from basic script dir to case + foreach file (cice.build cice.settings Makefile ice_in makdep.c setup_run_dirs.csh) + if !(-e ${ICE_SCRIPTS}/$file) then + echo "${0}: ERROR, ${ICE_SCRIPTS}/$file not found" + exit -1 + endif + cp -f -p ${ICE_SCRIPTS}/$file ${casedir} + end -if ($case =~ */*) then - set casename = $case:t -else - set casename = $case -endif + # from machines dir 
to case + foreach file (env.${machcomp} Macros.${machcomp}) + if !(-e ${ICE_SCRIPTS}/machines/$file) then + echo "${0}: ERROR, ${ICE_SCRIPTS}/machines/$file not found" + exit -1 + endif + cp -f -p ${ICE_SCRIPTS}/machines/$file ${casedir} + end -cd ${casedir} -source ./env.${machcomp} || exit 2 + # from basic script dir to casescr + foreach file (parse_namelist.sh parse_settings.sh parse_namelist_from_settings.sh cice_decomp.csh cice.run.setup.csh cice.test.setup.csh) + if !(-e ${ICE_SCRIPTS}/$file) then + echo "${0}: ERROR, ${ICE_SCRIPTS}/$file not found" + exit -1 + endif + cp -f -p ${ICE_SCRIPTS}/$file ${casescr} + end -echo ICE_SANDBOX = ${ICE_SANDBOX} -echo ICE_CASENAME = ${casename} -echo ICE_CASEDIR = ${casedir} -echo ICE_MACHINE = ${machine} -echo ICE_COMPILER = ${compiler} + cd ${casedir} + source ./env.${machcomp} || exit 2 -#------------------------------------------------------------ -# Compute a default blocksize + set quietmode = false + if ($?ICE_MACHINE_QUIETMODE) then + set quietmode = ${ICE_MACHINE_QUIETMODE} + endif -set chck = `echo ${pesx} | sed 's/^[0-9][0-9]*x[0-9][0-9]*$/OK/'` -if (${chck} == OK) then - set task = `echo ${pesx} | sed s/x.\*//` - set thrd = `echo ${pesx} | sed s/.\*x//` -else - echo "${0}: ERROR in -p argument, ${pesx}, must be mxn" - exit -1 -endif + if (${acct} == ${spval}) then + if (-e ~/.cice_proj) then + set acct = `head -1 ~/.cice_proj` + else + set acct = ${ICE_MACHINE_ACCT} + endif + endif -setenv ICE_DECOMP_GRID ${grid} -setenv ICE_DECOMP_NTASK ${task} -setenv ICE_DECOMP_NTHRD ${thrd} + if (${queue} == ${spval}) then + if (-e ~/.cice_queue) then + set queue = `head -1 ~/.cice_queue` + else + set queue = ${ICE_MACHINE_QUEUE} + endif + endif -source ${casescr}/cice_decomp.csh -if ($status != 0) then - echo "${0}: ERROR, cice_decomp.csh aborted" - exit -1 -endif + #------------------------------------------------------------ + # Compute a default blocksize -#------------------------------------------------------------ 
-# Copy in and update cice.settings and ice_in files + set chck = `echo ${pesx} | sed 's/^[0-9][0-9]*x[0-9][0-9]*x[0-9][0-9]*x[0-9][0-9]*x[0-9][0-9]*$/OK/'` + if (${chck} == OK) then + set task = `echo ${pesx} | cut -d x -f 1` + set thrd = `echo ${pesx} | cut -d x -f 2` + set blckx = `echo ${pesx} | cut -d x -f 3` + set blcky = `echo ${pesx} | cut -d x -f 4` + set mblck = `echo ${pesx} | cut -d x -f 5` + else + set chck = `echo ${pesx} | sed 's/^[0-9][0-9]*x[0-9][0-9]*x[0-9][0-9]*x[0-9][0-9]*$/OK/'` + if (${chck} == OK) then + set task = `echo ${pesx} | cut -d x -f 1` + set thrd = `echo ${pesx} | cut -d x -f 2` + set blckx = `echo ${pesx} | cut -d x -f 3` + set blcky = `echo ${pesx} | cut -d x -f 4` + set mblck = 0 + else + set chck = `echo ${pesx} | sed 's/^[0-9][0-9]*x[0-9][0-9]*$/OK/'` + if (${chck} == OK) then + set task = `echo ${pesx} | cut -d x -f 1` + set thrd = `echo ${pesx} | cut -d x -f 2` + set blckx = 0 + set blcky = 0 + set mblck = 0 + else + echo "${0}: ERROR in -p argument, ${pesx}, must be [m]x[n], [m]x[n]x[bx]x[by], or [m]x[n]x[bx]x[by]x[mb] " + exit -1 + endif + endif + endif + + setenv ICE_DECOMP_GRID ${grid} + setenv ICE_DECOMP_NTASK ${task} + setenv ICE_DECOMP_NTHRD ${thrd} + setenv ICE_DECOMP_BLCKX ${blckx} + setenv ICE_DECOMP_BLCKY ${blcky} + setenv ICE_DECOMP_MXBLCKS ${mblck} + + source ${casescr}/cice_decomp.csh + if ($status != 0) then + echo "${0}: ERROR, cice_decomp.csh aborted" + exit -1 + endif -set fimods = ${casescr}/ice_in.mods -set fsmods = ${casescr}/cice.settings.mods + echo "ICE_SANDBOX = ${ICE_SANDBOX}" + echo "ICE_CASENAME = ${casename}" + echo "ICE_CASEDIR = ${casedir}" + echo "ICE_MACHINE = ${machine}" + echo "ICE_COMPILER = ${compiler}" + echo "ICE_PES = ${task}x${thrd}" + echo "ICE_GRID = ${grid} (${ICE_DECOMP_NXGLOB}x${ICE_DECOMP_NYGLOB}) blocksize=${ICE_DECOMP_BLCKX}x${ICE_DECOMP_BLCKY}x${ICE_DECOMP_MXBLCKS}" -cp ice_in ${casescr}/ice_in.base -cp cice.settings ${casescr}/cice.settings.base -if (-e ${fimods}) rm 
${fimods} -if (-e ${fsmods}) rm ${fsmods} + #------------------------------------------------------------ + # Copy in and update cice.settings and ice_in files + + set fimods = ${casescr}/ice_in.mods + set fsmods = ${casescr}/cice.settings.mods + + cp ice_in ${casescr}/ice_in.base + cp cice.settings ${casescr}/cice.settings.base + if (-e ${fimods}) rm ${fimods} + if (-e ${fsmods}) rm ${fsmods} cat >! ${fimods} << EOF1 # cice.setup settings @@ -500,23 +621,23 @@ cat >! ${fimods} << EOF1 nprocs = ${task} distribution_type = '${ICE_DECOMP_DECOMP}' processor_shape = '${ICE_DECOMP_DSHAPE}' +version_name = '${ICE_VERSION}' EOF1 -# If this is a baseline-compare test, modify ICE_RUNDIR -if ($bdir != $spval) then - setenv basedir_tmp ${bdir} -else - setenv basedir_tmp ${ICE_MACHINE_BASELINE} -endif -if ("$baseGen" =~ "default") then - set d1 = `echo ${cdate} | cut -c 3- | sed 's/-//g'` - set t1 = `echo ${ctime} | sed 's/://g'` - set baseGen = cice.${shhash}.${d1}-${t1} -endif -if ("$baseCom" =~ "default") then - set baseCom = `ls -t $basedir_tmp | head -1` -endif - + # If this is a baseline-compare test, modify ICE_RUNDIR + if ($bdir != ${spval}) then + setenv basedir_tmp ${bdir} + else + setenv basedir_tmp ${ICE_MACHINE_BASELINE} + endif + if ("$baseGen" =~ "default") then + set d1 = `echo ${cdate} | cut -c 3- | sed 's/-//g'` + set t1 = `echo ${ctime} | sed 's/://g'` + set baseGen = cice.${shhash}.${d1}-${t1} + endif + if ("$baseCom" =~ "default") then + set baseCom = `ls -t $basedir_tmp | head -1` + endif cat >! 
${fsmods} << EOF1 # cice.setup settings @@ -534,8 +655,6 @@ setenv ICE_NXGLOB ${ICE_DECOMP_NXGLOB} setenv ICE_NYGLOB ${ICE_DECOMP_NYGLOB} setenv ICE_NTASKS ${task} setenv ICE_NTHRDS ${thrd} -setenv ICE_DECOMP ${ICE_DECOMP_DECOMP} -setenv ICE_DSHAPE ${ICE_DECOMP_DSHAPE} setenv ICE_MXBLCKS ${ICE_DECOMP_MXBLCKS} setenv ICE_BLCKX ${ICE_DECOMP_BLCKX} setenv ICE_BLCKY ${ICE_DECOMP_BLCKY} @@ -543,122 +662,137 @@ setenv ICE_BASELINE ${basedir_tmp} setenv ICE_BASEGEN ${baseGen} setenv ICE_BASECOM ${baseCom} setenv ICE_SPVAL ${spval} +setenv ICE_QUIETMODE ${quietmode} +setenv ICE_TEST ${test} +setenv ICE_TESTNAME ${testname_noid} +setenv ICE_BFBCOMP ${fbfbcomp} +setenv ICE_ACCOUNT ${acct} +setenv ICE_QUEUE ${queue} EOF1 -if ($bfbcomp != $spval) then - echo "setenv ICE_BFBCOMP ${machcomp}_${bfbcomp}.${testid}" >> ${fsmods} -else - echo "setenv ICE_BFBCOMP ${spval}" >> ${fsmods} -endif + if (${sets} != "") then + set setsx = `echo ${sets} | sed 's/,/ /g'` + set setsxorig = "$setsx" + set setsx = "" + foreach name ($setsxorig) + if (-e ${ICE_SCRIPTS}/options/set_files.${name}) then + echo "adding options files from set_files.${name}" + echo "`date`${0} adding options files from set_files.${name}" >> ${casedir}/README.case + set setsnew = `cat ${ICE_SCRIPTS}/options/set_files.${name}` + foreach nset ($setsnew) + if ($nset !~ "#*") then + set setsx = "$setsx $nset" + endif + end + else + set setsx = "$setsx $name" + endif + end -if ($test != $spval) then - echo "setenv ICE_TEST ${test}" >> ${fsmods} - echo "setenv ICE_TESTNAME ${testname_noid}" >> ${fsmods} -else - echo "setenv ICE_TEST ${spval}" >> ${fsmods} - echo "setenv ICE_TESTNAME ${spval}" >> ${fsmods} -endif + else + set setsx = "" + endif -if ($acct != $spval) then - echo "setenv ICE_ACCOUNT ${acct}" >> ${fsmods} -else - if (-e ~/.cice_proj) then - set account_name = `head -1 ~/.cice_proj` - echo "setenv ICE_ACCOUNT ${account_name}" >> ${fsmods} - else - echo "setenv ICE_ACCOUNT ${ICE_MACHINE_ACCT}" >> ${fsmods} - endif 
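The revised script normalizes the comma-separated `--set` list before building test names: it splits on commas, deduplicates with `sort -u`, and skips the `none` placeholder when assembling the name suffix. A stand-alone sh sketch of that normalization (the real code is csh, and `diag1`/`debug` are just example set names):

```shell
# Comma list with a duplicate and a "none" placeholder
sets="diag1,debug,none,diag1"

# Split on commas, one token per line (fmt -1), sort and deduplicate
setsarray=$(echo ${sets} | sed 's/,/ /g' | fmt -1 | sort -u)

# Build the "_set1_set2" suffix used in test directory names, skipping "none"
soptions=""
for field in ${setsarray}; do
  if [ "${field}" != "none" ]; then
    soptions="${soptions}_${field}"
  fi
done
echo "${soptions}"
```

Deduplicating here is what lets suite files append their own sets on top of command-line sets without producing names like `_debug_debug`.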
-endif + if (${docase} == 0) then + # from test options to casescr in case any test time changes are applied + if (-e ${ICE_SCRIPTS}/tests/test_${test}.files) then + cp -f -p ${ICE_SCRIPTS}/tests/test_${test}.files ${casescr} + foreach file (`cat ${casescr}/test_${test}.files`) + if (-e ${ICE_SCRIPTS}/options/$file) then + cp -f -p ${ICE_SCRIPTS}/options/$file ${casescr} + else + echo "${0}: ERROR, could not find $file from test_${test}.files" + exit -1 + endif + end + endif + endif -if ($sets != "") then - set setsx = `echo $sets | sed 's/,/ /g'` -else - set setsx = "" -endif -if ($test != $spval) then - set testx = ${test} - # from test options to casescr in case any test time changes are applied - cp -f -p ${ICE_SCRIPTS}/options/test_nml.${test}* ${casescr} >& /dev/null -else - set testx = "" -endif + foreach name (${grid} $setsx) + set found = 0 + if (-e ${ICE_SCRIPTS}/options/set_nml.${name}) then -foreach name ($testx $grid $setsx) - set found = 0 - if (-e ${ICE_SCRIPTS}/options/set_nml.${name}) then cat >> ${fimods} << EOF2 # set_nml.${name} EOF2 - cat ${ICE_SCRIPTS}/options/set_nml.${name} >> ${fimods} + cat ${ICE_SCRIPTS}/options/set_nml.${name} >> ${fimods} + cat >> ${fimods} << EOF2 EOF2 - echo "adding namelist mods set_nml.${name}" - echo "`date`${0} adding namelist modes set_nml.${name}" >> ${casedir}/README.case - set found = 1 - endif - if (-e ${ICE_SCRIPTS}/options/set_env.${name}) then + echo "adding namelist mods set_nml.${name}" + echo "`date`${0} adding namelist modes set_nml.${name}" >> ${casedir}/README.case + set found = 1 + endif + if (-e ${ICE_SCRIPTS}/options/set_env.${name}) then + cat >> ${fsmods} << EOF2 # set_env.${name} EOF2 - cat ${ICE_SCRIPTS}/options/set_env.${name} >> ${fsmods} + + cat ${ICE_SCRIPTS}/options/set_env.${name} >> ${fsmods} + cat >> ${fimods} << EOF2 EOF2 - echo "adding env mods set_env.${name}" - echo "`date`${0} adding namelist modes set_env.${name}" >> ${casedir}/README.case - set found = 1 - endif - if (${found} 
== 0) then - echo "${0}: ERROR, ${ICE_SCRIPTS}/options/set_[nml,env].${name} not found" - exit -1 - endif -end -${casescr}/parse_settings.sh cice.settings ${fsmods} -${casescr}/parse_namelist.sh ice_in ${fimods} -source ./cice.settings -source ./env.${machcomp} || exit 2 -${casescr}/parse_namelist_from_settings.sh ice_in cice.settings + echo "adding env mods set_env.${name}" + echo "`date`${0} adding env mods set_env.${name}" >> ${casedir}/README.case + set found = 1 + endif + if (${found} == 0) then + echo "${0}: ERROR, ${ICE_SCRIPTS}/options/set_[nml,env].${name} not found" + exit -1 + endif + end -#------------------------------------------------------------ -# Generate run script + ${casescr}/parse_settings.sh cice.settings ${fsmods} + ${casescr}/parse_namelist.sh ice_in ${fimods} + source ./cice.settings + source ./env.${machcomp} || exit 2 + ${casescr}/parse_namelist_from_settings.sh ice_in cice.settings -source ./cice.settings -source ./env.${machcomp} || exit 2 + #------------------------------------------------------------ + # Generate run script -${casescr}/cice.run.setup.csh -if ($status != 0) then - echo "${0}: ERROR, cice.run.setup.csh aborted" - exit -1 -endif + source ./cice.settings + source ./env.${machcomp} || exit 2 -#------------------------------------------------------------ + ${casescr}/cice.run.setup.csh + if ($status != 0) then + echo "${0}: ERROR, cice.run.setup.csh aborted" + exit -1 + endif -if ($test != $spval) then - # Print information to stdout - echo "Creating scripts for $test test" + #------------------------------------------------------------ + # Generate test script - # Generate test script - ${casescr}/cice.test.setup.csh - if ($status != 0) then - echo "${0}: ERROR, cice.test.setup.csh aborted" - exit -1 - endif + if (${docase} == 0) then + # Print information to stdout + echo "Creating scripts for ${test} test" - # Initial test_output file - echo "#---" >! 
test_output - echo "PEND ${testname_noid} " >> test_output + # Generate test script + ${casescr}/cice.test.setup.csh + if ($status != 0) then + echo "${0}: ERROR, cice.test.setup.csh aborted" + exit -1 + endif -endif + # Initial test_output file + echo "#---" >! test_output + echo "PEND ${testname_noid} " >> test_output + endif -if ( $testsuite != $spval ) then - cd ${ICE_SANDBOX} - # Write build and run commands to suite.run + #------------------------------------------------------------ + # Generate testsuite stuff + + if ( ${dosuite} == 1 ) then + cd ${ICE_SANDBOX} + # Write build and run commands to suite.run cat >> ./${tsdir}/results.csh << EOF cat $testname_base/test_output >> results.log @@ -671,17 +805,23 @@ cd $testname_base cd .. EOF - # Reset case for the next test in suite - set case = $spval + # Reset case for the next test in suite + set case = ${spval} - echo "" - echo "---" - echo "" + echo "" + echo "---" + echo "" + endif - # This is the foreach end for the testsuite - end - # This is the foreach end for the compilers + # This is the foreach end for the testsuite end +# This is the foreach end for the compilers +end + +#----------------------------------------------------- +# more testsuite stuff + +if ( ${dosuite} == 1 ) then # Add code to results.csh to count the number of failures cat >> ./${tsdir}/results.csh << EOF @@ -704,6 +844,7 @@ echo "" echo "\$success of \$total tests PASSED" echo "\$failures of \$total tests FAILED" echo "\$pends of \$total tests PENDING" +exit \$failures EOF # build and submit tests @@ -719,6 +860,8 @@ EOF endif +#--------------------------------------------- + echo " " echo "${0} done" echo " " diff --git a/cicecore/cicedynB/analysis/ice_history_shared.F90 b/cicecore/cicedynB/analysis/ice_history_shared.F90 index 718966284..5ac0feae2 100644 --- a/cicecore/cicedynB/analysis/ice_history_shared.F90 +++ b/cicecore/cicedynB/analysis/ice_history_shared.F90 @@ -48,6 +48,9 @@ module ice_history_shared character 
(len=char_len_long), public :: & pointer_file ! input pointer file for restarts + character (len=char_len), public :: & + version_name + !--------------------------------------------------------------- ! Instructions for adding a field: (search for 'example') ! Here: diff --git a/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 b/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 index 5affe3b05..855b76634 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_eap.F90 @@ -94,9 +94,9 @@ subroutine eap (dt) basal_stress_coeff, basalstress use ice_flux, only: rdg_conv, rdg_shear, strairxT, strairyT, & strairx, strairy, uocn, vocn, ss_tltx, ss_tlty, iceumask, fm, & - strtltx, strtlty, strocnx, strocny, strintx, strinty, & + strtltx, strtlty, strocnx, strocny, strintx, strinty, taubx, tauby, & strocnxT, strocnyT, strax, stray, & - Cbu, taubx, tauby, hwater, & + Tbu, hwater, & stressp_1, stressp_2, stressp_3, stressp_4, & stressm_1, stressm_2, stressm_3, stressm_4, & stress12_1, stress12_2, stress12_3, stress12_4 @@ -236,7 +236,10 @@ subroutine eap (dt) endif #endif - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) +! tcraig, tcx, turned off this threaded region, in evp, this block and +! the icepack_ice_strength call seems to not be thread safe. more +! 
debugging needed + !$TCXOMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -265,6 +268,7 @@ subroutine eap (dt) strtltx (:,:,iblk), strtlty (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk), & strintx (:,:,iblk), strinty (:,:,iblk), & + taubx (:,:,iblk), tauby (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & forcex (:,:,iblk), forcey (:,:,iblk), & stressp_1 (:,:,iblk), stressp_2 (:,:,iblk), & @@ -275,7 +279,7 @@ subroutine eap (dt) stress12_3(:,:,iblk), stress12_4(:,:,iblk), & uvel_init (:,:,iblk), vvel_init (:,:,iblk), & uvel (:,:,iblk), vvel (:,:,iblk), & - Cbu (:,:,iblk)) + Tbu (:,:,iblk)) !----------------------------------------------------------------- ! Initialize structure tensor @@ -320,7 +324,7 @@ subroutine eap (dt) fld2(:,:,2,iblk) = vvel(:,:,iblk) enddo ! iblk - !$OMP END PARALLEL DO + !$TCXOMP END PARALLEL DO call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & @@ -352,6 +356,20 @@ subroutine eap (dt) call ice_HaloMask(halo_info_mask, halo_info, halomask) endif + !----------------------------------------------------------------- + ! basal stress coefficients (landfast ice) + !----------------------------------------------------------------- + + if (basalstress) then + do iblk = 1, nblocks + call basal_stress_coeff (nx_block, ny_block, & + icellu (iblk), & + indxui(:,iblk), indxuj(:,iblk), & + vice(:,:,iblk), aice(:,:,iblk), & + hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + endif + do ksub = 1,ndte ! subcycling !----------------------------------------------------------------- @@ -395,21 +413,6 @@ subroutine eap (dt) strtmp (:,:,:)) ! call ice_timer_stop(timer_tmp1) ! dynamics - !----------------------------------------------------------------- - ! 
basal stress calculation (landfast ice) - !----------------------------------------------------------------- - - if (basalstress) then - call basal_stress_coeff (nx_block, ny_block, & - icellu (iblk), & - indxui(:,iblk), indxuj(:,iblk), & - vice(:,:,iblk), aice(:,:,iblk), & - hwater(:,:,iblk), & - uvel(:,:,iblk), vvel(:,:,iblk), & - Cbu(:,:,iblk)) - endif - - !----------------------------------------------------------------- ! momentum equation !----------------------------------------------------------------- @@ -417,6 +420,7 @@ subroutine eap (dt) call stepu (nx_block, ny_block, & icellu (iblk), Cdn_ocn (:,:,iblk), & indxui (:,iblk), indxuj (:,iblk), & + ksub, & aiu (:,:,iblk), strtmp (:,:,:), & uocn (:,:,iblk), vocn (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & @@ -424,10 +428,11 @@ subroutine eap (dt) umassdti (:,:,iblk), fm (:,:,iblk), & uarear (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk), & - strintx (:,:,iblk), strinty (:,:,iblk), & + strintx (:,:,iblk), strinty (:,:,iblk), & + taubx (:,:,iblk), tauby (:,:,iblk), & uvel_init(:,:,iblk), vvel_init(:,:,iblk),& uvel (:,:,iblk), vvel (:,:,iblk), & - Cbu (:,:,iblk)) + Tbu (:,:,iblk)) ! load velocity into array for boundary updates fld2(:,:,1,iblk) = uvel(:,:,iblk) @@ -477,16 +482,6 @@ subroutine eap (dt) !$OMP END PARALLEL DO enddo ! subcycling - - ! 
calculate basal stress component for outputs - if ( basalstress ) then - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - taubx(:,:,iblk) = -Cbu(:,:,iblk)*uvel(:,:,iblk) - tauby(:,:,iblk) = -Cbu(:,:,iblk)*vvel(:,:,iblk) - enddo - !$OMP END PARALLEL DO - endif deallocate(fld2) if (maskhalo_dyn) call ice_HaloDestroy(halo_info_mask) diff --git a/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 b/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 index 39fb0cd96..eda0a860c 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_evp.F90 @@ -44,7 +44,7 @@ module ice_dyn_evp use ice_fileunits, only: nu_diag use ice_exit, only: abort_ice use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted - use icepack_intfc, only: icepack_ice_strength + use icepack_intfc, only: icepack_ice_strength, icepack_query_parameters #ifdef CICE_IN_NEMO use icepack_intfc, only: calc_strair #endif @@ -81,9 +81,9 @@ subroutine evp (dt) use ice_domain_size, only: max_blocks, ncat use ice_flux, only: rdg_conv, rdg_shear, strairxT, strairyT, & strairx, strairy, uocn, vocn, ss_tltx, ss_tlty, iceumask, fm, & - strtltx, strtlty, strocnx, strocny, strintx, strinty, & + strtltx, strtlty, strocnx, strocny, strintx, strinty, taubx, tauby, & strocnxT, strocnyT, strax, stray, & - Cbu, taubx, tauby, hwater, & + Tbu, hwater, & stressp_1, stressp_2, stressp_3, stressp_4, & stressm_1, stressm_2, stressm_3, stressm_4, & stress12_1, stress12_2, stress12_3, stress12_4 @@ -228,7 +228,9 @@ subroutine evp (dt) endif #endif - !$OMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) +! tcraig, tcx, threading here leads to some non-reproducible results and failures in icepack_ice_strength +! 
need to do more debugging + !$TCXOMP PARALLEL DO PRIVATE(iblk,ilo,ihi,jlo,jhi,this_block) do iblk = 1, nblocks !----------------------------------------------------------------- @@ -257,6 +259,7 @@ subroutine evp (dt) strtltx (:,:,iblk), strtlty (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk), & strintx (:,:,iblk), strinty (:,:,iblk), & + taubx (:,:,iblk), tauby (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & forcex (:,:,iblk), forcey (:,:,iblk), & stressp_1 (:,:,iblk), stressp_2 (:,:,iblk), & @@ -267,7 +270,7 @@ subroutine evp (dt) stress12_3(:,:,iblk), stress12_4(:,:,iblk), & uvel_init (:,:,iblk), vvel_init (:,:,iblk), & uvel (:,:,iblk), vvel (:,:,iblk), & - Cbu (:,:,iblk)) + Tbu (:,:,iblk)) !----------------------------------------------------------------- ! ice strength @@ -291,7 +294,7 @@ subroutine evp (dt) fld2(:,:,2,iblk) = vvel(:,:,iblk) enddo ! iblk - !$OMP END PARALLEL DO + !$TCXOMP END PARALLEL DO call icepack_warnings_flush(nu_diag) if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & @@ -323,6 +326,20 @@ subroutine evp (dt) call ice_HaloMask(halo_info_mask, halo_info, halomask) endif + !----------------------------------------------------------------- + ! basal stress coefficients (landfast ice) + !----------------------------------------------------------------- + + if (basalstress) then + do iblk = 1, nblocks + call basal_stress_coeff (nx_block, ny_block, & + icellu (iblk), & + indxui(:,iblk), indxuj(:,iblk), & + vice(:,:,iblk), aice(:,:,iblk), & + hwater(:,:,iblk), Tbu(:,:,iblk)) + enddo + endif + do ksub = 1,ndte ! subcycling !----------------------------------------------------------------- @@ -354,27 +371,14 @@ subroutine evp (dt) strtmp (:,:,:) ) ! endif ! yield_curve - !----------------------------------------------------------------- - ! 
basal stress calculation (landfast ice) - !----------------------------------------------------------------- - - if (basalstress) then - call basal_stress_coeff (nx_block, ny_block, & - icellu (iblk), & - indxui(:,iblk), indxuj(:,iblk), & - vice(:,:,iblk), aice(:,:,iblk), & - hwater(:,:,iblk), & - uvel(:,:,iblk), vvel(:,:,iblk), & - Cbu(:,:,iblk)) - endif - !----------------------------------------------------------------- ! momentum equation !----------------------------------------------------------------- call stepu (nx_block, ny_block, & icellu (iblk), Cdn_ocn (:,:,iblk), & - indxui (:,iblk), indxuj (:,iblk), & + indxui (:,iblk), indxuj (:,iblk), & + ksub, & aiu (:,:,iblk), strtmp (:,:,:), & uocn (:,:,iblk), vocn (:,:,iblk), & waterx (:,:,iblk), watery (:,:,iblk), & @@ -382,10 +386,11 @@ subroutine evp (dt) umassdti (:,:,iblk), fm (:,:,iblk), & uarear (:,:,iblk), & strocnx (:,:,iblk), strocny (:,:,iblk), & - strintx (:,:,iblk), strinty (:,:,iblk), & + strintx (:,:,iblk), strinty (:,:,iblk), & + taubx (:,:,iblk), tauby (:,:,iblk), & uvel_init(:,:,iblk), vvel_init(:,:,iblk),& uvel (:,:,iblk), vvel (:,:,iblk), & - Cbu (:,:,iblk)) + Tbu (:,:,iblk)) ! load velocity into array for boundary updates fld2(:,:,1,iblk) = uvel(:,:,iblk) @@ -412,16 +417,6 @@ subroutine evp (dt) !$OMP END PARALLEL DO enddo ! subcycling - - ! calculate basal stress component for outputs - if ( basalstress ) then - !$OMP PARALLEL DO PRIVATE(iblk) - do iblk = 1, nblocks - taubx(:,:,iblk) = -Cbu(:,:,iblk)*uvel(:,:,iblk) - tauby(:,:,iblk) = -Cbu(:,:,iblk)*vvel(:,:,iblk) - enddo - !$OMP END PARALLEL DO - endif deallocate(fld2) if (maskhalo_dyn) call ice_HaloDestroy(halo_info_mask) @@ -603,6 +598,7 @@ subroutine stress (nx_block, ny_block, & tensionne, tensionnw, tensionse, tensionsw, & ! tension shearne, shearnw, shearse, shearsw , & ! shearing Deltane, Deltanw, Deltase, Deltasw , & ! Delt + puny , & ! puny c0ne, c0nw, c0se, c0sw , & ! 
useful combinations c1ne, c1nw, c1se, c1sw , & ssigpn, ssigps, ssigpe, ssigpw , & @@ -738,6 +734,11 @@ subroutine stress (nx_block, ny_block, & ! phrase "flush to zero". !----------------------------------------------------------------- +! call icepack_query_parameters(puny_out=puny) +! call icepack_warnings_flush(nu_diag) +! if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & +! file=__FILE__, line=__LINE__) + ! stressp_1(i,j) = sign(max(abs(stressp_1(i,j)),puny),stressp_1(i,j)) ! stressp_2(i,j) = sign(max(abs(stressp_2(i,j)),puny),stressp_2(i,j)) ! stressp_3(i,j) = sign(max(abs(stressp_3(i,j)),puny),stressp_3(i,j)) diff --git a/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 b/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 index b82fe49fb..8d976f9fd 100644 --- a/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 +++ b/cicecore/cicedynB/dynamics/ice_dyn_shared.F90 @@ -380,6 +380,7 @@ subroutine evp_prep2 (nx_block, ny_block, & strtltx, strtlty, & strocnx, strocny, & strintx, strinty, & + taubx, tauby, & waterx, watery, & forcex, forcey, & stressp_1, stressp_2, & @@ -390,7 +391,7 @@ subroutine evp_prep2 (nx_block, ny_block, & stress12_3, stress12_4, & uvel_init, vvel_init, & uvel, vvel, & - Cbu) + Tbu) use ice_constants, only: c0, c1 @@ -437,7 +438,7 @@ subroutine evp_prep2 (nx_block, ny_block, & real (kind=dbl_kind), dimension (nx_block,ny_block), & intent(out) :: & - Cbu, & ! coefficient for basal stress + Tbu, & ! coefficient for basal stress (N/m^2) uvel_init,& ! x-component of velocity (m/s), beginning of time step vvel_init,& ! y-component of velocity (m/s), beginning of time step umassdti, & ! mass of U-cell/dt (kg/m^2 s) @@ -459,7 +460,9 @@ subroutine evp_prep2 (nx_block, ny_block, & strocnx , & ! ice-ocean stress, x-direction strocny , & ! ice-ocean stress, y-direction strintx , & ! divergence of internal ice stress, x (N/m^2) - strinty ! divergence of internal ice stress, y (N/m^2) + strinty , & ! 
divergence of internal ice stress, y (N/m^2) + taubx , & ! basal stress, x-direction (N/m^2) + tauby ! basal stress, y-direction (N/m^2) ! local variables @@ -480,7 +483,9 @@ subroutine evp_prep2 (nx_block, ny_block, & forcex (i,j) = c0 forcey (i,j) = c0 umassdti (i,j) = c0 - Cbu (i,j) = c0 + Tbu (i,j) = c0 + taubx (i,j) = c0 + tauby (i,j) = c0 if (revp==1) then ! revised evp stressp_1 (i,j) = c0 @@ -610,6 +615,7 @@ end subroutine evp_prep2 subroutine stepu (nx_block, ny_block, & icellu, Cw, & indxui, indxuj, & + ksub, & aiu, str, & uocn, vocn, & waterx, watery, & @@ -618,13 +624,15 @@ subroutine stepu (nx_block, ny_block, & uarear, & strocnx, strocny, & strintx, strinty, & + taubx, tauby, & uvel_init, vvel_init,& uvel, vvel, & - Cbu) + Tbu) integer (kind=int_kind), intent(in) :: & nx_block, ny_block, & ! block dimensions - icellu ! total count when iceumask is true + icellu, & ! total count when iceumask is true + ksub ! subcycling iteration integer (kind=int_kind), dimension (nx_block*ny_block), & intent(in) :: & @@ -632,7 +640,7 @@ subroutine stepu (nx_block, ny_block, & indxuj ! compressed index in j-direction real (kind=dbl_kind), dimension (nx_block,ny_block), intent(in) :: & - Cbu, & ! coefficient for basal stress + Tbu, & ! coefficient for basal stress (N/m^2) uvel_init,& ! x-component of velocity (m/s), beginning of timestep vvel_init,& ! y-component of velocity (m/s), beginning of timestep aiu , & ! ice fraction on u-grid @@ -660,7 +668,9 @@ subroutine stepu (nx_block, ny_block, & strocnx , & ! ice-ocean stress, x-direction strocny , & ! ice-ocean stress, y-direction strintx , & ! divergence of internal ice stress, x (N/m^2) - strinty ! divergence of internal ice stress, y (N/m^2) + strinty , & ! divergence of internal ice stress, y (N/m^2) + taubx , & ! basal stress, x-direction (N/m^2) + tauby ! 
basal stress, y-direction (N/m^2) real (kind=dbl_kind), dimension (nx_block,ny_block), & intent(inout) :: & @@ -675,9 +685,13 @@ subroutine stepu (nx_block, ny_block, & uold, vold , & ! old-time uvel, vvel vrel , & ! relative ice-ocean velocity cca,ccb,ab2,cc1,cc2,& ! intermediate variables - taux, tauy , & ! part of ocean stress term + taux, tauy , & ! part of ocean stress term + Cb , & ! complete basal stress coeff rhow ! + real (kind=dbl_kind) :: & + u0 = 5e-5_dbl_kind ! residual velocity for basal stress (m/s) + !----------------------------------------------------------------- ! integrate the momentum equation !----------------------------------------------------------------- @@ -700,9 +714,11 @@ subroutine stepu (nx_block, ny_block, & ! ice/ocean stress taux = vrel*waterx(i,j) ! NOTE this is not the entire tauy = vrel*watery(i,j) ! ocn stress term - + + Cb = Tbu(i,j) / (sqrt(uold**2 + vold**2) + u0) ! for basal stress ! revp = 0 for classic evp, 1 for revised evp - cca = (brlx + revp)*umassdti(i,j) + vrel * cosw + Cbu(i,j) ! kg/m^2 s + cca = (brlx + revp)*umassdti(i,j) + vrel * cosw + Cb ! kg/m^2 s + ccb = fm(i,j) + sign(c1,fm(i,j)) * vrel * sinw ! kg/m^2 s ab2 = cca**2 + ccb**2 @@ -728,6 +744,14 @@ subroutine stepu (nx_block, ny_block, & !----------------------------------------------------------------- strocnx(i,j) = taux strocny(i,j) = tauy + + ! calculate basal stress component for outputs + if (ksub == ndte) then ! on last subcycling iteration + if ( basalstress ) then + taubx(i,j) = -uvel(i,j)*Tbu(i,j) / (sqrt(uold**2 + vold**2) + u0) + tauby(i,j) = -vvel(i,j)*Tbu(i,j) / (sqrt(uold**2 + vold**2) + u0) + endif + endif enddo ! ij @@ -835,20 +859,25 @@ subroutine evp_finish (nx_block, ny_block, & end subroutine evp_finish !======================================================================= -! Computes basal stress Cb coefficients (landfast ice) +! Computes basal stress Tbu coefficients (landfast ice) ! -! Lemieux, J. F., B. Tremblay, F. Dupont, M. 
Plante, G. Smith, D. Dumont (2015). -! A basal stress parameterization form modeling landfast ice. J. Geophys. Res. +! Lemieux, J. F., B. Tremblay, F. Dupont, M. Plante, G.C. Smith, D. Dumont (2015). +! A basal stress parameterization for modeling landfast ice, J. Geophys. Res. ! Oceans, 120, 3157-3173. ! -! author: Philippe Blain, CMC (coop summer 2015) +! Lemieux, J. F., F. Dupont, P. Blain, F. Roy, G.C. Smith, G.M. Flato (2016). +! Improving the simulation of landfast ice by combining tensile strength and a +! parameterization for grounded ridges, J. Geophys. Res. Oceans, 121. +! +! author: JF Lemieux, Philippe Blain (ECCC) ! - subroutine basal_stress_coeff (nx_block, ny_block, icellu, & +! note: Tbu is a part of the Cb as defined in Lemieux et al. 2015 and 2016. +! + subroutine basal_stress_coeff (nx_block, ny_block, & + icellu, & indxui, indxuj, & vice, aice, & - hwater, & - uold, vold, & - Cbu) + hwater, Tbu) use ice_constants, only: c0, c1 @@ -864,25 +893,19 @@ subroutine basal_stress_coeff (nx_block, ny_block, icellu, & real (kind=dbl_kind), dimension (nx_block,ny_block), intent(in) :: & aice , & ! concentration of ice at tracer location vice , & ! volume per unit area of ice at tracer location - hwater , & ! water depth at tracer location - uold , & ! u component of ice speed at previous iteration - vold ! v component of ice speed at previous iteration + hwater ! water depth at tracer location real (kind=dbl_kind), dimension (nx_block,ny_block), intent(inout) :: & - Cbu ! coefficient for basal stress - -! -!EOP + Tbu ! coefficient for basal stress (N/m^2) real (kind=dbl_kind) :: & au, & ! concentration of ice at u location hu, & ! volume per unit area of ice at u location (mean thickness) hwu, & ! water depth at u location hcu, & ! critical thickness at u location - k1 = 8.0_dbl_kind , & ! first free parameter for landfast parametrization - k2 = 15.0_dbl_kind, & ! second free parameter (Nm^-3) for landfast parametrization - u0 = 5e-5_dbl_kind, & ! 
residual velocity (m/s) - CC = 20.0_dbl_kind ! CC=Cb factor in Lemieux et al 2015 + k1 = 20.0_dbl_kind , & ! first free parameter for landfast parametrization + k2 = 15.0_dbl_kind , & ! second free parameter (N/m^3) for landfast parametrization + alphab = 20.0_dbl_kind ! alphab=Cb factor in Lemieux et al 2015 integer (kind=int_kind) :: & i, j, ij @@ -896,14 +919,11 @@ subroutine basal_stress_coeff (nx_block, ny_block, icellu, & hwu = min(hwater(i,j),hwater(i+1,j),hwater(i,j+1),hwater(i+1,j+1)) hu = max(vice(i,j),vice(i+1,j),vice(i,j+1),vice(i+1,j+1)) - ! calculate basal stress factor ! 1- calculate critical thickness hcu = au * hwu / k1 - ! 2- calculate stress factor - - Cbu(i,j) = ( k2 / (sqrt(uold(i,j)**2 + vold(i,j)**2) + u0) ) & - * max(c0,(hu - hcu)) * exp(-CC * (c1 - au)) + ! 2- calculate basal stress factor + Tbu(i,j) = k2 * max(c0,(hu - hcu)) * exp(-alphab * (c1 - au)) enddo ! ij diff --git a/cicecore/cicedynB/dynamics/ice_transport_remap.F90 b/cicecore/cicedynB/dynamics/ice_transport_remap.F90 index 34c2bfefe..4ff344668 100644 --- a/cicecore/cicedynB/dynamics/ice_transport_remap.F90 +++ b/cicecore/cicedynB/dynamics/ice_transport_remap.F90 @@ -634,10 +634,12 @@ subroutine horizontal_remap (dt, ntrace, & endif ! nghost - !$OMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n,m, & - !$OMP edgearea_e,edgearea_n,edge,iflux,jflux, & - !$OMP xp,yp,indxing,indxjng,mflxe,mflxn, & - !$OMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) + !--- tcraig, tcx, this omp loop leads to a seg fault in gnu + !--- need to check private variables and debug further + !$TCXOMP PARALLEL DO PRIVATE(iblk,i,j,ilo,ihi,jlo,jhi,this_block,n,m, & + !$TCXOMP edgearea_e,edgearea_n,edge,iflux,jflux, & + !$TCXOMP xp,yp,indxing,indxjng,mflxe,mflxn, & + !$TCXOMP mtflxe,mtflxn,triarea,istop,jstop,l_stop) do iblk = 1, nblocks l_stop = .false. @@ -845,7 +847,7 @@ subroutine horizontal_remap (dt, ntrace, & enddo ! n enddo ! 
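The refactor above splits the old velocity-dependent coefficient Cbu into a velocity-independent factor Tbu, computed once per time step in basal_stress_coeff, while stepu now applies the 1/(|u| + u0) term and writes taubx/tauby on the last subcycle. A minimal scalar Python sketch of the new formulation (constants k1, k2, alphab, u0 taken from the diff; function names are illustrative, not CICE API):

```python
import math

def basal_stress_factor(au, hu, hwu, k1=20.0, k2=15.0, alphab=20.0):
    """Tbu (N/m^2): velocity-independent part of the basal stress
    coefficient for grounded ridges, after Lemieux et al. (2015, 2016)."""
    hcu = au * hwu / k1                      # critical ice thickness at the u point
    return k2 * max(0.0, hu - hcu) * math.exp(-alphab * (1.0 - au))

def basal_stress(uvel, vvel, tbu, u0=5e-5):
    """Basal stress components (taubx, tauby) in N/m^2; Cb = Tbu/(|u| + u0)
    is the full coefficient that enters the momentum equation in stepu."""
    cb = tbu / (math.hypot(uvel, vvel) + u0)
    return -cb * uvel, -cb * vvel

# Thick, compact ice in shallow water grounds and feels a stress opposing its motion:
tbu = basal_stress_factor(au=0.99, hu=4.0, hwu=10.0)
taubx, tauby = basal_stress(0.01, 0.0, tbu)
```

Because Tbu no longer depends on uold/vold, it can be evaluated outside the subcycling loop, which is why the call moved out of the ksub loop in ice_dyn_evp and ice_dyn_eap.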
iblk - !$OMP END PARALLEL DO + !$TCXOMP END PARALLEL DO end subroutine horizontal_remap diff --git a/cicecore/cicedynB/general/ice_flux.F90 b/cicecore/cicedynB/general/ice_flux.F90 index a83b7aaae..50d6cd31d 100644 --- a/cicecore/cicedynB/general/ice_flux.F90 +++ b/cicecore/cicedynB/general/ice_flux.F90 @@ -109,7 +109,7 @@ module ice_flux real (kind=dbl_kind), dimension (nx_block,ny_block,max_blocks), public :: & fm , & ! Coriolis param. * mass in U-cell (kg/s) - Cbu ! coefficient for basal stress (landfast ice) + Tbu ! coefficient for basal stress (N/m^2) !----------------------------------------------------------------- ! Thermodynamic component diff --git a/cicecore/cicedynB/general/ice_forcing.F90 b/cicecore/cicedynB/general/ice_forcing.F90 index e545bcc00..31017356e 100644 --- a/cicecore/cicedynB/general/ice_forcing.F90 +++ b/cicecore/cicedynB/general/ice_forcing.F90 @@ -168,6 +168,8 @@ subroutine init_forcing_atmo ! Determine the current and final year of the forcing cycle based on ! namelist input; initialize the atmospheric forcing data filenames. + character(len=*), parameter :: subname = '(init_forcing_atmo)' + fyear = fyear_init + mod(nyr-1,ycycle) ! current year fyear_final = fyear_init + ycycle - 1 ! last year in forcing cycle @@ -177,6 +179,16 @@ subroutine init_forcing_atmo write (nu_diag,*) ' Final forcing data year = ',fyear_final endif + if (trim(atm_data_type) == 'hadgem' .and. & + trim(precip_units) /= 'mks') then + if (my_task == master_task) then + write (nu_diag,*) 'WARNING: HadGEM atmospheric data chosen with wrong precip_units' + write (nu_diag,*) 'WARNING: Changing precip_units to mks (i.e. kg/m2 s).' + endif + call abort_ice(error_message=subname//' HadGEM precip_units error', & + file=__FILE__, line=__LINE__) + endif + !------------------------------------------------------------------- ! 
Get filenames for input forcing data !------------------------------------------------------------------- @@ -240,9 +252,11 @@ subroutine init_forcing_ocn(dt) real (kind=dbl_kind), dimension (nx_block,ny_block,max_blocks) :: & work1 + character(len=*), parameter :: subname = '(init_forcing_ocn)' + call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) nbits = 64 ! double precision data @@ -406,6 +420,8 @@ subroutine ocn_freezing_temperature integer (kind=int_kind) :: & i, j, iblk ! horizontal indices + character(len=*), parameter :: subname = '(ocn_freezing_temperature)' + !$OMP PARALLEL DO PRIVATE(iblk,i,j) do iblk = 1, nblocks do j = 1, ny_block @@ -417,7 +433,7 @@ subroutine ocn_freezing_temperature !$OMP END PARALLEL DO call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) end subroutine ocn_freezing_temperature @@ -444,6 +460,8 @@ subroutine get_forcing_atmo type (block) :: & this_block ! block information for current block + + character(len=*), parameter :: subname = '(get_forcing_atmo)' fyear = fyear_init + mod(nyr-1,ycycle) ! current year if (trim(atm_data_type) /= 'default' .and. istep <= 1 & @@ -453,7 +471,7 @@ subroutine get_forcing_atmo call icepack_query_tracer_indices(nt_Tsfc_out=nt_Tsfc) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) ftime = time ! forcing time @@ -544,6 +562,8 @@ subroutine get_forcing_ocn (dt) real (kind=dbl_kind), intent(in) :: & dt ! 
time step + character(len=*), parameter :: subname = '(get_forcing_ocn)' + if (trim(sst_data_type) == 'clim' .or. & trim(sss_data_type) == 'clim') then call ocn_data_clim(dt) @@ -615,6 +635,8 @@ subroutine read_data (flag, recd, yr, ixm, ixx, ixp, & ! adjusted at beginning and end of data arg ! value of time argument in field_data + character(len=*), parameter :: subname = '(read_data)' + call ice_timer_start(timer_readwrite) ! reading/writing nbits = 64 ! double precision data @@ -758,6 +780,8 @@ subroutine read_data_nc (flag, recd, yr, ixm, ixx, ixp, & ! local variables + character(len=*), parameter :: subname = '(read_data_nc)' + #ifdef ncdf integer (kind=int_kind) :: & nrec , & ! record number to read @@ -901,6 +925,8 @@ subroutine read_clim_data (readflag, recd, ixm, ixx, ixp, & nrec , & ! record number to read arg ! value of time argument in field_data + character(len=*), parameter :: subname = '(read_clim_data)' + call ice_timer_start(timer_readwrite) ! reading/writing nbits = 64 ! double precision data @@ -986,6 +1012,8 @@ subroutine read_clim_data_nc (readflag, recd, ixm, ixx, ixp, & arg , & ! value of time argument in field_data fid ! file id for netCDF routines + character(len=*), parameter :: subname = '(read_clim_data_nc)' + call ice_timer_start(timer_readwrite) ! reading/writing nbits = 64 ! double precision data @@ -1052,9 +1080,11 @@ subroutine interp_coeff_monthly (recslot) real (kind=dbl_kind) :: & daymid(0:13) ! month mid-points + character(len=*), parameter :: subname = '(interp_coeff_monthly)' + call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) daymid(1:13) = 14._dbl_kind ! time frame ends 0 sec into day 15 @@ -1112,9 +1142,11 @@ subroutine interp_coeff (recnum, recslot, secint, dataloc) t1, t2 , & ! 
seconds elapsed at data points rcnum ! recnum => dbl_kind + character(len=*), parameter :: subname = '(interp_coeff)' + call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) secyr = dayyr * secday ! seconds in a year @@ -1166,6 +1198,8 @@ subroutine interpolate_data (field_data, field) integer (kind=int_kind) :: i,j, iblk + character(len=*), parameter :: subname = '(interpolate data)' + !$OMP PARALLEL DO PRIVATE(iblk,i,j) do iblk = 1, nblocks do j = 1, ny_block @@ -1195,6 +1229,8 @@ subroutine file_year (data_file, yr) integer (kind=int_kind) :: i + character(len=*), parameter :: subname = '(file_year)' + if (trim(atm_data_type) == 'hadgem') then ! netcdf i = index(data_file,'.nc') - 5 tmpname = data_file @@ -1268,11 +1304,13 @@ subroutine prepare_forcing (nx_block, ny_block, & logical (kind=log_kind) :: calc_strair + character(len=*), parameter :: subname = '(prepare_forcing)' + call icepack_query_parameters(Tffresh_out=Tffresh) call icepack_query_parameters(secday_out=secday) call icepack_query_parameters(calc_strair_out=calc_strair) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) do j = jlo, jhi @@ -1472,10 +1510,12 @@ subroutine longwave_parkinson_washington(Tair, cldf, flw) real(kind=dbl_kind) :: & Tffresh, stefan_boltzmann + character(len=*), parameter :: subname = '(longwave_parkinson_washington)' + call icepack_query_parameters(Tffresh_out=Tffresh, & stefan_boltzmann_out=stefan_boltzmann) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & 
file=__FILE__, line=__LINE__) flw = stefan_boltzmann*Tair**4 & @@ -1520,11 +1560,13 @@ subroutine longwave_rosati_miyakoda(cldf, Tsfc, & real(kind=dbl_kind) :: & Tffresh, stefan_boltzmann, emissivity + character(len=*), parameter :: subname = '(longwave_rosati_miyakoda)' + call icepack_query_parameters(Tffresh_out=Tffresh, & stefan_boltzmann_out=stefan_boltzmann, & emissivity_out=emissivity) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) fcc = c1 - 0.8_dbl_kind * cldf @@ -1555,6 +1597,8 @@ subroutine ncar_files (yr) integer (kind=int_kind), intent(in) :: & yr ! current forcing year + character(len=*), parameter :: subname = '(ncar_files)' + fsw_file = & trim(atm_data_dir)//'/MONTHLY/swdn.1996.dat' call file_year(fsw_file,yr) @@ -1624,9 +1668,11 @@ subroutine ncar_data logical (kind=log_kind) :: readm, read6 + character(len=*), parameter :: subname = '(ncar_data)' + call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) !------------------------------------------------------------------- @@ -1672,7 +1718,8 @@ subroutine ncar_data maxrec, rain_file, fsnow_data, & field_loc_center, field_type_scalar) else - call abort_ice ('nonbinary atm_data_format unavailable') + call abort_ice (error_message=subname//'nonbinary atm_data_format unavailable', & + file=__FILE__, line=__LINE__) ! The routine exists, for example: ! call read_data_nc (readm, 0, fyear, ixm, month, ixp, & ! 
maxrec, fsw_file, 'fsw', fsw_data, & @@ -1740,7 +1787,8 @@ subroutine ncar_data maxrec, humid_file, Qa_data, & field_loc_center, field_type_scalar) else - call abort_ice ('nonbinary atm_data_format unavailable') + call abort_ice (error_message=subname//'nonbinary atm_data_format unavailable', & + file=__FILE__, line=__LINE__) endif ! Interpolate @@ -1772,6 +1820,8 @@ subroutine LY_files (yr) integer (kind=int_kind), intent(in) :: & yr ! current forcing year + character(len=*), parameter :: subname = '(LY_files)' + flw_file = & trim(atm_data_dir)//'/MONTHLY/cldf.omip.dat' @@ -1845,10 +1895,12 @@ subroutine LY_data type (block) :: & this_block ! block information for current block + character(len=*), parameter :: subname = '(LY_data)' + call icepack_query_parameters(Tffresh_out=Tffresh) call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) !------------------------------------------------------------------- @@ -1938,7 +1990,8 @@ subroutine LY_data humid_file, Qa_data, & field_loc_center, field_type_scalar) else - call abort_ice ('nonbinary atm_data_format unavailable') + call abort_ice (error_message=subname//'nonbinary atm_data_format unavailable', & + file=__FILE__, line=__LINE__) endif ! 
Interpolate @@ -2068,9 +2121,11 @@ subroutine compute_shortwave(nx_block, ny_block, & integer (kind=int_kind) :: & i, j + character(len=*), parameter :: subname = '(compute_shortwave)' + call icepack_query_parameters(secday_out=secday, pi_out=pi) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) do j=jlo,jhi @@ -2119,9 +2174,11 @@ subroutine Qa_fixLY(nx_block, ny_block, Tair, Qa) real (kind=dbl_kind) :: & Tffresh, puny + character(len=*), parameter :: subname = '(Qa_fixLY)' + call icepack_query_parameters(Tffresh_out=Tffresh, puny_out=puny) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) worka = Tair - Tffresh @@ -2159,10 +2216,12 @@ subroutine hadgem_files (yr) logical (kind=log_kind) :: calc_strair, calc_Tsfc + character(len=*), parameter :: subname = '(hadgem_files)' + call icepack_query_parameters(calc_strair_out=calc_strair, & calc_Tsfc_out=calc_Tsfc) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) ! 
----------------------------------------------------------- @@ -2356,11 +2415,13 @@ subroutine hadgem_data calc_strair, & calc_Tsfc + character(len=*), parameter :: subname = '(hadgem_data)' + call icepack_query_parameters(Lsub_out=Lsub) call icepack_query_parameters(calc_strair_out=calc_strair, & calc_Tsfc_out=calc_Tsfc) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) !------------------------------------------------------------------- @@ -2583,6 +2644,8 @@ subroutine monthly_files (yr) integer (kind=int_kind), intent(in) :: & yr ! current forcing year + character(len=*), parameter :: subname = '(monthly_files)' + flw_file = & trim(atm_data_dir)//'/MONTHLY/cldf.omip.dat' @@ -2652,6 +2715,8 @@ subroutine monthly_data type (block) :: & this_block ! block information for current block + character(len=*), parameter :: subname = '(monthly_data)' + !------------------------------------------------------------------- ! monthly data ! @@ -2841,6 +2906,8 @@ subroutine oned_data ws1 = 621.97_dbl_kind, & ! for saturation mixing ratio Pair = 1020._dbl_kind ! Sea level pressure (hPa) + character(len=*), parameter :: subname = '(oned_data)' + diag = .false. ! write diagnostic information do iblk = 1, nblocks @@ -2926,6 +2993,8 @@ subroutine oned_files(yr) integer (kind=int_kind), intent(in) :: & yr ! current forcing year + character(len=*), parameter :: subname = '(oned_files)' + fsw_file = & trim(atm_data_dir)//'/hourlysolar_brw1989_5yr.nc' @@ -2989,6 +3058,8 @@ subroutine ocn_data_clim (dt) logical (kind=log_kind) :: readm + character(len=*), parameter :: subname = '(ocn_data_clim)' + if (my_task == master_task .and. 
istep == 1) then if (trim(sss_data_type)=='clim') then write (nu_diag,*) ' ' @@ -3149,12 +3220,14 @@ subroutine ocn_data_ncar_init real (kind=dbl_kind), dimension (nx_block,ny_block,max_blocks) :: & work1 + character(len=*), parameter :: subname = '(ocn_data_ncar_init)' + if (my_task == master_task) then write (nu_diag,*) 'WARNING: evp_prep calculates surface tilt' write (nu_diag,*) 'WARNING: stress from geostrophic currents,' write (nu_diag,*) 'WARNING: not data from ocean forcing file.' - write (nu_diag,*) 'WARNING: Alter ice_dyn_evp.F if desired.' + write (nu_diag,*) 'WARNING: Alter ice_dyn_evp.F90 if desired.' if (restore_sst) write (nu_diag,*) & 'SST restoring timescale = ',trestore,' days' @@ -3183,10 +3256,12 @@ subroutine ocn_data_ncar_init status = nf90_inquire_dimension(fid,dimid,len=nlat) if( nlon .ne. nx_global ) then - call abort_ice ('ice: ocn frc file nlon ne nx_global') + call abort_ice (error_message=subname//'ice: ocn frc file nlon ne nx_global', & + file=__FILE__, line=__LINE__) endif if( nlat .ne. ny_global ) then - call abort_ice ('ice: ocn frc file nlat ne ny_global') + call abort_ice (error_message=subname//'ice: ocn frc file nlat ne ny_global', & + file=__FILE__, line=__LINE__) endif endif ! master_task @@ -3297,6 +3372,8 @@ subroutine ocn_data_ncar_init_3D real (kind=dbl_kind), dimension (nx_block,ny_block,max_blocks) :: & work1, work2 + character(len=*), parameter :: subname = '(ocn_data_ncar_init_3D)' + if (my_task == master_task) then write (nu_diag,*) 'WARNING: evp_prep calculates surface tilt' @@ -3332,10 +3409,12 @@ subroutine ocn_data_ncar_init_3D status = nf90_inquire_dimension(fid,dimid,len=nlat) if( nlon .ne. nx_global ) then - call abort_ice ('ice: ocn frc file nlon ne nx_global') + call abort_ice (error_message=subname//'ice: ocn frc file nlon ne nx_global', & + file=__FILE__, line=__LINE__) endif if( nlat .ne. 
ny_global ) then - call abort_ice ('ice: ocn frc file nlat ne ny_global') + call abort_ice (error_message=subname//'ice: ocn frc file nlat ne ny_global', & + file=__FILE__, line=__LINE__) endif endif ! master_task @@ -3388,7 +3467,8 @@ subroutine ocn_data_ncar_init_3D else ! binary format - call abort_ice ('new ocean forcing is netcdf only') + call abort_ice (error_message=subname//'new ocean forcing is netcdf only', & + file=__FILE__, line=__LINE__) endif @@ -3426,6 +3506,8 @@ subroutine ocn_data_ncar(dt) real (kind=dbl_kind), dimension (nx_block,ny_block,max_blocks) :: & work1 + character(len=*), parameter :: subname = '(ocn_data_ncar)' + !------------------------------------------------------------------- ! monthly data ! @@ -3587,6 +3669,8 @@ subroutine ocn_data_oned(dt) integer :: i, j, iblk + character(len=*), parameter :: subname = '(ocn_data_oned)' + sss (:,:,:) = 34.0_dbl_kind ! sea surface salinity (ppt) call ocn_freezing_temperature @@ -3641,6 +3725,8 @@ subroutine ocn_data_hadgem(dt) character (char_len_long) :: & filename ! name of netCDF file + character(len=*), parameter :: subname = '(ocn_data_hadgem)' + !------------------------------------------------------------------- ! monthly data ! @@ -3824,6 +3910,9 @@ subroutine read_data_nc_point (flag, recd, yr, ixm, ixx, ixp, & real (kind=dbl_kind), dimension(2), & intent(out) :: & field_data ! 2 values needed for interpolation + + character(len=*), parameter :: subname = '(read_data_nc_point)' + #ifdef ncdf integer (kind=int_kind) :: & nrec , & ! record number to read @@ -3946,6 +4035,8 @@ subroutine ISPOL_files(yr) integer (kind=int_kind), intent(in) :: & yr ! current forcing year + character(len=*), parameter :: subname = '(ISPOL_files)' + fsw_file = & trim(atm_data_dir)//'/fsw_sfc_4Xdaily.nc' @@ -4076,10 +4167,12 @@ subroutine ISPOL_data logical (kind=log_kind) :: readm, read1 + character(len=*), parameter :: subname = '(ISPOL_data)' + diag = .false. ! 
write diagnostic information call icepack_query_parameters(secday_out=secday) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) #ifdef ncdf @@ -4291,6 +4384,8 @@ subroutine ocn_data_ispol_init nlat , & ! number of longitudes of data nlon ! number of latitudes of data + character(len=*), parameter :: subname = '(ocn_data_ispol_init)' + if (my_task == master_task) then if (restore_sst) write (nu_diag,*) & @@ -4331,7 +4426,8 @@ subroutine ocn_data_ispol_init #endif else ! binary format - call abort_ice ('new ocean forcing is netcdf only') + call abort_ice (error_message=subname//'new ocean forcing is netcdf only', & + file=__FILE__, line=__LINE__) endif !echmod - currents cause Fram outflow to be too large diff --git a/cicecore/cicedynB/general/ice_forcing_bgc.F90 b/cicecore/cicedynB/general/ice_forcing_bgc.F90 index 2609905d2..4e2819f0a 100644 --- a/cicecore/cicedynB/general/ice_forcing_bgc.F90 +++ b/cicecore/cicedynB/general/ice_forcing_bgc.F90 @@ -847,6 +847,7 @@ subroutine faero_optics if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & file=__FILE__, line=__LINE__) +#ifdef ncdf if (modal_aero) then diag = .true. ! write diagnostic information optics_file = & @@ -885,6 +886,11 @@ subroutine faero_optics enddo enddo endif ! 
modal_aero +#else + if (modal_aero) then + call abort_ice('faero_optics: netcdf required for modal_aero') + endif +#endif end subroutine faero_optics diff --git a/cicecore/cicedynB/general/ice_init.F90 b/cicecore/cicedynB/general/ice_init.F90 index 1d6c0dfc4..4576aacbe 100644 --- a/cicecore/cicedynB/general/ice_init.F90 +++ b/cicecore/cicedynB/general/ice_init.F90 @@ -13,11 +13,11 @@ module ice_init use ice_kinds_mod - use ice_communicate, only: my_task, master_task + use ice_communicate, only: my_task, master_task, ice_barrier use ice_constants, only: c0, c1, c2, c3, p2, p5 use ice_exit, only: abort_ice use ice_fileunits, only: nu_nml, nu_diag, nml_filename, diag_type, & - ice_stdout, get_fileunit, release_fileunit, bfbflag + ice_stdout, get_fileunit, release_fileunit, bfbflag, flush_fileunit use ice_fileunits, only: inst_suffix use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted use icepack_intfc, only: icepack_aggregate @@ -66,7 +66,7 @@ subroutine input_data restart, restart_ext, restart_dir, restart_file, pointer_file, & runid, runtype, use_restart_time, restart_format, lcdf64 use ice_history_shared, only: hist_avg, history_dir, history_file, & - incond_dir, incond_file + incond_dir, incond_file, version_name use ice_flux, only: update_ocn_f, l_mpond_fresh use ice_flux_bgc, only: cpl_bgc use ice_forcing, only: & @@ -117,6 +117,7 @@ subroutine input_data real (kind=real_kind) :: rpcesm, rplvl, rptopo real (kind=dbl_kind) :: Cf, puny + integer :: abort_flag character(len=*), parameter :: subname='(input_data)' @@ -135,7 +136,7 @@ subroutine input_data print_global, print_points, latpnt, lonpnt, & dbug, histfreq, histfreq_n, hist_avg, & history_dir, history_file, cpl_bgc, & - write_ic, incond_dir, incond_file + write_ic, incond_dir, incond_file, version_name namelist /grid_nml/ & grid_format, grid_type, grid_file, kmt_file, & @@ -186,9 +187,11 @@ subroutine input_data ! 
default values !----------------------------------------------------------------- + abort_flag = 0 + call icepack_query_parameters(puny_out=puny) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname//'Icepack Abort0', & file=__FILE__, line=__LINE__) days_per_year = 365 ! number of days in a year @@ -236,6 +239,7 @@ subroutine input_data grid_file = 'unknown_grid_file' gridcpl_file = 'unknown_gridcpl_file' kmt_file = 'unknown_kmt_file' + version_name = 'unknown_version_name' kitd = 1 ! type of itd conversions (0 = delta, 1 = linear) kcatbound = 1 ! category boundary formula (0 = old, 1 = new, etc) @@ -391,7 +395,8 @@ subroutine input_data endif call broadcast_scalar(nml_error, master_task) if (nml_error /= 0) then - call abort_ice('ice: error reading namelist') + call abort_ice(subname//'ERROR: reading namelist', & + file=__FILE__, line=__LINE__) endif call release_fileunit(nu_nml) @@ -427,236 +432,9 @@ subroutine input_data if (trim(diag_type) == 'file') call get_fileunit(nu_diag) #endif - if (my_task == master_task) then - if (trim(diag_type) == 'file') then - write(ice_stdout,*) 'Diagnostic output will be in file ',diag_file - open (nu_diag, file=diag_file, status='unknown') - endif - write(nu_diag,*) '--------------------------------' - write(nu_diag,*) ' CICE model diagnostic output ' - write(nu_diag,*) '--------------------------------' - write(nu_diag,*) ' ' - endif - - if (trim(runtype) == 'continue') restart = .true. - if (trim(runtype) /= 'continue' .and. (restart)) then - if (ice_ic == 'none' .or. ice_ic == 'default') then - if (my_task == master_task) then - write(nu_diag,*) & - 'WARNING: runtype, restart, ice_ic are inconsistent:' - write(nu_diag,*) trim(runtype), restart, trim(ice_ic) - write(nu_diag,*) & - 'WARNING: Need ice_ic = .' 
- write(nu_diag,*) & - 'WARNING: Initializing using ice_ic conditions' - endif - restart = .false. - endif - endif - if (trim(runtype) == 'initial' .and. .not.(restart)) then - if (ice_ic /= 'none' .and. ice_ic /= 'default') then - if (my_task == master_task) then - write(nu_diag,*) & - 'WARNING: runtype, restart, ice_ic are inconsistent:' - write(nu_diag,*) trim(runtype), restart, trim(ice_ic) - write(nu_diag,*) & - 'WARNING: Initializing with NO ICE: ' - write(nu_diag,*) ' ' - endif - ice_ic = 'none' - endif - endif - -#ifndef ncdf - ! netcdf is unavailable - grid_format = 'bin' - atm_data_format = 'bin' - ocn_data_format = 'bin' -#endif - - chartmp = advection(1:6) - if (chartmp /= 'upwind' .and. chartmp /= 'remap ') advection = 'remap' - - if (ncat == 1 .and. kitd == 1) then - if (my_task == master_task) then - write (nu_diag,*) 'Remapping the ITD is not allowed for ncat=1.' - write (nu_diag,*) 'Use kitd = 0 (delta function ITD) with kcatbound = 0' - write (nu_diag,*) 'or for column configurations use kcatbound = -1' - call abort_ice('Error: kitd incompatability: ncat=1 and kitd=1') - endif - endif - - if (ncat /= 1 .and. kcatbound == -1) then - if (my_task == master_task) then - write (nu_diag,*) & - 'WARNING: ITD required for ncat > 1' - write (nu_diag,*) & - 'WARNING: Setting kitd and kcatbound to default values' - endif - kitd = 1 - kcatbound = 0 - endif - - if (kdyn == 2 .and. revised_evp) then - if (my_task == master_task) then - write (nu_diag,*) & - 'WARNING: revised_evp = T with EAP dynamics' - write (nu_diag,*) & - 'WARNING: Setting revised_evp = F' - endif - revised_evp = .false. - endif - - rpcesm = c0 - rplvl = c0 - rptopo = c0 - if (tr_pond_cesm) rpcesm = c1 - if (tr_pond_lvl ) rplvl = c1 - if (tr_pond_topo) rptopo = c1 - - tr_pond = .false. ! explicit melt ponds - if (rpcesm + rplvl + rptopo > puny) tr_pond = .true. 
- - if (rpcesm + rplvl + rptopo > c1 + puny) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: Must use only one melt pond scheme' - call abort_ice('ice: multiple melt pond schemes') - endif - endif - - if (tr_pond_lvl .and. .not. tr_lvl) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: tr_pond_lvl=T but tr_lvl=F' - write (nu_diag,*) 'WARNING: Setting tr_lvl=T' - endif - tr_lvl = .true. - endif - - if (tr_pond_lvl .and. abs(hs0) > puny) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: tr_pond_lvl=T and hs0/=0' - write (nu_diag,*) 'WARNING: Setting hs0=0' - endif - hs0 = c0 - endif - - if (tr_pond_cesm .and. trim(frzpnd) /= 'cesm') then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: tr_pond_cesm=T' - write (nu_diag,*) 'WARNING: frzpnd, dpscale not used' - endif - frzpnd = 'cesm' - endif - - if (trim(shortwave) /= 'dEdd' .and. tr_pond .and. calc_tsfc) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: Must use dEdd shortwave' - write (nu_diag,*) 'WARNING: with tr_pond and calc_tsfc=T.' - write (nu_diag,*) 'WARNING: Setting shortwave = dEdd' - endif - shortwave = 'dEdd' - endif - - if (tr_aero .and. n_aero==0) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: aerosols activated but' - write (nu_diag,*) 'WARNING: not allocated in tracer array.' - write (nu_diag,*) 'WARNING: Activate in compilation script.' - endif - call abort_ice('ice: aerosol tracer conflict: comp_ice, ice_in') - endif - - if (tr_aero .and. trim(shortwave) /= 'dEdd') then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: aerosols activated but dEdd' - write (nu_diag,*) 'WARNING: shortwave is not.' - write (nu_diag,*) 'WARNING: Setting shortwave = dEdd' - endif - shortwave = 'dEdd' - endif - - rfracmin = min(max(rfracmin,c0),c1) - rfracmax = min(max(rfracmax,c0),c1) - - if (trim(atm_data_type) == 'monthly' .and. calc_strair) & - calc_strair = .false. 
- - if (ktherm == 2 .and. .not. calc_Tsfc) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: ktherm = 2 and calc_Tsfc = F' - write (nu_diag,*) 'WARNING: Setting calc_Tsfc = T' - endif - calc_Tsfc = .true. - endif - - if (ktherm == 1 .and. trim(tfrz_option) /= 'linear_salt') then - if (my_task == master_task) then - write (nu_diag,*) & - 'WARNING: ktherm = 1 and tfrz_option = ',trim(tfrz_option) - write (nu_diag,*) & - 'WARNING: For consistency, set tfrz_option = linear_salt' - endif - endif - if (ktherm == 2 .and. trim(tfrz_option) /= 'mushy') then - if (my_task == master_task) then - write (nu_diag,*) & - 'WARNING: ktherm = 2 and tfrz_option = ',trim(tfrz_option) - write (nu_diag,*) & - 'WARNING: For consistency, set tfrz_option = mushy' - endif - endif - - if (trim(atm_data_type) == 'hadgem' .and. & - trim(precip_units) /= 'mks') then - if (my_task == master_task) & - write (nu_diag,*) & - 'WARNING: HadGEM atmospheric data chosen with wrong precip_units' - write (nu_diag,*) & - 'WARNING: Changing precip_units to mks (i.e. kg/m2 s).' - precip_units='mks' - endif - - if (formdrag) then - if (trim(atmbndy) == 'constant') then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: atmbndy = constant not allowed with formdrag' - write (nu_diag,*) 'WARNING: Setting atmbndy = default' - endif - atmbndy = 'default' - endif - - if (.not. calc_strair) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: formdrag=T but calc_strair=F' - write (nu_diag,*) 'WARNING: Setting calc_strair=T' - endif - calc_strair = .true. - endif - - if (tr_pond_cesm) then - if (my_task == master_task) then - write (nu_diag,*) 'ERROR: formdrag=T but frzpnd=''cesm''' - call abort_ice('ice_init: Formdrag and no hlid') - endif - endif - - if (.not. tr_lvl) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: formdrag=T but tr_lvl=F' - write (nu_diag,*) 'WARNING: Setting tr_lvl=T' - endif - tr_lvl = .true. 
- endif - endif - - if (trim(fbot_xfer_type) == 'Cdn_ocn' .and. .not. formdrag) then - if (my_task == master_task) then - write (nu_diag,*) 'WARNING: formdrag=F but fbot_xfer_type=Cdn_ocn' - write (nu_diag,*) 'WARNING: Setting fbot_xfer_type = constant' - endif - fbot_xfer_type = 'constant' - endif - + !----------------------------------------------------------------- + ! broadcast namelist settings + !----------------------------------------------------------------- call broadcast_scalar(days_per_year, master_task) call broadcast_scalar(use_leap_years, master_task) @@ -799,9 +577,214 @@ subroutine input_data pointer_file = trim(pointer_file) // trim(inst_suffix) #endif + !----------------------------------------------------------------- + ! verify inputs + !----------------------------------------------------------------- + + if (my_task == master_task) then + if (trim(diag_type) == 'file') then + write(ice_stdout,*) 'Diagnostic output will be in file ',diag_file + open (nu_diag, file=diag_file, status='unknown') + endif + write(nu_diag,*) '--------------------------------' + write(nu_diag,*) ' CICE model diagnostic output ' + write(nu_diag,*) '--------------------------------' + write(nu_diag,*) ' ' + endif + + if (trim(runtype) == 'continue' .and. .not.restart) then + if (my_task == master_task) & + write(nu_diag,*) 'WARNING: runtype=continue, setting restart=.true.' + restart = .true. + endif + + if (trim(runtype) /= 'continue' .and. restart .and. & + (ice_ic == 'none' .or. ice_ic == 'default')) then + if (my_task == master_task) & + write(nu_diag,*) 'WARNING: runtype ne continue and ice_ic=none|default, setting restart=.false.' + restart = .false. + endif + + if (trim(runtype) == 'initial' .and. .not.(restart) .and. & + ice_ic /= 'none' .and. 
ice_ic /= 'default') then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: runtype, restart, ice_ic are inconsistent:' + write(nu_diag,*) 'ERROR: runtype=',trim(runtype), 'restart=',restart, 'ice_ic=',trim(ice_ic) + write(nu_diag,*) 'ERROR: Please review user guide' + endif + abort_flag = 1 + endif + +#ifndef ncdf + if (grid_format /= 'bin' .or. atm_data_format /= 'bin' .or. ocn_data_format /= 'bin') then + if (my_task == master_task) then + write(nu_diag,*)'ERROR: ncdf CPP flag unset, data formats must be bin' + write(nu_diag,*)'ERROR: check grid_format, atm_data_format, ocn_data_format or set ncdf CPP' + endif + abort_flag = 2 + endif +#endif + + if (advection /= 'remap' .and. advection /= 'upwind' .and. advection /= 'none') then + if (my_task == master_task) write(nu_diag,*)'ERROR: invalid advection=',trim(advection) + abort_flag = 3 + endif + + if (ncat == 1 .and. kitd == 1) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: kitd incompatability: ncat=1 and kitd=1' + write(nu_diag,*) 'ERROR: Remapping the ITD is not allowed for ncat=1.' + write(nu_diag,*) 'ERROR: Use kitd = 0 (delta function ITD) with kcatbound = 0' + write(nu_diag,*) 'ERROR: or for column configurations use kcatbound = -1' + endif + abort_flag = 4 + endif + + if (ncat /= 1 .and. kcatbound == -1) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: ITD required for ncat > 1' + write(nu_diag,*) 'ERROR: ncat=',ncat,' kcatbound=',kcatbound + write(nu_diag,*) 'ERROR: Please review user guide' + endif + abort_flag = 5 + endif + + if (kdyn == 2 .and. revised_evp) then + if (my_task == master_task) then + write(nu_diag,*) 'WARNING: revised_evp = T with EAP dynamics' + write(nu_diag,*) 'WARNING: revised_evp is ignored' + endif + revised_evp = .false. + endif + + rpcesm = c0 + rplvl = c0 + rptopo = c0 + if (tr_pond_cesm) rpcesm = c1 + if (tr_pond_lvl ) rplvl = c1 + if (tr_pond_topo) rptopo = c1 + + tr_pond = .false. ! 
explicit melt ponds + if (rpcesm + rplvl + rptopo > puny) tr_pond = .true. + + if (rpcesm + rplvl + rptopo > c1 + puny) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: Must use only one melt pond scheme' + endif + abort_flag = 6 + endif + + if (tr_pond_lvl .and. .not. tr_lvl) then + if (my_task == master_task) then + write(nu_diag,*) 'WARNING: tr_pond_lvl=T but tr_lvl=F' + write(nu_diag,*) 'WARNING: Setting tr_lvl=T' + endif + tr_lvl = .true. + endif + +! tcraig - this was originally implemented by resetting hs0=0. EH says it might be OK +! to not reset it but extra calculations are done and it might not be bfb. In our +! testing, we should explicitly set hs0 to 0. when setting tr_pond_lvl=T, and otherwise +! this will abort (safest option until additional testing is done) + if (tr_pond_lvl .and. abs(hs0) > puny) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: tr_pond_lvl=T and hs0 /= 0' + endif + abort_flag = 7 + endif + + if (trim(shortwave) /= 'dEdd' .and. tr_pond .and. calc_tsfc) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: tr_pond=T, calc_tsfc=T, invalid shortwave' + write(nu_diag,*) 'ERROR: Must use shortwave=dEdd' + endif + abort_flag = 8 + endif + + if (tr_aero .and. n_aero==0) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: aerosols activated but' + write(nu_diag,*) 'ERROR: not allocated in tracer array.' + write(nu_diag,*) 'ERROR: Activate in compilation script.' + endif + abort_flag = 9 + endif + + if (trim(shortwave) /= 'dEdd' .and. tr_aero) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: tr_aero=T, invalid shortwave' + write(nu_diag,*) 'ERROR: Must use shortwave=dEdd' + endif + abort_flag = 10 + endif + + if ((rfracmin < -puny .or. rfracmin > c1+puny) .or. & + (rfracmax < -puny .or. rfracmax > c1+puny) .or. 
& + (rfracmin > rfracmax)) then + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: rfracmin, rfracmax must be between 0 and 1' + write(nu_diag,*) 'ERROR: and rfracmax >= rfracmin' + endif + abort_flag = 11 + endif + rfracmin = min(max(rfracmin,c0),c1) + rfracmax = min(max(rfracmax,c0),c1) + + if (trim(atm_data_type) == 'monthly' .and. calc_strair) then + if (my_task == master_task) write(nu_diag,*)'ERROR: atm_data_type=monthly and calc_strair=T' + abort_flag = 12 + endif + + if (ktherm == 2 .and. .not. calc_Tsfc) then + if (my_task == master_task) write(nu_diag,*) 'ERROR: ktherm = 2 and calc_Tsfc=F' + abort_flag = 13 + endif + +! tcraig, is it really OK for users to run inconsistently? + if (ktherm == 1 .and. trim(tfrz_option) /= 'linear_salt') then + if (my_task == master_task) then + write(nu_diag,*) 'WARNING: ktherm = 1 and tfrz_option = ',trim(tfrz_option) + write(nu_diag,*) 'WARNING: For consistency, set tfrz_option = linear_salt' + endif + endif + if (ktherm == 2 .and. trim(tfrz_option) /= 'mushy') then + if (my_task == master_task) then + write(nu_diag,*) 'WARNING: ktherm = 2 and tfrz_option = ',trim(tfrz_option) + write(nu_diag,*) 'WARNING: For consistency, set tfrz_option = mushy' + endif + endif +!tcraig + + if (formdrag) then + if (trim(atmbndy) == 'constant') then + if (my_task == master_task) write(nu_diag,*) 'ERROR: formdrag=T and atmbndy=constant' + abort_flag = 14 + endif + + if (.not. calc_strair) then + if (my_task == master_task) write(nu_diag,*) 'ERROR: formdrag=T and calc_strair=F' + abort_flag = 15 + endif + + if (tr_pond_cesm) then + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T and frzpnd=cesm' + abort_flag = 16 + endif + + if (.not. tr_lvl) then + if (my_task == master_task) write(nu_diag,*) 'ERROR: formdrag=T and tr_lvl=F' + abort_flag = 17 + endif + endif + + if (trim(fbot_xfer_type) == 'Cdn_ocn' .and. .not. 
formdrag) then + if (my_task == master_task) write(nu_diag,*) 'ERROR: formdrag=F and fbot_xfer_type=Cdn_ocn' + abort_flag = 18 + endif + call icepack_init_parameters(Cf_in=Cf) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname//'Icepack Abort1', & file=__FILE__, line=__LINE__) !----------------------------------------------------------------- @@ -831,13 +814,13 @@ subroutine input_data write(nu_diag,1050) ' histfreq = ', histfreq(:) write(nu_diag,1040) ' histfreq_n = ', histfreq_n(:) write(nu_diag,1010) ' hist_avg = ', hist_avg - if (.not. hist_avg) write (nu_diag,*) 'History data will be snapshots' + if (.not. hist_avg) write(nu_diag,*) 'History data will be snapshots' write(nu_diag,*) ' history_dir = ', & trim(history_dir) write(nu_diag,*) ' history_file = ', & trim(history_file) if (write_ic) then - write (nu_diag,*) 'Initial condition will be written in ', & + write(nu_diag,*) 'Initial condition will be written in ', & trim(incond_dir) endif write(nu_diag,1030) ' dumpfreq = ', & @@ -996,19 +979,19 @@ subroutine input_data #ifdef coupled if( oceanmixed_ice ) then - write (nu_diag,*) 'WARNING WARNING WARNING WARNING ' - write (nu_diag,*) '*Coupled and oceanmixed flags are *' - write (nu_diag,*) '*BOTH ON. Ocean data received from*' - write (nu_diag,*) '*coupler will be altered by mixed *' - write (nu_diag,*) '*layer routine! *' - write (nu_diag,*) ' ' + write(nu_diag,*) 'WARNING ** WARNING ** WARNING ** WARNING ' + write(nu_diag,*) 'WARNING: coupled CPP and oceanmixed_ice namelist are BOTH ON' + write(nu_diag,*) 'WARNING: Ocean data received from coupler will' + write(nu_diag,*) 'WARNING: be altered by mixed layer routine!' 
+ write(nu_diag,*) 'WARNING ** WARNING ** WARNING ** WARNING ' + write(nu_diag,*) ' ' endif #endif - write (nu_diag,*) ' ' - write (nu_diag,'(a30,2f8.2)') 'Diagnostic point 1: lat, lon =', & + write(nu_diag,*) ' ' + write(nu_diag,'(a30,2f8.2)') 'Diagnostic point 1: lat, lon =', & latpnt(1), lonpnt(1) - write (nu_diag,'(a30,2f8.2)') 'Diagnostic point 2: lat, lon =', & + write(nu_diag,'(a30,2f8.2)') 'Diagnostic point 2: lat, lon =', & latpnt(2), lonpnt(2) ! tracers @@ -1078,16 +1061,19 @@ subroutine input_data endif endif - nt_aero = max_ntrcr + ! tcraig, tcx, this is a BAD kludge, NTRAERO should be 0 if tr_aero is false + nt_aero = max_ntrcr - 4*n_aero if (tr_aero) then nt_aero = ntrcr + 1 ntrcr = ntrcr + 4*n_aero ! 4 dEdd layers, n_aero species endif if (ntrcr > max_ntrcr-1) then - write(nu_diag,*) 'max_ntrcr-1 < number of namelist tracers' - write(nu_diag,*) 'max_ntrcr-1 = ',max_ntrcr-1,' ntrcr = ',ntrcr - call abort_ice('max_ntrcr-1 < number of namelist tracers') + if (my_task == master_task) then + write(nu_diag,*) 'ERROR: max_ntrcr-1 < number of namelist tracers' + write(nu_diag,*) 'ERROR: max_ntrcr-1 = ',max_ntrcr-1,' ntrcr = ',ntrcr + endif + abort_flag = 19 endif write(nu_diag,*) ' ' @@ -1109,7 +1095,7 @@ subroutine input_data 1040 format (a30,2x,6i6) ! integer 1050 format (a30,2x,6a6) ! character - write (nu_diag,*) ' ' + write(nu_diag,*) ' ' if (grid_type /= 'displaced_pole' .and. & grid_type /= 'tripole' .and. & grid_type /= 'column' .and. & @@ -1117,7 +1103,8 @@ subroutine input_data grid_type /= 'cpom_grid' .and. & grid_type /= 'regional' .and. & grid_type /= 'latlon' ) then - call abort_ice('ice_init: unknown grid_type') + if (my_task == master_task) write(nu_diag,*)'ERROR: unknown grid_type=',trim(grid_type) + abort_flag = 20 endif endif ! 
my_task = master_task @@ -1138,23 +1125,24 @@ subroutine input_data if (formdrag) then if (nt_apnd==0) then - write(nu_diag,*)'ERROR: nt_apnd:',nt_apnd - call abort_ice ('formdrag: nt_apnd=0') + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T, nt_apnd=',nt_apnd + abort_flag = 21 elseif (nt_hpnd==0) then - write(nu_diag,*)'ERROR: nt_hpnd:',nt_hpnd - call abort_ice ('formdrag: nt_hpnd=0') + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T, nt_hpnd=',nt_hpnd + abort_flag = 22 elseif (nt_ipnd==0) then - write(nu_diag,*)'ERROR: nt_ipnd:',nt_ipnd - call abort_ice ('formdrag: nt_ipnd=0') + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T, nt_ipnd=',nt_ipnd + abort_flag = 23 elseif (nt_alvl==0) then - write(nu_diag,*)'ERROR: nt_alvl:',nt_alvl - call abort_ice ('formdrag: nt_alvl=0') + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T, nt_alvl=',nt_alvl + abort_flag = 24 elseif (nt_vlvl==0) then - write(nu_diag,*)'ERROR: nt_vlvl:',nt_vlvl - call abort_ice ('formdrag: nt_vlvl=0') + if (my_task == master_task) write(nu_diag,*)'ERROR: formdrag=T, nt_vlvl=',nt_vlvl + abort_flag = 25 endif endif + call flush_fileunit(nu_diag) call icepack_init_parameters(ustar_min_in=ustar_min, albicev_in=albicev, albicei_in=albicei, & albsnowv_in=albsnowv, albsnowi_in=albsnowi, natmiter_in=natmiter, & ahmax_in=ahmax, shortwave_in=shortwave, albedo_type_in=albedo_type, R_ice_in=R_ice, R_pnd_in=R_pnd, & @@ -1177,9 +1165,17 @@ subroutine input_data nt_alvl_in=nt_alvl, nt_vlvl_in=nt_vlvl, nt_apnd_in=nt_apnd, nt_hpnd_in=nt_hpnd, & nt_ipnd_in=nt_ipnd, nt_aero_in=nt_aero) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname//' Icepack Abort2', & file=__FILE__, line=__LINE__) + call flush_fileunit(nu_diag) + call ice_barrier() + if (abort_flag /= 0) then + write(nu_diag,*) subname,' ERROR: abort_flag=',abort_flag + 
call abort_ice (subname//' ABORTING on input ERRORS', & + file=__FILE__, line=__LINE__) + endif + end subroutine input_data !======================================================================= @@ -1234,7 +1230,7 @@ subroutine init_state nt_alvl_out=nt_alvl, nt_vlvl_out=nt_vlvl, nt_apnd_out=nt_apnd, nt_hpnd_out=nt_hpnd, & nt_ipnd_out=nt_ipnd, nt_aero_out=nt_aero) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) !----------------------------------------------------------------- @@ -1244,33 +1240,33 @@ subroutine init_state if (my_task == master_task) then if (nilyr < 1) then - write (nu_diag,*) 'nilyr =', nilyr - write (nu_diag,*) 'Must have at least one ice layer' - call abort_ice ('ice_init: Not enough ice layers') + write(nu_diag,*) 'ERROR: Must have at least one ice layer' + write(nu_diag,*) 'ERROR: nilyr =', nilyr + call abort_ice (error_message=subname//' Not enough ice layers', & + file=__FILE__, line=__LINE__) endif if (nslyr < 1) then - write (nu_diag,*) 'nslyr =', nslyr - write (nu_diag,*) 'Must have at least one snow layer' - call abort_ice('ice_init: Not enough snow layers') + write(nu_diag,*) 'ERROR: Must have at least one snow layer' + write(nu_diag,*) 'ERROR: nslyr =', nslyr + call abort_ice(error_message=subname//' Not enough snow layers', & + file=__FILE__, line=__LINE__) endif if (.not.heat_capacity) then - write (nu_diag,*) 'WARNING - Zero-layer thermodynamics' - if (nilyr > 1) then - write (nu_diag,*) 'nilyr =', nilyr - write (nu_diag,*) & - 'Must have nilyr = 1 if ktherm = 0' - call abort_ice('ice_init: Too many ice layers') + write(nu_diag,*) 'ERROR: Must have nilyr = 1 if heat_capacity=F' + write(nu_diag,*) 'ERROR: nilyr =', nilyr + call abort_ice(error_message=subname//' Too many ice layers', & + file=__FILE__, line=__LINE__) endif if (nslyr > 1) then - write (nu_diag,*) 
'nslyr =', nslyr - write (nu_diag,*) & - 'Must have nslyr = 1 if heat_capacity = F' - call abort_ice('ice_init: Too many snow layers') + write(nu_diag,*) 'ERROR: Must have nslyr = 1 if heat_capacity=F' + write(nu_diag,*) 'ERROR: nslyr =', nslyr + call abort_ice(error_message=subname//' Too many snow layers', & + file=__FILE__, line=__LINE__) endif endif ! heat_capacity = F @@ -1439,7 +1435,7 @@ subroutine init_state !$OMP END PARALLEL DO call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) end subroutine init_state @@ -1544,6 +1540,8 @@ subroutine set_state_var (nx_block, ny_block, & integer (kind=int_kind) :: nt_Tsfc, nt_qice, nt_qsno, nt_sice integer (kind=int_kind) :: nt_fbri, nt_alvl, nt_vlvl + character(len=*), parameter :: subname='(set_state_var)' + call icepack_query_tracer_flags(tr_brine_out=tr_brine, tr_lvl_out=tr_lvl) call icepack_query_tracer_indices( nt_Tsfc_out=nt_Tsfc, nt_qice_out=nt_qice, & nt_qsno_out=nt_qsno, nt_sice_out=nt_sice, & @@ -1551,7 +1549,7 @@ subroutine set_state_var (nx_block, ny_block, & call icepack_query_parameters(rhos_out=rhos, Lfresh_out=Lfresh, puny_out=puny, & rad_to_deg_out=rad_to_deg) call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) indxi(:) = 0 @@ -1730,7 +1728,7 @@ subroutine set_state_var (nx_block, ny_block, & endif ! 
ice_ic call icepack_warnings_flush(nu_diag) - if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + if (icepack_warnings_aborted()) call abort_ice(error_message=subname, & file=__FILE__, line=__LINE__) end subroutine set_state_var diff --git a/cicecore/cicedynB/infrastructure/comm/mpi/ice_communicate.F90 b/cicecore/cicedynB/infrastructure/comm/mpi/ice_communicate.F90 index ad02cb771..4ee877346 100644 --- a/cicecore/cicedynB/infrastructure/comm/mpi/ice_communicate.F90 +++ b/cicecore/cicedynB/infrastructure/comm/mpi/ice_communicate.F90 @@ -29,6 +29,7 @@ module ice_communicate public :: init_communicate, & get_num_procs, & + ice_barrier, & create_communicator integer (int_kind), public :: & @@ -81,7 +82,7 @@ subroutine init_communicate(mpicom) #if (defined key_oasis3 || defined key_oasis3mct || defined key_oasis4) ice_comm = localComm ! communicator from NEMO/OASISn #elif defined key_iomput - ice_comm = mpi_comm_opa ! communicator from NEMO/XIOS + ice_comm = mpi_comm_opa ! communicator from NEMO/XIOS #else ice_comm = MPI_COMM_WORLD ! Global communicator #endif @@ -129,6 +130,28 @@ function get_num_procs() end function get_num_procs +!*********************************************************************** + + subroutine ice_barrier() + +! This subroutine calls MPI_BARRIER + +!----------------------------------------------------------------------- +! +! local variables +!
+!----------------------------------------------------------------------- + + integer (int_kind) :: ierr + +!----------------------------------------------------------------------- + + call MPI_BARRIER(MPI_COMM_ICE, ierr) + +!----------------------------------------------------------------------- + + end subroutine ice_barrier + !*********************************************************************** subroutine create_communicator(new_comm, num_procs) diff --git a/cicecore/cicedynB/infrastructure/comm/serial/ice_boundary.F90 b/cicecore/cicedynB/infrastructure/comm/serial/ice_boundary.F90 index 3a54aecb8..ef21fa52b 100644 --- a/cicecore/cicedynB/infrastructure/comm/serial/ice_boundary.F90 +++ b/cicecore/cicedynB/infrastructure/comm/serial/ice_boundary.F90 @@ -658,7 +658,7 @@ subroutine ice_HaloUpdate2DR8(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,nmsg, &! dummy loop indices + i,j,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) iSrc,jSrc, &! source addresses for message iDst,jDst, &! dest addresses for message @@ -689,6 +689,23 @@ subroutine ice_HaloUpdate2DR8(array, halo, & bufTripoleR8 = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:) = fill + array(1:nx_block,ny_block-j+1,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:) = fill + array(nx_block-i+1,1:ny_block,:) = fill + enddo + !----------------------------------------------------------------------- ! ! 
do local copies while waiting for messages to complete @@ -928,7 +945,7 @@ subroutine ice_HaloUpdate2DR4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,nmsg, &! dummy loop indices + i,j,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) iSrc,jSrc, &! source addresses for message iDst,jDst, &! dest addresses for message @@ -959,6 +976,23 @@ subroutine ice_HaloUpdate2DR4(array, halo, & bufTripoleR4 = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:) = fill + array(1:nx_block,ny_block-j+1,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:) = fill + array(nx_block-i+1,1:ny_block,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies while waiting for messages to complete @@ -1198,7 +1232,7 @@ subroutine ice_HaloUpdate2DI4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,nmsg, &! dummy loop indices + i,j,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) iSrc,jSrc, &! source addresses for message iDst,jDst, &! dest addresses for message @@ -1229,6 +1263,23 @@ subroutine ice_HaloUpdate2DI4(array, halo, & bufTripoleI4 = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! 
+!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:) = fill + array(1:nx_block,ny_block-j+1,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:) = fill + array(nx_block-i+1,1:ny_block,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies while waiting for messages to complete @@ -1468,7 +1519,7 @@ subroutine ice_HaloUpdate3DR8(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,nmsg, &! dummy loop indices + i,j,k,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, &! size of array in 3rd dimension iSrc,jSrc, &! source addresses for message @@ -1506,6 +1557,23 @@ subroutine ice_HaloUpdate3DR8(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -1764,7 +1832,7 @@ subroutine ice_HaloUpdate3DR4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,nmsg, &! dummy loop indices + i,j,k,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, &! size of array in 3rd dimension iSrc,jSrc, &! source addresses for message @@ -1802,6 +1870,23 @@ subroutine ice_HaloUpdate3DR4(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! 
fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -2060,7 +2145,7 @@ subroutine ice_HaloUpdate3DI4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,nmsg, &! dummy loop indices + i,j,k,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, &! size of array in 3rd dimension iSrc,jSrc, &! source addresses for message @@ -2098,6 +2183,23 @@ subroutine ice_HaloUpdate3DI4(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -2356,7 +2458,7 @@ subroutine ice_HaloUpdate4DR8(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,l,nmsg, &! dummy loop indices + i,j,k,l,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, nt, &! size of array in 3rd,4th dimensions iSrc,jSrc, &! 
source addresses for message @@ -2395,6 +2497,23 @@ subroutine ice_HaloUpdate4DR8(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -2669,7 +2788,7 @@ subroutine ice_HaloUpdate4DR4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,l,nmsg, &! dummy loop indices + i,j,k,l,nmsg, &! dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, nt, &! size of array in 3rd,4th dimensions iSrc,jSrc, &! source addresses for message @@ -2708,6 +2827,23 @@ subroutine ice_HaloUpdate4DR4(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -2982,7 +3118,7 @@ subroutine ice_HaloUpdate4DI4(array, halo, & !----------------------------------------------------------------------- integer (int_kind) :: & - i,k,l,nmsg, &! dummy loop indices + i,j,k,l,nmsg, &! 
dummy loop indices nxGlobal, &! global domain size in x (tripole) nz, nt, &! size of array in 3rd,4th dimensions iSrc,jSrc, &! source addresses for message @@ -3021,6 +3157,23 @@ subroutine ice_HaloUpdate4DI4(array, halo, & bufTripole = fill endif +!----------------------------------------------------------------------- +! +! fill out halo region +! needed for masked halos to ensure halo values are filled for +! halo grid cells that are not updated +! +!----------------------------------------------------------------------- + + do j = 1,nghost + array(1:nx_block, j,:,:,:) = fill + array(1:nx_block,ny_block-j+1,:,:,:) = fill + enddo + do i = 1,nghost + array(i, 1:ny_block,:,:,:) = fill + array(nx_block-i+1,1:ny_block,:,:,:) = fill + enddo + !----------------------------------------------------------------------- ! ! do local copies @@ -3321,6 +3474,21 @@ subroutine ice_HaloUpdate_stress(array1, array2, halo, & bufTripoleR8 = fill endif +!----------------------------------------------------------------------- +! +! do NOT zero the halo out, this halo update just updates +! the tripole zipper as needed for stresses. if you zero +! it out, all halo values will be wiped out. +!----------------------------------------------------------------------- +! do j = 1,nghost +! array1(1:nx_block, j,:) = fill +! array1(1:nx_block,ny_block-j+1,:) = fill +! enddo +! do i = 1,nghost +! array1(i, 1:ny_block,:) = fill +! array1(nx_block-i+1,1:ny_block,:) = fill +! enddo + !----------------------------------------------------------------------- ! ! 
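Review note: the hunks above add the same ghost-cell pre-fill to every `ice_HaloUpdate*` variant (2D/3D/4D, R8/R4/I4). A minimal sketch of that indexing in Python — a hedged illustration, not CICE code; `field` stands in for the Fortran `array(i,j)` block as a list of `nx_block` rows of length `ny_block`:

```python
def fill_halo(field, nghost, fill):
    """Pre-fill all halo (ghost) cells of a 2-D block with `fill`.

    With masked halo updates, ghost cells that receive no message would
    keep stale values; resetting every ghost row/column first guarantees
    the halo is well defined after the exchange.
    """
    ni = len(field)        # nx_block
    nj = len(field[0])     # ny_block
    for g in range(nghost):
        for i in range(ni):
            field[i][g] = fill              # array(1:nx_block, j)
            field[i][nj - 1 - g] = fill     # array(1:nx_block, ny_block-j+1)
        for j in range(nj):
            field[g][j] = fill              # array(i, 1:ny_block)
            field[ni - 1 - g][j] = fill     # array(nx_block-i+1, 1:ny_block)
    return field
```

Note that `ice_HaloUpdate_stress` deliberately leaves this block commented out: it only refreshes the tripole zipper, so zeroing would wipe halo values that are never refilled.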
do local copies diff --git a/cicecore/cicedynB/infrastructure/comm/serial/ice_communicate.F90 b/cicecore/cicedynB/infrastructure/comm/serial/ice_communicate.F90 index 86fb3a015..416ba8859 100644 --- a/cicecore/cicedynB/infrastructure/comm/serial/ice_communicate.F90 +++ b/cicecore/cicedynB/infrastructure/comm/serial/ice_communicate.F90 @@ -18,6 +18,7 @@ module ice_communicate public :: init_communicate, & get_num_procs, & + ice_barrier, & create_communicator integer (int_kind), public :: & @@ -101,6 +102,24 @@ function get_num_procs() end function get_num_procs +!*********************************************************************** + + subroutine ice_barrier() + +! This function is an MPI_BARRIER on the MPI side + +!----------------------------------------------------------------------- +! +! serial execution, no-op +! +!----------------------------------------------------------------------- + + ! do nothing + +!----------------------------------------------------------------------- + + end subroutine ice_barrier + !*********************************************************************** subroutine create_communicator(new_comm, num_procs) diff --git a/cicecore/cicedynB/infrastructure/comm/serial/ice_exit.F90 b/cicecore/cicedynB/infrastructure/comm/serial/ice_exit.F90 index a79170863..c09556b90 100644 --- a/cicecore/cicedynB/infrastructure/comm/serial/ice_exit.F90 +++ b/cicecore/cicedynB/infrastructure/comm/serial/ice_exit.F90 @@ -8,6 +8,7 @@ module ice_exit + use ice_kinds_mod use ice_fileunits, only: nu_diag, flush_fileunit use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted #ifdef CESMCOUPLED diff --git a/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 b/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 index 23bd3c527..3a4ca326f 100644 --- a/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 +++ b/cicecore/cicedynB/infrastructure/comm/serial/ice_timers.F90 @@ -9,13 +9,14 @@ module ice_timers ! 
Replaced 'stdout' by 'nu_diag' use ice_kinds_mod - use ice_constants, only: c0, c1, bignum + use ice_constants, only: c0, c1 use ice_domain, only: nblocks, distrb_info use ice_global_reductions, only: global_minval, global_maxval, global_sum use ice_exit, only: abort_ice use ice_fileunits, only: nu_diag use ice_communicate, only: my_task, master_task use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted + use icepack_intfc, only: icepack_query_parameters implicit none private @@ -558,6 +559,7 @@ subroutine ice_timer_print(timer_id,stats) ! when this routine is called real (dbl_kind) :: & + bignum, &! big number local_time, &! temp space for holding local timer results min_time, &! minimum accumulated time max_time, &! maximum accumulated time @@ -579,6 +581,11 @@ subroutine ice_timer_print(timer_id,stats) ! !----------------------------------------------------------------------- + call icepack_query_parameters(bignum_out=bignum) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + if (all_timers(timer_id)%in_use) then if (all_timers(timer_id)%node_started) then call ice_timer_stop(timer_id) diff --git a/cicecore/cicedynB/infrastructure/ice_domain.F90 b/cicecore/cicedynB/infrastructure/ice_domain.F90 index 8bd7c2d18..f79eeef61 100644 --- a/cicecore/cicedynB/infrastructure/ice_domain.F90 +++ b/cicecore/cicedynB/infrastructure/ice_domain.F90 @@ -63,8 +63,8 @@ module ice_domain character (char_len) :: & distribution_type, &! method to use for distributing blocks - ! 'cartesian' - ! 'rake' + ! 'cartesian', 'roundrobin', 'sectrobin', 'sectcart' + ! 'rake', 'spacecurve', etc distribution_wght ! method for weighting work per block ! 'block' = POP default configuration ! 'latitude' = no. ocean points * |lat| @@ -476,14 +476,15 @@ subroutine init_domain_distribution(KMTG,ULATG) if (nblocks_max > max_blocks) then write(outstring,*) & - 'ice: no. 
blocks exceed max: increase max to', nblocks_max - call abort_ice(trim(outstring)) + 'ERROR: no. blocks exceed max: increase max to', nblocks_max + call abort_ice('(init_domain_distribution) '//trim(outstring), & + file=__FILE__, line=__LINE__) else if (nblocks_max < max_blocks) then write(outstring,*) & - 'ice: no. blocks too large: decrease max to', nblocks_max + 'WARNING: no. blocks too large: decrease max to', nblocks_max if (my_task == master_task) then write(nu_diag,*) ' ********WARNING***********' - write(nu_diag,*) trim(outstring) + write(nu_diag,*) '(init_domain_distribution) ',trim(outstring) write(nu_diag,*) ' **************************' write(nu_diag,*) ' ' endif diff --git a/cicecore/cicedynB/infrastructure/ice_grid.F90 b/cicecore/cicedynB/infrastructure/ice_grid.F90 index 058b551fc..62bed9a4b 100644 --- a/cicecore/cicedynB/infrastructure/ice_grid.F90 +++ b/cicecore/cicedynB/infrastructure/ice_grid.F90 @@ -2269,7 +2269,7 @@ subroutine read_basalstress_bathy write (nu_diag,*) ' ' write (nu_diag,*) 'Initial ice file: ', trim(init_file) write (*,*) 'Initial ice file: ', trim(init_file) -! call flush(nu_diag) + call icepack_warnings_flush(nu_diag) endif @@ -2280,7 +2280,7 @@ subroutine read_basalstress_bathy if (my_task == master_task) then write(nu_diag,*) 'reading ',TRIM(fieldname) write(*,*) 'reading ',TRIM(fieldname) -! call flush(nu_diag) + call icepack_warnings_flush(nu_diag) endif call ice_read_nc(fid_init,1,fieldname,bathymetry,diag, & field_loc=field_loc_center, & @@ -2290,7 +2290,7 @@ subroutine read_basalstress_bathy if (my_task == master_task) then write(nu_diag,*) 'closing file ',TRIM(init_file) -!
call flush(nu_diag) + call icepack_warnings_flush(nu_diag) endif end subroutine read_basalstress_bathy diff --git a/cicecore/cicedynB/infrastructure/ice_read_write.F90 b/cicecore/cicedynB/infrastructure/ice_read_write.F90 index c58ad4a38..9c3beede2 100644 --- a/cicecore/cicedynB/infrastructure/ice_read_write.F90 +++ b/cicecore/cicedynB/infrastructure/ice_read_write.F90 @@ -1153,7 +1153,7 @@ subroutine ice_read_nc_xy(fid, nrec, varname, work, diag, & amin = minval(work_g1) amax = maxval(work_g1, mask = work_g1 /= spval_dbl) asum = sum (work_g1, mask = work_g1 /= spval_dbl) - write(nu_diag,*) ' min, max, sum =', amin, amax, asum + write(nu_diag,*) ' min, max, sum =', amin, amax, asum, trim(varname) endif !------------------------------------------------------------------- @@ -1327,7 +1327,7 @@ subroutine ice_read_nc_xyz(fid, nrec, varname, work, diag, & amin = minval(work_g1(:,:,n)) amax = maxval(work_g1(:,:,n), mask = work_g1(:,:,n) /= spval_dbl) asum = sum (work_g1(:,:,n), mask = work_g1(:,:,n) /= spval_dbl) - write(nu_diag,*) ' min, max, sum =', amin, amax, asum + write(nu_diag,*) ' min, max, sum =', amin, amax, asum, trim(varname) enddo endif @@ -1560,7 +1560,7 @@ end subroutine ice_read_nc_z ! Adapted by David Bailey, NCAR subroutine ice_write_nc_xy(fid, nrec, varid, work, diag, & - restart_ext) + restart_ext, varname) use ice_gather_scatter, only: gather_global, gather_global_ext @@ -1579,6 +1579,9 @@ subroutine ice_write_nc_xy(fid, nrec, varid, work, diag, & intent(in) :: & work ! output array (real, 8-byte) + character (len=*), optional, intent(in) :: & + varname ! variable name + ! local variables #ifdef ncdf @@ -1592,8 +1595,9 @@ subroutine ice_write_nc_xy(fid, nrec, varid, work, diag, & real (kind=dbl_kind) :: & amin, amax, asum ! min, max values and sum of input array -! character (char_len) :: & -! dimname ! dimension name + character (char_len) :: & + lvarname, & ! variable name + dimname ! 
dimension name real (kind=dbl_kind), dimension(:,:), allocatable :: & work_g1 @@ -1610,6 +1614,12 @@ subroutine ice_write_nc_xy(fid, nrec, varid, work, diag, & endif endif + if (present(varname)) then + lvarname = trim(varname) + else + lvarname = ' ' + endif + if (my_task == master_task) then allocate(work_g1(nx,ny)) else @@ -1653,7 +1663,7 @@ subroutine ice_write_nc_xy(fid, nrec, varid, work, diag, & amin = minval(work_g1) amax = maxval(work_g1, mask = work_g1 /= spval_dbl) asum = sum (work_g1, mask = work_g1 /= spval_dbl) - write(nu_diag,*) ' min, max, sum =', amin, amax, asum + write(nu_diag,*) ' min, max, sum =', amin, amax, asum, trim(lvarname) endif deallocate(work_g1) @@ -1668,7 +1678,7 @@ end subroutine ice_write_nc_xy ! Adapted by David Bailey, NCAR subroutine ice_write_nc_xyz(fid, nrec, varid, work, diag, & - restart_ext) + restart_ext, varname) use ice_gather_scatter, only: gather_global, gather_global_ext @@ -1687,6 +1697,9 @@ subroutine ice_write_nc_xyz(fid, nrec, varid, work, diag, & intent(in) :: & work ! output array (real, 8-byte) + character (len=*), optional, intent(in) :: & + varname ! variable name + ! local variables #ifdef ncdf @@ -1701,8 +1714,9 @@ subroutine ice_write_nc_xyz(fid, nrec, varid, work, diag, & real (kind=dbl_kind) :: & amin, amax, asum ! min, max values and sum of input array -! character (char_len) :: & -! dimname ! dimension name + character (char_len) :: & + lvarname, & ! variable name + dimname ! 
dimension name real (kind=dbl_kind), dimension(:,:,:), allocatable :: & work_g1 @@ -1739,6 +1753,12 @@ subroutine ice_write_nc_xyz(fid, nrec, varid, work, diag, & enddo endif + if (present(varname)) then + lvarname = trim(varname) + else + lvarname = ' ' + endif + if (my_task == master_task) then !-------------------------------------------------------------- @@ -1771,7 +1791,7 @@ subroutine ice_write_nc_xyz(fid, nrec, varid, work, diag, & amin = minval(work_g1(:,:,n)) amax = maxval(work_g1(:,:,n), mask = work_g1(:,:,n) /= spval_dbl) asum = sum (work_g1(:,:,n), mask = work_g1(:,:,n) /= spval_dbl) - write(nu_diag,*) ' min, max, sum =', amin, amax, asum + write(nu_diag,*) ' min, max, sum =', amin, amax, asum, trim(lvarname) enddo endif @@ -1882,7 +1902,7 @@ subroutine ice_read_global_nc (fid, nrec, varname, work_g, diag) amin = minval(work_g) amax = maxval(work_g, mask = work_g /= spval_dbl) asum = sum (work_g, mask = work_g /= spval_dbl) - write(nu_diag,*) 'min, max, sum = ', amin, amax, asum + write(nu_diag,*) 'min, max, sum = ', amin, amax, asum, trim(varname) endif #ifdef ORCA_GRID @@ -2023,7 +2043,7 @@ subroutine ice_read_nc_uv(fid, nrec, nzlev, varname, work, diag, & amin = minval(work_g1) amax = maxval(work_g1, mask = work_g1 /= spval_dbl) asum = sum (work_g1, mask = work_g1 /= spval_dbl) - write(nu_diag,*) ' min, max, sum =', amin, amax, asum + write(nu_diag,*) ' min, max, sum =', amin, amax, asum, trim(varname) endif !------------------------------------------------------------------- diff --git a/cicecore/cicedynB/infrastructure/ice_restoring.F90 b/cicecore/cicedynB/infrastructure/ice_restoring.F90 index 4989a4966..58c6d35a5 100644 --- a/cicecore/cicedynB/infrastructure/ice_restoring.F90 +++ b/cicecore/cicedynB/infrastructure/ice_restoring.F90 @@ -57,7 +57,7 @@ subroutine ice_HaloRestore_init use ice_communicate, only: my_task, master_task use ice_domain, only: ew_boundary_type, ns_boundary_type, & nblocks, blocks_ice - use ice_grid, only: tmask + use 
ice_grid, only: tmask, hm use ice_flux, only: sst, Tf, Tair, salinz, Tmltz use ice_restart_shared, only: restart_ext @@ -85,11 +85,11 @@ subroutine ice_HaloRestore_init if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & file=__FILE__, line=__LINE__) - if (ew_boundary_type == 'open' .and. & - ns_boundary_type == 'open' .and. .not.(restart_ext)) then - if (my_task == master_task) write (nu_diag,*) & - 'WARNING: Setting restart_ext = T for open boundaries' - restart_ext = .true. + if ((ew_boundary_type == 'open' .or. & + ns_boundary_type == 'open') .and. .not.(restart_ext)) then + if (my_task == master_task) write (nu_diag,*) 'ERROR: restart_ext=F and open boundaries' + call abort_ice(error_message="subname"//'open boundary and restart_ext=F', & + file=__FILE__, line=__LINE__) endif allocate (aicen_rest(nx_block,ny_block,ncat,max_blocks), & @@ -253,6 +253,26 @@ subroutine ice_HaloRestore_init endif ! restore_ic + !----------------------------------------------------------------- + ! 
Impose land mask + !----------------------------------------------------------------- + + do iblk = 1, nblocks + do n = 1, ncat + do j = 1, ny_block + do i = 1, nx_block + aicen_rest(i,j,n,iblk) = aicen_rest(i,j,n,iblk) * hm(i,j,iblk) + vicen_rest(i,j,n,iblk) = vicen_rest(i,j,n,iblk) * hm(i,j,iblk) + vsnon_rest(i,j,n,iblk) = vsnon_rest(i,j,n,iblk) * hm(i,j,iblk) + do nt = 1, ntrcr + trcrn_rest(i,j,nt,n,iblk) = trcrn_rest(i,j,nt,n,iblk) & + * hm(i,j,iblk) + enddo + enddo + enddo + enddo + enddo + if (my_task == master_task) & write (nu_diag,*) 'ice restoring timescale = ',trestore,' days' diff --git a/cicecore/cicedynB/infrastructure/io/io_binary/ice_restart.F90 b/cicecore/cicedynB/infrastructure/io/io_binary/ice_restart.F90 index 71e3dbada..ccae5051d 100644 --- a/cicecore/cicedynB/infrastructure/io/io_binary/ice_restart.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_binary/ice_restart.F90 @@ -8,15 +8,21 @@ module ice_restart use ice_broadcast - use ice_exit, only: abort_ice - use ice_fileunits use ice_kinds_mod use ice_restart_shared, only: & restart, restart_ext, restart_dir, restart_file, pointer_file, & runid, runtype, use_restart_time, restart_format, lenstr - use icepack_intfc, only: tr_iage, tr_FY, tr_lvl, tr_aero, tr_pond_cesm, & - tr_pond_topo, tr_pond_lvl, tr_brine, nbtrcr - use icepack_intfc, only: solve_zsal + use ice_fileunits, only: nu_diag, nu_rst_pointer + use ice_fileunits, only: nu_dump, nu_dump_eap, nu_dump_FY, nu_dump_age + use ice_fileunits, only: nu_dump_lvl, nu_dump_pond, nu_dump_hbrine + use ice_fileunits, only: nu_dump_bgc, nu_dump_aero, nu_dump_age + use ice_fileunits, only: nu_restart, nu_restart_eap, nu_restart_FY, nu_restart_age + use ice_fileunits, only: nu_restart_lvl, nu_restart_pond, nu_restart_hbrine + use ice_fileunits, only: nu_restart_bgc, nu_restart_aero, nu_restart_age + use ice_exit, only: abort_ice + use icepack_intfc, only: icepack_query_parameters + use icepack_intfc, only: icepack_query_tracer_numbers + use 
icepack_intfc, only: icepack_query_tracer_flags use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted implicit none @@ -35,7 +41,7 @@ module ice_restart subroutine init_restart_read(ice_ic) - use ice_calendar, only: istep0, istep1, time, time_forc, npt + use ice_calendar, only: istep0, istep1, time, time_forc, npt, nyr use ice_communicate, only: my_task, master_task use ice_dyn_shared, only: kdyn use ice_read_write, only: ice_open, ice_open_ext @@ -44,11 +50,17 @@ subroutine init_restart_read(ice_ic) ! local variables + logical (kind=log_kind) :: & + solve_zsal, & + tr_iage, tr_FY, tr_lvl, tr_aero, tr_pond_cesm, & + tr_pond_topo, tr_pond_lvl, tr_brine + character(len=char_len_long) :: & filename, filename0 integer (kind=int_kind) :: & n, & ! loop indices + nbtrcr, & ! number of bgc tracers iignore ! dummy variable real (kind=real_kind) :: & @@ -57,6 +69,18 @@ subroutine init_restart_read(ice_ic) character(len=char_len_long) :: & string1, string2 + call icepack_query_parameters( & + solve_zsal_out=solve_zsal) + call icepack_query_tracer_numbers( & + nbtrcr_out=nbtrcr) + call icepack_query_tracer_flags( & + tr_iage_out=tr_iage, tr_FY_out=tr_FY, tr_lvl_out=tr_lvl, & + tr_aero_out=tr_aero, tr_pond_cesm_out=tr_pond_cesm, & + tr_pond_topo_out=tr_pond_topo, tr_pond_lvl_out=tr_pond_lvl, tr_brine_out=tr_brine) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + if (present(ice_ic)) then filename = trim(ice_ic) else @@ -78,7 +102,7 @@ subroutine init_restart_read(ice_ic) call ice_open(nu_restart,trim(filename),0) endif if (use_restart_time) then - read (nu_restart) istep0,time,time_forc + read (nu_restart) istep0,time,time_forc,nyr else read (nu_restart) iignore,rignore,rignore ! 
use namelist values endif @@ -88,6 +112,7 @@ subroutine init_restart_read(ice_ic) call broadcast_scalar(istep0,master_task) call broadcast_scalar(time,master_task) call broadcast_scalar(time_forc,master_task) + call broadcast_scalar(nyr,master_task) istep1 = istep0 @@ -317,11 +342,29 @@ subroutine init_restart_write(filename_spec) ! local variables + logical (kind=log_kind) :: & + solve_zsal, & + tr_iage, tr_FY, tr_lvl, tr_aero, tr_pond_cesm, & + tr_pond_topo, tr_pond_lvl, tr_brine + integer (kind=int_kind) :: & - iyear, imonth, iday ! year, month, day + nbtrcr, & ! number of bgc tracers + iyear, imonth, iday ! year, month, day character(len=char_len_long) :: filename + call icepack_query_parameters( & + solve_zsal_out=solve_zsal) + call icepack_query_tracer_numbers( & + nbtrcr_out=nbtrcr) + call icepack_query_tracer_flags( & + tr_iage_out=tr_iage, tr_FY_out=tr_FY, tr_lvl_out=tr_lvl, & + tr_aero_out=tr_aero, tr_pond_cesm_out=tr_pond_cesm, & + tr_pond_topo_out=tr_pond_topo, tr_pond_lvl_out=tr_pond_lvl, tr_brine_out=tr_brine) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + ! construct path/file if (present(filename_spec)) then filename = trim(filename_spec) @@ -346,7 +389,7 @@ subroutine init_restart_write(filename_spec) else call ice_open(nu_dump,filename,0) endif - write(nu_dump) istep1,time,time_forc + write(nu_dump) istep1,time,time_forc,nyr write(nu_diag,*) 'Writing ',filename(1:lenstr(filename)) endif @@ -680,7 +723,26 @@ subroutine final_restart() use ice_calendar, only: istep1, time, time_forc use ice_communicate, only: my_task, master_task - integer (kind=int_kind) :: status + logical (kind=log_kind) :: & + solve_zsal, & + tr_iage, tr_FY, tr_lvl, tr_aero, tr_pond_cesm, & + tr_pond_topo, tr_pond_lvl, tr_brine + + integer (kind=int_kind) :: & + nbtrcr, & ! 
number of bgc tracers + status + + call icepack_query_parameters( & + solve_zsal_out=solve_zsal) + call icepack_query_tracer_numbers( & + nbtrcr_out=nbtrcr) + call icepack_query_tracer_flags( & + tr_iage_out=tr_iage, tr_FY_out=tr_FY, tr_lvl_out=tr_lvl, & + tr_aero_out=tr_aero, tr_pond_cesm_out=tr_pond_cesm, & + tr_pond_topo_out=tr_pond_topo, tr_pond_lvl_out=tr_pond_lvl, tr_brine_out=tr_brine) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) if (my_task == master_task) then close(nu_dump) diff --git a/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_history_write.F90 b/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_history_write.F90 index faae0d9ad..8e0f8cade 100644 --- a/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_history_write.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_history_write.F90 @@ -813,7 +813,7 @@ subroutine ice_write_hist (ns) if (status /= nf90_noerr) call abort_ice( & 'ice Error: global attribute contents') - title = 'Los Alamos Sea Ice Model (CICE) Version 5' + write(title,'(2a)') 'Los Alamos Sea Ice Model, ', trim(version_name) status = nf90_put_att(ncid,nf90_global,'source',title) if (status /= nf90_noerr) call abort_ice( & 'ice Error: global attribute source') diff --git a/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_restart.F90 b/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_restart.F90 index 8dd4de296..70df416ea 100644 --- a/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_restart.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_netcdf/ice_restart.F90 @@ -88,6 +88,7 @@ subroutine init_restart_read(ice_ic) call broadcast_scalar(istep0,master_task) call broadcast_scalar(time,master_task) call broadcast_scalar(time_forc,master_task) + call broadcast_scalar(nyr,master_task) istep1 = istep0 @@ -131,10 +132,10 @@ subroutine init_restart_write(filename_spec) tr_bgc_hum integer (kind=int_kind) :: & - k, n, & ! 
index - nx, ny, & ! global array size - iyear, & ! year - nbtrcr + k, n, & ! index + nx, ny, & ! global array size + iyear, imonth, iday, & ! year, month, day + nbtrcr ! number of bgc tracers character(len=char_len_long) :: filename @@ -728,16 +729,16 @@ subroutine write_restart_field(nu,nrec,work,atype,vname,ndim3,diag) status = nf90_inq_varid(ncid,trim(vname),varid) if (ndim3 == ncat) then if (restart_ext) then - call ice_write_nc(ncid, 1, varid, work, diag, restart_ext) + call ice_write_nc(ncid, 1, varid, work, diag, restart_ext, varname=trim(vname)) else - call ice_write_nc(ncid, 1, varid, work, diag) + call ice_write_nc(ncid, 1, varid, work, diag, varname=trim(vname)) endif elseif (ndim3 == 1) then work2(:,:,:) = work(:,:,1,:) if (restart_ext) then - call ice_write_nc(ncid, 1, varid, work2, diag, restart_ext) + call ice_write_nc(ncid, 1, varid, work2, diag, restart_ext, varname=trim(vname)) else - call ice_write_nc(ncid, 1, varid, work2, diag) + call ice_write_nc(ncid, 1, varid, work2, diag, varname=trim(vname)) endif else write(nu_diag,*) 'ndim3 not supported',ndim3 diff --git a/cicecore/cicedynB/infrastructure/io/io_pio/ice_history_write.F90 b/cicecore/cicedynB/infrastructure/io/io_pio/ice_history_write.F90 index a943c31a4..981a708eb 100644 --- a/cicecore/cicedynB/infrastructure/io/io_pio/ice_history_write.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_pio/ice_history_write.F90 @@ -686,7 +686,7 @@ subroutine ice_write_hist (ns) title = 'Diagnostic and Prognostic Variables' status = pio_put_att(File,pio_global,'contents',trim(title)) - title = 'Los Alamos Sea Ice Model (CICE) Version 5' + write(title,'(2a)') 'Los Alamos Sea Ice Model, ', trim(version_name) status = pio_put_att(File,pio_global,'source',trim(title)) if (use_leap_years) then diff --git a/cicecore/cicedynB/infrastructure/io/io_pio/ice_restart.F90 b/cicecore/cicedynB/infrastructure/io/io_pio/ice_restart.F90 index 41657e2c5..946af16b2 100644 --- 
a/cicecore/cicedynB/infrastructure/io/io_pio/ice_restart.F90 +++ b/cicecore/cicedynB/infrastructure/io/io_pio/ice_restart.F90 @@ -99,6 +99,7 @@ subroutine init_restart_read(ice_ic) call broadcast_scalar(istep0,master_task) call broadcast_scalar(time,master_task) call broadcast_scalar(time_forc,master_task) + call broadcast_scalar(nyr,master_task) istep1 = istep0 diff --git a/cicecore/drivers/cice/CICE_RunMod.F90_debug b/cicecore/drivers/cice/CICE_RunMod.F90_debug index e55cc3412..3d7a82311 100644 --- a/cicecore/drivers/cice/CICE_RunMod.F90_debug +++ b/cicecore/drivers/cice/CICE_RunMod.F90_debug @@ -15,11 +15,19 @@ module CICE_RunMod use ice_kinds_mod + use ice_fileunits, only: nu_diag + use ice_arrays_column, only: oceanmixed_ice + use ice_constants, only: c0, c1 + use ice_constants, only: field_loc_center, field_type_scalar + use ice_exit, only: abort_ice + use icepack_intfc, only: icepack_warnings_flush, icepack_warnings_aborted + use icepack_intfc, only: icepack_max_aero + use icepack_intfc, only: icepack_query_parameters + use icepack_intfc, only: icepack_query_tracer_flags, icepack_query_tracer_numbers implicit none private public :: CICE_Run, ice_step - save !======================================================================= @@ -40,10 +48,10 @@ use ice_forcing_bgc, only: get_forcing_bgc, get_atm_bgc, fzaero_data, & faero_default use ice_flux, only: init_flux_atm, init_flux_ocn - use ice_colpkg_tracers, only: tr_aero, tr_zaero use ice_timers, only: ice_timer_start, ice_timer_stop, & timer_couple, timer_step - use ice_colpkg_shared, only: skl_bgc, z_tracers + logical (kind=log_kind) :: & + tr_aero, tr_zaero, skl_bgc, z_tracers !-------------------------------------------------------------------- ! initialize error code and step timer @@ -51,6 +59,12 @@ call ice_timer_start(timer_step) ! 
start timing entire run + call icepack_query_parameters(skl_bgc_out=skl_bgc, z_tracers_out=z_tracers) + call icepack_query_tracer_flags(tr_aero_out=tr_aero, tr_zaero_out=tr_zaero) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + #ifndef CICE_IN_NEMO !-------------------------------------------------------------------- ! timestep loop @@ -71,21 +85,28 @@ if (stop_now >= 1) exit timeLoop #endif -#ifndef coupled call ice_timer_start(timer_couple) ! atm/ocn coupling + +#ifndef coupled +#ifndef CESMCOUPLED call get_forcing_atmo ! atmospheric forcing from data call get_forcing_ocn(dt) ! ocean forcing from data - ! if (tr_aero) call faero_data ! aerosols - if (tr_aero .or. tr_zaero) call faero_default ! aerosols + + ! aerosols + ! if (tr_aero) call faero_data ! data file + ! if (tr_zaero) call fzaero_data ! data file (gx1) + if (tr_aero .or. tr_zaero) call faero_default ! default values + if (skl_bgc .or. z_tracers) call get_forcing_bgc ! biogeochemistry - if (z_tracers) call get_atm_bgc ! biogeochemistry - !if (tr_zaero) call fzaero_data ! zaerosols, gx1 - call ice_timer_stop(timer_couple) ! atm/ocn coupling #endif +#endif + if (z_tracers) call get_atm_bgc ! biogeochemistry call init_flux_atm ! initialize atmosphere fluxes sent to coupler call init_flux_ocn ! initialize ocean fluxes sent to coupler + call ice_timer_stop(timer_couple) ! 
atm/ocn coupling + #ifndef CICE_IN_NEMO enddo timeLoop #endif @@ -109,7 +130,6 @@ use ice_boundary, only: ice_HaloUpdate use ice_calendar, only: dt, dt_dyn, ndtd, diagfreq, write_restart, istep - use ice_constants, only: field_loc_center, field_type_scalar, c0 use ice_diagnostics, only: init_mass_diags, runtime_diags use ice_diagnostics_bgc, only: hbrine_diags, zsal_diags, bgc_diags use ice_domain, only: halo_info, nblocks @@ -128,12 +148,9 @@ use ice_restart_driver, only: dumpfile use ice_restoring, only: restore_ice, ice_HaloRestore use ice_state, only: trcrn - use ice_colpkg_tracers, only: tr_iage, tr_FY, tr_lvl, & - tr_pond_cesm, tr_pond_lvl, tr_pond_topo, tr_brine, tr_aero use ice_step_mod, only: prep_radiation, step_therm1, step_therm2, & update_state, step_dyn_horiz, step_dyn_ridge, step_radiation, & biogeochemistry - use ice_colpkg_shared, only: calc_Tsfc, skl_bgc, solve_zsal, z_tracers use ice_timers, only: ice_timer_start, ice_timer_stop, & timer_diags, timer_column, timer_thermo, timer_bound, & timer_hist, timer_readwrite @@ -145,6 +162,11 @@ real (kind=dbl_kind) :: & offset ! 
d(age)/dt time offset + logical (kind=log_kind) :: & + tr_iage, tr_FY, tr_lvl, & + tr_pond_cesm, tr_pond_lvl, tr_pond_topo, tr_brine, tr_aero, & + calc_Tsfc, skl_bgc, solve_zsal, z_tracers + character (len=char_len) :: plabeld plabeld = 'beginning time step' @@ -152,6 +174,15 @@ call debug_ice (iblk, plabeld) enddo + call icepack_query_parameters(calc_Tsfc_out=calc_Tsfc, skl_bgc_out=skl_bgc, & + solve_zsal_out=solve_zsal, z_tracers_out=z_tracers) + call icepack_query_tracer_flags(tr_iage_out=tr_iage, tr_FY_out=tr_FY, & + tr_lvl_out=tr_lvl, tr_pond_cesm_out=tr_pond_cesm, tr_pond_lvl_out=tr_pond_lvl, & + tr_pond_topo_out=tr_pond_topo, tr_brine_out=tr_brine, tr_aero_out=tr_aero) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + !----------------------------------------------------------------- ! restoring on grid boundaries !----------------------------------------------------------------- @@ -217,6 +248,7 @@ !----------------------------------------------------------------- do k = 1, ndtd + ! momentum, stress, transport call step_dyn_horiz (dt_dyn) @@ -323,9 +355,6 @@ albicen, albsnon, albpndn, apeffn, fzsal_g, fzsal, snowfracn use ice_blocks, only: block, nx_block, ny_block use ice_calendar, only: dt, nstreams - use ice_colpkg_shared, only: calc_Tsfc, oceanmixed_ice, max_aero - use ice_colpkg_tracers, only: nbtrcr - use ice_constants, only: c0, c1, puny, rhofresh use ice_domain_size, only: ncat use ice_flux, only: alvdf, alidf, alvdr, alidr, albice, albsno, & albpnd, albcnt, apeff_ai, coszen, fpond, fresh, l_mpond_fresh, & @@ -349,12 +378,25 @@ integer (kind=int_kind) :: & n , & ! thickness category index i,j , & ! horizontal indices - k ! tracer index + k , & ! tracer index + nbtrcr ! + + logical (kind=log_kind) :: & + calc_Tsfc ! real (kind=dbl_kind) :: & cszn , & ! counter for history averaging + puny , & ! + rhofresh , & ! netsw ! 
flag for shortwave radiation presence + call icepack_query_parameters(puny_out=puny, rhofresh_out=rhofresh) + call icepack_query_tracer_numbers(nbtrcr_out=nbtrcr) + call icepack_query_parameters(calc_Tsfc_out=calc_Tsfc) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) + !----------------------------------------------------------------- ! Save current value of frzmlt for diagnostics. ! Update mixed layer with heat and radiation from ice. @@ -400,6 +442,8 @@ do n = 1, ncat do j = 1, ny_block do i = 1, nx_block + if (aicen(i,j,n,iblk) > puny) then + alvdf(i,j,iblk) = alvdf(i,j,iblk) & + alvdfn(i,j,n,iblk)*aicen(i,j,n,iblk) alidf(i,j,iblk) = alidf(i,j,iblk) & @@ -424,6 +468,8 @@ + apeffn(i,j,n,iblk)*aicen(i,j,n,iblk) snowfrac(i,j,iblk) = snowfrac(i,j,iblk) & ! for history + snowfracn(i,j,n,iblk)*aicen(i,j,n,iblk) + + endif ! aicen > puny enddo enddo enddo @@ -480,7 +526,7 @@ !----------------------------------------------------------------- call scale_fluxes (nx_block, ny_block, & - tmask (:,:,iblk), nbtrcr, max_aero, & + tmask (:,:,iblk), nbtrcr, icepack_max_aero, & aice (:,:,iblk), Tf (:,:,iblk), & Tair (:,:,iblk), Qa (:,:,iblk), & strairxT (:,:,iblk), strairyT(:,:,iblk), & @@ -562,8 +608,13 @@ i, j, n ! horizontal indices real (kind=dbl_kind) :: & + puny, & ! rLsub ! 
1/Lsub + call icepack_query_parameters(puny_out=puny) + call icepack_warnings_flush(nu_diag) + if (icepack_warnings_aborted()) call abort_ice(error_message="subname", & + file=__FILE__, line=__LINE__) rLsub = c1 / Lsub do n = 1, ncat diff --git a/cicecore/version.txt b/cicecore/version.txt index 84a340b61..8d067b669 100644 --- a/cicecore/version.txt +++ b/cicecore/version.txt @@ -1 +1 @@ -CICE 6.0.0.alpha +CICE 6.0.dev diff --git a/configuration/scripts/cice.batch.csh b/configuration/scripts/cice.batch.csh index da4902771..1693f0062 100755 --- a/configuration/scripts/cice.batch.csh +++ b/configuration/scripts/cice.batch.csh @@ -27,7 +27,7 @@ if (${taskpernodelimit} > ${ntasks}) set taskpernodelimit = ${ntasks} set ptile = $taskpernode if ($ptile > ${maxtpn} / 2) @ ptile = ${maxtpn} / 2 -set queue = "regular" +set queue = "${ICE_QUEUE}" set batchtime = "00:15:00" if (${ICE_RUNLENGTH} > 1) set batchtime = "00:29:00" if (${ICE_RUNLENGTH} > 2) set batchtime = "00:59:00" @@ -67,11 +67,9 @@ cat >> ${jobfile} << EOFB EOFB else if (${ICE_MACHINE} =~ thunder* || ${ICE_MACHINE} =~ gordon* || ${ICE_MACHINE} =~ conrad*) then -set queue = "debug" -if (${ICE_RUNLENGTH} > 1) set queue = "frontier" cat >> ${jobfile} << EOFB #PBS -N ${shortcase} -#PBS -q debug +#PBS -q ${queue} #PBS -A ${acct} #PBS -l select=${nnodes}:ncpus=${maxtpn}:mpiprocs=${taskpernode} #PBS -l walltime=${batchtime} @@ -81,11 +79,9 @@ cat >> ${jobfile} << EOFB EOFB else if (${ICE_MACHINE} =~ onyx*) then -set queue = "debug" -if (${ICE_RUNLENGTH} > 2) set queue = "frontier" cat >> ${jobfile} << EOFB #PBS -N ${ICE_CASENAME} -#PBS -q debug +#PBS -q ${queue} #PBS -A ${acct} #PBS -l select=${nnodes}:ncpus=${maxtpn}:mpiprocs=${taskpernode} #PBS -l walltime=${batchtime} @@ -97,7 +93,7 @@ EOFB else if (${ICE_MACHINE} =~ cori*) then cat >> ${jobfile} << EOFB #SBATCH -J ${ICE_CASENAME} -#SBATCH -p debug +#SBATCH -p ${queue} ###SBATCH -A ${acct} #SBATCH -n ${ncores} #SBATCH -t ${batchtime} @@ -153,6 +149,11 @@ cat >> 
${jobfile} << EOFB # nothing to do EOFB +else if (${ICE_MACHINE} =~ travisCI*) then +cat >> ${jobfile} << EOFB +# nothing to do +EOFB + else echo "${0} ERROR: ${ICE_MACHINE} unknown" exit -1 diff --git a/configuration/scripts/cice.build b/configuration/scripts/cice.build index bec2302b1..cf3b2ce36 100755 --- a/configuration/scripts/cice.build +++ b/configuration/scripts/cice.build @@ -16,6 +16,7 @@ echo "${0}:" set stamp = `date '+%y%m%d-%H%M%S'` set ICE_BLDLOG_FILE = "cice.bldlog.${stamp}" +set quiet = ${ICE_QUIETMODE} if (${ICE_CLEANBUILD} == 'true') then echo "cleaning objdir" @@ -90,18 +91,40 @@ if (-e ${ICE_BLDLOG_FILE}) rm ${ICE_BLDLOG_FILE} if (${ICE_CLEANBUILD} == 'true') then echo "gmake clean" - ${ICE_MACHINE_MAKE} VPFILE=Filepath EXEC=${ICE_RUNDIR}/icepack \ - CPPDEFS="${ICE_CPPDEFS}" \ - -f ${ICE_CASEDIR}/Makefile MACFILE=${ICE_CASEDIR}/Macros.${ICE_MACHCOMP} clean | tee ${ICE_BLDLOG_FILE} + if (${quiet} == "true") then + ${ICE_MACHINE_MAKE} VPFILE=Filepath EXEC=${ICE_RUNDIR}/icepack \ + CPPDEFS="${ICE_CPPDEFS}" \ + -f ${ICE_CASEDIR}/Makefile MACFILE=${ICE_CASEDIR}/Macros.${ICE_MACHCOMP} clean >& ${ICE_BLDLOG_FILE} + else + ${ICE_MACHINE_MAKE} VPFILE=Filepath EXEC=${ICE_RUNDIR}/icepack \ + CPPDEFS="${ICE_CPPDEFS}" \ + -f ${ICE_CASEDIR}/Makefile MACFILE=${ICE_CASEDIR}/Macros.${ICE_MACHCOMP} clean | tee ${ICE_BLDLOG_FILE} + endif endif -${ICE_MACHINE_MAKE} -j ${ICE_MACHINE_BLDTHRDS} VPFILE=Filepath EXEC=${ICE_RUNDIR}/cice \ +echo "gmake cice" +if (${quiet} == "true") then + echo " quiet mode on... 
patience" + ${ICE_MACHINE_MAKE} -j ${ICE_MACHINE_BLDTHRDS} VPFILE=Filepath EXEC=${ICE_RUNDIR}/cice \ + CPPDEFS="${ICE_CPPDEFS}" \ + -f ${ICE_CASEDIR}/Makefile MACFILE=${ICE_CASEDIR}/Macros.${ICE_MACHCOMP} >& ${ICE_BLDLOG_FILE} + set bldstat = ${status} +else + ${ICE_MACHINE_MAKE} -j ${ICE_MACHINE_BLDTHRDS} VPFILE=Filepath EXEC=${ICE_RUNDIR}/cice \ CPPDEFS="${ICE_CPPDEFS}" \ -f ${ICE_CASEDIR}/Makefile MACFILE=${ICE_CASEDIR}/Macros.${ICE_MACHCOMP} | tee ${ICE_BLDLOG_FILE} + set bldstat = ${status} +endif -if ($status != 0) then +if !(-d ${ICE_LOGDIR}) mkdir -p ${ICE_LOGDIR} +cp -p ${ICE_BLDLOG_FILE} ${ICE_LOGDIR}/ + +if (${bldstat} != 0) then echo "${0}: COMPILE FAILED, see" echo " cat ${ICE_OBJDIR}/${ICE_BLDLOG_FILE}" + if (${quiet} == "true") then + tail -10 ${ICE_OBJDIR}/${ICE_BLDLOG_FILE} + endif if ( ${ICE_TEST} != ${ICE_SPVAL} ) then # This is a test case. Write output to test_output file echo "FAIL ${ICE_TESTNAME} build" >> ${ICE_CASEDIR}/test_output @@ -110,8 +133,6 @@ if ($status != 0) then exit 99 endif -if !(-d ${ICE_LOGDIR}) mkdir -p ${ICE_LOGDIR} -cp -p ${ICE_BLDLOG_FILE} ${ICE_LOGDIR}/ echo "`date` ${0}:${ICE_CASENAME} build completed ${ICE_BLDLOG_FILE}" >> ${ICE_CASEDIR}/README.case echo "${0}: COMPILE SUCCESSFUL, ${ICE_LOGDIR}/${ICE_BLDLOG_FILE}" if ( ${ICE_TEST} != ${ICE_SPVAL} ) then diff --git a/configuration/scripts/cice.launch.csh b/configuration/scripts/cice.launch.csh index cdd262024..19c83ec9b 100755 --- a/configuration/scripts/cice.launch.csh +++ b/configuration/scripts/cice.launch.csh @@ -4,11 +4,21 @@ echo "running cice.launch.csh" source ./cice.settings +source ${ICE_CASEDIR}/env.${ICE_MACHCOMP} || exit 2 set jobfile = $1 set ntasks = ${ICE_NTASKS} set nthrds = ${ICE_NTHRDS} +set maxtpn = ${ICE_MACHINE_TPNODE} + +@ ncores = ${ntasks} * ${nthrds} +@ taskpernode = ${maxtpn} / $nthrds +@ nnodes = ${ntasks} / ${taskpernode} +if (${nnodes} * ${taskpernode} < ${ntasks}) @ nnodes = $nnodes + 1 +set taskpernodelimit = ${taskpernode} +if 
(${taskpernodelimit} > ${ntasks}) set taskpernodelimit = ${ntasks} +@ corespernode = ${taskpernodelimit} * ${nthrds} #========================================== @@ -29,12 +39,12 @@ EOFR else if (${ICE_MACHINE} =~ onyx*) then cat >> ${jobfile} << EOFR -aprun -n ${ntasks} -N ${ntasks} -d ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE +aprun -n ${ntasks} -N ${taskpernodelimit} -d ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE EOFR else if (${ICE_MACHINE} =~ gordon* || ${ICE_MACHINE} =~ conrad*) then cat >> ${jobfile} << EOFR -aprun -n ${ntasks} -N ${ntasks} -d ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE +aprun -n ${ntasks} -N ${taskpernodelimit} -d ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE EOFR else if (${ICE_MACHINE} =~ cori*) then @@ -62,6 +72,11 @@ cat >> ${jobfile} << EOFR ./cice >&! \$ICE_RUNLOG_FILE EOFR +else if (${ICE_MACHINE} =~ travisCI*) then +cat >> ${jobfile} << EOFR +mpirun -np ${ntasks} ./cice >&! \$ICE_RUNLOG_FILE +EOFR + #cat >> ${jobfile} << EOFR #srun -n ${ntasks} -c ${nthrds} ./cice >&! \$ICE_RUNLOG_FILE #EOFR diff --git a/configuration/scripts/cice.run.setup.csh b/configuration/scripts/cice.run.setup.csh index 39cf72302..b541ba442 100755 --- a/configuration/scripts/cice.run.setup.csh +++ b/configuration/scripts/cice.run.setup.csh @@ -38,9 +38,7 @@ set ICE_RUNLOG_FILE = "cice.runlog.\${stamp}" #-------------------------------------------- -if !(-d \${ICE_RUNDIR}) mkdir -p \${ICE_RUNDIR} -if !(-d \${ICE_HSTDIR}) mkdir -p \${ICE_HSTDIR} -if !(-d \${ICE_RSTDIR}) mkdir -p \${ICE_RSTDIR} +./setup_run_dirs.csh #-------------------------------------------- cd \${ICE_RUNDIR} diff --git a/configuration/scripts/cice.settings b/configuration/scripts/cice.settings index fa0e50e68..f5cb5d9bc 100755 --- a/configuration/scripts/cice.settings +++ b/configuration/scripts/cice.settings @@ -16,16 +16,15 @@ setenv ICE_DRVOPT cice setenv ICE_CONSTOPT cice setenv ICE_IOTYPE netcdf # set to none if netcdf library is unavailable setenv ICE_CLEANBUILD true -setenv ICE_GRID gx3 -setenv ICE_NXGLOB 
100 -setenv ICE_NYGLOB 116 -setenv ICE_NTASKS 4 -setenv ICE_NTHRDS 1 -setenv ICE_BLCKX 25 -setenv ICE_BLCKY 29 -setenv ICE_MXBLCKS 4 -setenv ICE_DECOMP cartesian -setenv ICE_DSHAPE slenderX2 +setenv ICE_QUIETMODE false +setenv ICE_GRID undefined +setenv ICE_NXGLOB undefined +setenv ICE_NYGLOB undefined +setenv ICE_NTASKS undefined +setenv ICE_NTHRDS undefined +setenv ICE_BLCKX undefined +setenv ICE_BLCKY undefined +setenv ICE_MXBLCKS undefined setenv ICE_TEST undefined # Define if this is a test case setenv ICE_TESTNAME undefined # Define if this is a test case setenv ICE_BASELINE undefined @@ -35,6 +34,7 @@ setenv ICE_BFBCOMP undefined setenv ICE_SPVAL undefined setenv ICE_RUNLENGTH 0 setenv ICE_ACCOUNT undefined +setenv ICE_QUEUE undefined #====================================================== diff --git a/configuration/scripts/cice_decomp.csh b/configuration/scripts/cice_decomp.csh index 2d6c61c75..81a8fd326 100755 --- a/configuration/scripts/cice_decomp.csh +++ b/configuration/scripts/cice_decomp.csh @@ -2,14 +2,22 @@ #--- inputs --- -echo "${0:t} input ICE_DECOMP_GRID = $ICE_DECOMP_GRID" -echo "${0:t} input ICE_DECOMP_NTASK = $ICE_DECOMP_NTASK" -echo "${0:t} input ICE_DECOMP_NTHRD = $ICE_DECOMP_NTHRD" +#echo "${0:t} input ICE_DECOMP_GRID = $ICE_DECOMP_GRID" +#echo "${0:t} input ICE_DECOMP_NTASK = $ICE_DECOMP_NTASK" +#echo "${0:t} input ICE_DECOMP_NTHRD = $ICE_DECOMP_NTHRD" +#echo "${0:t} input ICE_DECOMP_BLCKX = $ICE_DECOMP_BLCKX" +#echo "${0:t} input ICE_DECOMP_BLCKY = $ICE_DECOMP_BLCKY" +#echo "${0:t} input ICE_DECOMP_MXBLCKS = $ICE_DECOMP_MXBLCKS" set grid = $ICE_DECOMP_GRID set task = $ICE_DECOMP_NTASK set thrd = $ICE_DECOMP_NTHRD +if (${task} <= 0 || ${thrd} <= 0) then + echo "${0:t}: ERROR task and thread must be gt 0" + exit -9 +endif + #--- computation --- @ cicepes = ${task} * ${thrd} @@ -23,15 +31,30 @@ if (${grid} == 'col') then set blckx = 1; set blcky = 1 endif +else if (${grid} == 'gbox128') then + set nxglob = 128 + set nyglob = 128 + if 
(${cicepes} <= 1) then + set blckx = 128; set blcky = 128 + else if (${cicepes} <= 8) then + set blckx = 32; set blcky = 32 + else if (${cicepes} <= 32) then + set blckx = 16; set blcky = 16 + else + set blckx = 8; set blcky = 8 + endif + else if (${grid} == 'gx3') then set nxglob = 100 set nyglob = 116 - if (${cicepes} <= 8) then + if (${cicepes} <= 1) then + set blckx = 100; set blcky = 116 + else if (${cicepes} <= 8) then set blckx = 25; set blcky = 29 else if (${cicepes} <= 32) then set blckx = 5; set blcky = 29 else - set blckx = 5; set blcky = 5 + set blckx = 5; set blcky = 4 endif else if (${grid} == 'gx1') then @@ -61,6 +84,17 @@ else exit -9 endif +# check and override +if (${ICE_DECOMP_BLCKX} > 0 && ${ICE_DECOMP_BLCKY} > 0) then + set blckx = ${ICE_DECOMP_BLCKX} + set blcky = ${ICE_DECOMP_BLCKY} +else if (${ICE_DECOMP_BLCKX} < 1 && ${ICE_DECOMP_BLCKY} < 1) then + # continue, use values computed above +else + echo "${0:t}: ERROR user defined blocksize illegal" + exit -9 +endif + @ bx = $nxglob / ${blckx} if ($bx * ${blckx} != $nxglob) @ bx = $bx + 1 @ by = $nyglob / ${blcky} @@ -70,6 +104,9 @@ if ($by * ${blcky} != $nyglob) @ by = $by + 1 if ($m * ${task} != $bx * $by) @ m = $m + 1 set mxblcks = $m +# override +if (${ICE_DECOMP_MXBLCKS} > 0) set mxblcks = ${ICE_DECOMP_MXBLCKS} + set decomp = 'cartesian' set dshape = 'slenderX2' if (${nxglob} % ${cicepes} != 0) set decomp = 'roundrobin' @@ -84,12 +121,12 @@ setenv ICE_DECOMP_MXBLCKS $mxblcks setenv ICE_DECOMP_DECOMP $decomp setenv ICE_DECOMP_DSHAPE $dshape -echo "${0:t} output ICE_DECOMP_NXGLOB = $ICE_DECOMP_NXGLOB" -echo "${0:t} output ICE_DECOMP_NYGLOB = $ICE_DECOMP_NYGLOB" -echo "${0:t} output ICE_DECOMP_BLCKX = $ICE_DECOMP_BLCKX" -echo "${0:t} output ICE_DECOMP_BLCKY = $ICE_DECOMP_BLCKY" -echo "${0:t} output ICE_DECOMP_MXBLCKS = $ICE_DECOMP_MXBLCKS" -echo "${0:t} output ICE_DECOMP_DECOMP = $ICE_DECOMP_DECOMP" -echo "${0:t} output ICE_DECOMP_DSHAPE = $ICE_DECOMP_DSHAPE" +#echo "${0:t} output 
ICE_DECOMP_NXGLOB = $ICE_DECOMP_NXGLOB" +#echo "${0:t} output ICE_DECOMP_NYGLOB = $ICE_DECOMP_NYGLOB" +#echo "${0:t} output ICE_DECOMP_BLCKX = $ICE_DECOMP_BLCKX" +#echo "${0:t} output ICE_DECOMP_BLCKY = $ICE_DECOMP_BLCKY" +#echo "${0:t} output ICE_DECOMP_MXBLCKS = $ICE_DECOMP_MXBLCKS" +#echo "${0:t} output ICE_DECOMP_DECOMP = $ICE_DECOMP_DECOMP" +#echo "${0:t} output ICE_DECOMP_DSHAPE = $ICE_DECOMP_DSHAPE" exit 0 diff --git a/configuration/scripts/ice_in b/configuration/scripts/ice_in index 44e57a0f0..47c9b1c8c 100644 --- a/configuration/scripts/ice_in +++ b/configuration/scripts/ice_in @@ -38,6 +38,7 @@ write_ic = .true. incond_dir = './history/' incond_file = 'iceh_ic' + version_name = 'undefined_version_name' / &grid_nml @@ -49,18 +50,6 @@ kcatbound = 0 / -&domain_nml - nprocs = 4 - processor_shape = 'slenderX2' - distribution_type = 'cartesian' - distribution_wght = 'latitude' - ew_boundary_type = 'cyclic' - ns_boundary_type = 'open' - maskhalo_dyn = .false. - maskhalo_remap = .false. - maskhalo_bound = .false. -/ - &tracer_nml tr_iage = .true. restart_age = .false. @@ -132,6 +121,47 @@ pndaspect = 0.8 / +&forcing_nml + formdrag = .false. + atmbndy = 'default' + fyear_init = 1997 + ycycle = 1 + atm_data_format = 'bin' + atm_data_type = 'ncar' + atm_data_dir = '/glade/u/home/tcraig/cice_data/' + calc_strair = .true. + highfreq = .false. + natmiter = 5 + calc_Tsfc = .true. + precip_units = 'mm_per_month' + ustar_min = 0.0005 + fbot_xfer_type = 'constant' + update_ocn_f = .false. + l_mpond_fresh = .false. + tfrz_option = 'mushy' + oceanmixed_ice = .true. + ocn_data_format = 'bin' + sss_data_type = 'default' + sst_data_type = 'default' + ocn_data_dir = 'unknown_ocn_data_dir' + oceanmixed_file = 'unknown_oceanmixed_file' + restore_sst = .false. + trestore = 90 + restore_ice = .false. 
+/ + +&domain_nml + nprocs = 4 + processor_shape = 'slenderX2' + distribution_type = 'cartesian' + distribution_wght = 'latitude' + ew_boundary_type = 'cyclic' + ns_boundary_type = 'open' + maskhalo_dyn = .false. + maskhalo_remap = .false. + maskhalo_bound = .false. +/ + &zbgc_nml tr_brine = .false. restart_hbrine = .false. @@ -285,35 +315,6 @@ ratio_C2N_proteins = 7.0 / -&forcing_nml - formdrag = .false. - atmbndy = 'default' - fyear_init = 1997 - ycycle = 1 - atm_data_format = 'bin' - atm_data_type = 'ncar' - atm_data_dir = '/glade/u/home/tcraig/cice_data/' - calc_strair = .true. - highfreq = .false. - natmiter = 5 - calc_Tsfc = .true. - precip_units = 'mm_per_month' - ustar_min = 0.0005 - fbot_xfer_type = 'constant' - update_ocn_f = .false. - l_mpond_fresh = .false. - tfrz_option = 'mushy' - oceanmixed_ice = .true. - ocn_data_format = 'bin' - sss_data_type = 'default' - sst_data_type = 'default' - ocn_data_dir = 'unknown_ocn_data_dir' - oceanmixed_file = 'unknown_oceanmixed_file' - restore_sst = .false. - trestore = 90 - restore_ice = .false. -/ - &icefields_nml f_tmask = .true. f_blkmask = .true. 
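The block-count arithmetic that `cice_decomp.csh` performs above (ceiling division of the global grid into `blckx` × `blcky` blocks, then blocks per MPI task) can be sketched in Python. This is an illustrative rendering only: the function names are invented here, the variable names mirror the csh script, and the gx3 numbers (100 × 116 grid, 25 × 29 blocks, 4 tasks) come from the hunks above.

```python
# Sketch of the ceiling-division logic in cice_decomp.csh
# (illustrative Python; not part of CICE itself).

def num_blocks(nglob: int, blck: int) -> int:
    """Blocks needed to cover nglob grid points with blocks of size blck,
    rounding up -- as the csh lines '@ bx = ...; if (...) @ bx = $bx + 1' do."""
    b = nglob // blck
    if b * blck != nglob:
        b += 1
    return b

def max_blocks(nx: int, ny: int, blckx: int, blcky: int, ntask: int) -> int:
    """Maximum blocks per task, rounded up so all bx*by blocks land somewhere."""
    bx = num_blocks(nx, blckx)
    by = num_blocks(ny, blcky)
    m = (bx * by) // ntask
    if m * ntask != bx * by:
        m += 1
    return m

# gx3 grid, 4 tasks, default 25x29 blocks: a 4x4 block grid, 4 blocks per task
print(num_blocks(100, 25), num_blocks(116, 29),
      max_blocks(100, 116, 25, 29, 4))   # -> 4 4 4
```

The PR's "check and override" hunk then lets `ICE_DECOMP_BLCKX`/`ICE_DECOMP_BLCKY` and `ICE_DECOMP_MXBLCKS` replace these computed values when the user sets them explicitly.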
diff --git a/configuration/scripts/machines/Macros.cheyenne_intel b/configuration/scripts/machines/Macros.cheyenne_intel index 43a0df8c2..f5a01b24d 100755 --- a/configuration/scripts/machines/Macros.cheyenne_intel +++ b/configuration/scripts/machines/Macros.cheyenne_intel @@ -48,9 +48,9 @@ SCC:= icc SFC:= ifort ifeq ($(compile_threaded), true) - LDFLAGS += -openmp - CFLAGS += -openmp - FFLAGS += -openmp + LDFLAGS += -qopenmp + CFLAGS += -qopenmp + FFLAGS += -qopenmp endif ifeq ($(DITTO), yes) diff --git a/configuration/scripts/machines/Macros.conrad_cray b/configuration/scripts/machines/Macros.conrad_cray index 5f05946b2..642454380 100644 --- a/configuration/scripts/machines/Macros.conrad_cray +++ b/configuration/scripts/machines/Macros.conrad_cray @@ -10,6 +10,7 @@ FIXEDFLAGS := -132 FREEFLAGS := FFLAGS := -h fp0 -h byteswapio FFLAGS_NOOPT:= -O0 +LDFLAGS := -h byteswapio ifeq ($(ICE_BLDDEBUG), true) FFLAGS += -O0 -g -Rbcdps diff --git a/configuration/scripts/machines/Macros.conrad_gnu b/configuration/scripts/machines/Macros.conrad_gnu index c834ea4af..285ab0c42 100644 --- a/configuration/scripts/machines/Macros.conrad_gnu +++ b/configuration/scripts/machines/Macros.conrad_gnu @@ -4,15 +4,15 @@ CPP := ftn -E CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 -DHAVE_F2008_CONTIGUOUS -DLINUX -DCPRINTEL ${ICE_CPPDEFS} -CFLAGS := -c -O2 -ffloat-store -march=native +CFLAGS := -c -O2 FIXEDFLAGS := -ffixed-line-length-132 FREEFLAGS := -ffree-form -FFLAGS := -ffloat-store -fconvert=swap -fbacktrace -march=native -ffree-line-length-none +FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none FFLAGS_NOOPT:= -O0 ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Wuninitialized -fbounds-check -ffpe-trap=invalid,zero,overflow,underflow + FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow else FFLAGS += -O2 endif @@ -44,9 +44,8 @@ INCLDIR := $(INCLDIR) #LIB_MPI := $(IMPILIBDIR) #SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff -SCC:= icc - 
-SFC:= ifort +SCC:= gcc +SFC:= gfortran ifeq ($(ICE_THREADED), true) LDFLAGS += -fopenmp diff --git a/configuration/scripts/machines/Macros.conrad_intel b/configuration/scripts/machines/Macros.conrad_intel index 848b59f07..191da715a 100644 --- a/configuration/scripts/machines/Macros.conrad_intel +++ b/configuration/scripts/machines/Macros.conrad_intel @@ -49,9 +49,9 @@ SCC:= icc SFC:= ifort ifeq ($(ICE_THREADED), true) - LDFLAGS += -openmp - CFLAGS += -openmp - FFLAGS += -openmp + LDFLAGS += -qopenmp + CFLAGS += -qopenmp + FFLAGS += -qopenmp endif ### if using parallel I/O, load all 3 libraries. PIO must be first! diff --git a/configuration/scripts/machines/Macros.cori_intel b/configuration/scripts/machines/Macros.cori_intel index 0a40cb4be..cd788649f 100644 --- a/configuration/scripts/machines/Macros.cori_intel +++ b/configuration/scripts/machines/Macros.cori_intel @@ -49,9 +49,9 @@ SCC:= icc SFC:= ifort ifeq ($(ICE_THREADED), true) - LDFLAGS += -openmp - CFLAGS += -openmp - FFLAGS += -openmp + LDFLAGS += -qopenmp + CFLAGS += -qopenmp + FFLAGS += -qopenmp endif ### if using parallel I/O, load all 3 libraries. PIO must be first! 
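Several Macros files in this diff swap `-openmp` for `-qopenmp`: newer Intel compilers deprecated the short spelling in favor of `-qopenmp`, while the GNU builds keep `-fopenmp`. As a hedged illustration of the per-compiler flag choice (the mapping below is a sketch written for this note, deliberately limited to the two families the Macros edits touch, not an authoritative table):

```python
# Illustrative compiler-family -> OpenMP-flag mapping, mirroring the
# Macros.*_intel and Macros.*_gnu edits above. Sketch only.

OPENMP_FLAG = {
    "ifort": "-qopenmp",     # Intel: -openmp deprecated, later removed
    "icc": "-qopenmp",
    "gfortran": "-fopenmp",  # GNU spelling, used by the *_gnu Macros files
    "gcc": "-fopenmp",
}

def openmp_flag(compiler: str) -> str:
    """Return the OpenMP enable flag for a known compiler, or raise."""
    if compiler not in OPENMP_FLAG:
        raise ValueError(f"no OpenMP flag recorded for {compiler!r}")
    return OPENMP_FLAG[compiler]
```

In the real Macros files the choice is made by which file is selected for the machine/compiler pair, so no runtime lookup like this is needed; the sketch only shows why the same logical option spells differently across files.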
diff --git a/configuration/scripts/machines/Macros.gordon_gnu b/configuration/scripts/machines/Macros.gordon_gnu index c25456b85..c23be2b76 100644 --- a/configuration/scripts/machines/Macros.gordon_gnu +++ b/configuration/scripts/machines/Macros.gordon_gnu @@ -4,15 +4,15 @@ CPP := ftn -E CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 -DHAVE_F2008_CONTIGUOUS -DLINUX -DCPRINTEL ${ICE_CPPDEFS} -CFLAGS := -c -O2 -ffloat-store -march=native +CFLAGS := -c -O2 FIXEDFLAGS := -ffixed-line-length-132 FREEFLAGS := -ffree-form -FFLAGS := -ffloat-store -fconvert=swap -fbacktrace -march=native -ffree-line-length-none +FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none FFLAGS_NOOPT:= -O0 ifeq ($(ICE_BLDDEBUG), true) - FFLAGS += -O0 -g -Wuninitialized -fbounds-check -ffpe-trap=invalid,zero,overflow,underflow + FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow else FFLAGS += -O2 endif @@ -44,9 +44,8 @@ INCLDIR := $(INCLDIR) #LIB_MPI := $(IMPILIBDIR) #SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff -SCC:= icc - -SFC:= ifort +SCC:= gcc +SFC:= gfortran ifeq ($(ICE_THREADED), true) LDFLAGS += -fopenmp diff --git a/configuration/scripts/machines/Macros.gordon_intel b/configuration/scripts/machines/Macros.gordon_intel index e5fa4256b..d1ed9cda7 100644 --- a/configuration/scripts/machines/Macros.gordon_intel +++ b/configuration/scripts/machines/Macros.gordon_intel @@ -49,9 +49,9 @@ SCC:= icc SFC:= ifort ifeq ($(ICE_THREADED), true) - LDFLAGS += -openmp - CFLAGS += -openmp - FFLAGS += -openmp + LDFLAGS += -qopenmp + CFLAGS += -qopenmp + FFLAGS += -qopenmp endif ### if using parallel I/O, load all 3 libraries. PIO must be first! 
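The `*_gnu` Macros files above also change `-fconvert=swap` to `-fconvert=big-endian`, making the byte order of Fortran unformatted I/O explicit rather than "opposite of native". A Python sketch of what that means for a Fortran-style sequential record (the int/double payload here is hypothetical, chosen only to resemble the `istep,time`-style header reads elsewhere in this diff — it is not CICE's actual record layout):

```python
import struct

# Build a Fortran sequential record by hand: 4-byte record length,
# payload, trailing 4-byte record length, all big-endian -- the layout
# a reader compiled with -fconvert=big-endian expects regardless of
# the host's native byte order.
payload = struct.pack(">i d", 42, 3.5)          # an int and a double
record = (struct.pack(">i", len(payload))
          + payload
          + struct.pack(">i", len(payload)))

n, = struct.unpack(">i", record[:4])            # leading record length
istep, t = struct.unpack(">i d", record[4:4 + n])
print(istep, t)   # -> 42 3.5
```

Spelling the endianness out avoids a portability trap: `-fconvert=swap` produces different files on big- and little-endian hosts, while `big-endian` pins one format everywhere.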
diff --git a/configuration/scripts/machines/Macros.hobart_nag b/configuration/scripts/machines/Macros.hobart_nag new file mode 100755 index 000000000..feb4b45f0 --- /dev/null +++ b/configuration/scripts/machines/Macros.hobart_nag @@ -0,0 +1,51 @@ +#============================================================================== +# Makefile macros for NCAR hobart, NAG compiler +#============================================================================== + +CPP := /usr/bin/cpp +CPPFLAGS := -P -traditional +CPPDEFS := -DFORTRANUNDERSCORE -DNO_CRAY_POINTERS -DNO_SHR_VMATH -DCPRNAG $(ICE_CPPDEFS) +CFLAGS := -std=gnu99 + +FIXEDFLAGS := -fixed +FREEFLAGS := -free +FFLAGS := -Wp,-macro=no_com -convert=BIG_ENDIAN -ieee=full -O2 -wmismatch=mpi_bcast,mpi_isend,mpi_irecv,mpi_send,mpi_recv,mpi_allreduce -gline + +FFLAGS_NOOPT:= -Wp,-macro=no_com -convert=BIG_ENDIAN -ieee=full -wmismatch=mpi_bcast,mpi_isend,mpi_irecv,mpi_send,mpi_recv,mpi_allreduce -gline +FC_AUTO_R8 := -r8 + +ifeq ($(ICE_BLDDEBUG), true) + FFLAGS := -Wp,-macro=no_com -convert=BIG_ENDIAN -wmismatch=mpi_bcast,mpi_isend,mpi_irecv,mpi_send,mpi_recv,mpi_allreduce -gline -C=all -g -time -f2003 -ieee=stop +endif + +ifeq ($(ICE_COMMDIR), mpi) + FC := mpif90 +else + FC := nagfor +endif + +MPICC:= mpicc + +MPIFC:= mpif90 +LD:= $(MPIFC) + +NETCDF_PATH := /usr/local/netcdf_c-4.3.2_f-4.4.1-nag-6.0 + +INCLDIR := -I/usr/local/netcdf_c-4.3.2_f-4.4.1-nag-6.0/include -I/cluster/mvapich2-2.2rc1-gcc-g++-4.8.5-nag-6.1/include + +LIB_NETCDF := $(NETCDF_PATH)/lib +LIB_PNETCDF := $(PNETCDF_PATH)/lib +LIB_MPI := $(IMPILIBDIR) + +SLIBS := -L/usr/local/nag-6.2/lib/NAG_Fortran -L$(LIB_NETCDF) -lnetcdf -lnetcdff -L/usr/lib64 -llapack -lblas + +SCC:= nagcc + +SFC:= nagfor + +## if using parallel I/O, load all 3 libraries. PIO must be first! 
+ifeq ($(IO_TYPE), pio)
+  PIO_PATH:=
+  INCLDIR += -I
+  SLIBS := $(SLIB) -L$(PIO_PATH) -lpiof
+endif
diff --git a/configuration/scripts/machines/Macros.onyx_cray b/configuration/scripts/machines/Macros.onyx_cray
index 16be2eeea..c0d14eb5a 100644
--- a/configuration/scripts/machines/Macros.onyx_cray
+++ b/configuration/scripts/machines/Macros.onyx_cray
@@ -10,6 +10,7 @@ FIXEDFLAGS := -132
 FREEFLAGS :=
 FFLAGS := -h fp0 -h byteswapio
 FFLAGS_NOOPT:= -O0
+LDFLAGS := -h byteswapio
 ifeq ($(ICE_BLDDEBUG), true)
   FFLAGS += -O0 -g -Rbcdps
diff --git a/configuration/scripts/machines/Macros.onyx_gnu b/configuration/scripts/machines/Macros.onyx_gnu
index e934754b2..b87878d1e 100644
--- a/configuration/scripts/machines/Macros.onyx_gnu
+++ b/configuration/scripts/machines/Macros.onyx_gnu
@@ -4,15 +4,15 @@ CPP := ftn -E
 CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 -DHAVE_F2008_CONTIGUOUS -DLINUX -DCPRINTEL ${ICE_CPPDEFS}
-CFLAGS := -c -O2 -ffloat-store -march=native
+CFLAGS := -c -O2
 FIXEDFLAGS := -ffixed-line-length-132
 FREEFLAGS := -ffree-form
-FFLAGS := -ffloat-store -fconvert=swap -fbacktrace -march=native -ffree-line-length-none
+FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none
 FFLAGS_NOOPT:= -O0
 ifeq ($(ICE_BLDDEBUG), true)
-  FFLAGS += -O0 -g -Wuninitialized -fbounds-check -ffpe-trap=invalid,zero,overflow,underflow
+  FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow
 else
   FFLAGS += -O2
 endif
@@ -44,9 +44,8 @@ INCLDIR := $(INCLDIR)
 #LIB_MPI := $(IMPILIBDIR)
 #SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff
-SCC:= icc
-
-SFC:= ifort
+SCC:= gcc
+SFC:= gfortran
 ifeq ($(ICE_THREADED), true)
   LDFLAGS += -fopenmp
diff --git a/configuration/scripts/machines/Macros.onyx_intel b/configuration/scripts/machines/Macros.onyx_intel
index 24f391af2..900b46de7 100644
--- a/configuration/scripts/machines/Macros.onyx_intel
+++ b/configuration/scripts/machines/Macros.onyx_intel
@@ -49,9 +49,9 @@ SCC:= icc
 SFC:= ifort
 ifeq ($(ICE_THREADED), true)
-  LDFLAGS += -openmp
-  CFLAGS += -openmp
-  FFLAGS += -openmp
+  LDFLAGS += -qopenmp
+  CFLAGS += -qopenmp
+  FFLAGS += -qopenmp
 endif
 ### if using parallel I/O, load all 3 libraries. PIO must be first!
diff --git a/configuration/scripts/machines/Macros.testmachine_intel b/configuration/scripts/machines/Macros.testmachine_intel
index c15eacca8..6b9a31b77 100755
--- a/configuration/scripts/machines/Macros.testmachine_intel
+++ b/configuration/scripts/machines/Macros.testmachine_intel
@@ -48,9 +48,9 @@ SCC:= icc
 SFC:= ifort
 ifeq ($(compile_threaded), true)
-  LDFLAGS += -openmp
-  CFLAGS += -openmp
-  FFLAGS += -openmp
+  LDFLAGS += -qopenmp
+  CFLAGS += -qopenmp
+  FFLAGS += -qopenmp
 endif
 ifeq ($(DITTO), yes)
diff --git a/configuration/scripts/machines/Macros.travisCI_gnu b/configuration/scripts/machines/Macros.travisCI_gnu
new file mode 100644
index 000000000..c9f0b19da
--- /dev/null
+++ b/configuration/scripts/machines/Macros.travisCI_gnu
@@ -0,0 +1,47 @@
+#==============================================================================
+# Makefile macros for Travis-CI - GCC and openmpi compilers
+#==============================================================================
+
+CPP := cpp
+CPPDEFS := -DFORTRANUNDERSCORE -DNO_R16 -DHAVE_F2008_CONTIGUOUS -DLINUX -DCPRINTEL ${ICE_CPPDEFS}
+CFLAGS := -c -O2 -xHost
+
+FIXEDFLAGS := -ffixed-line-length-132
+FREEFLAGS := -ffree-form
+FFLAGS := -fconvert=big-endian -fbacktrace -ffree-line-length-none
+FFLAGS_NOOPT:= -O0
+
+ifeq ($(ICE_BLDDEBUG), true)
+  FFLAGS += -O0 -g -fcheck=bounds -finit-real=nan -fimplicit-none -ffpe-trap=invalid,zero,overflow
+else
+  FFLAGS += -O2
+endif
+
+FC := mpif90
+
+MPICC:=
+
+MPIFC:= mpif90
+LD:= $(FC)
+
+NETCDF_PATH := $(NETCDF)
+
+ifeq ($(ICE_IOTYPE), netcdf)
+  NETCDF_PATH := $(shell nc-config --prefix)
+  INCLDIR := $(INCLDIR) -I$(NETCDF_PATH)/include
+  LIB_NETCDF := $(NETCDF_PATH)/lib
+  LIB_PNETCDF :=
+  SLIBS := -L$(LIB_NETCDF) -lnetcdf -lnetcdff
+else
+  SLIBS :=
+endif
+
+LIB_MPI :=
+SCC:=
+SFC:=
+
+ifeq ($(ICE_THREADED), true)
+  LDFLAGS += -fopenmp
+  CFLAGS += -fopenmp
+  FFLAGS += -fopenmp
+endif
diff --git a/configuration/scripts/machines/env.cheyenne_intel b/configuration/scripts/machines/env.cheyenne_intel
index d8742fa4b..7edb8a343 100755
--- a/configuration/scripts/machines/env.cheyenne_intel
+++ b/configuration/scripts/machines/env.cheyenne_intel
@@ -17,6 +17,7 @@ setenv ICE_MACHINE_INPUTDATA /glade/p/cesm/pcwg_dev
 setenv ICE_MACHINE_BASELINE /glade/scratch/$user/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub"
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "regular"
 setenv ICE_MACHINE_TPNODE 36
 setenv ICE_MACHINE_BLDTHRDS 1
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.conrad_cray b/configuration/scripts/machines/env.conrad_cray
index 7867942b8..9a4362de1 100755
--- a/configuration/scripts/machines/env.conrad_cray
+++ b/configuration/scripts/machines/env.conrad_cray
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.conrad_gnu b/configuration/scripts/machines/env.conrad_gnu
index eb16e2c1e..e046eb0b7 100755
--- a/configuration/scripts/machines/env.conrad_gnu
+++ b/configuration/scripts/machines/env.conrad_gnu
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.conrad_intel b/configuration/scripts/machines/env.conrad_intel
index ad2d7affc..a40df15e4 100755
--- a/configuration/scripts/machines/env.conrad_intel
+++ b/configuration/scripts/machines/env.conrad_intel
@@ -39,7 +39,8 @@ setenv ICE_MACHINE_WKDIR $WORKDIR/CICE_RUNS
 setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
-setenv ICE_MACHINE_ACCT ARLAP96070PET
+setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.conrad_pgi b/configuration/scripts/machines/env.conrad_pgi
index 8c7447288..22fd1209b 100755
--- a/configuration/scripts/machines/env.conrad_pgi
+++ b/configuration/scripts/machines/env.conrad_pgi
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT ARLAP96070PET
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.cori_intel b/configuration/scripts/machines/env.cori_intel
index 05a73fd14..15f6a1818 100755
--- a/configuration/scripts/machines/env.cori_intel
+++ b/configuration/scripts/machines/env.cori_intel
@@ -41,6 +41,7 @@ setenv ICE_MACHINE_INPUTDATA /global/homes/t/tcraig/cice_consortium
 setenv ICE_MACHINE_BASELINE $SCRATCH/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "sbatch "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "squeue --jobs="
diff --git a/configuration/scripts/machines/env.fram_intel b/configuration/scripts/machines/env.fram_intel
index c7499e737..671d7b134 100755
--- a/configuration/scripts/machines/env.fram_intel
+++ b/configuration/scripts/machines/env.fram_intel
@@ -10,6 +10,7 @@ setenv ICE_MACHINE_WKDIR /users/dor/armn/jfl/local1/CICE6dev/CICE/tests/CICE_RUN
 setenv ICE_MACHINE_INPUTDATA /users/dor/armn/jfl/local1/CICE6/CICE/configuration/data/gx3Ncar
 setenv ICE_MACHINE_BASELINE /users/dor/armn/jfl/local1/CICE6dev/CICE/tests/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub"
+setenv ICE_MACHINE_QUEUE "default"
 setenv ICE_MACHINE_TPNODE 36
 setenv ICE_MACHINE_ACCT P0000000
 setenv ICE_MACHINE_BLDTHRDS 1
@@ -19,4 +20,4 @@ if (-e ~/.cice_proj) then
   setenv CICE_ACCT ${account_name}
 endif
-echo "je suis dans env"
\ No newline at end of file
+echo "je suis dans env"
diff --git a/configuration/scripts/machines/env.gordon_cray b/configuration/scripts/machines/env.gordon_cray
index 1542679ea..dc7c632f6 100755
--- a/configuration/scripts/machines/env.gordon_cray
+++ b/configuration/scripts/machines/env.gordon_cray
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.gordon_gnu b/configuration/scripts/machines/env.gordon_gnu
index 98150ce83..4310ba3fc 100755
--- a/configuration/scripts/machines/env.gordon_gnu
+++ b/configuration/scripts/machines/env.gordon_gnu
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.gordon_intel b/configuration/scripts/machines/env.gordon_intel
index e6edf18ba..66809b9c8 100755
--- a/configuration/scripts/machines/env.gordon_intel
+++ b/configuration/scripts/machines/env.gordon_intel
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.gordon_pgi b/configuration/scripts/machines/env.gordon_pgi
index 1275e005b..eb24bec82 100755
--- a/configuration/scripts/machines/env.gordon_pgi
+++ b/configuration/scripts/machines/env.gordon_pgi
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work1/RASM_data/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT ARLAP96070PET
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 32 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.hobart_intel b/configuration/scripts/machines/env.hobart_intel
new file mode 100755
index 000000000..9d2dd6cf0
--- /dev/null
+++ b/configuration/scripts/machines/env.hobart_intel
@@ -0,0 +1,18 @@
+#!/bin/csh -f
+
+source /usr/share/Modules/init/csh
+
+module purge
+module load compiler/intel/default
+
+setenv ICE_MACHINE_ENVNAME hobart
+setenv ICE_MACHINE_COMPILER ifort
+setenv ICE_MACHINE_MAKE gmake
+setenv ICE_MACHINE_WKDIR /scratch/cluster/$user/CICE_RUNS
+setenv ICE_MACHINE_INPUTDATA /fs/cgd/csm/inputdata
+setenv ICE_MACHINE_BASELINE /scratch/cluster/$user/CICE_BASELINE
+setenv ICE_MACHINE_SUBMIT "qsub"
+setenv ICE_MACHINE_QUEUE "short"
+setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_TPNODE 24
+setenv ICE_MACHINE_BLDTHRDS 1
diff --git a/configuration/scripts/machines/env.hobart_nag b/configuration/scripts/machines/env.hobart_nag
new file mode 100755
index 000000000..a1bb008ba
--- /dev/null
+++ b/configuration/scripts/machines/env.hobart_nag
@@ -0,0 +1,18 @@
+#!/bin/csh -f
+
+source /usr/share/Modules/init/csh
+
+module purge
+module load compiler/nag/6.1
+
+setenv ICE_MACHINE_ENVNAME hobart
+setenv ICE_MACHINE_COMPILER nag
+setenv ICE_MACHINE_MAKE gmake
+setenv ICE_MACHINE_WKDIR /scratch/cluster/$user/CICE_RUNS
+setenv ICE_MACHINE_INPUTDATA /fs/cgd/csm/inputdata
+setenv ICE_MACHINE_BASELINE /scratch/cluster/$user/CICE_BASELINE
+setenv ICE_MACHINE_SUBMIT "qsub"
+setenv ICE_MACHINE_QUEUE "short"
+setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_TPNODE 24
+setenv ICE_MACHINE_BLDTHRDS 1
diff --git a/configuration/scripts/machines/env.loft_gnu b/configuration/scripts/machines/env.loft_gnu
index 9d5bfee5e..666a8ff54 100755
--- a/configuration/scripts/machines/env.loft_gnu
+++ b/configuration/scripts/machines/env.loft_gnu
@@ -10,6 +10,7 @@ setenv ICE_MACHINE_INPUTDATA /Users/$user/Desktop/CICE-Consortium
 setenv ICE_MACHINE_BASELINE /Users/$user/Desktop/CICE-Consortium/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT " "
 setenv ICE_MACHINE_ACCT
+setenv ICE_MACHINE_QUEUE "default"
 setenv ICE_MACHINE_TPNODE 1
 setenv ICE_MACHINE_BLDTHRDS 1
 setenv ICE_MACHINE_QSTAT " "
diff --git a/configuration/scripts/machines/env.onyx_cray b/configuration/scripts/machines/env.onyx_cray
index 9a07b2356..64816d755 100755
--- a/configuration/scripts/machines/env.onyx_cray
+++ b/configuration/scripts/machines/env.onyx_cray
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/app/unsupported/RASM/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 44 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 12
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.onyx_gnu b/configuration/scripts/machines/env.onyx_gnu
index 55ab193e7..46f41e512 100755
--- a/configuration/scripts/machines/env.onyx_gnu
+++ b/configuration/scripts/machines/env.onyx_gnu
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/app/unsupported/RASM/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 44 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 12
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.onyx_intel b/configuration/scripts/machines/env.onyx_intel
index bf8d0776e..82810a8c6 100755
--- a/configuration/scripts/machines/env.onyx_intel
+++ b/configuration/scripts/machines/env.onyx_intel
@@ -40,6 +40,7 @@ setenv ICE_MACHINE_INPUTDATA /p/app/unsupported/RASM/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 44 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 12
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.pinto_intel b/configuration/scripts/machines/env.pinto_intel
index 82b217583..3155eba5b 100755
--- a/configuration/scripts/machines/env.pinto_intel
+++ b/configuration/scripts/machines/env.pinto_intel
@@ -25,6 +25,7 @@ setenv ICE_MACHINE_INPUTDATA /usr/projects/climate/eclare/DATA/Consortium
 setenv ICE_MACHINE_BASELINE /net/scratch3/$user/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "sbatch "
 setenv ICE_MACHINE_ACCT climateacme
+setenv ICE_MACHINE_QUEUE "default"
 setenv ICE_MACHINE_TPNODE 16
 setenv ICE_MACHINE_BLDTHRDS 1
 setenv ICE_MACHINE_QSTAT "squeue --jobs="
diff --git a/configuration/scripts/machines/env.testmachine_intel b/configuration/scripts/machines/env.testmachine_intel
index 9fd1e647e..b85d610c9 100755
--- a/configuration/scripts/machines/env.testmachine_intel
+++ b/configuration/scripts/machines/env.testmachine_intel
@@ -8,6 +8,7 @@ setenv ICE_MACHINE_INPUTDATA ~/CICE_INPUTDATA
 setenv ICE_MACHINE_BASELINE ~/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub"
 setenv ICE_MACHINE_TPNODE 4
+setenv ICE_MACHINE_QUEUE "default"
 setenv ICE_MACHINE_ACCT P0000000
 setenv ICE_MACHINE_BLDTHRDS 1
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.thunder_intel b/configuration/scripts/machines/env.thunder_intel
index 8f16ad416..6d2f58692 100755
--- a/configuration/scripts/machines/env.thunder_intel
+++ b/configuration/scripts/machines/env.thunder_intel
@@ -33,6 +33,7 @@ setenv ICE_MACHINE_INPUTDATA /p/work2/projects/rasm/cice_consortium
 setenv ICE_MACHINE_BASELINE $WORKDIR/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "qsub "
 setenv ICE_MACHINE_ACCT P00000000
+setenv ICE_MACHINE_QUEUE "debug"
 setenv ICE_MACHINE_TPNODE 36 # tasks per node
 setenv ICE_MACHINE_BLDTHRDS 4
 setenv ICE_MACHINE_QSTAT "qstat "
diff --git a/configuration/scripts/machines/env.travisCI_gnu b/configuration/scripts/machines/env.travisCI_gnu
new file mode 100755
index 000000000..687c4ba07
--- /dev/null
+++ b/configuration/scripts/machines/env.travisCI_gnu
@@ -0,0 +1,15 @@
+#!/bin/csh -f
+
+setenv ICE_MACHINE_ENVNAME travisCI
+setenv ICE_MACHINE_COMPILER gnu
+setenv ICE_MACHINE_MAKE make
+setenv ICE_MACHINE_WKDIR ~/CICE_RUNS
+setenv ICE_MACHINE_INPUTDATA ~
+setenv ICE_MACHINE_BASELINE ~/CICE_BASELINE
+setenv ICE_MACHINE_SUBMIT " "
+setenv ICE_MACHINE_TPNODE 4
+setenv ICE_MACHINE_ACCT P0000000
+setenv ICE_MACHINE_QUEUE "default"
+setenv ICE_MACHINE_BLDTHRDS 1
+setenv ICE_MACHINE_QSTAT " "
+setenv ICE_MACHINE_QUIETMODE true
diff --git a/configuration/scripts/machines/env.wolf_intel b/configuration/scripts/machines/env.wolf_intel
index 8a0b4e961..cee9ce89b 100755
--- a/configuration/scripts/machines/env.wolf_intel
+++ b/configuration/scripts/machines/env.wolf_intel
@@ -25,6 +25,7 @@ setenv ICE_MACHINE_INPUTDATA /usr/projects/climate/eclare/DATA/Consortium
 setenv ICE_MACHINE_BASELINE /net/scratch3/$user/CICE_BASELINE
 setenv ICE_MACHINE_SUBMIT "sbatch "
 setenv ICE_MACHINE_ACCT climateacme
+setenv ICE_MACHINE_QUEUE "default"
 setenv ICE_MACHINE_TPNODE 16
 setenv ICE_MACHINE_BLDTHRDS 1
 setenv ICE_MACHINE_QSTAT "squeue --jobs="
diff --git a/configuration/scripts/options/set_env.alt01 b/configuration/scripts/options/set_env.alt01
index c6b058408..a3f7c10f5 100644
--- a/configuration/scripts/options/set_env.alt01
+++ b/configuration/scripts/options/set_env.alt01
@@ -1 +1 @@
-setenv NICELYR 1 # number of vertical layers in the ice
+setenv NICELYR 1
diff --git a/configuration/scripts/options/set_env.alt02 b/configuration/scripts/options/set_env.alt02
new file mode 100644
index 000000000..69931b4e8
--- /dev/null
+++ b/configuration/scripts/options/set_env.alt02
@@ -0,0 +1 @@
+setenv NICECAT 1
diff --git a/configuration/scripts/options/set_env.boxadv b/configuration/scripts/options/set_env.boxadv
new file mode 100644
index 000000000..a3f7c10f5
--- /dev/null
+++ b/configuration/scripts/options/set_env.boxadv
@@ -0,0 +1 @@
+setenv NICELYR 1
diff --git a/configuration/scripts/options/set_env.boxdyn b/configuration/scripts/options/set_env.boxdyn
new file mode 100644
index 000000000..a3f7c10f5
--- /dev/null
+++ b/configuration/scripts/options/set_env.boxdyn
@@ -0,0 +1 @@
+setenv NICELYR 1
diff --git a/configuration/scripts/options/set_env.boxrestore b/configuration/scripts/options/set_env.boxrestore
new file mode 100644
index 000000000..a3f7c10f5
--- /dev/null
+++ b/configuration/scripts/options/set_env.boxrestore
@@ -0,0 +1 @@
+setenv NICELYR 1
diff --git a/configuration/scripts/options/set_nml.alt01 b/configuration/scripts/options/set_nml.alt01
index 017e0e5b4..7c7d80747 100644
--- a/configuration/scripts/options/set_nml.alt01
+++ b/configuration/scripts/options/set_nml.alt01
@@ -1,14 +1,26 @@
+ice_ic = 'default'
+restart = .false.
+distribution_type = 'roundrobin'
+distribution_wght = 'block'
+tr_iage = .false.
+tr_FY = .false.
+tr_lvl = .true.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .false.
+tr_aero = .false.
 kcatbound = 1
-kitd = 1
-ktherm = 0
+kitd = 0
+ktherm = 0
 conduct = 'bubbly'
+kdyn = 0
+basalstress = .true.
 shortwave = 'ccsm3'
 albedo_type = 'constant'
-tfrz_option = 'minus1p8'
-default_season = 'fall'
-atm_data_type = 'default'
 formdrag = .true.
-tr_lvl = .true.
-tr_pond_cesm = .false.
-tr_pond_topo = .false.
-tr_pond_lvl = .false.
+calc_tsfc = .true.
+atm_data_type = 'default'
+highfreq = .true.
+fbot_xfer_type = 'Cdn_ocn'
+tfrz_option = 'minus1p8'
+
diff --git a/configuration/scripts/options/set_nml.alt02 b/configuration/scripts/options/set_nml.alt02
new file mode 100644
index 000000000..e62a73058
--- /dev/null
+++ b/configuration/scripts/options/set_nml.alt02
@@ -0,0 +1,22 @@
+kcatbound = -1
+ice_ic = 'default'
+restart = .false.
+distribution_type = 'sectrobin'
+tr_iage = .true.
+tr_FY = .true.
+tr_lvl = .true.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .false.
+tr_aero = .false.
+kitd = 0
+kdyn = 1
+revised_evp = .true.
+kstrength = 0
+krdg_partic = 0
+krdg_redist = 0
+shortwave = 'ccsm3'
+albedo_type = 'ccsm3'
+calc_tsfc = .true.
+
+
diff --git a/configuration/scripts/options/set_nml.alt03 b/configuration/scripts/options/set_nml.alt03
new file mode 100644
index 000000000..01c682b11
--- /dev/null
+++ b/configuration/scripts/options/set_nml.alt03
@@ -0,0 +1,16 @@
+kcatbound = 2
+ice_ic = 'default'
+restart = .false.
+distribution_type = 'sectcart'
+tr_iage = .false.
+tr_FY = .false.
+tr_lvl = .false.
+tr_pond_cesm = .false.
+tr_pond_topo = .true.
+tr_pond_lvl = .false.
+tr_aero = .false.
+kdyn = 2
+revised_evp = .false.
+Ktens = 0.
+e_ratio = 2.
+basalstress = .true.
diff --git a/configuration/scripts/options/set_nml.alt04 b/configuration/scripts/options/set_nml.alt04
new file mode 100644
index 000000000..f14b4e3f5
--- /dev/null
+++ b/configuration/scripts/options/set_nml.alt04
@@ -0,0 +1,24 @@
+ice_ic = 'default'
+restart = .false.
+bfbflag = .true.
+distribution_type = 'rake'
+processor_shape = 'slenderX2'
+distribution_wght = 'block'
+tr_iage = .true.
+tr_FY = .true.
+tr_lvl = .true.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .true.
+tr_aero = .false.
+kitd = 0
+ktherm = 1
+conduct = 'MU71'
+kdyn = 1
+#advection = 'upwind' # tc-leads to science failure
+kstrength = 0
+krdg_partic = 0
+krdg_redist = 0
+frzpnd = 'ccsm'
+natmiter = 20
+tfrz_option = 'linear_salt'
diff --git a/configuration/scripts/options/set_nml.pondcesm b/configuration/scripts/options/set_nml.alt05
similarity index 57%
rename from configuration/scripts/options/set_nml.pondcesm
rename to configuration/scripts/options/set_nml.alt05
index 18fe3324e..5a1f83110 100644
--- a/configuration/scripts/options/set_nml.pondcesm
+++ b/configuration/scripts/options/set_nml.alt05
@@ -1,8 +1,13 @@
+ice_ic = 'default'
+restart = .false.
 tr_iage = .false.
 tr_FY = .false.
 tr_lvl = .false.
 tr_pond_cesm = .true.
 tr_pond_topo = .false.
 tr_pond_lvl = .false.
-tr_aero = .true.
+tr_aero = .false.
+shortwave = 'dEdd'
+albedo_type = 'default'
+
diff --git a/configuration/scripts/options/set_nml.basal b/configuration/scripts/options/set_nml.basal
deleted file mode 100644
index d54c0bffb..000000000
--- a/configuration/scripts/options/set_nml.basal
+++ /dev/null
@@ -1,3 +0,0 @@
-Ktens = 0.
-e_ratio = 2.
-basalstress = .true.
diff --git a/configuration/scripts/options/set_nml.bfbflagT b/configuration/scripts/options/set_nml.bfbflagT
deleted file mode 100644
index 0cd94c6e4..000000000
--- a/configuration/scripts/options/set_nml.bfbflagT
+++ /dev/null
@@ -1,2 +0,0 @@
-bfbflag = .true.
-
diff --git a/configuration/scripts/options/set_nml.boxadv b/configuration/scripts/options/set_nml.boxadv
new file mode 100644
index 000000000..9d61758ba
--- /dev/null
+++ b/configuration/scripts/options/set_nml.boxadv
@@ -0,0 +1,20 @@
+ice_ic = 'default'
+restart = .false.
+restart_ext = .false.
+kcatbound = 2
+ew_boundary_type = 'cyclic'
+ns_boundary_type = 'cyclic'
+tr_iage = .true.
+tr_FY = .false.
+tr_lvl = .true.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .false.
+tr_aero = .false.
+kitd = 1
+ktherm = 0
+kdyn = 2
+kstrength = 0
+krdg_partic = 0
+krdg_redist = 0
+
diff --git a/configuration/scripts/options/set_nml.boxdyn b/configuration/scripts/options/set_nml.boxdyn
new file mode 100644
index 000000000..d896ab53c
--- /dev/null
+++ b/configuration/scripts/options/set_nml.boxdyn
@@ -0,0 +1,27 @@
+ice_ic = 'default'
+restart = .false.
+days_per_year = 360
+npt = 72
+restart_format = 'bin'
+dumpfreq = 'd'
+dumpfreq_n = 2
+histfreq = 'd','x','x','x','x'
+histfreq_n = 2,1,1,1,1
+f_aice = 'd'
+kcatbound = 0
+ew_boundary_type = 'open'
+ns_boundary_type = 'open'
+tr_iage = .false.
+tr_FY = .false.
+tr_lvl = .false.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .false.
+tr_aero = .false.
+kitd = 0
+ktherm = 0
+kdyn = 3
+revised_evp = .false.
+kstrength = 0
+krdg_partic = 1
+krdg_redist = 1
diff --git a/configuration/scripts/options/set_nml.boxrestore b/configuration/scripts/options/set_nml.boxrestore
new file mode 100644
index 000000000..40d3c4e52
--- /dev/null
+++ b/configuration/scripts/options/set_nml.boxrestore
@@ -0,0 +1,29 @@
+ice_ic = 'default'
+restart = .false.
+restart_ext = .true.
+use_leap_years = .true.
+ndtd = 2
+kcatbound = 1
+distribution_type = 'cartesian'
+processor_shape = 'slenderX1'
+ew_boundary_type = 'cyclic'
+ns_boundary_type = 'open'
+histfreq = 'd','x','x','x','x'
+histfreq_n = 1,1,1,1,1
+f_aice = 'd'
+tr_iage = .true.
+tr_FY = .true.
+tr_lvl = .true.
+tr_pond_cesm = .false.
+tr_pond_topo = .false.
+tr_pond_lvl = .false.
+tr_aero = .false.
+kitd = 1
+ktherm = 0
+kdyn = 1
+revised_evp = .true.
+kstrength = 0
+krdg_partic = 0
+krdg_redist = 0
+basalstress = .true.
+restore_ice = .true.
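Editorial note: the `set_nml.*` files added and removed in this PR are small namelist-override fragments, one `name = value` pair per line, that the CICE setup scripts merge into `ice_in`. A minimal sketch of reading such a fragment into a dict, assuming that line format (`parse_set_nml` is a hypothetical helper for illustration, not part of the CICE scripts):

```python
def parse_set_nml(text):
    """Hypothetical sketch: parse a set_nml.* fragment into a dict.

    Assumes one "name = value" pair per line; '#' starts a comment;
    blank lines are ignored. Values are kept as raw strings, so Fortran
    literals like .true. or 'd','x','x','x','x' pass through unchanged.
    """
    opts = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or "=" not in line:
            continue
        name, value = line.split("=", 1)
        opts[name.strip()] = value.strip()
    return opts
```

For example, under these assumptions `parse_set_nml("kitd = 1\nrestore_ice = .true.")` returns `{'kitd': '1', 'restore_ice': '.true.'}`.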
diff --git a/configuration/scripts/options/set_nml.dpy360 b/configuration/scripts/options/set_nml.dpy360
deleted file mode 100644
index 7226b2ffd..000000000
--- a/configuration/scripts/options/set_nml.dpy360
+++ /dev/null
@@ -1,2 +0,0 @@
-days_per_year = 360
-use_leap_years = .false.
diff --git a/configuration/scripts/options/set_nml.drakeX1 b/configuration/scripts/options/set_nml.drakeX1
new file mode 100644
index 000000000..c5c4d43ba
--- /dev/null
+++ b/configuration/scripts/options/set_nml.drakeX1
@@ -0,0 +1,2 @@
+distribution_type = 'rake'
+processor_shape = 'slenderX1'
diff --git a/configuration/scripts/options/set_nml.drakeX2 b/configuration/scripts/options/set_nml.drakeX2
new file mode 100644
index 000000000..67fe1a028
--- /dev/null
+++ b/configuration/scripts/options/set_nml.drakeX2
@@ -0,0 +1,2 @@
+distribution_type = 'rake'
+processor_shape = 'slenderX2'
diff --git a/configuration/scripts/options/set_nml.drakeice b/configuration/scripts/options/set_nml.drakeice
new file mode 100644
index 000000000..db199699b
--- /dev/null
+++ b/configuration/scripts/options/set_nml.drakeice
@@ -0,0 +1,2 @@
+distribution_type = 'rake'
+processor_shape = 'square-ice'
diff --git a/configuration/scripts/options/set_nml.drakepop b/configuration/scripts/options/set_nml.drakepop
new file mode 100644
index 000000000..0fde2818b
--- /dev/null
+++ b/configuration/scripts/options/set_nml.drakepop
@@ -0,0 +1,2 @@
+distribution_type = 'rake'
+processor_shape = 'square-pop'
diff --git a/configuration/scripts/options/set_nml.droundrobin b/configuration/scripts/options/set_nml.droundrobin
new file mode 100644
index 000000000..c9495004e
--- /dev/null
+++ b/configuration/scripts/options/set_nml.droundrobin
@@ -0,0 +1 @@
+distribution_type = 'roundrobin'
diff --git a/configuration/scripts/options/set_nml.dsectcart b/configuration/scripts/options/set_nml.dsectcart
new file mode 100644
index 000000000..d6b95831e
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dsectcart
@@ -0,0 +1 @@
+distribution_type = 'sectcart'
diff --git a/configuration/scripts/options/set_nml.dsectrobin b/configuration/scripts/options/set_nml.dsectrobin
new file mode 100644
index 000000000..468ac2cdb
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dsectrobin
@@ -0,0 +1 @@
+distribution_type = 'sectrobin'
diff --git a/configuration/scripts/options/set_nml.dslenderX1 b/configuration/scripts/options/set_nml.dslenderX1
new file mode 100644
index 000000000..04008a80d
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dslenderX1
@@ -0,0 +1,2 @@
+distribution_type = 'cartesian'
+processor_shape = 'slenderX1'
diff --git a/configuration/scripts/options/set_nml.dslenderX2 b/configuration/scripts/options/set_nml.dslenderX2
new file mode 100644
index 000000000..754b76772
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dslenderX2
@@ -0,0 +1,2 @@
+distribution_type = 'cartesian'
+processor_shape = 'slenderX2'
diff --git a/configuration/scripts/options/set_nml.dspacecurve b/configuration/scripts/options/set_nml.dspacecurve
new file mode 100644
index 000000000..0429a6b57
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dspacecurve
@@ -0,0 +1 @@
+distribution_type = 'spacecurve'
diff --git a/configuration/scripts/options/set_nml.dsquareice b/configuration/scripts/options/set_nml.dsquareice
new file mode 100644
index 000000000..a44f945cd
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dsquareice
@@ -0,0 +1,2 @@
+distribution_type = 'cartesian'
+processor_shape = 'square-ice'
diff --git a/configuration/scripts/options/set_nml.dsquarepop b/configuration/scripts/options/set_nml.dsquarepop
new file mode 100644
index 000000000..c0532084b
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dsquarepop
@@ -0,0 +1,2 @@
+distribution_type = 'cartesian'
+processor_shape = 'square-pop'
diff --git a/configuration/scripts/options/set_nml.dwblock b/configuration/scripts/options/set_nml.dwblock
new file mode 100644
index 000000000..1f5f0a26d
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dwblock
@@ -0,0 +1 @@
+distribution_wght = 'block'
diff --git a/configuration/scripts/options/set_nml.dwlat b/configuration/scripts/options/set_nml.dwlat
new file mode 100644
index 000000000..ca157e8fb
--- /dev/null
+++ b/configuration/scripts/options/set_nml.dwlat
@@ -0,0 +1 @@
+distribution_wght = 'latitude'
diff --git a/configuration/scripts/options/set_nml.gbox128 b/configuration/scripts/options/set_nml.gbox128
new file mode 100644
index 000000000..bdbbd9337
--- /dev/null
+++ b/configuration/scripts/options/set_nml.gbox128
@@ -0,0 +1,3 @@
+ice_ic = 'default'
+grid_type = 'rectangular'
+atm_data_type = 'box'
diff --git a/configuration/scripts/options/set_nml.gx1 b/configuration/scripts/options/set_nml.gx1
index c3999b14f..034255a42 100644
--- a/configuration/scripts/options/set_nml.gx1
+++ b/configuration/scripts/options/set_nml.gx1
@@ -5,11 +5,16 @@ grid_format = 'bin'
 grid_type = 'displaced_pole'
 grid_file = 'ICE_MACHINE_INPUTDATA/CICE_data/grid/gx1/grid_gx1.bin'
 kmt_file = 'ICE_MACHINE_INPUTDATA/CICE_data/grid/gx1/kmt_gx1.bin'
+maskhalo_dyn = .true.
+maskhalo_remap = .true.
+maskhalo_bound = .true.
 fyear_init = 2005
 ycycle = 1
 atm_data_format = 'bin'
 atm_data_type = 'LYq'
 atm_data_dir = 'ICE_MACHINE_INPUTDATA/CICE_data/forcing/gx1/COREII'
-
-
+precip_unit = 'mm_per_sec'
+ocn_data_format = 'nc'
+ocn_data_type = 'ncar'
+ocn_data_dir = 'oceanmixed_ice_depth.nc'
diff --git a/configuration/scripts/options/set_nml.gx1prod b/configuration/scripts/options/set_nml.gx1prod
new file mode 100644
index 000000000..a26af8102
--- /dev/null
+++ b/configuration/scripts/options/set_nml.gx1prod
@@ -0,0 +1,7 @@
+year_init = 1958
+dt = 3600
+npt = 87600
+dumpfreq = 'm'
+fyear_init = 1958
+ycycle = 52
+ocn_data_dir = 'ICE_MACHINE_INPUTDATA/CICE_data/forcing/gx1'
diff --git a/configuration/scripts/options/set_nml.icdefault b/configuration/scripts/options/set_nml.icdefault
new file mode 100644
index 000000000..24bf7e244
--- /dev/null
+++ b/configuration/scripts/options/set_nml.icdefault
@@ -0,0 +1,2 @@
+ice_ic = 'default'
+restart = .false.
diff --git a/configuration/scripts/options/set_nml.icnone b/configuration/scripts/options/set_nml.icnone
new file mode 100644
index 000000000..b38c661bf
--- /dev/null
+++ b/configuration/scripts/options/set_nml.icnone
@@ -0,0 +1,2 @@
+ice_ic = 'none'
+restart = .false.
diff --git a/configuration/scripts/options/set_nml.leap b/configuration/scripts/options/set_nml.leap
deleted file mode 100644
index 71af9479a..000000000
--- a/configuration/scripts/options/set_nml.leap
+++ /dev/null
@@ -1,2 +0,0 @@
-days_per_year = 365
-use_leap_years = .true.
diff --git a/configuration/scripts/options/set_nml.maskhalo b/configuration/scripts/options/set_nml.maskhalo
new file mode 100644
index 000000000..d3977e7cf
--- /dev/null
+++ b/configuration/scripts/options/set_nml.maskhalo
@@ -0,0 +1,3 @@
+maskhalo_dyn = .true.
+maskhalo_remap = .true.
+maskhalo_bound = .true.
diff --git a/configuration/scripts/options/set_nml.none b/configuration/scripts/options/set_nml.none new file mode 100644 index 000000000..479cbf6e3 --- /dev/null +++ b/configuration/scripts/options/set_nml.none @@ -0,0 +1 @@ +# this serves as an empty placeholder diff --git a/configuration/scripts/options/set_nml.pondlvl b/configuration/scripts/options/set_nml.pondlvl deleted file mode 100644 index 626922875..000000000 --- a/configuration/scripts/options/set_nml.pondlvl +++ /dev/null @@ -1,8 +0,0 @@ -tr_iage = .true. -tr_FY = .true. -tr_lvl = .true. -tr_pond_cesm = .false. -tr_pond_topo = .false. -tr_pond_lvl = .true. -tr_aero = .false. - diff --git a/configuration/scripts/options/set_nml.pondtopo b/configuration/scripts/options/set_nml.pondtopo deleted file mode 100644 index 77879a9b9..000000000 --- a/configuration/scripts/options/set_nml.pondtopo +++ /dev/null @@ -1,8 +0,0 @@ -tr_iage = .false. -tr_FY = .false. -tr_lvl = .false. -tr_pond_cesm = .false. -tr_pond_topo = .true. -tr_pond_lvl = .false. -tr_aero = .false.
- diff --git a/configuration/scripts/options/set_nml.restart b/configuration/scripts/options/set_nml.restart deleted file mode 100644 index c7ed36bb5..000000000 --- a/configuration/scripts/options/set_nml.restart +++ /dev/null @@ -1,4 +0,0 @@ -npt = 240 -dumpfreq = 'd' -dumpfreq_n = 5 -histfreq = 'd','x','x','x','x' diff --git a/configuration/scripts/options/set_nml.run10day b/configuration/scripts/options/set_nml.run10day index afa3ee7bc..deae3e993 100644 --- a/configuration/scripts/options/set_nml.run10day +++ b/configuration/scripts/options/set_nml.run10day @@ -2,4 +2,5 @@ npt = 240 dumpfreq = 'd' dumpfreq_n = 10 histfreq = 'd','x','x','x','x' +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.run1day b/configuration/scripts/options/set_nml.run1day index 28ff332dc..d7b70f973 100644 --- a/configuration/scripts/options/set_nml.run1day +++ b/configuration/scripts/options/set_nml.run1day @@ -4,3 +4,4 @@ dumpfreq_n = 1 diag_type = 'stdout' print_global = .true. histfreq = 'd','x','x','x','x' +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.run2day b/configuration/scripts/options/set_nml.run2day index 625d65f2e..8129d59f6 100644 --- a/configuration/scripts/options/set_nml.run2day +++ b/configuration/scripts/options/set_nml.run2day @@ -2,4 +2,5 @@ npt = 48 dumpfreq = 'd' dumpfreq_n = 2 histfreq = 'd','x','x','x','x' +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.run3day b/configuration/scripts/options/set_nml.run3day new file mode 100644 index 000000000..1fbf7a115 --- /dev/null +++ b/configuration/scripts/options/set_nml.run3day @@ -0,0 +1,8 @@ +npt = 72 +dumpfreq = 'd' +dumpfreq_n = 2 +diag_type = 'stdout' +print_global = .true. 
+histfreq = 'd','x','x','x','x' +histfreq_n = 2,1,1,1,1 +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.run5day b/configuration/scripts/options/set_nml.run5day index a0b47aa07..4113c48e6 100644 --- a/configuration/scripts/options/set_nml.run5day +++ b/configuration/scripts/options/set_nml.run5day @@ -2,4 +2,5 @@ npt = 120 dumpfreq = 'd' dumpfreq_n = 5 histfreq = 'd','x','x','x','x' +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.run60day b/configuration/scripts/options/set_nml.run60day index ea1f521cb..01fd59504 100644 --- a/configuration/scripts/options/set_nml.run60day +++ b/configuration/scripts/options/set_nml.run60day @@ -2,4 +2,4 @@ npt = 1440 dumpfreq = 'd' dumpfreq_n = 30 histfreq = 'd','x','x','x','x' - +f_aice = 'd' diff --git a/configuration/scripts/options/set_nml.smoke b/configuration/scripts/options/set_nml.smoke deleted file mode 100644 index 80fab10dc..000000000 --- a/configuration/scripts/options/set_nml.smoke +++ /dev/null @@ -1 +0,0 @@ -#npt = 24 diff --git a/configuration/scripts/options/set_nml.swccsm3 b/configuration/scripts/options/set_nml.swccsm3 deleted file mode 100644 index 9875d3d5b..000000000 --- a/configuration/scripts/options/set_nml.swccsm3 +++ /dev/null @@ -1,6 +0,0 @@ -shortwave = 'ccsm3' -albedo_type = 'ccsm3' -calc_tsfc = .true. -tr_pond_cesm = .false. -tr_pond_topo = .false. -tr_pond_lvl = .false. 
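The `npt` values in the run-length options above follow directly from the model timestep: with the hourly timestep (`dt = 3600`) used by these configurations, `npt = days * 86400 / dt`, giving 72 steps for `run3day` and 240 for `run10day`. A small sketch of that arithmetic (the function name is illustrative only):

```shell
#!/bin/sh
# Sketch of the run-length arithmetic behind the set_nml.runNday options,
# assuming npt counts timesteps of length dt seconds.
days_to_npt() {
  days=$1   # run length in days
  dt=$2     # timestep in seconds
  echo $(( days * 86400 / dt ))
}
```

For example, `days_to_npt 3 3600` reproduces the `npt = 72` in `set_nml.run3day`.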
diff --git a/configuration/scripts/options/set_nml.thermo1 b/configuration/scripts/options/set_nml.thermo1 deleted file mode 100644 index 69002ee19..000000000 --- a/configuration/scripts/options/set_nml.thermo1 +++ /dev/null @@ -1,6 +0,0 @@ -kcatbound = 0 -kitd = 0 -ktherm = 1 -conduct = 'MU71' -tfrz_option = 'linear_salt' -atm_data_type = 'clim' diff --git a/configuration/scripts/options/set_nml.writeicf b/configuration/scripts/options/set_nml.writeicf deleted file mode 100644 index a6abd575e..000000000 --- a/configuration/scripts/options/set_nml.writeicf +++ /dev/null @@ -1 +0,0 @@ -write_ic = .false. diff --git a/configuration/scripts/options/set_nml.writeict b/configuration/scripts/options/set_nml.writeict deleted file mode 100644 index 92958058a..000000000 --- a/configuration/scripts/options/set_nml.writeict +++ /dev/null @@ -1 +0,0 @@ -write_ic = .true. diff --git a/configuration/scripts/options/test_nml.restart2 b/configuration/scripts/options/test_nml.restart2 index 70f36c369..5981918a8 100644 --- a/configuration/scripts/options/test_nml.restart2 +++ b/configuration/scripts/options/test_nml.restart2 @@ -2,3 +2,5 @@ npt = 120 dumpfreq = 'd' dumpfreq_n = 5 runtype = 'continue' +restart = .true. 
+ diff --git a/configuration/scripts/parse_namelist_from_settings.sh b/configuration/scripts/parse_namelist_from_settings.sh index 4683bb6c1..3139d1879 100755 --- a/configuration/scripts/parse_namelist_from_settings.sh +++ b/configuration/scripts/parse_namelist_from_settings.sh @@ -14,4 +14,8 @@ echo "running parse_namelist_from_settings.sh" sed -i.sedbak -e 's|ICE_SANDBOX|'"${ICE_SANDBOX}"'|g' $filename sed -i.sedbak -e 's|ICE_MACHINE_INPUTDATA|'"${ICE_MACHINE_INPUTDATA}"'|g' $filename +if [[ -e "$(unknown).sedbak" ]]; then + rm $(unknown).sedbak +fi + exit 0 diff --git a/configuration/scripts/set_version_number.csh b/configuration/scripts/set_version_number.csh new file mode 100755 index 000000000..5dfe2880a --- /dev/null +++ b/configuration/scripts/set_version_number.csh @@ -0,0 +1,24 @@ +#! /bin/csh -f + +if ( $#argv < 1 ) then + echo "$0 requires one argument, none passed" + exit -1 +endif +if ( $#argv > 1 ) then + echo "$0 requires one argument, passed = $argv" + exit -1 +endif + +set versno = $1 +#echo "$0 versno = $versno" + +cp -f doc/source/conf.py doc/source/conf.py.bu + +sed -i 's|^.*version.*=.*$|version = u'"'"${versno}"'"'|g' doc/source/conf.py +sed -i 's|^.*release.*=.*$|release = u'"'"${versno}"'"'|g' doc/source/conf.py + +echo "CICE ${versno}" >! cicecore/version.txt + +echo "$0 completed successfully" + +exit 0 diff --git a/configuration/scripts/setup_run_dirs.csh b/configuration/scripts/setup_run_dirs.csh new file mode 100755 index 000000000..6a95b6b25 --- /dev/null +++ b/configuration/scripts/setup_run_dirs.csh @@ -0,0 +1,13 @@ +#!
/bin/csh -f + +source ./cice.settings +source ${ICE_CASEDIR}/env.${ICE_MACHCOMP} || exit 2 + +if !(-d ${ICE_RUNDIR}) then + echo "mkdir ${ICE_RUNDIR}" + mkdir -p ${ICE_RUNDIR} +endif +if !(-d ${ICE_HSTDIR}) mkdir -p ${ICE_HSTDIR} +if !(-d ${ICE_RSTDIR}) mkdir -p ${ICE_RSTDIR} + +exit 0 diff --git a/configuration/scripts/tests/CTest/CTestConfig.cmake b/configuration/scripts/tests/CTest/CTestConfig.cmake deleted file mode 100644 index 51bb4ecfa..000000000 --- a/configuration/scripts/tests/CTest/CTestConfig.cmake +++ /dev/null @@ -1,6 +0,0 @@ -set(CTEST_PROJECT_NAME "myCICE") -set(CTEST_DROP_METHOD "http") -set(CTEST_DROP_SITE "my.cdash.org") -set(CTEST_DROP_LOCATION "/submit.php?project=myCICE") -set(CTEST_DROP_SITE_CDASH TRUE) -set(CTEST_NIGHTLY_START_TIME "01:00:00 CET") diff --git a/configuration/scripts/tests/CTest/CTestTestfile.cmake b/configuration/scripts/tests/CTest/CTestTestfile.cmake deleted file mode 100644 index 8a31a709e..000000000 --- a/configuration/scripts/tests/CTest/CTestTestfile.cmake +++ /dev/null @@ -1,6 +0,0 @@ - -# Add your custom executables here in the form: -# add_test( [ ...]) -# -# Make your test return 0 for success, nonzero for failure. - diff --git a/configuration/scripts/tests/CTest/gen_ctestfile.csh b/configuration/scripts/tests/CTest/gen_ctestfile.csh deleted file mode 100755 index e2179d5dd..000000000 --- a/configuration/scripts/tests/CTest/gen_ctestfile.csh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/csh -f - -# This script is passed the test name, and whether or not bfbcomp is to be performed. 
-# It then builds the CTestTestfile.cmake file per the inputs - -# $1 = $tsdir (the directory where CTestTestfile.cmake is to be put) -# $2 = $testname_noid (the name of the test) -# $3 = $bfbcomp -# $4 = $spval (used to test for bfbcomp) - -# Every test needs to have the "build" phase -echo "add_test($2_build grep "\""PASS.*$2 .*build"\"" results.log)" >> $1/CTestTestfile.cmake - -# If this is a restart test, add the 'run-initial' and 'run-restart' tests -if ( "$2" =~ *"_restart_"* ) then - echo "add_test($2_run_initial grep "\""PASS.*$2 .*run-initial"\"" results.log)" >> $1/CTestTestfile.cmake - echo "add_test($2_run_restart grep "\""PASS.*$2 .*run-restart"\"" results.log)" >> $1/CTestTestfile.cmake - echo "add_test($2_exact_restart grep "\""PASS.*$2 .*exact-restart"\"" results.log)" >> $1/CTestTestfile.cmake -else - echo "add_test($2_run grep "\""PASS.*$2 .*run"\"" results.log)" >> $1/CTestTestfile.cmake - echo "add_test($2_compare grep "\""PASS.*$2 .*compare"\"" results.log)" >> $1/CTestTestfile.cmake - # Check for bfbcomp in sets - if ( $3 != $4 ) then - echo "add_test($2_bfbcomp grep "\""PASS.*$2 .*bfbcomp"\"" results.log)" >> $1/CTestTestfile.cmake - endif -endif - diff --git a/configuration/scripts/tests/CTest/parse_timings.csh b/configuration/scripts/tests/CTest/parse_timings.csh deleted file mode 100755 index 9532a7dcb..000000000 --- a/configuration/scripts/tests/CTest/parse_timings.csh +++ /dev/null @@ -1,61 +0,0 @@ -#!/bin/csh -f - -# This script parses the timings from the tests in the test suite and writes them to the CTest -# Test.xml file prior to submitting to CDash. 
- -# Loop through each line of the Test.xml file -set CTEST_TAG="`head -n 1 Testing/TAG`" -set testfile="`ls Testing/${CTEST_TAG}/Test.xml`" -set outfile="Testing/${CTEST_TAG}/Test.xml.generated" -set save_time=0 -foreach line ("`cat $testfile`") - if ( "$line" =~ *"Test Status"* ) then - if ( "$line" =~ *"passed"* ) then - set save_time=1 - else - set save_time=0 - endif - endif - if ( "$line" =~ *"FullName"* ) then - if ( $save_time == 1 ) then - if ( "$line" =~ *"_run<"* ) then - set save_time=1 - else if ("$line" =~ *"_run-initial<"*) then - set save_time=2 - else if ("$line" =~ *"run-restart"*) then - set save_time=3 - else - set save_time=0 - endif - # Grab the case name - set casename=`echo $line | grep -oP '(?<=\.\/).*?(?=)'` - set casename=`echo $casename | sed 's/_run\>\|_run-initial\>\|_run-restart\>//'` - else - set save_time=0 - endif - endif - if ( "$line" =~ *"Execution Time"* && $save_time > 0 ) then - # Find the case runlog - #set runlog=`ls ./${casename}.*/logs/*runlog*` - foreach file (`ls ./${casename}.*/logs/*runlog*`) - set runlog="$file" - if ( $save_time == 2 ) then - break - endif - end - foreach line1 ("`cat $runlog`") - if ( "$line1" =~ *"Timer 2:"*) then - set runtime=`echo $line1 | grep -oP "\d+\.(\d+)?" | sort -n | tail -1` - endif - end - set local_runtime=`echo $line | grep -oP "\d+\.(\d+)?" 
| sort -n | tail -1` - # Grab the leading whitespace - # Replace the timing in Test.xml with the timing from the runlog file - set line=`echo "$line" | sed "s/^\(.*\)${local_runtime}\(<\/Value>\)/\1${runtime}<\/Value>/"` - set save_time=0 - endif - echo "$line" >> $outfile -end - -mv $testfile ${testfile}.bak -mv $outfile $testfile diff --git a/configuration/scripts/tests/CTest/run_ctest.csh b/configuration/scripts/tests/CTest/run_ctest.csh deleted file mode 100755 index 425665683..000000000 --- a/configuration/scripts/tests/CTest/run_ctest.csh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/csh -f - -set initargv = ( $argv[*] ) - -set dash = "-" -set submit_only=0 - -# Check if any of the results could not find the baseline dataset -grep --quiet 'baseline-does-not-exist' results.log -if ($status == 0) then - echo "Tests were not able to find the baseline datasets. No results" - echo "will be posted to CDash" - grep 'baseline-does-not-exist' results.log - exit -1 -endif - -# Read in command line arguments -set argv = ( $initargv[*] ) - -while (1) - if ( $#argv < 1 ) break; - if ("$argv[1]" == "${dash}submit") then - set submit_only=1 - shift argv - continue - endif -end - -if ( $submit_only == 0 ) then - ctest -S steer.cmake -else - # Find the filename to submit to CDash - set CTEST_TAG="`head -n 1 Testing/TAG.submit`" - -cat > submit.cmake << EOF0 -cmake_minimum_required(VERSION 2.8) -set(CTEST_DASHBOARD_ROOT "\$ENV{PWD}") -set(CTEST_SOURCE_DIRECTORY "\$ENV{PWD}") -set(CTEST_BINARY_DIRECTORY "\$ENV{PWD}") -message("source directory = \${CTEST_SOURCE_DIRECTORY}") - -include( CTestConfig.cmake ) - -ctest_start("Experimental") -ctest_submit(FILES "`pwd`/Testing/${CTEST_TAG}/Test.xml") -EOF0 - ctest -S submit.cmake -endif - -if ( $submit_only == 0 ) then - if ( -f Testing/TAG ) then - set file='Testing/TAG' - set CTEST_TAG="`head -n 1 $file`" - - # Check to see if ctest_submit was successful - set success=0 - set submit_file="Testing/Temporary/LastSubmit_$CTEST_TAG.log" - 
foreach line ("`cat $submit_file`") - if ( "$line" =~ *'Submission successful'* ) then - set success=1 - endif - end - - if ( $success == 0 ) then - echo "" - set xml_file="Testing/$CTEST_TAG/Test.xml" - cp Testing/TAG Testing/TAG.submit - tar czf cice_ctest.tgz run_ctest.csh CTestConfig.cmake Testing/TAG.submit \ - Testing/${CTEST_TAG}/Test.xml - echo "CTest submission failed. To try the submission again run " - echo " ./run_ctest.csh -submit" - echo "If you wish to submit the test results from another server, copy the " - echo "cice_ctest.tgz file to another server and run " - echo " ./run_ctest.csh -submit" - else - echo "Submit Succeeded" - endif - else - echo "No Testing/TAG file exists. Ensure that ctest is installed on this system." - endif -endif diff --git a/configuration/scripts/tests/CTest/steer.cmake b/configuration/scripts/tests/CTest/steer.cmake deleted file mode 100644 index ddeb9bedf..000000000 --- a/configuration/scripts/tests/CTest/steer.cmake +++ /dev/null @@ -1,43 +0,0 @@ -# ----------------------------------------------------------- -# -- Get environment -# ----------------------------------------------------------- - -## -- Set hostname -## -------------------------- -find_program(HOSTNAME_CMD NAMES hostname) -exec_program(${HOSTNAME_CMD} ARGS OUTPUT_VARIABLE HOSTNAME) - -set(CTEST_SITE "${HOSTNAME}") - -## -- Set site / build name -## -------------------------- - -find_program(UNAME NAMES uname) -macro(getuname name flag) - exec_program("${UNAME}" ARGS "${flag}" OUTPUT_VARIABLE "${name}") -endmacro(getuname) - -getuname(osname -s) -getuname(osrel -r) -getuname(cpu -m) - -find_program(GIT_CMD NAMES git) -exec_program(${GIT_CMD} ARGS rev-parse --short HEAD OUTPUT_VARIABLE GIT_COMMIT_HASH) - -find_program(IFORT_CMD NAMES ifort) -exec_program(${IFORT_CMD} ARGS --version | head -n 1 | awk '{print $3}' OUTPUT_VARIABLE COMPILER_VERSION) - -set(CTEST_BUILD_NAME "${osname}-${cpu}-intel${COMPILER_VERSION}-${GIT_COMMIT_HASH}") - -message("build name = 
${CTEST_BUILD_NAME}") - -set(CTEST_DASHBOARD_ROOT "$ENV{PWD}") -set(CTEST_SOURCE_DIRECTORY "$ENV{PWD}") -set(CTEST_BINARY_DIRECTORY "$ENV{PWD}") -message("source directory = ${CTEST_SOURCE_DIRECTORY}") - -ctest_start(${MODEL} TRACK ${MODEL}) -ctest_test( BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res) -message("Parsing timings into Test.xml") -execute_process(COMMAND "./parse_timings.csh") -ctest_submit( RETURN_VALUE res) diff --git a/configuration/scripts/tests/QC/cice.t-test.py b/configuration/scripts/tests/QC/cice.t-test.py index 4adf246fd..5553f7a74 100755 --- a/configuration/scripts/tests/QC/cice.t-test.py +++ b/configuration/scripts/tests/QC/cice.t-test.py @@ -180,7 +180,11 @@ def two_stage_test(data_a,data_b,num_files,data_d): else: logger.error('TEST NOT CONCLUSIVE') passed = False - return passed, passed_array + + try: + return passed, passed_array + except: + return passed, 0 # Calculate Taylor Skill Score def skill_test(path_a,fname,data_a,data_b,num_files,tlat,hemisphere): @@ -461,22 +465,30 @@ def plot_two_stage_failures(data,lat,lon): # Run skill test on northern hemisphere data_nh_a = ma.masked_array(data_a,mask=mask_nh) data_nh_b = ma.masked_array(data_b,mask=mask_nh) - passed_nh = skill_test(path_a,fname,data_nh_a,data_nh_b,num_files,tlat,'Northern') + if np.ma.all(data_nh_a.mask) and np.ma.all(data_nh_b.mask): + logger.info("Northern Hemisphere data is bit-for-bit") + passed_nh = True + else: + passed_nh = skill_test(path_a,fname,data_nh_a,data_nh_b,num_files,tlat,'Northern') # Run skill test on southern hemisphere data_sh_a = ma.masked_array(data_a,mask=mask_sh) data_sh_b = ma.masked_array(data_b,mask=mask_sh) - passed_sh = skill_test(path_a,fname,data_sh_a,data_sh_b,num_files,tlat,'Southern') + if np.ma.all(data_sh_a.mask) and np.ma.all(data_sh_b.mask): + logger.info("Southern Hemisphere data is bit-for-bit") + passed_sh = True + else: + passed_sh = skill_test(path_a,fname,data_sh_a,data_sh_b,num_files,tlat,'Southern') passed_skill = 
passed_nh and passed_sh logger.info('') if not passed and not passed_skill: logger.error('Quality Control Test FAILED') - post_to_cdash(False) + #post_to_cdash(False) sys.exit(1) # exit with an error return code else: logger.info('Quality Control Test PASSED') - post_to_cdash(True) + #post_to_cdash(True) sys.exit(0) # exit with successful return code diff --git a/configuration/scripts/tests/base_suite.ts b/configuration/scripts/tests/base_suite.ts index aca8811b1..550592b2f 100644 --- a/configuration/scripts/tests/base_suite.ts +++ b/configuration/scripts/tests/base_suite.ts @@ -1,16 +1,21 @@ # Test Grid PEs Sets BFB-compare +smoke gx3 1x1 debug,diag1,run2day +smoke gx3 1x4 debug,diag1,run2day +smoke gx3 4x1 debug,diag1,run5day +restart gx3 8x2 debug smoke gx3 8x2 diag1,run5day smoke gx3 8x2 diag24,run1year,medium -smoke gx3 4x1 debug,diag1,run5day -smoke gx3 8x2 debug,diag1,run5day smoke gx3 4x2 diag1,run5day smoke_gx3_8x2_diag1_run5day smoke gx3 4x1 diag1,run5day,thread smoke_gx3_8x2_diag1_run5day -restart gx3 8x1 diag1 -restart gx3 4x2 debug -restart gx3 8x2 diag1,pondcesm -restart gx3 8x2 diag1,pondtopo -smoke gx1 32x1 diag1,run5day,thread -smoke gx1 16x2 diag1,run5day smoke_gx1_32x1_diag1_run5day_thread -smoke gx1 8x4 debug,run2day -restart gx1 32x1 none -restart gx1 13x2 none +decomp gx3 4x2x25x29x5 +restart gx1 40x4 debug,droundrobin +restart gx3 4x4 none +restart gx3 6x2 alt01 +restart gx3 8x2 alt02 +restart gx3 4x2 alt03 +restart gx3 4x4 alt04 +restart gx3 4x4 alt05 +restart gbox128 4x2 none +restart gbox128 4x2 boxdyn +restart gbox128 2x2 boxadv +restart gbox128 4x4 boxrestore diff --git a/configuration/scripts/tests/baseline.script b/configuration/scripts/tests/baseline.script index 6234456b4..527b9e95f 100644 --- a/configuration/scripts/tests/baseline.script +++ b/configuration/scripts/tests/baseline.script @@ -26,6 +26,19 @@ if (${ICE_BASECOM} != ${ICE_SPVAL}) then set baseline_dir = ${ICE_BASELINE}/${ICE_BASECOM}/${ICE_TESTNAME}/restart set
baseline_data = ${baseline_dir}/${test_file} + set baseline_log = `ls -1t ${ICE_BASELINE}/${ICE_BASECOM}/${ICE_TESTNAME}/cice.runlog* | head -1` + set btimeloop = -1 + set bdynamics = -1 + set bcolumn = -1 + if (${baseline_log} != "" ) then + set btimeloop = `grep TimeLoop ${baseline_log} | grep Timer | cut -c 22-32` + set bdynamics = `grep Dynamics ${baseline_log} | grep Timer | cut -c 22-32` + set bcolumn = `grep Column ${baseline_log} | grep Timer | cut -c 22-32` + if (${btimeloop} == "") set btimeloop = -1 + if (${bdynamics} == "") set bdynamics = -1 + if (${bcolumn} == "") set bcolumn = -1 + endif + echo "" echo "Regression Compare Mode:" echo "Performing binary comparison between files" @@ -33,13 +46,13 @@ if (${ICE_BASECOM} != ${ICE_SPVAL}) then echo "test: ${test_data}" if (-e ${baseline_data} ) then if ( { cmp -s ${test_data} ${baseline_data} } ) then - echo "PASS ${ICE_TESTNAME} compare ${ICE_BASECOM}" >> ${ICE_CASEDIR}/test_output + echo "PASS ${ICE_TESTNAME} compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn}" >> ${ICE_CASEDIR}/test_output else - echo "FAIL ${ICE_TESTNAME} compare ${ICE_BASECOM} different-data" >> ${ICE_CASEDIR}/test_output + echo "FAIL ${ICE_TESTNAME} compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn} different-data" >> ${ICE_CASEDIR}/test_output echo "Regression baseline and test dataset are different" endif else - echo "FAIL ${ICE_TESTNAME} compare ${ICE_BASECOM} baseline-does-not-exist" >> ${ICE_CASEDIR}/test_output + echo "MISS ${ICE_TESTNAME} compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn} missing-data" >> ${ICE_CASEDIR}/test_output echo "Baseline file does not exist" endif endif @@ -51,16 +64,24 @@ endif if (${ICE_BFBCOMP} != ${ICE_SPVAL}) then set test_file = `ls -t1 ${ICE_RUNDIR}/restart | head -1` - set test_data = ${ICE_RUNDIR}/restart/${test_file} + if (${test_file} != "") then + set test_data = ${ICE_RUNDIR}/restart/${test_file} + else + set test_data = "NoThInG__Here" + endif set comp_file = 
`ls -t1 ${ICE_RUNDIR}/../${ICE_BFBCOMP}/restart | head -1` - set comp_data = ${ICE_RUNDIR}/../${ICE_BFBCOMP}/restart/${comp_file} + if (${comp_file} != "") then + set comp_data = ${ICE_RUNDIR}/../${ICE_BFBCOMP}/restart/${comp_file} + else + set comp_data = "NoThInG__Here" + endif echo "" echo "BFB Compare Mode:" echo "Performing binary comparison between files" echo "comp_data: ${comp_data}" echo "test_data: ${test_data}" - if (-e ${comp_data} ) then + if (-e ${comp_data} && -e ${test_data}) then if ( { cmp -s ${test_data} ${comp_data} } ) then echo "PASS ${ICE_TESTNAME} bfbcomp ${ICE_BFBCOMP}" >> ${ICE_CASEDIR}/test_output else @@ -68,8 +89,8 @@ if (${ICE_BFBCOMP} != ${ICE_SPVAL}) then echo "bfbcomp and test dataset are different" endif else - echo "FAIL ${ICE_TESTNAME} bfbcomp baseline_does-not-exist" >> ${ICE_CASEDIR}/test_output - echo "Baseline file does not exist" + echo "MISS ${ICE_TESTNAME} bfbcomp missing-data" >> ${ICE_CASEDIR}/test_output + echo "Missing data" endif endif diff --git a/configuration/scripts/tests/decomp_suite.ts b/configuration/scripts/tests/decomp_suite.ts new file mode 100644 index 000000000..6cb354f59 --- /dev/null +++ b/configuration/scripts/tests/decomp_suite.ts @@ -0,0 +1,14 @@ +# Test Grid PEs Sets BFB-compare +decomp gx3 4x2x25x29x5 +restart gx3 4x2x25x29x4 dslenderX2 +restart gx3 4x1x25x116x1 dslenderX1,thread restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 6x2x4x29x18 dspacecurve restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 8x2x8x10x20 droundrobin restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 6x2x50x58x1 droundrobin restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 4x2x19x19x10 droundrobin restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 1x20x5x29x80 dsectrobin restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 16x2x5x10x20 drakeX2 restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 8x2x8x10x20 droundrobin,maskhalo restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 1x4x25x29x16 droundrobin restart_gx3_4x2x25x29x4_dslenderX2 +restart gx3 
1x1x50x58x4 droundrobin,thread restart_gx3_4x2x25x29x4_dslenderX2 + diff --git a/configuration/scripts/tests/quick_suite.ts b/configuration/scripts/tests/quick_suite.ts new file mode 100644 index 000000000..006104367 --- /dev/null +++ b/configuration/scripts/tests/quick_suite.ts @@ -0,0 +1,5 @@ +# Test Grid PEs Sets BFB-compare +smoke gx3 8x2 diag1,run5day +restart gbox128 8x1 diag1 +restart gx3 4x2 debug,diag1,run5day +smoke gx3 4x1 diag1,run5day,thread smoke_gx3_8x2_diag1_run5day diff --git a/configuration/scripts/tests/report_results.csh b/configuration/scripts/tests/report_results.csh index 5faec637c..385036402 100755 --- a/configuration/scripts/tests/report_results.csh +++ b/configuration/scripts/tests/report_results.csh @@ -1,16 +1,23 @@ #!/bin/csh -f +if (! -e results.log) then + echo " " + echo "${0}: ERROR results.log does not exist, try running results.csh" + echo " " + exit -1 +endif + set wikirepo = "https://github.com/CICE-Consortium/Test-Results.wiki.git" set wikiname = Test-Results.wiki -set tsubdir = cice_testing -set hfile = "cice_by_hash" -set mfile = "cice_by_mach" -set vfile = "cice_by_vers" -set bfile = "cice_by_bran" - rm -r -f ${wikiname} git clone ${wikirepo} ${wikiname} +if ($status != 0) then + echo " " + echo "${0}: ERROR git clone failed" + echo " " + exit -1 +endif set repo = `grep "#repo = " results.log | cut -c 9-` set bran = `grep "#bran = " results.log | cut -c 9-` @@ -49,7 +56,13 @@ set xcdat = `echo $cdat | sed 's|-||g' | cut -c 3-` set xctim = `echo $ctim | sed 's|:||g'` set shrepo = `echo $repo | tr '[A-Z]' '[a-z]'` +set tsubdir = cice_master +set hfile = "cice_by_hash" +set mfile = "cice_by_mach" +set vfile = "cice_by_vers" +set bfile = "cice_by_bran" if ("${shrepo}" !~ "*cice-consortium*") then + set tsubdir = cice_dev set hfile = {$hfile}_forks set mfile = {$mfile}_forks set vfile = {$vfile}_forks @@ -79,7 +92,7 @@ foreach compiler ( ${compilers} ) cat >! 
${outfile} << EOF -| Build | Run | Test | Regression | Compare | Timing | Case | +|Bld|Run|Test| Regr | Compare | Timing | Case | | ------ | ------ | ------ | ------ | ------ | ------ | ------ | EOF @@ -93,26 +106,65 @@ EOF foreach case ( ${cases} ) if ( ${case} =~ *_${compiler}_* ) then - @ ttotl = $ttotl + 1 +# check that case results are meaningful + set fbuild = `grep " ${case} " results.log | grep " build" | cut -c 1-4` + set frun = `grep " ${case} " results.log | grep " run" | cut -c 1-4` + set ftest = `grep " ${case} " results.log | grep " test" | cut -c 1-4` - set tchkpass = 1 +if ( $fbuild != "" || $frun != "" || $ftest != "" ) then - set fbuild = `grep " ${case} " results.log | grep " build" | cut -c 1-4` + set fbuild = `grep " ${case} " results.log | grep " build" | cut -c 1-4` + set frun = `grep " ${case} " results.log | grep " run" | cut -c 1-4` + set ftest = `grep " ${case} " results.log | grep " test" | cut -c 1-4` set fregr = `grep " ${case} " results.log | grep " compare" | cut -c 1-4` set fcomp = `grep " ${case} " results.log | grep " bfbcomp" | cut -c 1-4` + if (${ftest} == "PASS") set frun = "PASS" + if (${frun} == "PASS") set fbuild = "PASS" + set vregr = `grep " ${case} " results.log | grep " compare" | cut -d " " -f 4 | sed 's/\./ /g' ` set vcomp = `grep " ${case} " results.log | grep " bfbcomp" | cut -d " " -f 4` - set ftime = "" - if (${case} =~ *_restart_*) then - set frun = `grep " ${case} " results.log | grep " run-initial" | cut -c 1-4` - set frun = `grep " ${case} " results.log | grep " run-restart" | cut -c 1-4` - set ftest = `grep " ${case} " results.log | grep " exact-restart" | cut -c 1-4` - else if (${case} =~ *_smoke_*) then - set frun = `grep " ${case} " results.log | grep " run" | cut -c 1-4` - set ftest = `grep " ${case} " results.log | grep " run" | cut -c 1-4` + set vtime1 = `grep " ${case} " results.log | grep " run" | cut -d " " -f 4` + set vtime2 = `grep " ${case} " results.log | grep " run" | cut -d " " -f 5` + set
vtime3 = `grep " ${case} " results.log | grep " run" | cut -d " " -f 6` + + set btime1 = `grep " ${case} " results.log | grep " compare" | cut -d " " -f 5` + set btime2 = `grep " ${case} " results.log | grep " compare" | cut -d " " -f 6` + set btime3 = `grep " ${case} " results.log | grep " compare" | cut -d " " -f 7` + + if (${btime1} != "") then + if (`echo "${btime1} < 0.0" | bc`) set btime1 = "" + endif + if (${btime2} != "") then + if (`echo "${btime2} < 0.0" | bc`) set btime2 = "" + endif + if (${btime3} != "") then + if (`echo "${btime3} < 0.0" | bc`) set btime3 = "" + endif + + set vtime = "" + if (${vtime1} != "") set vtime = "$vtime TL=${vtime1}(${btime1})" + if (${vtime2} != "") set vtime = "$vtime Dyn=${vtime2}(${btime2})" + if (${vtime3} != "") set vtime = "$vtime Col=${vtime3}(${btime3})" + + set scale1 = 1.2 + set scale2 = 1.5 + set ftime = "" + if (${vtime1} != "" && ${btime1} != "") then + if (`echo "${vtime1} > 0.0" | bc` && `echo "${btime1} > 0.0" | bc`) then + if (`echo "$vtime1 > $btime1*$scale2" | bc`) then + set ftime = "FAIL" + else if (`echo "$vtime1 > $btime1*$scale1" | bc`) then + set ftime = "NOTSOGOOD" + else + set ftime = "PASS" + endif + endif endif + @ ttotl = $ttotl + 1 + set tchkpass = 1 + set noglob set rbuild = ${yellow} set rrun = ${yellow} @@ -126,12 +178,14 @@ if ( ${case} =~ *_${compiler}_* ) then if (${ftest} == "PASS") set rtest = ${green} if (${fregr} == "PASS") set rregr = ${green} if (${fcomp} == "PASS") set rcomp = ${green} + if (${ftime} == "PASS") set rtime = ${green} if (${fbuild} == "FAIL") set rbuild = ${red} if (${frun} == "FAIL") set rrun = ${red} if (${ftest} == "FAIL") set rtest = ${red} if (${fregr} == "FAIL") set rregr = ${red} if (${fcomp} == "FAIL") set rcomp = ${red} + if (${ftime} == "FAIL") set rtime = ${red} if (${fbuild} == "") set rbuild = ${red} if (${frun} == "") set rrun = ${red} @@ -140,8 +194,12 @@ if ( ${case} =~ *_${compiler}_* ) then if (${fcomp} == "") set rcomp = ${gray} if (${ftime} == "") 
set rtime = ${gray} - set fregrx = `grep " ${case} " results.log | grep " compare" | grep "baseline-does-not-exist" | wc -l ` - if ($fregrx > 0) set rregr = ${gray} + if (${fbuild} == "MISS") set rbuild = ${gray} + if (${frun} == "MISS") set rrun = ${gray} + if (${ftest} == "MISS") set rtest = ${gray} + if (${fregr} == "MISS") set rregr = ${gray} + if (${fcomp} == "MISS") set rcomp = ${gray} + if (${ftime} == "MISS") set rtime = ${gray} if (${rbuild} == ${red}) set tchkpass = 0 if (${rrun} == ${red}) set tchkpass = 0 @@ -163,11 +221,14 @@ if ( ${case} =~ *_${compiler}_* ) then unset noglob - set xcase = `echo $case | sed 's|_| |g'` - set xvcomp = `echo $vcomp | sed 's|_| |g'` + # remove the final .string (the testid); it isn't needed here + set wvcomp = `echo ${vcomp} | sed 's|^\(.*\)\.[^.]*$|\1|g'` + set xvcomp = `echo ${wvcomp} | sed 's|_| |g'` + set xcase = `echo ${case} | sed 's|_| |g'` #echo "debug | ${rbuild} | ${rrun} | ${rtest} | ${rregr} ${vregr} | ${rcomp} ${vcomp} | ${case} |" - echo "| ${rbuild} | ${rrun} | ${rtest} | ${rregr} ${vregr} | ${rcomp} ${xvcomp} | ${rtime} | ${xcase} |" >> ${outfile} + echo "| ${rbuild} | ${rrun} | ${rtest} | ${rregr} ${vregr} | ${rcomp} ${xvcomp} | ${rtime} ${vtime} | ${xcase} |" >> ${outfile} +endif endif end @@ -194,13 +255,13 @@ endif unset noglob mv ${outfile} ${outfile}.hold +#- raw results: ${totl} total tests: ${pass} pass, ${fail} fail cat >!
${outfile} << EOF - repo = **${repo}** : **${bran}** - hash = ${hash} - hash created by ${hashuser} ${hashdate} - vers = ${vers} - tested on ${mach}, ${compiler}, ${user}, ${cdat} ${ctim} UTC -- raw results: ${totl} total tests: ${pass} pass, ${fail} fail - ${ttotl} total tests: ${tpass} pass, ${tfail} fail - ${ttotl} total regressions: ${rpass} pass, ${rfail} fail, ${rothr} other EOF diff --git a/configuration/scripts/tests/test_decomp.files b/configuration/scripts/tests/test_decomp.files new file mode 100644 index 000000000..9ff03d58f --- /dev/null +++ b/configuration/scripts/tests/test_decomp.files @@ -0,0 +1,12 @@ +set_nml.drakeice +set_nml.drakepop +set_nml.drakeX1 +set_nml.drakeX2 +set_nml.droundrobin +set_nml.dsectcart +set_nml.dsectrobin +set_nml.dslenderX1 +set_nml.dslenderX2 +set_nml.dspacecurve +set_nml.dsquareice +set_nml.dsquarepop diff --git a/configuration/scripts/tests/test_decomp.script b/configuration/scripts/tests/test_decomp.script new file mode 100644 index 000000000..b58dbcb2b --- /dev/null +++ b/configuration/scripts/tests/test_decomp.script @@ -0,0 +1,123 @@ + +# Build and run case with different decompositions +#----------------------------------------------------------- +# Run the CICE model baseline simulation + +set decomps = "squarepop squareice slenderX2 slenderX1 roundrobin sectcart sectrobin spacecurve rakeX2 rakeX1 rakepop rakeice" + +mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} build" >! ${ICE_CASEDIR}/test_output +mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! 
${ICE_CASEDIR}/test_output +rm -f ${ICE_CASEDIR}/test_output.prev + +foreach decomp (${decomps}) + echo "PASS ${ICE_TESTNAME}_${decomp} build" >> ${ICE_CASEDIR}/test_output + echo "PEND ${ICE_TESTNAME}_${decomp} run" >> ${ICE_CASEDIR}/test_output +end + +cp ice_in ice_in.0 +set base_data = "" + +foreach decomp (${decomps}) + ${ICE_CASEDIR}/casescripts/parse_namelist.sh ice_in ${ICE_CASEDIR}/casescripts/set_nml.d${decomp} + cp ice_in ice_in.${decomp} + + ./cice.run + set res="$?" + + set grade = FAIL + if ( $res == 0 ) then + set grade = PASS + endif + + mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev + cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME}_${decomp}" >! ${ICE_CASEDIR}/test_output + rm -f ${ICE_CASEDIR}/test_output.prev + + # bfbcomp for this test + if (${grade} != PASS) then + echo "$grade ${ICE_TESTNAME}_${decomp} run" >> ${ICE_CASEDIR}/test_output + echo "$grade ${ICE_TESTNAME}_${decomp} test" >> ${ICE_CASEDIR}/test_output + else + + set log_file = `ls -t1 ${ICE_RUNDIR}/cice.runlog* | head -1` + mv -f ${log_file} ${log_file}.${decomp} + set log_file = ${log_file}.${decomp} + set ttimeloop = `grep TimeLoop ${log_file} | grep Timer | cut -c 22-32` + set tdynamics = `grep Dynamics ${log_file} | grep Timer | cut -c 22-32` + set tcolumn = `grep Column ${log_file} | grep Timer | cut -c 22-32` + if (${ttimeloop} == "") set ttimeloop = -1 + if (${tdynamics} == "") set tdynamics = -1 + if (${tcolumn} == "") set tcolumn = -1 + + set test_file = `ls -t1 ${ICE_RUNDIR}/restart | head -1` + set test_data = ${ICE_RUNDIR}/restart/${test_file} + mv -f ${test_data} ${test_data}.${decomp} + set test_file = ${test_file}.${decomp} + set test_data = ${test_data}.${decomp} + + echo "$grade ${ICE_TESTNAME}_${decomp} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output + echo "$grade ${ICE_TESTNAME}_${decomp} test" >> ${ICE_CASEDIR}/test_output + + # bfb compare section + if (${base_data} == "") then + # First run is the base 
data + set base_case = ${ICE_TESTNAME}_${decomp} + set base_data = ${test_data} + else + set grade = FAIL + if ( { cmp -s $test_data $base_data } ) then + set grade = PASS + endif + echo "$grade ${ICE_TESTNAME}_${decomp} bfbcomp ${base_case}" >> ${ICE_CASEDIR}/test_output + endif + + # compare section + if (${ICE_BASECOM} != ${ICE_SPVAL}) then + + set baseline_dir = ${ICE_BASELINE}/${ICE_BASECOM}/${ICE_TESTNAME}/restart + set baseline_data = ${baseline_dir}/${test_file} + + set baseline_log = `ls -1t ${ICE_BASELINE}/${ICE_BASECOM}/${ICE_TESTNAME}/cice.runlog*.${decomp} | head -1` + set btimeloop = -1 + set bdynamics = -1 + set bcolumn = -1 + if (${baseline_log} != "" ) then + set btimeloop = `grep TimeLoop ${baseline_log} | grep Timer | cut -c 22-32` + set bdynamics = `grep Dynamics ${baseline_log} | grep Timer | cut -c 22-32` + set bcolumn = `grep Column ${baseline_log} | grep Timer | cut -c 22-32` + if (${btimeloop} == "") set btimeloop = -1 + if (${bdynamics} == "") set bdynamics = -1 + if (${bcolumn} == "") set bcolumn = -1 + endif + + echo "" + echo "Regression Compare Mode:" + echo "Performing binary comparison between files" + echo "baseline: ${baseline_data}" + echo "test: ${test_data}" + if (-e ${baseline_data} ) then + if ( { cmp -s ${test_data} ${baseline_data} } ) then + echo "PASS ${ICE_TESTNAME}_$decomp compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn}" >> ${ICE_CASEDIR}/test_output + else + echo "FAIL ${ICE_TESTNAME}_$decomp compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn} different-data" >> ${ICE_CASEDIR}/test_output + echo "Regression baseline and test dataset are different" + endif + else + echo "MISS ${ICE_TESTNAME}_$decomp compare ${ICE_BASECOM} ${btimeloop} ${bdynamics} ${bcolumn} missing-data" >> ${ICE_CASEDIR}/test_output + echo "Missing data" + endif + endif + endif + +end + +cp ice_in.0 ice_in +#----------------------------------------------------------- + +# turn off general test features, these are done above for this 
test +setenv ICE_BASECOM ${ICE_SPVAL} +setenv ICE_BFBCOMP ${ICE_SPVAL} + +#----------------------------------------------------------- diff --git a/configuration/scripts/tests/test_restart.files b/configuration/scripts/tests/test_restart.files new file mode 100644 index 000000000..9ea3312af --- /dev/null +++ b/configuration/scripts/tests/test_restart.files @@ -0,0 +1,2 @@ +test_nml.restart1 +test_nml.restart2 diff --git a/configuration/scripts/tests/test_restart.script b/configuration/scripts/tests/test_restart.script index 3cde53555..712ab3f75 100644 --- a/configuration/scripts/tests/test_restart.script +++ b/configuration/scripts/tests/test_restart.script @@ -9,19 +9,26 @@ cp ice_in ice_in.1 mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output -echo "RUN ${ICE_TESTNAME} run" >> ${ICE_CASEDIR}/test_output -echo "PEND ${ICE_TESTNAME} exact-restart" >> ${ICE_CASEDIR}/test_output +mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test" >! ${ICE_CASEDIR}/test_output +rm -f ${ICE_CASEDIR}/test_output.prev +echo "RUN ${ICE_TESTNAME} run " >> ${ICE_CASEDIR}/test_output +echo "PEND ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output ./cice.run set res="$?" -mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output if ( $res != 0 ) then - echo "FAIL ${ICE_TESTNAME} run-initial" >> ${ICE_CASEDIR}/test_output + mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev + cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output + mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev + cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test " >! 
${ICE_CASEDIR}/test_output + rm -f ${ICE_CASEDIR}/test_output.prev + echo "FAIL ${ICE_TESTNAME} run" >> ${ICE_CASEDIR}/test_output + echo "FAIL ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output exit 99 else - echo "PASS ${ICE_TESTNAME} run-initial" >> ${ICE_CASEDIR}/test_output + echo "PASS ${ICE_TESTNAME} initialrun" >> ${ICE_CASEDIR}/test_output endif # Prepend 'baseline_' to the final restart file to save for comparison @@ -40,27 +47,39 @@ ${ICE_CASEDIR}/casescripts/parse_namelist.sh ice_in ${ICE_CASEDIR}/casescripts/t cp ice_in ice_in.2 ./cice.run +set res="$?" -if ( $? != 0 ) then - echo "FAIL ${ICE_TESTNAME} run-restart" >> ${ICE_CASEDIR}/test_output - exit 99 -else - echo "PASS ${ICE_TESTNAME} run-restart" >> ${ICE_CASEDIR}/test_output -endif - -#----------------------------------------------------------- +cp ice_in.0 ice_in mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev -cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} exact-restart" >! ${ICE_CASEDIR}/test_output - -echo "Exact Restart Comparison Mode:" -echo "Performing binary comparison between files:" -echo "base: $base_data" -echo "test: $test_data" -if ( { cmp -s $test_data $base_data } ) then - echo "PASS ${ICE_TESTNAME} exact-restart" >> ${ICE_CASEDIR}/test_output +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output +mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test" >! 
${ICE_CASEDIR}/test_output +rm -f ${ICE_CASEDIR}/test_output.prev + +if ( $res != 0 ) then + echo "FAIL ${ICE_TESTNAME} run " >> ${ICE_CASEDIR}/test_output + echo "FAIL ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output + exit 99 else - echo "FAIL ${ICE_TESTNAME} exact-restart" >> ${ICE_CASEDIR}/test_output + set log_file = `ls -t1 ${ICE_RUNDIR}/cice.runlog* | head -1` + set ttimeloop = `grep TimeLoop ${log_file} | grep Timer | cut -c 22-32` + set tdynamics = `grep Dynamics ${log_file} | grep Timer | cut -c 22-32` + set tcolumn = `grep Column ${log_file} | grep Timer | cut -c 22-32` + if (${ttimeloop} == "") set ttimeloop = -1 + if (${tdynamics} == "") set tdynamics = -1 + if (${tcolumn} == "") set tcolumn = -1 + echo "PASS ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output + + echo "Exact Restart Comparison Mode:" + echo "Performing binary comparison between files:" + echo "base: $base_data" + echo "test: $test_data" + if ( { cmp -s $test_data $base_data } ) then + echo "PASS ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output + else + echo "FAIL ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output + endif endif #----------------------------------------------------------- diff --git a/configuration/scripts/tests/test_smoke.script b/configuration/scripts/tests/test_smoke.script index 4f878d935..53d1747b0 100644 --- a/configuration/scripts/tests/test_smoke.script +++ b/configuration/scripts/tests/test_smoke.script @@ -5,18 +5,31 @@ mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output -echo "RUN ${ICE_TESTNAME} run" >> ${ICE_CASEDIR}/test_output +rm -f ${ICE_CASEDIR}/test_output.prev +echo "RUN ${ICE_TESTNAME} run " >> ${ICE_CASEDIR}/test_output ./cice.run set res="$?" 
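
The test scripts in the hunks above repeat three idioms: grading a run from `./cice.run`'s exit status, scraping timer values from fixed columns of the run log, and deciding bit-for-bit PASS/FAIL with `cmp -s`. A minimal sketch of that pattern follows, in POSIX sh rather than the scripts' csh (same `grep`/`cut`/`cmp` calls); the sample `Timer` line is illustrative, not real CICE log output, and `true` stands in for `./cice.run`.

```shell
# POSIX-sh sketch of the csh test-script idioms above.
# Assumptions: the "Timer" line format below is illustrative only,
# and `true` stands in for the actual ./cice.run invocation.

# 1. Grade the run from its exit status, as test_smoke.script does.
true                                  # stand-in for ./cice.run
res=$?
grade=FAIL
if [ "$res" -eq 0 ]; then grade=PASS; fi

# 2. Scrape a timer value from columns 22-32 of the run log,
#    defaulting to -1 when the line is absent.
log=$(mktemp)
printf 'Timer   2:  TimeLoop      12.34 seconds\n' > "$log"
ttimeloop=$(grep TimeLoop "$log" | grep Timer | cut -c 22-32)
if [ -z "$ttimeloop" ]; then ttimeloop=-1; fi

# 3. Bit-for-bit compare: cmp -s is silent, and its exit status
#    alone decides PASS/FAIL, as in the exact-restart check.
base=$(mktemp); restart=$(mktemp)
printf 'restart bytes' > "$base"
printf 'restart bytes' > "$restart"
cmpgrade=FAIL
if cmp -s "$restart" "$base"; then cmpgrade=PASS; fi

echo "$grade run $(echo $ttimeloop)"
echo "$cmpgrade test"
rm -f "$log" "$base" "$restart"
```

The unquoted `$(echo $ttimeloop)` trims the surrounding whitespace that `cut -c 22-32` keeps, mirroring how csh's word handling makes the timer value land cleanly in the `test_output` line.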
+set log_file = `ls -t1 ${ICE_RUNDIR}/cice.runlog* | head -1` +set ttimeloop = `grep TimeLoop ${log_file} | grep Timer | cut -c 22-32` +set tdynamics = `grep Dynamics ${log_file} | grep Timer | cut -c 22-32` +set tcolumn = `grep Column ${log_file} | grep Timer | cut -c 22-32` +if (${ttimeloop} == "") set ttimeloop = -1 +if (${tdynamics} == "") set tdynamics = -1 +if (${tcolumn} == "") set tcolumn = -1 + mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} run" >! ${ICE_CASEDIR}/test_output -if ( $res != 0 ) then - # Run failed - echo "FAIL ${ICE_TESTNAME} run" >> ${ICE_CASEDIR}/test_output - exit 99 -else - # Run succeeded - echo "PASS ${ICE_TESTNAME} run" >> ${ICE_CASEDIR}/test_output +mv -f ${ICE_CASEDIR}/test_output ${ICE_CASEDIR}/test_output.prev +cat ${ICE_CASEDIR}/test_output.prev | grep -iv "${ICE_TESTNAME} test" >! ${ICE_CASEDIR}/test_output +rm -f ${ICE_CASEDIR}/test_output.prev + +set grade = FAIL +if ( $res == 0 ) then + set grade = PASS endif + +echo "$grade ${ICE_TESTNAME} run ${ttimeloop} ${tdynamics} ${tcolumn}" >> ${ICE_CASEDIR}/test_output +echo "$grade ${ICE_TESTNAME} test " >> ${ICE_CASEDIR}/test_output + diff --git a/configuration/scripts/tests/travis_suite.ts b/configuration/scripts/tests/travis_suite.ts new file mode 100644 index 000000000..27a4255d2 --- /dev/null +++ b/configuration/scripts/tests/travis_suite.ts @@ -0,0 +1,7 @@ +# Test Grid PEs Sets BFB-compare +smoke gx3 1x1 debug,run1day +smoke gx3 2x2 debug,run1day +smoke gx3 1x2 run2day +smoke gx3 2x1 run2day,thread smoke_gx3_1x2_run2day +restart gx3 2x1 +restart gx3 1x2 diff --git a/doc/.DS_Store b/doc/.DS_Store deleted file mode 100644 index edbe77ac4..000000000 Binary files a/doc/.DS_Store and /dev/null differ diff --git a/doc/CICE-Consortium.pdf b/doc/CICE-Consortium.pdf deleted file mode 100644 index 1a80a3b23..000000000 Binary files a/doc/CICE-Consortium.pdf and /dev/null differ diff --git 
a/doc/about-sphinx-documentation.txt b/doc/about-sphinx-documentation.txt deleted file mode 100644 index 8e3bf471f..000000000 --- a/doc/about-sphinx-documentation.txt +++ /dev/null @@ -1,85 +0,0 @@ -Basic information about creating documentation with Sphinx -Alice DuVivier -July 14, 2017 ----------------------------------------------------------- - -INSTALLING SPHINX -You first need to install sphinx on your local machine. See here for more info on how to do this: -https://github.com/NCAR/CICE/wiki/Working-with-CICE-documentation-on-github-and-sphinx -I just had the NCAR ISG folks do it. Note that you will need the sphinxcontrib.bibtex library. - -CONVERTING LATEX TO RST CODE -If you start from a LaTeX (*.tex) document you will need to convert this to the restructured -text (*.rst) format that sphinx requires. A handy tool to do this is pandoc, which you can -install quickly and run from the command line. See here: http://pandoc.org/getting-started.html -The basic syntax is that at the command line you enter: -> panda NAMEIN.tex -f latex -t rst -s -ou NAMEOUT.rst -From here you can just start editing the NAMEOUT.rst files directly for sphinx. Note that this -did a beautiful job of converting the text, equations, and many tables. However, equation -numbering, section linking, references, figures, and some tables required more hands on care -to be sure they render correctly. -- Note: pandoc requires that the .tex files be in utf-8 encoding. To easily do this open the *.tex -document in Emacs then do ctrl-x ctrl-m f and you will be prompted to enter encoding type. Just -type in utf-8 and hit enter. Then save with ctrl-x ctrl-s . You are done and the document can be -converted with panda. - -CREATING AN ORPHAN BRANCH -To create documentation for a repository on GitHub we want to create an orphan branch for that -repository. An orphan branch is a branch that has no history in common with the master repo off -of which it was forked. 
In this case, the purpose for this is that the html pages created by -sphinx will exist in the orphan branch while the source *.rst code used to create the html exists -in the master repository. This way changes to the source code go into the evolving repository, but -changes to the html are isolated to the orphan branch and when you try to merge it avoids conflicts -related to the html that aren’t always caught by GitHub and could cause the documentation to break. - -To create an orphan branch, you will need owner status for a repository. Steps to create orphan branch: -On GitHub website create personal fork of the repo you’re interested in -On GitHub website from your fork, get link for the repository -On local machine: -> git clone URL -> git branch --l -> git checkout --orphan gh-pages -(creates the orphan branch gh-pages from master. Should have all documents in master to start) -> git rm -rf . -(removes everything in this repository. i.e. we are making it so their histories diverge) -> git status . -(check if all the files are gone) -> git echo “Documentation link goes here” > README.md -(it’s still nice to have a readme file) -> git add README.md -(this is all that will be in the branch) -> git commit -m “Initial commit to create gh-pages branch of repo” -> git remote --v -(get list of remote repositories and links. Should show your personal fork as “origin”) -> git push origin gh-pages -(adds the orphan branch to your local fork but not the original repo from which you forked) -> git remote add upstream URL -(add the URL for the original repository as your “upstream” repository. Get this from Github website) -> git remote --v -(verify that you now have an upstream remote URL too) -> git branch --l -(verify you are on the gh-pages branch only so that these changes will be pushed) -**** BE CAREFUL **** -> git push upstream gh-pages -(this pushes the gh-pages branch to the upstream repo. 
You may also want to try doing a pull request instead so that -others have a chance to look at what you’ve done. I did it both ways, I’d prefer the pull request method for the future). - -Now you have set up the orphan branch, but you need to build the sphinx stuff for it: -> git checkout master -(switch to master branch in order to set up doc/source directory it will use and other sphinx stuff) -> cd doc -> mkdir source -> sphinx-quickstart -(Your local machine will prompt you with interactive options to choose. I had Alice B.’s guidance to do this but could look at an old conf.py file to see what options were chosen. You only do this once for a repo but conf.py can be changed later on). -make/copy .gitignore from CICE repo to the main Icepack repository. Want to ignore html code when pushing to master branch. Also want to ignore emacs backup files (*~). -> git status -(should show the files conf.py, .gitignore, others? to initialize the sphinx stuff) -> git add . -> git commit -m “Initial setup for sphinx documentation” -> git push origin master -(push pages to your local master. Then either push upstream if you are an owner or *preferably* do a pull request to merge with the original repository) - -Make local gh-pages version for just the html. This is convenient so that you can have the master repo where you make code and another repo for the gh-pages branch from which you push the html. 
-> git clone -b gh-pages — single-branch https://github.com/duvivier/Icepack.git Icepack.gh-pages -(makes local copy of just gh-pages branch) - diff --git a/doc/cicedoc.pdf b/doc/cicedoc.pdf deleted file mode 100644 index 682f5f380..000000000 Binary files a/doc/cicedoc.pdf and /dev/null differ diff --git a/doc/requirements.txt b/doc/requirements.txt new file mode 100644 index 000000000..8788d6ac3 --- /dev/null +++ b/doc/requirements.txt @@ -0,0 +1,5 @@ +# +# +sphinxcontrib-bibtex +# +# diff --git a/doc/source/cice_1_introduction.rst b/doc/source/cice_1_introduction.rst deleted file mode 100644 index 7edf1bb9d..000000000 --- a/doc/source/cice_1_introduction.rst +++ /dev/null @@ -1,287 +0,0 @@ -:tocdepth: 3 - -Introduction - CICE5 -============================================ - -The Los Alamos sea ice model (CICE) is the result of an effort to -develop a computationally efficient sea ice component for a fully -coupled atmosphere--land global climate model. It was -designed to be compatible with the Parallel Ocean Program -(POP), an ocean circulation model developed at -Los Alamos National Laboratory for use on massively parallel computers -:cite:`SDM92,DSM93,DSM94`. The current version of the -model has been enhanced greatly through collaborations with members of -the community. 
- -CICE has several interacting components: a thermodynamic model that -computes local growth rates of snow and ice due to vertical conductive, -radiative and turbulent fluxes, along with snowfall; a model of ice -dynamics, which predicts the velocity field of the ice pack based on -a model of the material strength of the ice; a transport model that -describes advection of the areal concentration, ice volumes and other -state variables; and a ridging parameterization that transfers ice among -thickness categories based on energetic balances and -rates of strain.External routines would prepare and execute data exchanges with an -external "flux coupler," which then passes the data to other climate -model components such as POP. - -This model release is CICE version 5.1, available from http://oceans11.lanl.gov/trac/CICE/wiki. -It updates CICE5.0, which was released in September 2013. With so many new parameterizations, -we must admit that all combinations have not been tested. Also, different parameterizations for -various sub-processes (e.g., snow infiltration by melt or sea water) have been introduced as part -of the new code options, which need to be unified or pulled out as separate options. - -This document uses the following text conventions: -Variable names used in the code are ``typewritten``. -Subroutine names are given in *italic*. -File and directory names are in **boldface**. -A comprehensive :ref:`index`, including glossary of symbols with many of their values, appears -at the end of this guide. - -================= -Quick Start guide -================= - -~~~~~~~~~~~~~ -Get the model -~~~~~~~~~~~~~ - -Checkout the model from the CICE-Consortium repository, - - github.com/CICE-Consortium - -For more details about how to work in github with CICE, a document can be -found `here `_. 
- -~~~~~~~~~~~~~~~~~ -Running the model -~~~~~~~~~~~~~~~~~ - -> cd consortium - -> ./create.case -c ~/mycase1 -g gx3 -m thunder -s diag1,thread -p 8x1 - -> cd ~/mycase1 - -> ./cice.build - -> ./cice.submit/Users/duvivier/Documents/Research/github/CICE-Consortium/CICE/doc/source/all_orig/cice_2_quick_start.rst - -~~~~~~~~~~~~ -More Details -~~~~~~~~~~~~ - -create.case generates a case, use "create.case -h" for help with the tool. - -c is the case name and location (required) - - -m is the machine name (required). Currently, there are working ports for NCAR cheyenne, AFRL thunder, NavyDSRC gordon and conrad, and LANL’s wolf machines. - - -g is the resolution (default is gx3) - - -p is the task x thread/task values (default is 4x1) - - -s are comma separated optional env or namelist settings (default is "null") - - -t is the test name and location (cannot be used with -c). - - -bd is used to specify the location of the baseline datasets (only used with -t) - - -bg is used to specify the cice version name for generating baseline datasets (only used with -t) - - -bc is used to specify the cice versoin name for comparison. I.e., the version name for the baseline dataset (only used with -t) - - -testid is used to specify a test ID (used only with -t or -ts) - - -ts is used to generate all test cases for a given test suite. - - -Several files are placed in the case directory - - - env.${machine} defines the environment - - - cice.settings defines many variables associated with building and running the model - - - makdep.c is a tool that will automatically generate the make dependencies - - - Macros.${machine} defines the Makefile Macros - - - Makefile is the makefile used to build the model - - - cice.build is a script that build the model - - - ice_in is the namelist file - - - cice.run is a batch run script - - - cice.submit is a simple script that submits the cice.run script - -Once the case is created, all scripts and namelist are fully resolved. 
Users can edit any -of the files in the case directory manually to change the model configuration. The file -dependency is indicated in the above list. For instance, if any of the files before -cice.build in the list are edited, cice.build should be rerun. - -The casescripts directory holds scripts used to create the case and can largely be ignored. - -In general, when cice.build is executed, the model will build from scratch due to the large -dependence on cpps. To change this behavior, edit the env variable ICE_CLEANBUILD in -cice.settings. - -The cice.submit script just submits the cice.run script. You can use cice.submit or just -submit the cice.run script on the command line. - -The model will run in the directory defined by the env variable CICE_RUNDIR in cice.settings. -Build and run logs will be copied into the case logs directory when complete. - -To port, an env.machine and Macros.machine file have to be added to scripts/machines and the cice.run.setup.csh file needs to be modified. - - cd to consortium/scripts/machines - - Copy an existing env and Macros file to new names for your new machine - - Edit the env and Macros file - - cd to consortium/scripts - - Edit the cice.run.setup.csh script to add a section for your machine for the batch settings and for the job launch settings - - Download and untar the 1997 dataset to the location defined by ICE_MACHINE_INPUTDATA in the env file - - Create a file in your home directory called .cice_proj and add your preferred account name to the first line. - - You can now create a case and test. If there are problems, you can manually edit the env, Macros, and cice.run files in the case directory until things are working properly. Then you can copy the env and Macros files back to consortium/scripts/machines. You will have to manually modify the cice.run.setup.csh script if there any changes needed there. 
- -~~~~~~~~~~~~ -Forcing data -~~~~~~~~~~~~ - -The code is currently configured to run in standalone mode on a 3 degree grid using -atmospheric data from 1997, available as detailed on the `wiki `_. -These data files are designed only for testing the code, not for use in production -runs or as observational data. Please do not publish results based on these data -sets. Module cicecore/dynamics/cicedynB/ice_forcing.F90 can be modified to change the -forcing data. - -As currently configured, the model runs on 4 processors. MPI is used for message passing -between processors, and OpenMP threading is available. The grid provided here is too -small for the code to scale well beyond about 8 processors. A 1 degree grid is provided also, -and details about this grid can be found on the `wiki `_. - -~~~~~~~~~~~~~~~~ -Online resources -~~~~~~~~~~~~~~~~ - -**DO WE WANT TO KEEP THESE?** - -primary wiki page: - - -FAQ: - - -instructions for code developers: - - -ongoing or planned development projects: - - -list of users and publications: - - -Please send references to your publications using the CICE model to ... - - -Please report any bugs to -Elizabeth Hunke (eclare@lanl.gov) - -Good luck! - - -============= -Major updates -============= - -~~~~~~~~~ -CICE V5.1 -~~~~~~~~~ - -- include ice velocity in atm-ice coupling updates (e.g. stress) for high-frequency coupling -- allow a variable coefficient for the ice-ocean heat flux -- several new namelist options improve flexibility, especially for coupled model configurations: - - ice-ocean heat flux - - 'virtual' or 'real' topo melt pond water - - ocean freezing temperature - - high-frequency coupling - - coupling and computational grids may be different - - and more -- additional software enhancements improve flexibility and compatibility with CESM, Hadley Centre, and U.S. 
Navy coupled models -- new diagnostics and updated documentation -- various bug fixes - -~~~~~~~~~ -CICE V5.0 -~~~~~~~~~ - -- A method for prognosing sea ice salinity, including improved snow-ice formation -- Two new explicit melt pond parameterizations (topo and level-ice) -- Sea ice biogeochemistry -- Elastic-Anisotropic-Plastic rheology -- Improved parameterization of form drag -- The "revised EVP" under-damping approach -- Gracefully handles the case when an internal layer melts completely -- Gregorian calendar with leap years -- Reduced memory and floating-point operations for tracer calculations -- Ice and snow enthalpy defined as tracers -- New history variables for melt ponds, ridging diagnostics, biogeochemistry and more -- Read/write variables on the extended grid, including ghost (halo) cells -- Parallelism option via OpenMP threads -- Improved parallel efficiency through halo masks and new block distributions -- Parallel I/O option via the PIO library -- Restarts in binary or netCDF formats -- CPP options for categories, layers and tracers -- Corrected bugs, particularly for nonstandard configurations. - -====================== -Acknowledgements -====================== -This work has been supported under the Department of Energy’s Climate, -Ocean and Sea Ice Modeling project through the Computer Hardware Applied -Mathematics and Model Physics (CHAMMP) program, Climate Change -Prediction Program (CCPP), Improving the Characterization of Clouds, -Aerosols and the Cryosphere in Climate Models (Cloud-Cryo) program and -Scientific Discovery through Advanced Computing (SCIDAC) program, with -additional support from the T-3 Fluid Dynamics and Solid Mechanics Group -at Los Alamos National Laboratory. Special thanks are due to the -following people: - -- members of the CESM Polar Climate Working Group, including David - Bailey, Alice DuVivier, Cecilia Bitz, Bruce Briegleb, Tony Craig, - Marika Holland, John Dennis, Julie Schramm, Bonnie Light and Phil Jones. 
- -- Andrew Roberts of the Naval Postgraduate School, - -- David Hebert and Olivier Lecomte for their melt pond work, - -- Jonathan Gregory of the University of Reading and the U.K. MetOffice - for supplying tripole T-fold code and documentation, - -- Alison McLaren, Ann Keen and others working with the Hadley Centre - GCM for testing non-standard model configurations and providing their - code to us, - -- Daniel Feltham and his research group for several new - parameterizations and documentation, - -- Sylvain Bouillon for the revised EVP approach, - -- the many researchers who tested beta versions of CICE 5 and waited - patiently for the official release. - -====================== -Copyright -====================== -© Copyright 2013, LANS LLC. All rights reserved. Unless otherwise -indicated, this information has been authored by an employee or -employees of the Los Alamos National Security, LLC (LANS), operator of -the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 -with the U.S. Department of Energy. The U.S. Government has rights to -use, reproduce, and distribute this information. The public may copy and -use this information without charge, provided that this Notice and any -statement of authorship are reproduced on all copies. Neither the -Government nor LANS makes any warranty, express or implied, or assumes -any liability or responsibility for the use of this information. -Beginning with version 4.0, the CICE code carries Los Alamos Software -Release number LA-CC-06-012. - - diff --git a/doc/source/cice_3_user_guide.rst b/doc/source/cice_3_user_guide.rst deleted file mode 100644 index 70f843942..000000000 --- a/doc/source/cice_3_user_guide.rst +++ /dev/null @@ -1,2699 +0,0 @@ -:tocdepth: 3 - -User Guide -========== - ------------------------- -Numerical implementation ------------------------- - -CICE is written in FORTRAN90 and runs on platforms using UNIX, LINUX, -and other operating systems. 
The code is parallelized via grid -decomposition with MPI or OpenMP threads and includes some optimizations -for vector architectures. - -A second, “external” layer of parallelization involves message passing -between CICE and the flux coupler, which may be running on different -processors in a distributed system. The parallelization scheme for CICE -was designed so that MPI could be used for the coupling along with MPI, -OpenMP or no parallelization internally. The internal parallelization -method is set at compile time with the `NTASK` and `THRD` definitions in the -compile script. Message passing between the ice model and the CESM flux -coupler is accomplished with MPI, regardless of the type of internal -parallelization used for CICE, although the ice model may be coupled to -another system without using MPI. - -.. _dirstructure: - -~~~~~~~~~~~~~~~~~~~ -Directory structure -~~~~~~~~~~~~~~~~~~~ - -The present code distribution includes make files, several scripts and -some input files. The main directory is **cice/**, and a run directory -(**rundir/**) is created upon initial execution of the script -**comp\_ice**. One year of atmospheric forcing data is also available -from the code distribution web site (see the **README** file for -details). 
- -basic information - -**bld/** makefiles - -**Macros.**\ :math:`\langle`\ OS\ :math:`\rangle`.\ :math:`\langle`\ SITE\ :math:`\rangle`.\ :math:`\langle`\ machine\ :math:`\rangle` - macro definitions for the given operating system, used by - **Makefile**.\ :math:`\langle` \ OS\ :math:`\rangle` - -**Makefile.**\ :math:`\langle`\ OS\ :math:`\rangle` - primary makefile for the given operating system - (**:math:`\langle`\ std\ :math:`\rangle`** works for most systems) - -**makedep.c** - perl script that determines module dependencies - -script that sets up the run directory and compiles the code - -modules based on “shared" code in CESM - -**shr\_orb\_mod.F90** - orbital parameterizations - -documentation - -**cicedoc.pdf** - this document - -**PDF/** - PDF documents of numerous publications related to CICE - -institution-specific modules - -**cice/** - official driver for CICE v5 (LANL) - - **CICE.F90** - main program - - **CICE\_FinalMod.F90** - routines for finishing and exiting a run - - **CICE\_InitMod.F90** - routines for initializing a run - - **CICE\_RunMod.F90** - main driver routines for time stepping - - **CICE\_RunMod.F90\_debug** - debugging version of **CICE\_RunMod.F90** - - **ice\_constants.F90** - physical and numerical constants and parameters - -sample diagnostic output files - -input files that may be modified for other CICE configurations - -**col/** - column configuration files - - **ice\_in** - namelist input data (data paths depend on particular system) - -**gx1/** - :math:`\left<1^\circ\right>` displaced pole grid files - - **global\_gx1.grid** - :math:`\left<1^\circ\right>` displaced pole grid (binary) - - **global\_gx1.kmt** - :math:`\left<1^\circ\right>` land mask (binary) - - **ice.restart\_file** - pointer for restart file name - - **ice\_in** - namelist input data (data paths depend on particular system) - - **ice\_in\_v4.1** - namelist input data for default CICE v4.1 configuration - - **iced\_gx1\_v5.nc** -  restart file used for 
initial condition - -**gx3/** - :math:`\left<3^\circ\right>` displaced pole grid files - - **global\_gx3.grid** - :math:`\left<3^\circ\right>` displaced pole grid (binary) - - **global\_gx3.kmt** - :math:`\left<3^\circ\right>` land mask (binary) - - **global\_gx3.grid.nc** - :math:`\left<3^\circ\right>` displaced pole grid (netCDF) - - **global\_gx3.kmt.nc** - :math:`\left<3^\circ\right>` land mask (netCDF) - - **ice.restart\_file** - pointer for restart file name - - **ice\_in** - namelist input data (data paths depend on particular system) - - **iced\_gx3\_v5.nc** - netCDF restart file used for initial condition - -**convert\_restarts.f90** - Fortran code to convert restart files from v4.1 to v5 (4 ice layers) - -**run\_ice.**\ :math:`\langle`\ OS\ :math:`\rangle`.\ :math:`\langle`\ SITE\ :math:`\rangle`.\ :math:`\langle`\ machine\ :math:`\rangle` - sample script for running on the given operating system - -binary history and restart modules - -**ice\_history\_write.F90** - subroutines with binary output - -**ice\_restart.F90** - read/write binary restart files - -netCDF history and restart modules - -**ice\_history\_write.F90** - subroutines with netCDF output - -**ice\_restart.F90** - read/write netCDF restart files - -parallel I/O history and restart modules - -**ice\_history\_write.F90** - subroutines with netCDF output using PIO - -**ice\_pio.F90** - subroutines specific to PIO - -**ice\_restart.F90** - read/write netCDF restart files using PIO - -modules that require MPI calls - -**ice\_boundary.F90** - boundary conditions - -**ice\_broadcast.F90** - routines for broadcasting data across processors - -**ice\_communicate.F90** - routines for communicating between processors - -**ice\_exit.F90** - aborts or exits the run - -**ice\_gather\_scatter.F90** - gathers/scatters data to/from one processor from/to all processors - -**ice\_global\_reductions.F90** - global sums, minvals, maxvals, etc., across processors - -**ice\_timers.F90** - timing routines - -same modules as in **mpi/** but without MPI calls - 
-general CICE source code - -handles most work associated with the aerosol tracers - -handles most work associated with the age tracer - -skeletal layer biogeochemistry - -stability-based parameterization for calculation of turbulent -ice–atmosphere fluxes - -for decomposing global domain into blocks - -evolves the brine height tracer - -keeps track of what time it is - -miscellaneous diagnostic and debugging routines - -for distributing blocks across processors - -decompositions, distributions and related parallel processing info - -domain and block sizes - -elastic-anisotropic-plastic dynamics component - -elastic-viscous-plastic dynamics component - -code shared by EVP and EAP dynamics - -unit numbers for I/O - -handles most work associated with the first-year ice area tracer - -fluxes needed/produced by the model - -routines to read and interpolate forcing data for stand-alone ice model -runs - -grid and land masks - -initialization and accumulation of history output variables - -history output of biogeochemistry variables - -history output of form drag variables - -history output of ridging variables - -history output of melt pond variables - -code shared by all history modules - -namelist and initializations - -utilities for managing ice thickness distribution - -basic definitions of reals, integers, etc. 
- -handles most work associated with the level ice area and volume tracers - -mechanical redistribution component (ridging) - -CESM melt pond parameterization - -level-ice melt pond parameterization - -topo melt pond parameterization - -mixed layer ocean model - -orbital parameters for Delta-Eddington shortwave parameterization - -utilities for reading and writing files - -driver for reading/writing restart files - -code shared by all restart options - -basic restoring for open boundary conditions - -shortwave and albedo parameterizations - -space-filling-curves distribution method - -essential arrays to describe the state of the ice - -routines for time stepping the major code components - -zero-layer thermodynamics of :cite:`Semtner76` - -multilayer thermodynamics of :cite:`BL99` - -thermodynamic changes mostly related to ice thickness distribution - -mushy-theory thermodynamics of :cite:`THB13` - -code shared by all thermodynamics parameterizations - -vertical growth rates and fluxes - -driver for horizontal advection - -horizontal advection via incremental remapping - -driver for ice biogeochemistry and brine tracer motion - -parameters and shared code for biogeochemistry and brine height - -execution or “run” directory created when the code is compiled using the -**comp\_ice** script (gx3) - -**cice** - code executable - -**compile/** - directory containing object files, etc. 
- -**grid** - horizontal grid file from **cice/input\_templates/gx3/** - -**ice.log.[ID]** - diagnostic output file - -**ice\_in** - namelist input data from **cice/input\_templates/gx3/** - -**history/iceh.[timeID].nc** - output history file - -**kmt** - land mask file from **cice/input\_templates/gx3/** - -**restart/** - restart directory - - **iced\_gx3\_v5.nc** - initial condition from **cice/input\_templates/gx3/** - - **ice.restart\_file** - restart pointer from **cice/input\_templates/gx3/** - -**run\_ice** - batch run script file from **cice/input\_templates/** - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Grid, boundary conditions and masks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The spatial discretization is specialized for a generalized orthogonal -B-grid as in :cite:`Murray96` or -:cite:`SKM95`. The ice and snow area, volume and energy are -given at the center of the cell, velocity is defined at the corners, and -the internal ice stress tensor takes four different values within a grid -cell; bilinear approximations are used for the stress tensor and the ice -velocity across the cell, as described in :cite:`HD02`. -This tends to avoid the grid decoupling problems associated with the -B-grid. EVP is available on the C-grid through the MITgcm code -distribution, http://mitgcm.org/viewvc/MITgcm/MITgcm/pkg/seaice/. - -Since ice thickness and thermodynamic variables such as temperature are given -in the center of each cell, the grid cells are referred to as “T cells.” -We also occasionally refer to “U cells,” which are centered on the -northeast corner of the corresponding T cells and have velocity in the -center of each. The velocity components are aligned along grid lines. - -The user has several choices of grid routines: *popgrid* reads grid -lengths and other parameters for a nonuniform grid (including tripole -and regional grids), and *rectgrid* creates a regular rectangular grid, -including that used for the column configuration. 
The input files -**global\_gx3.grid** and **global\_gx3.kmt** contain the -:math:`\left<3^\circ\right>` POP grid and land mask; -**global\_gx1.grid** and **global\_gx1.kmt** contain the -:math:`\left<1^\circ\right>` grid and land mask. These are binary -unformatted, direct access files produced on an SGI (Big Endian). If you -are using an incompatible (Little Endian) architecture, choose -`rectangular` instead of `displaced\_pole` in **ice\_in**, or follow -procedures as for conejo -(:math:`\langle`\ **OS**\ :math:`\rangle.\langle`\ **SITE**\ :math:`\rangle.\langle`\ **machine**\ :math:`\rangle` -= Linux.LANL.conejo). There are netCDF versions of the gx3 grid files -available. - -In CESM, the sea ice model may exchange coupling fluxes using a -different grid than the computational grid. This functionality is -activated using the namelist variable `gridcpl\_file`. - -*********************** -Grid domains and blocks -*********************** - -In general, the global gridded domain is -`nx\_global` :math:`\times`\ `ny\_global`, while the subdomains used in the -block distribution are `nx\_block` :math:`\times`\ `ny\_block`. The -physical portion of a subdomain is indexed as [`ilo:ihi`, `jlo:jhi`], with -`nghost` “ghost” or “halo” cells outside the domain used for boundary -conditions. These parameters are illustrated in :ref:`fig-grid` in one -dimension. The routines *global\_scatter* and *global\_gather* -distribute information from the global domain to the local domains and -back, respectively. If MPI is not being used for grid decomposition in -the ice model, these routines simply adjust the indexing on the global -domain to the single, local domain index coordinates. Although we -recommend that the user choose the local domains so that the global -domain is evenly divided, if this is not possible then the furthest east -and/or north blocks will contain nonphysical points (“padding”). 
These -points are excluded from the computation domain and have little effect -on model performance. - -.. _fig-grid: - -.. figure:: ./figures/grid.png - :align: center - :scale: 20% - - Figure 8 - -:ref:`fig-grid` : Grid parameters for a sample one-dimensional, 20-cell -global domain decomposed into four local subdomains. Each local -domain has one ghost (halo) cell on each side, and the physical -portion of the local domains are labeled `ilo:ihi`. The parameter -`nx\_block` is the total number of cells in the local domain, including -ghost cells, and the same numbering system is applied to each of the -four subdomains. - -The user chooses a block size `BLCKX` :math:`\times`\ `BLCKY` and the -number of processors `NTASK` in **comp\_ice**. Parameters in the -*domain\_nml* namelist in **ice\_in** determine how the blocks are -distributed across the processors, and how the processors are -distributed across the grid domain. Recommended combinations of these -parameters for best performance are given in Section :ref:`performance`. -The script **comp\_ice** computes the maximum number of blocks on each -processor for typical Cartesian distributions, but for non-Cartesian -cases `MXBLCKS` may need to be set in the script. The code will print this -information to the log file before aborting, and the user will need to -adjust `MXBLCKS` in **comp\_ice** and recompile. The code will also print -a warning if the maximum number of blocks is too large. Although this is -not fatal, it does require excess memory. - -A loop at the end of routine *create\_blocks* in module -**ice\_blocks.F90** will print the locations for all of the blocks on -the global grid if dbug is set to be true. Likewise, a similar loop at -the end of routine *create\_local\_block\_ids* in module -**ice\_distribution.F90** will print the processor and local block -number for each block. With this information, the grid decomposition -into processors and blocks can be ascertained. 
The dbug flag must be -manually set in the code in each case (independently of the dbug flag in -**ice\_in**), as there may be hundreds or thousands of blocks to print -and this information should be needed only rarely. This information is -much easier to look at using a debugger such as Totalview. - -Alternatively, a new variable is provided in the history files, `blkmask`, -which labels the blocks in the grid decomposition according to `blkmask` = -`my\_task` + `iblk/100`. - -************* -Tripole grids -************* - -The tripole grid is a device for constructing a global grid with a -normal south pole and southern boundary condition, which avoids placing -a physical boundary or grid singularity in the Arctic Ocean. Instead of -a single north pole, it has two “poles” in the north, both located on -land, with a line of grid points between them. This line of points is -called the “fold,” and it is the “top row” of the physical grid. One -pole is at the left-hand end of the top row, and the other is in the -middle of the row. The grid is constructed by “folding” the top row, so -that the left-hand half and the right-hand half of it coincide. Two -choices for constructing the tripole grid are available. The one first -introduced to CICE is called “U-fold”, which means that the poles and -the grid cells between them are U cells on the grid. Alternatively the -poles and the cells between them can be grid T cells, making a “T-fold.” -Both of these options are also supported by the OPA/NEMO ocean model, -which calls the U-fold an “f-fold” (because it uses the Arakawa C-grid -in which U cells are on T-rows). The choice of tripole grid is given by -the namelist variable `ns\_boundary\_type`, ‘tripole’ for the U-fold and -‘tripoleT’ for the T-fold grid. - -In the U-fold tripole grid, the poles have U-index -:math:`{\tt nx\_global}/2` and `nx\_global` on the top U-row of the -physical grid, and points with U-index i and :math:`{\tt nx\_global-i}` -are coincident. 
Let the fold have U-row index :math:`n` on the global -grid; this will also be the T-row index of the T-row to the south of the -fold. There are ghost (halo) T- and U-rows to the north, beyond the -fold, on the logical grid. The point with index i along the ghost T-row -of index :math:`n+1` physically coincides with point -:math:`{\tt nx\_global}-{\tt i}+1` on the T-row of index :math:`n`. The -ghost U-row of index :math:`n+1` physically coincides with the U-row of -index :math:`n-1`. - -In the T-fold tripole grid, the poles have T-index 1 and -:math:`{\tt nx\_global}/2+1` on the top T-row of the physical grid, and -points with T-index i and :math:`{\tt nx\_global}-{\tt i}+2` are -coincident. Let the fold have T-row index :math:`n` on the global grid. -It is usual for the northernmost row of the physical domain to be a -U-row, but in the case of the T-fold, the U-row of index :math:`n` is -“beyond” the fold; although it is not a ghost row, it is not physically -independent, because it coincides with U-row :math:`n-1`, and it -therefore has to be treated like a ghost row. Points i on U-row -:math:`n` coincide with :math:`{\tt nx\_global}-{\tt i}+1` on U-row -:math:`n-1`. There are still ghost T- and U-rows :math:`n+1` to the -north of U-row :math:`n`. Ghost T-row :math:`n+1` coincides with T-row -:math:`n-1`, and ghost U-row :math:`n+1` coincides with U-row -:math:`n-2`. - -The tripole grid thus requires two special kinds of treatment for -certain rows, arranged by the halo-update routines. First, within rows -along the fold, coincident points must always have the same value. This -is achieved by averaging them in pairs. Second, values for ghost rows -and the “quasi-ghost” U-row on the T-fold grid are reflected copies of -the coincident physical rows. Both operations involve the tripole -buffer, which is used to assemble the data for the affected rows. 
-Special treatment is also required in the scattering routine, and when -computing global sums one of each pair of coincident points has to be -excluded. - -.. _bio-grid: - -******** -Bio-grid -******** - -The bio-grid is a vertical grid used for solving the brine height -variable :math:`h_b`. In the future, it will also be used for -discretizing the vertical transport equations of biogeochemical tracers. -The bio-grid is a non-dimensional vertical grid which takes the value -zero at :math:`h_b` and one at the ice–ocean interface. The number of -grid levels is specified during compilation in **comp\_ice** by setting -the variable `NBGCLYR` equal to an integer (:math:`n_b`) . - -Ice tracers and microstructural properties defined on the bio-grid are -referenced in two ways: as `bgrid` :math:`=n_b+2` points and as -igrid\ :math:`=n_b+1` points. For both bgrid and igrid, the first and -last points reference :math:`h_b` and the ice–ocean interface, -respectively, and so take the values :math:`0` and :math:`1`, -respectively. For bgrid, the interior points :math:`[2, n_b+1]` are -spaced at :math:`1/n_b` intervals beginning with `bgrid(2)` :math:` = -1/(2n_b)`. The `igrid` interior points :math:`[2, n_b]` are also -equidistant with the same spacing, but physically coincide with points -midway between those of `bgrid`. - -******************** -Column configuration -******************** - -A column modeling capability is available. Because of the boundary -conditions and other spatial assumptions in the model, this is not a -single column, but a small array of columns (minimum grid size is 5x5). -However, the code is set up so that only the single, central column is -used (all other columns are designated as land). The column is located -near Barrow (71.35N, 156.5W). Options for choosing the column -configuration are given in **comp\_ice** (choose `RES col`) and in the -namelist file, **input\_templates/col/ice\_in**. 
Here, `istep0` and the -initial conditions are set such that the run begins September 1 with no -ice. The grid type is rectangular, dynamics are turned off (`kdyn` = 0) and -one processor is used. - -History variables available for column output are ice and snow -temperature, `Tinz` and `Tsnz`. These variables also include thickness -category as a fourth dimension. - -******************* -Boundary conditions -******************* - -Much of the infrastructure used in CICE, including the boundary -routines, is adopted from POP. The boundary routines perform boundary -communications among processors when MPI is in use and among blocks -whenever there is more than one block per processor. - -Open/cyclic boundary conditions are the default in CICE; the physical -domain can still be closed using the land mask. In our bipolar, -displaced-pole grids, one row of grid cells along the north and south -boundaries is located on land, and along east/west domain boundaries not -masked by land, periodic conditions wrap the domain around the globe. -CICE can be run on regional grids with open boundary conditions; except -for variables describing grid lengths, non-land halo cells along the -grid edge must be filled by restoring them to specified values. The -namelist variable `restore\_ice` turns this functionality on and off; the -restoring timescale `trestore` may be used (it is also used for restoring -ocean sea surface temperature in stand-alone ice runs). This -implementation is only intended to provide the “hooks" for a more -sophisticated treatment; the rectangular grid option can be used to test -this configuration. The ‘displaced\_pole’ grid option should not be used -unless the regional grid contains land all along the north and south -boundaries. The current form of the boundary condition routines does not -allow Neumann boundary conditions, which must be set explicitly. This -has been done in an unreleased branch of the code; contact Elizabeth for -more information. 
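The restoring applied to these boundary cells amounts to a simple Newtonian relaxation toward the specified values over the `trestore` timescale. A minimal sketch of the idea, in illustrative Python rather than the model's Fortran (the function name and call signature are hypothetical, not the CICE routine):

```python
def restore_halo(field, target, mask, dt, trestore):
    """Relax the flagged (non-land halo) cells of field toward target.

    Hypothetical sketch, not the CICE routine: dt and trestore share
    the same time units, and mask is True where restoring applies.
    """
    w = min(dt / trestore, 1.0)  # fraction of the gap closed this step
    return [
        (1.0 - w) * f + w * t if m else f
        for f, t, m in zip(field, target, mask)
    ]
```

With `trestore` equal to the time step, the boundary cells are reset to the specified values every step; longer timescales nudge them toward those values gradually.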
- -For exact restarts using restoring, set `restart\_ext` = true in namelist -to use the extended-grid subroutines. - -On tripole grids, the order of operations used for calculating elements -of the stress tensor can differ on either side of the fold, leading to -round-off differences. Although restarts using the extended grid -routines are exact for a given run, the solution will differ from -another run in which restarts are written at different times. For this -reason, explicit halo updates of the stress tensor are implemented for -the tripole grid, both within the dynamics calculation and for restarts. -This has not been implemented yet for tripoleT grids, pending further -testing. - -***** -Masks -***** - -A land mask hm (:math:`M_h`) is specified in the cell centers, with 0 -representing land and 1 representing ocean cells. A corresponding mask -uvm (:math:`M_u`) for velocity and other corner quantities is given by - -.. math:: - M_u(i,j)=\min\{M_h(l),\,l=(i,j),\,(i+1,j),\,(i,j+1),\,(i+1,j+1)\}. - -The logical masks `tmask` and `umask` (which correspond to the real masks -`hm` and `uvm`, respectively) are useful in conditional statements. - -In addition to the land masks, two other masks are implemented in -*evp\_prep* in order to reduce the dynamics component’s work on a global -grid. At each time step the logical masks `ice\_tmask` and `ice\_umask` are -determined from the current ice extent, such that they have the value -“true” wherever ice exists. They also include a border of cells around -the ice pack for numerical purposes. These masks are used in the -dynamics component to prevent unnecessary calculations on grid points -where there is no ice. They are not used in the thermodynamics -component, so that ice may form in previously ice-free cells. Like the -land masks `hm` and `uvm`, the ice extent masks `ice\_tmask` and `ice\_umask` -are for T cells and U cells, respectively. 
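The corner-mask definition above translates directly into code. An illustrative Python sketch (not the CICE source; plain nested lists, no halo handling):

```python
def uvm_from_hm(hm):
    """U-cell (corner) mask from the T-cell land mask hm (1 = ocean, 0 = land).

    Implements M_u(i,j) = min of the four T cells sharing the northeast
    corner of cell (i,j). The last row and column have no northeast
    neighbor here, so the result is one smaller in each dimension
    (the model itself covers this with ghost cells).
    """
    ni, nj = len(hm), len(hm[0])
    return [
        [min(hm[i][j], hm[i + 1][j], hm[i][j + 1], hm[i + 1][j + 1])
         for j in range(nj - 1)]
        for i in range(ni - 1)
    ]
```

A single land cell therefore zeroes out all four of its corners, which is what keeps velocity points along the coast masked.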
- -Improved parallel performance may result from utilizing halo masks for -boundary updates of the full ice state, incremental remapping transport, -or for EVP or EAP dynamics. These options are accessed through the -logical namelist flags `maskhalo\_bound`, `maskhalo\_remap`, and -`maskhalo\_dyn`, respectively. Only the halo cells containing needed -information are communicated. - -Two additional masks are created for the user’s convenience: `lmask\_n` -and `lmask\_s` can be used to compute or write data only for the northern -or southern hemispheres, respectively. Special constants (`spval` and -`spval\_dbl`, each equal to :math:`10^{30}`) are used to indicate land -points in the history files and diagnostics. - -~~~~~~~~~~~~~~~~~~~ -Test configurations -~~~~~~~~~~~~~~~~~~~ - -.. _init: - -~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Initialization and coupling -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ice model’s parameters and variables are initialized in several -steps. Many constants and physical parameters are set in -**ice\_constants.F90**. Namelist variables (:ref:`tabnamelist`), -whose values can be altered at run time, are handled in *input\_data* -and other initialization routines. These variables are given default -values in the code, which may then be changed when the input file -**ice\_in** is read. Other physical constants, numerical parameters, and -variables are first set in initialization routines for each ice model -component or module. Then, if the ice model is being restarted from a -previous run, core variables are read and reinitialized in -*restartfile*, while tracer variables needed for specific configurations -are read in separate restart routines associated with each tracer or -specialized parameterization. Finally, albedo and other quantities -dependent on the initial ice state are set. Some of these parameters -will be described in more detail in :ref:`tabnamelist`. 
- -The restart files supplied with the code release include the core -variables on the default configuration, that is, with seven vertical -layers and the ice thickness distribution defined by `kcatbound` = 0. -Restart information for some tracers is also included in the netCDF restart -files. - -Three namelist variables control model initialization, `ice\_ic`, `runtype`, -and `restart`, as described in :ref:`tab-ic`. It is possible to do an -initial run from a file **filename** in two ways: (1) set runtype = -‘initial’, restart = true and ice\_ic = **filename**, or (2) runtype = -‘continue’ and pointer\_file = **./restart/ice.restart\_file** where -**./restart/ice.restart\_file** contains the line -“./restart/[filename]”. The first option is convenient when repeatedly -starting from a given file when subsequent restart files have been -written. With this arrangement, the tracer restart flags can be set to -true or false, depending on whether the tracer restart data exist. With -the second option, tracer restart flags are set to ‘continue’ for all -active tracers. - -An additional namelist option, `restart\_ext`, specifies whether halo cells -are included in the restart files. This option is useful for tripole and -regional grids, but cannot be used with PIO. - -MPI is initialized in *init\_communicate* for both coupled and -stand-alone MPI runs. The ice component communicates with a flux coupler -or other climate components via external routines that handle the -variables listed in :ref:`tab-flux-cpl`. For stand-alone runs, -routines in **ice\_forcing.F90** read and interpolate data from files, -and are intended merely to provide guidance for the user to write his or -her own routines. Whether the code is to be run in stand-alone or -coupled mode is determined at compile time, as described below. - -:ref:`tab-ic` : *Ice initial state resulting from combinations of* -`ice\_ic`, `runtype` and `restart`. 
:math:`^a`\ *If false, restart is reset to -true.* :math:`^b`\ *restart is reset to false.* :math:`^c`\ ice\_ic *is -reset to ‘none.’* - -.. _tab-ic: - -.. table:: Table 4 - - +----------------+--------------------------+--------------------------------------+----------------------------------------+ - | ice\_ic | | | | - +================+==========================+======================================+========================================+ - | | initial/false | initial/true | continue/true (or false\ :math:`^a`) | - +----------------+--------------------------+--------------------------------------+----------------------------------------+ - | none | no ice | no ice\ :math:`^b` | restart using **pointer\_file** | - +----------------+--------------------------+--------------------------------------+----------------------------------------+ - | default | SST/latitude dependent | SST/latitude dependent\ :math:`^b` | restart using **pointer\_file** | - +----------------+--------------------------+--------------------------------------+----------------------------------------+ - | **filename** | no ice\ :math:`^c` | start from **filename** | restart using **pointer\_file** | - +----------------+--------------------------+--------------------------------------+----------------------------------------+ - -.. _parameters: - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Choosing an appropriate time step -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The time step is chosen based on stability of the transport component -(both horizontal and in thickness space) and on resolution of the -physical forcing. CICE allows the dynamics, advection and ridging -portion of the code to be run with a shorter timestep, -:math:`\Delta t_{dyn}` (`dt\_dyn`), than the thermodynamics timestep -:math:`\Delta t` (`dt`). In this case, `dt` and the integer ndtd are -specified, and `dt\_dyn` = `dt/ndtd`. 
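For example, a one-hour thermodynamic time step with three dynamics subcycles per step would be specified along these lines in **ice\_in** (illustrative values only):

```
  dt   = 3600.0    ! thermodynamic time step in seconds
  ndtd = 3         ! dynamics subcycles: dt_dyn = dt/ndtd = 1200 s
```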
- -A conservative estimate of the horizontal transport time step bound, or -CFL condition, under remapping yields - -.. math:: - \Delta t_{dyn} < {\min\left(\Delta x, \Delta y\right)\over 2\max\left(u, v\right)}. - -Numerical estimates for this bound for several POP grids, assuming -:math:`\max(u, v)=0.5` m/s, are as follows: - -.. csv-table:: - :widths: 20,40,40,40,40 - - grid label,N pole singularity,dimensions,min :math:`\sqrt{\Delta x\cdot\Delta y}`,max :math:`\Delta t_{dyn}` - gx3,Greenland,:math:`100\times 116`,:math:`39\times 10^3` m,10.8hr - gx1,Greenland,:math:`320\times 384`,:math:`18\times 10^3` m,5.0hr - p4,Canada,:math:`900\times 600`,:math:`6.5\times 10^3` m,1.8hr - -As discussed in section :ref:`mech-red` and -:cite:`LHMJ07`, the maximum time step in practice is -usually determined by the time scale for large changes in the ice -strength (which depends in part on wind strength). Using the strength -parameterization of :cite:`Rothrock75`, as in -Equation :eq:`roth-strength0`, limits the time step to :math:`\sim`\ 30 -minutes for the old ridging scheme (`krdg\_partic` = 0), and to -:math:`\sim`\ 2 hours for the new scheme (`krdg\_partic` = 1), assuming -:math:`\Delta x` = 10 km. Practical limits may be somewhat less, -depending on the strength of the atmospheric winds. - -Transport in thickness space imposes a similar restraint on the time -step, given by the ice growth/melt rate and the smallest range of -thickness among the categories, -:math:`\Delta t<\min(\Delta H)/2\max(f)`, where :math:`\Delta H` is the -distance between category boundaries and :math:`f` is the thermodynamic -growth rate. For the 5-category ice thickness distribution used as the -default in this distribution, this is not a stringent limitation: -:math:`\Delta t < 19.4` hr, assuming :math:`\max(f) = 40` cm/day. 
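These estimates are simple arithmetic and easy to reproduce. The sketch below (illustrative Python, not model code) recovers the gx3 entry in the table and the thickness-space bound, assuming a smallest category range of roughly 0.645 m:

```python
def dt_horizontal(dx, dy, umax):
    """Remapping CFL bound: dt_dyn < min(dx, dy) / (2 max(u, v)); seconds."""
    return min(dx, dy) / (2.0 * umax)

def dt_thickness(dh_min, f_max):
    """Thickness-space bound: dt < min(dH) / (2 max f), with f in m/s."""
    return dh_min / (2.0 * f_max)

# gx3: cell size ~39 km, max(u, v) = 0.5 m/s
print(dt_horizontal(39.0e3, 39.0e3, 0.5) / 3600.0)  # ~10.8 hours

# smallest category range ~0.645 m, max(f) = 40 cm/day
print(dt_thickness(0.645, 0.40 / 86400.0) / 3600.0)  # ~19.4 hours
```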
- -In the classic EVP or EAP approach (`kdyn` = 1 or 2, `revised\_evp` = false), -the dynamics component is subcycled ndte (:math:`N`) times per dynamics -time step so that the elastic waves essentially disappear before the -next time step. The subcycling time step (:math:`\Delta -t_e`) is thus - -.. math:: - dte = dt\_dyn/ndte. - -A second parameter, :math:`E_\circ` (`eyc`), defines the elastic wave -damping timescale :math:`T`, described in Section :ref:`dynam`, as -`eyc`\ * `dt\_dyn`. The forcing terms are not updated during the subcycling. -Given the small step (`dte`) at which the EVP dynamics model is subcycled, -the elastic parameter :math:`E` is also limited by stability -constraints, as discussed in :cite:`HD97`. Linear stability -analysis for the dynamics component shows that the numerical method is -stable as long as the subcycling time step :math:`\Delta t_e` -sufficiently resolves the damping timescale :math:`T`. For the stability -analysis we had to make several simplifications of the problem; hence -the location of the boundary between stable and unstable regions is -merely an estimate. In practice, the ratio -:math:`\Delta t_e ~:~ T ~:~ \Delta t`  = 1 : 40 : 120 provides both -stability and acceptable efficiency for time steps (:math:`\Delta t`) on -the order of 1 hour. - -For the revised EVP approach (`kdyn` = 1, `revised\_evp` = true), the -relaxation parameter `arlx1i` effectively sets the damping timescale in -the problem, and `brlx` represents the effective subcycling -:cite:`BFLM13`. In practice the parameters :math:`S_e>0.5` -and :math:`\xi<1` are set, along with an estimate of the ice strength -per unit mass, and the damping and subcycling parameters are then -calculated. With the addition of the revised EVP approach to CICE, the -code now uses these parameters internally for both classic and revised -EVP configurations (see Section :ref:`revp`). 
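The relations among the classic EVP time-step quantities can be collected in a few lines (illustrative Python; the function name is hypothetical, and `eyc` = 0.36 is assumed here as a representative default):

```python
def evp_substeps(dt, ndtd=1, ndte=120, eyc=0.36):
    """Classic EVP bookkeeping sketch (not CICE source).

    dt_dyn = dt/ndtd, dte = dt_dyn/ndte, and the elastic damping
    timescale T = eyc * dt_dyn, following the relations in the text.
    """
    dt_dyn = dt / ndtd        # dynamics/advection/ridging step
    dte = dt_dyn / ndte       # elastic subcycling step
    damping_t = eyc * dt_dyn  # damping timescale T
    return dt_dyn, dte, damping_t

# For a 1-hour step, dte : T : dt comes out roughly 1 : 43 : 120,
# close to the recommended 1 : 40 : 120 ratio.
dt_dyn, dte, t_damp = evp_substeps(3600.0)
```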
- -Note that only :math:`T` and :math:`\Delta t_e` figure into the -stability of the dynamics component; :math:`\Delta t` does not. Although -the time step may not be tightly limited by stability considerations, -large time steps (*e.g.,* :math:`\Delta t=1` day, given daily forcing) -do not produce accurate results in the dynamics component. The reasons -for this error are discussed in :cite:`HD97`; see -:cite:`HZ99` for its practical effects. The thermodynamics -component is stable for any time step, as long as the surface -temperature :math:`T_{sfc}` is computed internally. The -numerical constraint on the thermodynamics time step is associated with -the transport scheme rather than the thermodynamic solver. - -~~~~~~~~~~~~ -Model output -~~~~~~~~~~~~ - -.. _history: - -************* -History files -************* - -Model output data is averaged over the period(s) given by `histfreq` and -`histfreq\_n`, and written to binary or netCDF files prepended by `history\_file` -in **ice\_in**. That is, if `history\_file` = ‘iceh’ then the filenames -will have the form **iceh.[timeID].nc** or **iceh.[timeID].da**, -depending on the output file format chosen in **comp\_ice** (set -`IO\_TYPE`). The netCDF history files are CF-compliant; header information for -data contained in the netCDF files is displayed with the command `ncdump -h -filename.nc`. Parallel netCDF output is available using the PIO library; the -attribute `io\_flavor` distinguishes output files written with PIO from -those written with standard netCDF. With binary files, a separate header -file is written with equivalent information. Standard fields are output -according to settings in the **icefields\_nml** namelist in **ice\_in**. -The user may add (or subtract) variables not already available in the -namelist by following the instructions in section :ref:`addhist`. - -With this release, the history module has been divided into several -modules based on the desired formatting and on the variables -themselves. 
Parameters, variables and routines needed by multiple -modules are in **ice\_history\_shared.F90**, while the primary routines -for initializing and accumulating all of the history variables are in -**ice\_history.F90**. These routines call format-specific code in the -**io\_binary**, **io\_netcdf** and **io\_pio** directories. History -variables specific to certain components or parameterizations are -collected in their own history modules (**ice\_history\_bgc.F90**, -**ice\_history\_drag.F90**, **ice\_history\_mechred.F90**, -**ice\_history\_pond.F90**). - -The history modules allow output at different frequencies. Five output -frequencies (1, `h`, `d`, `m`, `y`) are available simultaneously during a run. -The same variable can be output at different frequencies (say daily and -monthly) via its namelist flag, `f\_` :math:`\left<{var}\right>`, which -is now a character string corresponding to `histfreq` or ‘x’ for none. -(Grid variable flags are still logicals, since they are written to all -files, no matter what the frequency is.) If there are no namelist flags -with a given `histfreq` value, or if an element of `histfreq\_n` is 0, then -no file will be written at that frequency. The output period can be -discerned from the filenames. - -For example, in namelist: - -:: - - `histfreq` = ’1’, ’h’, ’d’, ’m’, ’y’ - `histfreq\_n` = 1, 6, 0, 1, 1 - `f\_hi` = ’1’ - `f\_hs` = ’h’ - `f\_Tsfc` = ’d’ - `f\_aice` = ’m’ - `f\_meltb` = ’mh’ - `f\_iage` = ’x’ - -Here, `hi` will be written to a file on every timestep, `hs` will be -written once every 6 hours, `aice` once a month, `meltb` once a month AND -once every 6 hours, and `Tsfc` and `iage` will not be written. - -From an efficiency standpoint, it is best to set unused frequencies in -`histfreq` to ‘x’. Having output at all 5 frequencies takes nearly 5 times -as long as for a single frequency. If you only want monthly output, the -most efficient setting is `histfreq` = ’m’,’x’,’x’,’x’,’x’. 
The code counts -the number of desired streams (`nstreams`) based on `histfreq`. - -The history variable names must be unique for netcdf, so in cases where -a variable is written at more than one frequency, the variable name is -appended with the frequency in files after the first one. In the example -above, `meltb` is called `meltb` in the monthly file (for backward -compatibility with the default configuration) and `meltb\_h` in the -6-hourly file. - -Using the same frequency twice in `histfreq` will have unexpected -consequences and currently will cause the code to abort. It is not -possible at the moment to output averages once a month and also once -every 3 months, for example. - -If `write\_ic` is set to true in **ice\_in**, a snapshot of the same set -of history fields at the start of the run will be written to the history -directory in **iceh\_ic.[timeID].nc(da)**. Several history variables are -hard-coded for instantaneous output regardless of the averaging flag, at -the frequency given by their namelist flag. - -The normalized principal components of internal ice stress are computed -in *principal\_stress* and written to the history file. This calculation -is not necessary for the simulation; principal stresses are merely -computed for diagnostic purposes and included here for the user’s -convenience. - -Several history variables are available in two forms, a value -representing an average over the sea ice fraction of the grid cell, and -another that is multiplied by :math:`a_i`, representing an average over -the grid cell area. Our naming convention attaches the suffix “\_ai" to -the grid-cell-mean variable names. - -**************** -Diagnostic files -**************** - -Like `histfreq`, the parameter `diagfreq` can be used to regulate how often -output is written to a log file. The log file unit to which diagnostic -output is written is set in **ice\_fileunits.F90**. 
If `diag\_type` = -‘stdout’, then it is written to standard out (or to **ice.log.[ID]** if -you redirect standard out as in **run\_ice**); otherwise it is written -to the file given by `diag\_file`. In addition to the standard diagnostic -output (maximum area-averaged thickness, velocity, average albedo, total -ice area, and total ice and snow volumes), the namelist options -`print\_points` and `print\_global` cause additional diagnostic information -to be computed and written. `print\_global` outputs global sums that are -useful for checking global conservation of mass and energy. -`print\_points` writes data for two specific grid points. Currently, one -point is near the North Pole and the other is in the Weddell Sea; these -may be changed in **ice\_in**. - -Timers are declared and initialized in **ice\_timers.F90**, and the code -to be timed is wrapped with calls to *ice\_timer\_start* and -*ice\_timer\_stop*. Finally, *ice\_timer\_print* writes the results to -the log file. The optional “stats" argument (true/false) prints -additional statistics. Calling *ice\_timer\_print\_all* prints all of -the timings at once, rather than having to call each individually. -Currently, the timers are set up as in :ref:`timers`. -Section :ref:`addtimer` contains instructions for adding timers. - -The timings provided by these timers are not mutually exclusive. For -example, the column timer (5) includes the timings from 6–10, and -subroutine *bound* (timer 15) is called from many different places in -the code, including the dynamics and advection routines. - -The timers use *MPI\_WTIME* for parallel runs and the F90 intrinsic -*system\_clock* for single-processor runs. - -:ref:`timers` : *CICE timers* - -.. _timers: - -.. 
table:: Table 5 - - +--------------+-------------+----------------------------------------------------+ - | **Index** | **Label** | | - +--------------+-------------+----------------------------------------------------+ - | 1 | Total | the entire run | - +--------------+-------------+----------------------------------------------------+ - | 2 | Step | total minus initialization and exit | - +--------------+-------------+----------------------------------------------------+ - | 3 | Dynamics | EVP | - +--------------+-------------+----------------------------------------------------+ - | 4 | Advection | horizontal transport | - +--------------+-------------+----------------------------------------------------+ - | 5 | Column | all vertical (column) processes | - +--------------+-------------+----------------------------------------------------+ - | 6 | Thermo | vertical thermodynamics | - +--------------+-------------+----------------------------------------------------+ - | 7 | Shortwave | SW radiation and albedo | - +--------------+-------------+----------------------------------------------------+ - | 8 | Meltponds | melt ponds | - +--------------+-------------+----------------------------------------------------+ - | 9 | Ridging | mechanical redistribution | - +--------------+-------------+----------------------------------------------------+ - | 10 | Cat Conv | transport in thickness space | - +--------------+-------------+----------------------------------------------------+ - | 11 | Coupling | sending/receiving coupler messages | - +--------------+-------------+----------------------------------------------------+ - | 12 | ReadWrite | reading/writing files | - +--------------+-------------+----------------------------------------------------+ - | 13 | Diags | diagnostics (log file) | - 
+--------------+-------------+----------------------------------------------------+ - | 14 | History | history output | - +--------------+-------------+----------------------------------------------------+ - | 15 | Bound | boundary conditions and subdomain communications | - +--------------+-------------+----------------------------------------------------+ - | 16 | BGC | biogeochemistry | - +--------------+-------------+----------------------------------------------------+ - -************* -Restart files -************* - -CICE now provides restart data in binary unformatted or netCDF formats, via -the `IO\_TYPE` flag in **comp\_ice** and namelist variable -`restart\_format`. Restart and history files must use the same format. As -with the history output, there is also an option for writing parallel -restart files using PIO. - -The restart files created by CICE contain all of the variables needed -for a full, exact restart. The filename begins with the character string -‘iced.’, and the restart dump frequency is given by the namelist -variables `dumpfreq` and `dumpfreq\_n`. The pointer to the filename from -which the restart data is to be read for a continuation run is set in -`pointer\_file`. The code assumes that auxiliary binary tracer restart -files will be identified using the same pointer and file name prefix, -but with an additional character string in the file name that is -associated with each tracer set. All variables are included in netCDF restart -files. - -Additional namelist flags provide further control of restart behavior. -`dump\_last` = true causes a set of restart files to be written at the end -of a run when it is otherwise not scheduled to occur. The flag -`use\_restart\_time` enables the user to choose to use the model date -provided in the restart files. If `use\_restart\_time` = false then the -initial model date stamp is determined from the namelist parameters. -`lcdf64` = true sets 64-bit netCDF output, allowing larger file sizes with -netCDF version 3. 
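For example, the restart controls might be combined in **ice\_in** as
follows (illustrative values, not defaults), to write monthly restart
dumps plus a final dump at the end of the run:

::

   `dumpfreq` = ’m’
   `dumpfreq\_n` = 1
   `restart\_format` = ’nc’
   `pointer\_file` = ’ice.restart_file’
   `dump\_last` = .true.
   `use\_restart\_time` = .true.
   `lcdf64` = .false.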
- -Routines for gathering, scattering and (unformatted) reading and writing -of the “extended" global grid, including the physical domain and ghost -(halo) cells around the outer edges, allow exact restarts on regional -grids with open boundary conditions, and they will also simplify -restarts on the various tripole grids. They are accessed by setting -`restart\_ext` = true in namelist. Extended grid restarts are not -available when using PIO; in this case extra halo update calls fill -ghost cells for tripole grids (do not use PIO for regional grids). - -Two restart files are included with the CICE v5 code distribution, for -the gx3 and gx1 grids. They were created using the default model -configuration (settings as in **comp\_ice** and **ice\_in**), but -initialized with no ice. The gx3 case was run for 1 year using the 1997 -forcing data provided with the code. The gx1 case was run for 20 years, -so that the date of restart in the file is 1978-01-01. Note that the -restart dates provided in the restart files can be overridden using the -namelist variables `use\_restart\_time`, `year\_init` and `istep0`. The -forcing time can also be overridden using `fyear\_init`. - -Several changes in CICE v5 have made restarting from v4.1 restart files -difficult. First, the ice and snow enthalpy state variables are now -carried as tracers instead of separate arrays, and salinity has been -added as a necessary restart field. Second, the default number of ice -layers has been increased from 4 to 7. Third, netCDF format is now used -for all I/O; it is no longer possible to have history output in netCDF and -restart output in binary format. However, some facilities are included -with CICE v5 for converting v4.1 restart files to the new file structure -and format, provided that the same number of ice layers and basic -physics packages will be used for the new runs. See Section -:ref:`restarttrouble` for details. 
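The pointer-file mechanism described in this section is simple enough to
sketch: a continuation run reads a one-line text file to locate the last
restart dump. The following minimal Python illustration (not part of
CICE; the helper name is invented) mimics that lookup:

```python
from pathlib import Path

def latest_restart(pointer_file="ice.restart_file"):
    """Return the restart filename recorded by a previous run.

    The pointer file written at each restart dump contains a single
    line with the path of the most recent restart file, e.g.
    'iced.1998-01-01-00000'.
    """
    return Path(pointer_file).read_text().strip()
```

A run with `restart` = true then opens the returned file, unless the user
bypasses the pointer by assigning a filename to `ice\_ic` directly.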
- -------------------- -Execution procedures -------------------- - -To compile and execute the code: in the source directory, - -#. Download the forcing data used for testing from the CICE-Consortium github page, - https://github.com/CICE-Consortium . - -#. Create **Macros.\*** and **run\_ice.\*** files for your particular - platform, if they do not already exist (type ‘uname -s’ at the prompt - to get :math:`\langle`\ OS\ :math:`\rangle`). - -#. Alter directories in the script **comp\_ice**. - -#. Run **comp\_ice** to set up the run directory and make the executable - ‘**cice**’. - -#. | To clean the compile directory and start fresh, simply execute - ‘/bin/rm -rf compile’ from the run directory. - -In the run directory, - -#. Alter `atm\_data\_dir` and `ocn\_data\_dir` in the namelist file - **ice\_in**. - -#. Alter the script **run\_ice** for your system. - -#. Execute **run\_ice**. - -If this fails, see Section :ref:`setup`. - -This procedure creates the output log file **ice.log.[ID]**, and if -`npt` is long enough compared with `dumpfreq` and `histfreq`, dump files -**iced.[timeID]** and netCDF (or binary) history output files -**iceh\_[timeID].nc (.da)**. Using the :math:`\left<3^\circ\right>` -grid, the log file should be similar to -**ice.log.\ :math:`\langle`\ OS\ :math:`\rangle`**, provided for the -user’s convenience. These log files were created using MPI on 4 -processors on the :math:`\left<3^\circ\right>` grid. - -Several options are available in **comp\_ice** for configuring the run, -shown in :ref:`comp-ice`. If `NTASK` = 1, then the **serial/** -code is used, otherwise the code in **mpi/** is used. Loops over blocks -have been threaded throughout the code, so that their work will be -divided among `OMP\_NUM\_THREADS` if `THRD` is ‘yes.’ Note that the value of -`NTASK` in **comp\_ice** must equal the value of `nprocs` in **ice\_in**. 
-Generally the value of `MXBLCKS` computed by **comp\_ice** is sufficient, -but sometimes it will need to be set explicitly, as discussed in -Section :ref:`performance`. To conserve memory, match the tracer requests -in **comp\_ice** with those in **ice\_in**. CESM uses 3 aerosol tracers; -the number given in **comp\_ice** must be less than or equal to the -maximum allowed in **ice\_domain\_size.F90**. - -The scripts define a number of environment variables, mostly as -directories that you will need to edit for your own environment. -`$SYSTEM\_USERDIR`, which on machines at Oak Ridge National Laboratory -points automatically to scratch space, is intended to be a disk where -the run directory resides. `SHRDIR` is a path to the CESM shared code. - -:ref:`comp-ice` : Configuration options available in **comp_ice**. - -.. _comp-ice: - -.. table:: Table 6 - - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - | variable | options | description | - +=====================+======================================+====================================================================================+ - |RES | col, gx3, gx1 | grid resolution | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NTASK | (integer) | total number of processors | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |BLCKX | (integer) | number of grid cells on each block in the x-direction :math:`^\dagger` | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |BLCKY | (integer) | number of grid cells on each block in the y-direction :math:`^\dagger` | - 
+---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |MXBLCKS | (integer) | maximum number of blocks per processor | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NICELYR | (integer) | number of vertical layers in the ice | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NSNWLYR | (integer) | number of vertical layers in the snow | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NICECAT | (integer) | number of ice thickness categories | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |TRAGE | 0 or 1 | set to 1 for ice age tracer | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |TRFY | 0 or 1 | set to 1 for first-year ice age tracer | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |TRLVL | 0 or 1 | set to 1 for level and deformed ice tracers | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |TRPND | 0 or 1 | set to 1 for melt pond tracers | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NTRAERO | (integer) | number of aerosol tracers | - 
+---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |TRBRINE | 0 or 1 | set to 1 for brine height tracer | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NBGCLYR | (integer) | number of vertical layers for biogeochemical transport | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |IO_TYPE | none/netcdf/pio | use ‘none’ if the netCDF library is unavailable, ‘pio’ for PIO | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |DITTO | yes/no | for reproducible diagnostics | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |BARRIERS | yes/no | flushes MPI buffers during global scatters and gathers | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |THRD | yes/no | set to yes for OpenMP threaded parallelism | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |OMP_NUM_THREADS | (integer) | the number of OpenMP threads requested | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NUMIN | (integer) | smallest unit number assigned to CICE files | - +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - |NUMAX | (integer) | largest unit number assigned to CICE files | - 
+---------------------+--------------------------------------+------------------------------------------------------------------------------------+ - -The ‘reproducible’ option (`DITTO`) makes diagnostics bit-for-bit when -varying the number of processors. (The simulation results are -bit-for-bit regardless, because they do not require global sums or -max/mins as do the diagnostics.) This was done mainly by increasing the -precision for the global reduction calculations, except for regular -double-precision (r8) calculations involving MPI; MPI cannot handle -MPI\_REAL16 on some architectures. Instead, these cases perform sums or -max/min calculations across the global block structure, so that the -results are bit-for-bit as long as the block distribution is the same -(the number of processors can be different). - -A more flexible option is available for double-precision MPI -calculations, using the namelist variable `bfbflag`. When true, this flag -produces bit-for-bit identical diagnostics with different tasks, -threads, blocks and grid decompositions. - -CICE namelist variables available for changes after compile time appear -in **ice.log.\*** with values read from the file **ice\_in**; their -definitions are given in Section :ref:`index`. For example, to run for a -different length of time, say three days, set `npt` = 72 in **ice\_in**. -At present, the user supplies the time step `dt`, the number of -dynamics/advection/ridging subcycles `ndtd`, and for classic EVP, the -number of EVP subcycles `ndte`; `dte` is then calculated in subroutine -*init\_evp*. The primary reason for doing it this way is to ensure that -`ndte` is an integer. (This is done differently for `revised\_evp` = true; -see Section :ref:`dynam`). - -To restart from a previous run, set restart = true in **ice\_in**. There -are two ways of restarting from a given file. 
The restart pointer file -**ice.restart\_file** (created by the previous run) contains the name of -the last written data file (**iced.[timeID]**). Alternatively, a -filename can be assigned to ice\_ic in **ice\_in**. Consult -Section :ref:`init` for more details. Restarts are exact for MPI or -single processor runs. - -~~~~~~~ -Scripts -~~~~~~~ - -~~~~~~~~~~~ -Directories -~~~~~~~~~~~ - -~~~~~~~~~~~~~~~~~~~ -Local modifications -~~~~~~~~~~~~~~~~~~~ - -~~~~~~~~~~~~ -Forcing data -~~~~~~~~~~~~ - - -.. _performance: - ------------ -Performance ------------ - -Namelist options (*domain\_nml*) provide considerable flexibility for -finding the most efficient processor and block configuration. Some of -these choices are illustrated in :ref:`fig-distrb`. `processor\_shape` -chooses between tall, thin processor domains (`slenderX1` or `slenderX2`, -often better for sea ice simulations on global grids where nearly all of -the work is at the top and bottom of the grid with little to do in -between) and close-to-square domains, which maximize the volume to -surface ratio (and therefore on-processor computations to message -passing, if there were ice in every grid cell). In cases where the -number of processors is not a perfect square (4, 9, 16...), the -`processor\_shape` namelist variable allows the user to choose how the -processors are arranged. Here again, it is better in the sea ice model -to have more processors in x than in y, for example, 8 processors -arranged 4x2 (`square-ice`) rather than 2x4 (`square-pop`). The latter -option is offered for direct-communication compatibility with POP, in -which this is the default. - -The user provides the total number of processors and the block -dimensions in the setup script (**comp\_ice**). When moving toward -smaller, more numerous blocks, there is a point where the code becomes -less efficient; blocks should not have fewer than about 20 grid cells in -each direction. 
Squarish blocks optimize the volume-to-surface ratio for -communications. - -.. _fig-distrb: - -.. figure:: ./figures/distrb.png - :align: center - :scale: 50% - - Figure 9 - -:ref:`fig-distrb` : Distribution of 256 blocks across 16 processors, -represented by colors, on the gx1 grid: (a) cartesian, slenderX1, (b) -cartesian, slenderX2, (c) cartesian, square-ice (square-pop is -equivalent here), (d) rake with block weighting, (e) rake with -latitude weighting, (f) spacecurve. Each block consists of 20x24 grid -cells, and white blocks consist entirely of land cells. - -The `distribution\_type` options allow standard Cartesian distribution of -blocks, redistribution via a ‘rake’ algorithm for improved load -balancing across processors, and redistribution based on space-filling -curves. There are also three additional distribution types -(‘roundrobin,’ ‘sectrobin,’ ‘sectcart’) that improve land-block -elimination rates and also allow more flexibility in the number of -processors used. The rake and space-filling curve algorithms are -primarily helpful when using squarish processor domains where some -processors (located near the equator) would otherwise have little work -to do. Processor domains need not be rectangular, however. - -`distribution\_wght` chooses how the work-per-block estimates are -weighted. The ‘block’ option is the default in POP, which uses a lot of -array syntax requiring calculations over entire blocks (whether or not -land is present), and is provided here for direct-communication -compatibility with POP. The ‘latitude’ option weights the blocks based -on latitude and the number of ocean grid cells they contain. - -The rake distribution type is initialized as a standard, Cartesian -distribution. Using the work-per-block estimates, blocks are “raked" -onto neighboring processors as needed to improve load balancing -characteristics among processors, first in the x direction and then in -y. 
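The block bookkeeping behind these choices is easy to sketch. The
following illustrative Python helper (hypothetical, not part of CICE;
**comp\_ice** does the equivalent arithmetic) counts the blocks implied
by a block size and the minimum `MXBLCKS` needed for a processor count,
using the 256-block gx1 example of :ref:`fig-distrb`:

```python
import math

def block_layout(nx_global, ny_global, blckx, blcky, ntask):
    """Count blocks and the minimum MXBLCKS for a decomposition.

    Blocks with fewer than ~20 cells per side are usually
    inefficient (illustrative helper only).
    """
    nblocks_x = math.ceil(nx_global / blckx)   # blocks spanning x
    nblocks_y = math.ceil(ny_global / blcky)   # blocks spanning y
    total = nblocks_x * nblocks_y
    mxblcks = math.ceil(total / ntask)         # max blocks per processor
    return nblocks_x, nblocks_y, mxblcks

# gx1 grid (320x384 cells) in 20x24 blocks on 16 processors:
# a 16x16 array of 256 blocks, at most 16 blocks per processor
print(block_layout(320, 384, 20, 24, 16))
```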
- -Space-filling curves reduce a multi-dimensional space (2D, in our case) -to one dimension. The curve is composed of a string of blocks that is -snipped into sections, again based on the work per processor, and each -piece is placed on a processor for optimal load balancing. This option -requires that the block size be chosen such that the number of blocks in -the x direction equals the number of blocks in the y direction, and that -number must be factorable as :math:`2^n 3^m 5^p` where :math:`n, m, p` -are integers. For example, a 16x16 array of blocks, each containing -20x24 grid cells, fills the gx1 grid (:math:`n=4, m=p=0`). If either of -these conditions is not met, a Cartesian distribution is used instead. - -While the Cartesian distribution groups sets of blocks by processor, the -‘roundrobin’ distribution loops through the blocks and processors -together, putting one block on each processor until the blocks are gone. -This provides good load balancing but poor communication characteristics -due to the number of neighbors and the amount of data needed to -communicate. The ‘sectrobin’ and ‘sectcart’ algorithms loop similarly, -but put groups of blocks on each processor to improve the communication -characteristics. In the ‘sectcart’ case, the domain is divided into two -(east-west) halves and the loops are done over each, sequentially. -:ref:`fig-distribscorecard` provides an overview of the pros and cons -for the distribution types. - -.. _fig-distribscorecard: - -.. figure:: ./figures/scorecard.png - :align: center - :scale: 20% - - Figure 10 - -:ref:`fig-distribscorecard` : Scorecard for block distribution choices in -CICE, courtesy T. Craig. For more information, see -http://www.cesm.ucar.edu/events/ws.2012/Presentations/SEWG2/craig.pdf - -The `maskhalo` options in the namelist improve performance by removing -unnecessary halo communications where there is no ice. 
There is some -overhead in setting up the halo masks, which is done during the -timestepping procedure as the ice area changes, but this option -usually improves timings even for relatively small processor counts. -T. Craig has found that performance improved by more than 20% for -combinations of updated decompositions and masked haloes, in CESM’s -version of CICE. A practical guide for choosing a CICE grid -decomposition, based on experience in CESM, is available: -http://oceans11.lanl.gov/drupal/CICE/DecompositionGuide - -Throughout the code, (i, j) loops have been combined into a single loop, -often over just ocean cells or those containing sea ice. This was done -to reduce unnecessary operations and to improve vector performance. - -:ref:`fig-timings` illustrates the computational expense of various -options, relative to the total time (excluding initialization) of a -7-layer configuration using BL99 thermodynamics, EVP dynamics, and the -‘ccsm3’ shortwave parameterization on the gx1 grid, run for one year -from a no-ice initial condition. The block distribution consisted of -20 \ :math:`\times` 192 blocks spread over 32 processors (‘slenderX2’) -with no threads and -O2 optimization. Timings varied by about -:math:`\pm3`\ % in identically configured runs due to machine load. -Extra time required for tracers has two components, that needed to carry -the tracer itself (advection, category conversions) and that needed for -the calculations associated with the particular tracer. The age tracers -(FY and iage) require very little extra calculation, so their timings -represent essentially the time needed just to carry an extra tracer. The -topo melt pond scheme is slightly faster than the others because it -calculates pond area and volume once per grid cell, while the others -calculate it for each thickness category. - -.. _fig-timings: - -.. 
figure:: ./figures/histograms.png - :align: center - :scale: 20% - - Figure 11 - -:ref:`fig-timings` : Change in ‘TimeLoop’ timings from the 7-layer -configuration using BL99 thermodynamics and EVP dynamics. Timings -were made on a nondedicated machine, with variations of about -:math:`\pm3`\ % in identically configured runs (light grey). Darker -grey indicates the time needed for extra required options; The -Delta-Eddington radiation scheme is required for all melt pond -schemes and the aerosol tracers, and the level-ice pond -parameterization additionally requires the level-ice tracers. - -------------- -Adding things -------------- - -.. _addtimer: - -~~~~~~ -Timers -~~~~~~ - -Timing any section of code, or multiple sections, consists of defining -the timer and then wrapping the code with start and stop commands for -that timer. Printing of the timer output is done simultaneously for all -timers. To add a timer, first declare it (`timer\_[tmr]`) at the top of -**ice\_timers.F90** (we recommend doing this in both the **mpi/** and -**serial/** directories), then add a call to *get\_ice\_timer* in the -subroutine *init\_ice\_timers*. In the module containing the code to be -timed, `call ice\_timer\_start`(`timer\_[tmr]`) at the beginning of the -section to be timed, and a similar call to `ice\_timer\_stop` at the end. -A use `ice\_timers` statement may need to be added to the subroutine being -modified. Be careful not to have one command outside of a loop and the -other command inside. Timers can be run for individual blocks, if -desired, by including the block ID in the timer calls. - -.. _addhist: - -~~~~~~~~~~~~~~ -History fields -~~~~~~~~~~~~~~ - -To add a variable to be printed in the history output, search for -‘example’ in **ice\_history\_shared.F90**: - -#. add a frequency flag for the new field - -#. add the flag to the namelist (here and also in **ice\_in**) - -#. add an index number - -and in **ice\_history.F90**: - -#. broadcast the flag - -#. 
add a call to `define\_hist\_field` - -#. add a call to `accum\_hist\_field` - -The example is for a standard, two-dimensional (horizontal) field; for -other array sizes, choose another history variable with a similar shape -as an example. Some history variables, especially tracers, are grouped -in other files according to their purpose (bgc, melt ponds, etc.). - -To add an output frequency for an existing variable, see -section :ref:`history`. - -.. _addtrcr: - -~~~~~~~ -Tracers -~~~~~~~ - -Each optional tracer has its own module, **ice\_[tracer].F90**, which -also contains as much of the additional tracer code as possible, and for -backward compatibility of binary restart files, each new tracer has its -own binary restart file. We recommend that the logical namelist variable -`tr\_[tracer]` be used for all calls involving the new tracer outside of -**ice\_[tracer].F90**, in case other users do not want to use that -tracer. - -A number of optional tracers are available in the code, including ice -age, first-year ice area, melt pond area and volume, brine height, -aerosols, and level ice area and volume (from which ridged ice -quantities are derived). Salinity, enthalpies, age, aerosols, level-ice -volume, brine height and most melt pond quantities are volume-weighted -tracers, while first-year area, pond area, level-ice area and all of the -biogeochemistry tracers in this release are area-weighted tracers. In -the absence of sources and sinks, the total mass of a volume-weighted -tracer such as aerosol (kg) is conserved under transport in horizontal -and thickness space (the mass in a given grid cell will change), whereas -the aerosol concentration (:math:`kg/m^3`) is unchanged following the motion, and -in particular, the concentration is unchanged when there is surface or -basal melting. The proper units for a volume-weighted mass tracer in the -tracer array are :math:`kg/m^3`. 
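To make the volume-weighted bookkeeping concrete, here is a small
illustrative calculation (hypothetical numbers, not CICE code): the
tracer array carries the concentration, and the associated mass tracks
the ice volume:

```python
def tracer_mass(concentration, ice_volume):
    """Grid-cell tracer mass per unit area (kg/m^2).

    concentration: volume-weighted tracer value carried in the
        tracer array (kg/m^3)
    ice_volume: grid-cell mean ice volume, thickness x area (m)
    """
    return concentration * ice_volume

# Basal melt thins the ice: the concentration carried by the tracer
# array is unchanged, while the tracer mass shrinks with the volume.
before = tracer_mass(2.0, 1.5)   # 3.0 kg/m^2
after = tracer_mass(2.0, 1.0)    # 2.0 kg/m^2 after 0.5 m of melt
```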
- -In several places in the code, tracer computations must be performed on -the conserved “tracer volume" rather than the tracer itself; for -example, the conserved quantity is :math:`h_{pnd}a_{pnd}a_{lvl}a_{i}`, -not :math:`h_{pnd}`. Conserved quantities are thus computed according to -the tracer dependencies, and code must be included to account for new -dependencies (e.g., :math:`a_{lvl}` and :math:`a_{pnd}` in -**ice\_itd.F90** and **ice\_mechred.F90**). - -To add a tracer, follow these steps using one of the existing tracers as -a pattern. - -#. **ice\_domain\_size.F90**: increase `max\_ntrcr` (can also add option - to **comp\_ice** and **bld/Macros.\***) - -#. **ice\_state.F90**: declare `nt\_[tracer]` and `tr\_[tracer]` - -#. **ice\_[tracer].F90**: create initialization, physics, restart - routines - -#. **ice\_fileunits.F90**: add new dump and restart file units - -#. **ice\_init.F90**: (some of this may be done in **ice\_[tracer].F90** - instead) - - - add new module and `tr\_[tracer]` to list of used modules and - variables - - - add logical namelist variable `tr\_[tracer]` - - - initialize namelist variable - - - broadcast namelist variable - - - print namelist variable to diagnostic output file - - - increment number of tracers in use based on namelist input (`ntrcr`) - - - define tracer types (`trcr\_depend` = 0 for ice area tracers, 1 for - ice volume, 2 for snow volume, 2+nt\_[tracer] for dependence on - other tracers) - -#. **ice\_itd.F90**, **ice\_mechred.F90**: Account for new dependencies - if needed. - -#. **CICE\_InitMod.F90**: initialize tracer (includes reading restart - file) - -#. **CICE\_RunMod.F90**, **ice\_step\_mod.F90**: - - - call routine to write tracer restart data - - - call physics routines in **ice\_[tracer].F90** (often called from - **ice\_step\_mod.F90**) - -#. **ice\_restart.F90**: define restart variables (for binary, netCDF and PIO) - -#. **ice\_history\_[tracer].F90**: add history variables - (Section :ref:`addhist`) - -#. 
**ice\_in**: add namelist variables to *tracer\_nml* and - *icefields\_nml* - -#. If strict conservation is necessary, add diagnostics as noted for - topo ponds in Section :ref:`ponds`. - ---------------- -Troubleshooting ---------------- - -Check the FAQ: http://oceans11.lanl.gov/drupal/CICE/FAQ. - -.. _setup: - -~~~~~~~~~~~~~ -Initial setup -~~~~~~~~~~~~~ - -The script **comp\_ice** is configured so that the files **grid**, -**kmt**, **ice\_in**, **run\_ice**, **iced\_gx3\_v5.0** and -**ice.restart\_file** are NOT overwritten after the first setup. If you -wish to make changes to the original files in **input\_templates/** -rather than those in the run directory, either remove the files from the -run directory before executing **comp\_ice** or edit the script. - -The code may abort during the setup phase for any number of reasons, and -often the buffer containing the diagnostic output fails to print before -the executable exits. The quickest way to get the diagnostic information -is to run the code in an interactive shell with just the command `cice` -for serial runs or “`mpirun -np N cice`” for MPI runs, where N is the -appropriate number of processors (or a command appropriate for your -computer’s software). - -If the code fails to compile or run, or if the model configuration is -changed, try the following: - -- create **Macros.\***, **Makefile.\*** and **run\_ice.\*** files for - your particular platform, if they do not already exist (type ‘uname - -s’ at the prompt and compare the result with the file suffixes; we - rename `UNICOS/mp` as `UNICOS` for simplicity). - -- modify the `INCLUDE` directory path and other settings for your system - in the scripts, **Macros.\*** and **Makefile.\*** files. - -- alter directory paths, file names and the execution command as needed - in **run\_ice** and **ice\_in**. - -- ensure that `nprocs` in **ice\_in** is equal to `NTASK` in **comp\_ice**. 
- -- ensure that the block size `NXBLOCK`, `NYBLOCK` in **comp\_ice** is - compatible with the `processor\_shape` and other domain options in - **ice\_in** - -- if using the rake or space-filling curve algorithms for block - distribution (`distribution\_type` in **ice\_in**) the code will abort - if `MXBLCKS` is not large enough. The correct value is provided in the - diagnostic output. - -- if starting from a restart file, ensure that `kcatbound` is the same as - that used to create the file (`kcatbound` = 0 for the files included in - this code distribution). Other configuration parameters, such as - `NICELYR`, must also be consistent between runs. - -- for stand-alone runs, check that `-Dcoupled` is *not* set in the - **Macros.\*** file. - -- for coupled runs, check that `-Dcoupled` and other - coupled-model-specific (e.g., CESM, popcice or hadgem) preprocessing - options are set in the **Macros.\*** file. - -- edit the grid size and other parameters in **comp\_ice**. - -- remove the **compile/** directory completely and recompile. - -.. _restarttrouble: - -~~~~~~~~ -Restarts -~~~~~~~~ - -CICE version 5 introduces a new model configuration that makes -restarting from older simulations difficult. In particular, the number -of ice categories, the category boundaries, and the number of vertical -layers within each category must be the same in the restart file and in -the run restarting from that file. Moreover, significant differences in -the physics, such as the salinity profile, may cause the code to fail -upon restart. Therefore, new model configurations may need to be started -using `runtype` = ‘initial’. Binary restart files that were provided with -CICE v4.1 were made using the BL99 thermodynamics with 4 layers and 5 -thickness categories (`kcatbound` = 0) and therefore cannot be used for -the default CICE v5 configuration (7 layers). In addition, CICE’s -default restart file format is now netCDF instead of binary.
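The compatibility requirements above amount to a metadata comparison between the restart file and the new run. A minimal sketch of that check in plain Python (the key names here are hypothetical stand-ins for the real settings; CICE does not provide such a routine):

```python
# Illustrative sketch (not CICE code): the restart-compatibility check
# described above, reduced to a comparison of configuration metadata.
# The key names are hypothetical stand-ins for the actual settings.

def restart_mismatches(restart_cfg, run_cfg):
    """Return the settings that differ between a restart file and a run."""
    keys = ("ncat", "kcatbound", "nilyr")
    return [k for k in keys if restart_cfg.get(k) != run_cfg.get(k)]

# CICE v4.1 binary restarts: BL99 thermo, 4 ice layers, 5 categories
v41_restart = {"ncat": 5, "kcatbound": 0, "nilyr": 4}
# default CICE v5 configuration: 7 ice layers
v5_run = {"ncat": 5, "kcatbound": 0, "nilyr": 7}

assert restart_mismatches(v41_restart, v5_run) == ["nilyr"]   # incompatible
assert restart_mismatches(v41_restart, v41_restart) == []     # compatible
```

Any non-empty mismatch list means the run must be started with `runtype` = ‘initial’ rather than from the restart file.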
- -Restarting a run using `runtype` = ‘continue’ requires restart data for -all tracers used in the new run. If tracer restart data is not -available, use `runtype` = ‘initial’, setting `ice\_ic` to the name of the -core restart file and setting to true the namelist restart flags for -each tracer that is available. The unavailable tracers will be -initialized to their default settings. - -On tripole grids, use `restart\_ext` = true when using either binary or -regular (non-PIO) netcdf. - -Provided that the same number of ice layers (default: 4) will be used -for the new runs, it is possible to convert v4.1 restart files to the -new file structure and then to netCDF format. If the same physical -parameterizations are used, the code should be able to execute from -these files. However, if different physics is used (for instance, mushy -thermo instead of BL99), the code may still fail. To convert a v4.1 -restart file: - -#. Edit the code **input\_templates/convert\_restarts.f90** for your - model configuration and path names. Compile and run this code to - create a binary restart file that can be read using v5. Copy the - resulting file to the **restart/** subdirectory in your working - directory. - -#. In your working directory, turn off all tracer restart flags in - **ice\_in** and set the following: - - - runtype = ‘initial’ - - - ice\_ic = ‘./restart/[your binary file name]’ - - - restart = .true. - - - use\_restart\_time = .true. - -#. In **CICE\_InitMod.F90**, comment out the call to - restartfile(ice\_ic) and uncomment the call to - restartfile\_v4(ice\_ic) immediately below it. This will read the - v4.1 binary file and write a v5 netCDF file containing the same - information. - -If restart files are taking a long time to be written serially (i.e., -not using PIO), see the next section. - -~~~~~~~~~~~~~~ -Slow execution -~~~~~~~~~~~~~~ - -On some architectures, underflows (:math:`10^{-300}` for example) are -not flushed to zero automatically.
Usually a compiler flag is available -to do this, but if not, try uncommenting the block of code at the end of -subroutine *stress* in **ice\_dyn\_evp.F90** or **ice\_dyn\_eap.F90**. -You will take a hit for the extra computations, but it will not be as -bad as running with the underflows. - -In some configurations, multiple calls to scatter or gather global -variables may overfill MPI’s buffers, causing the code to slow down -(particularly when writing large output files such as restarts). To -remedy this problem, set `BARRIERS yes` in **comp\_ice**. This -synchronizes MPI messages, keeping the buffers in check. - -~~~~~~~~~~~~~~~ -Debugging hints -~~~~~~~~~~~~~~~ - -Several utilities are available that can be helpful when debugging the -code. Not all of these will work everywhere in the code, due to possible -conflicts in module dependencies. - -*debug\_ice* (**CICE.F90**) - A wrapper for *print\_state* that is easily called from numerous - points during the timestepping loop (see - **CICE\_RunMod.F90\_debug**, which can be substituted for - **CICE\_RunMod.F90**). - -*print\_state* (**ice\_diagnostics.F90**) - Print the ice state and forcing fields for a given grid cell. - -`dbug` = true (**ice\_in**) - Print numerous diagnostic quantities. - -`print\_global` (**ice\_in**) - If true, compute and print numerous global sums for energy and mass - balance analysis. This option can significantly degrade code - efficiency. - -`print\_points` (**ice\_in**) - If true, print numerous diagnostic quantities for two grid cells, - one near the north pole and one in the Weddell Sea. This utility - also provides the local grid indices and block and processor numbers - (`ip`, `jp`, `iblkp`, `mtask`) for these points, which can be used in - conjunction with `check\_step`, to call *print\_state*. These flags - are set in **ice\_diagnostics.F90**. This option can be fairly slow, - due to gathering data from processors. 
- -*global\_minval, global\_maxval, global\_sum* (**ice\_global\_reductions.F90**) - Compute and print the minimum and maximum values for an individual - real array, or its global sum. - -~~~~~~~~~~ -Known bugs -~~~~~~~~~~ - -#. Fluxes sent to the CESM coupler may have incorrect values in grid - cells that change from an ice-free state to having ice during the - given time step, or vice versa, due to scaling by the ice area. The - authors of the CESM flux coupler insist on the area scaling so that - the ice and land models are treated consistently in the coupler (but - note that the land area does not suddenly become zero in a grid cell, - as does the ice area). - -#. With the old CCSM radiative scheme (`shortwave` = ‘default’ or - ‘ccsm3’), a sizable fraction (more than 10%) of the total shortwave - radiation is absorbed at the surface but should be penetrating into - the ice interior instead. This is due to use of the aggregated, - effective albedo rather than the bare ice albedo when - `snowpatch` :math:`< 1`. - -#. The date-of-onset diagnostic variables, `melt\_onset` and `frz\_onset`, - are not included in the core restart file, and therefore may be - incorrect for the current year if the run is restarted after Jan 1. - Also, these variables were implemented with the Arctic in mind and - may be incorrect for the Antarctic. - -#. The single-processor *system\_clock* time may give erratic results on - some architectures. - -#. History files that contain time-averaged data (`hist\_avg` = true in - **ice\_in**) will be incorrect if restarting from midway through an - averaging period. - -#. In stand-alone runs, restarts from the end of `ycycle` will not be - exact. - -#. Using the same frequency twice in `histfreq` will have unexpected - consequences and cause the code to abort. - -#. Latitude and longitude fields in the history output may be wrong when - using padding.
- -~~~~~~~~~~~~~~~~~~~~~~~~~ -Interpretation of albedos -~~~~~~~~~~~~~~~~~~~~~~~~~ - -The snow-and-ice albedo, `albsni`, and diagnostic albedos `albice`, `albsno`, -and `albpnd` are merged over categories but not scaled (divided) by the -total ice area. (This is a change from CICE v4.1 for `albsni`.) The latter -three history variables represent completely bare or completely snow- or -melt-pond-covered ice; that is, they do not take into account the snow -or melt pond fraction (`albsni` does, as does the code itself during -thermodynamic computations). This is to facilitate comparison with -typical values in measurements or other albedo parameterizations. The -melt pond albedo `albpnd` is only computed for the Delta-Eddington -shortwave case. - -With the Delta-Eddington parameterization, the albedo depends on the -cosine of the zenith angle (:math:`\cos\varphi`, `coszen`) and is zero if -the sun is below the horizon (:math:`\cos\varphi < 0`). Therefore -time-averaged albedo fields would be low if a diurnal solar cycle is -used, because zero values would be included in the average for half of -each 24-hour period. To rectify this, a separate counter is used for the -averaging that is incremented only when :math:`\cos\varphi > 0`. The -albedos will still be zero in the dark, polar winter hemisphere. - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Proliferating subprocess parameterizations -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -With the addition of several alternative parameterizations for sea ice -processes, a number of subprocesses now appear in multiple parts of the -code with differing descriptions. For instance, sea ice porosity and -permeability, along with associated flushing and flooding, are -calculated separately for mushy thermodynamics, topo and level-ice melt -ponds, and for the brine height tracer, each employing its own -equations.
Likewise, the BL99 and mushy thermodynamics compute freeboard -and snow–ice formation differently, and the topo and level-ice melt pond -schemes both allow fresh ice to grow atop melt ponds, using slightly -different formulations for Stefan freezing. These various process -parameterizations will be compared and their subprocess descriptions -possibly unified in the future. - ------------- -Testing CICE ------------- - -Version 6, August 2017 -This section describes how to use the testing features developed for the -CICE Consortium CICE sea ice model. - -.. _basic: - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Individual tests and test suites -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The CICE scripts support setup of both individual tests and test suites. Individual -tests are run from the command line like - - > create.case -t smoke -m wolf -g gx3 -p 8x2 -s diag1,run5day -testid myid - -where -m designates a specific machine. Test suites are multiple tests that are specified in -an input file and are started on the command line like - - > create.case -ts base_suite -m wolf -testid myid - -create.case with -t or -ts requires a testid to uniquely name test directories. The format -of the case directory name for a test will always be -${machine}_${test}_${grid}_${pes}_${soptions}.${testid} - -To build and run a test, the process is the same as for a case: - cd into the test directory, - - run cice.build - - run cice.submit - -The test results will be generated in a local file called "test_output". - -When running a test suite, the create.case command line automatically generates all the tests -under a directory named ${test_suite}.${testid}. It then automatically builds and submits all -tests. When the tests are complete, run the results.csh script to see the results from all the -tests. - -Tests are defined under configuration/scripts/tests. The tests currently supported are: - smoke - Runs the model for default length.
The length and options can - be set with the -s command line option. The test passes if the - model completes successfully. - restart - Runs the model for 10 days, writing a restart file at day 5 and - again at day 10. Runs the model a second time starting from the - day 5 restart and writing a restart at day 10 of the model run. - The test passes if both the 10-day and 5-day restart runs complete and - if the restart files at day 10 from both runs are bit-for-bit identical. - -Please run './create.case -h' for additional details. - -.. _additional: - -~~~~~~~~~~~~~~~~~~~~~~~~~~ -Additional testing options -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -There are several additional options on the create.case command line for testing that -provide the ability to regression test and compare tests to each other. - - -bd defines a baseline directory where tests can be stored for regression testing - - -bg defines a version name under which the current tests can be saved for regression testing - - -bc defines a version name that the current tests should be compared to for regression testing - - -td provides a way to compare tests with each other - -To use -bg, - > create.case -ts base_suite -m wolf -testid v1 -bg version1 -bd $SCRATCH/CICE_BASELINES - will copy all the results from the test suite to $SCRATCH/CICE_BASELINES/version1. - -To use -bc, - > create.case -ts base_suite -m wolf -testid v2 -bc version1 -bd $SCRATCH/CICE_BASELINES - will compare all the results from this test suite to results saved previously in $SCRATCH/CICE_BASELINES/version1. - --bc and -bg can be combined, - > create.case -ts base_suite -m wolf -testid v2 -bg version2 -bc version1 -bd $SCRATCH/CICE_BASELINES - will save the current results to $SCRATCH/CICE_BASELINES/version2 and compare the current results to - results saved previously in $SCRATCH/CICE_BASELINES/version1. - --bg, -bc, and -bd are used for regression testing. There is a default -bd on each machine.
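A regression compare of this kind ultimately reduces to a bit-for-bit check of run output against the files stored under the baseline directory. A minimal sketch of such a check in Python (the file names and layout are hypothetical; this is not the Consortium's comparison script):

```python
import filecmp
import os

# Illustrative sketch (not the actual comparison script): a baseline
# compare is a byte-for-byte check of run output against files saved
# under the baseline directory.  File names here are hypothetical.

def compare_to_baseline(run_dir, baseline_dir, names=("iced", "iceh")):
    """Byte-for-byte compare selected output files; True means identical."""
    results = {}
    for name in names:
        a = os.path.join(run_dir, name)
        b = os.path.join(baseline_dir, name)
        # shallow=False forces an actual byte comparison, not just a stat()
        results[name] = (os.path.isfile(a) and os.path.isfile(b)
                         and filecmp.cmp(a, b, shallow=False))
    return results
```

Any `False` entry corresponds to a failed bit-for-bit comparison against the baseline.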
- --td allows a user to compare one test result to another. For instance, - > create.case -t smoke -m wolf -g gx3 -p 8x2 -s run5day -testid t01 - > create.case -t smoke -m wolf -g gx3 -p 4x2 -s run5day -testid t01 -td smoke_gx3_8x2_run5day - -An additional check will be done for the second test (because of the -td argument), and it will compare -the output from the first test "smoke_gx3_8x2_run5day" to the output from its own test "smoke_gx3_4x2_run5day" -and generate a result for that. It is important that the first test completes before the second test is -run. Also, the -td option works only if the testid and the machine are the same for the baseline -run and the current run. - -.. _format: - -~~~~~~~~~~~~~~~~~ -Test suite format -~~~~~~~~~~~~~~~~~ - -The format for the test suite file is relatively simple. It is a text file with white-space-delimited -columns like, - -.. _tab-test: - -.. csv-table:: Table 7 - :header: "#Test", "Grid", "PEs", "Sets", "BFB-compare" - :widths: 7, 7, 7, 15, 15 - - "smoke", "gx3", "8x2", "diag1,run5day", "" - "smoke", "gx3", "8x2", "diag24,run1year,medium", "" - "smoke", "gx3", "4x1", "debug,diag1,run5day", "" - "smoke", "gx3", "8x2", "debug,diag1,run5day", "" - "smoke", "gx3", "4x2", "diag1,run5day", "smoke_gx3_8x2_diag1_run5day" - "smoke", "gx3", "4x1", "diag1,run5day,thread", "smoke_gx3_8x2_diag1_run5day" - "smoke", "gx3", "4x1", "diag1,run5day", "smoke_gx3_4x1_diag1_run5day_thread" - "restart", "gx3", "8x1", "", "" - "restart", "gx3", "4x2", "debug", "" - - -The first column is the test name, the second the grid, the third the pe count, the fourth column is -the -s options and the fifth column is the -td argument. The fourth and fifth columns are optional. -The argument to -ts defines which filename to choose and that argument can contain a path. create.case -will also look for the filename in configuration/scripts/tests where some preset test suites are defined.
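The column layout above is easy to parse; a minimal sketch in Python makes it concrete (create.case itself is a csh script, so this is illustrative only):

```python
# Illustrative sketch: parsing the white-space-delimited suite format
# described above.  create.case itself is a csh script; this Python
# version only makes the column layout concrete.

def parse_suite(text):
    """Return one dict per test line; '#' lines are comments/headers."""
    tests = []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue
        tests.append({
            "test": fields[0],
            "grid": fields[1],
            "pes": fields[2],
            "sets": fields[3] if len(fields) > 3 else "",      # -s options
            "compare": fields[4] if len(fields) > 4 else "",   # -td target
        })
    return tests

suite = """\
#Test    Grid  PEs  Sets           BFB-compare
smoke    gx3   8x2  diag1,run5day
restart  gx3   4x2  debug
smoke    gx3   4x2  diag1,run5day  smoke_gx3_8x2_diag1_run5day
"""
assert len(parse_suite(suite)) == 3
assert parse_suite(suite)[2]["compare"] == "smoke_gx3_8x2_diag1_run5day"
```

Because the fourth and fifth columns are optional, missing fields simply default to empty strings, matching the blank cells in the table.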
- -~~~~~~~~~~~~~~~~~~~~~~~~~~ -Example Tests (Quickstart) -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -********************************************** -To generate a baseline dataset for a test case -********************************************** - -./create.case -t smoke -m wolf -bg cicev6.0.0 -testid t00 - -cd wolf_smoke_gx3_4x1.t00 - -./cice.build - -./cice.submit - -# After job finishes, check output - -cat test_output - -**************************************************** -To run a test case and compare to a baseline dataset -**************************************************** - -./create.case -t smoke -m wolf -bc cicev6.0.0 -testid t01 - -cd wolf_smoke_gx3_4x1.t01 - -./cice.build - -./cice.submit - -# After job finishes, check output - -cat test_output - -********************************************* -To run a test suite to generate baseline data -********************************************* - -./create.case -m wolf -ts base_suite -testid t02 -bg cicev6.0.0bs - -cd base_suite.t02 - -# Once all jobs finish, concatenate all output - -./results.csh # All test results will be stored in results.log - -# To plot a timeseries of "total ice extent", "total ice area", and "total ice volume" - -./timeseries.csh - -ls \*.png - -*********************************************** -To run a test suite to compare to baseline data -*********************************************** - -./create.case -m wolf -ts base_suite -testid t03 -bc cicev6.0.0bs - -cd base_suite.t03 - -# Once all jobs finish, concatenate all output - -./results.csh # All test results will be stored in results.log - -# To plot a timeseries of "total ice extent", "total ice area", and "total ice volume" - -./timeseries.csh - -ls \*.png - -************************** -To compare to another test ************************** -`First:` - -./create.case -m wolf -t smoke -testid t01 -p 8x2 - -cd wolf_smoke_gx3_8x2.t01 - -./cice.build - -./cice.submit - -# After job finishes, check output - -cat test_output - -`Then, do the 
comparison:` - -./create.case -m wolf -t smoke -testid t01 -td smoke_gx3_8x2 -s thread -p 4x1 - -cd wolf_smoke_gx3_4x1_thread.t01 - -./cice.build - -./cice.submit - -# After job finishes, check output - -cat test_output - -****************** -Additional Details -****************** - -- In general, the baseline generation, baseline compare, and test diff are independent. -- Use the '-bd' flag to specify the location where you want the baseline dataset - to be written. Without specifying '-bd', the baseline dataset will be written - to the default baseline directory found in the env. file (ICE_MACHINE_BASELINE). -- If '-bd' is not passed, the scripts will look for baseline datasets in the default - baseline directory found in the env. file (ICE_MACHINE_BASELINE). - If the '-bd' option is passed, the scripts will look for baseline datasets in the - location passed to the -bd argument. -- To generate a baseline dataset for a specific version (for regression testing), - use '-bg <version_name>'. The scripts will then place the baseline dataset - in $ICE_MACHINE_BASELINE/<version_name>/ -- The '-testid' flag allows users to specify a testing id that will be added to the - end of the case directory. For example, "./create.case -m wolf -t smoke -testid t12 -p 4x1" - creates the directory wolf_smoke_gx3_4x1.t12. This flag is REQUIRED if using -t or -ts. - -.. _compliance: - -~~~~~~~~~~~~~~~~~~~~ -Code Compliance Test -~~~~~~~~~~~~~~~~~~~~ - -A core tenet of CICE dycore and Icepack innovations is that they must not change -the physics and biogeochemistry of existing model configurations, notwithstanding -obsolete model components. Therefore, alterations to existing CICE Consortium code -must only fix demonstrable numerical or scientific inaccuracies or bugs, or be -necessary to introduce new science into the code.
New physics and biogeochemistry -introduced into the model must not change model answers when switched off, and in -that case CICEcore and Icepack must reproduce answers bit-for-bit as compared to -previous simulations with the same namelist configurations. This bit-for-bit -requirement is common in Earth System Modeling projects, but often cannot be achieved -in practice because model additions may require changes to existing code. In this -circumstance, bit-for-bit reproducibility using one compiler may not be achievable -on a different computing platform with a different compiler. Therefore, tools for -scientific testing of CICE code changes have been developed to accompany bit-for-bit -testing. These tools exploit the statistical properties of simulated sea ice thickness -to confirm or deny the null hypothesis, which is that new additions to the CICE dycore -and Icepack have not significantly altered simulated ice volume using previous model -configurations. Here we describe the CICE testing tools, which are applied to output -from five-year gx-1 simulations that use the standard CICE atmospheric forcing. -A scientific justification of the testing is provided in -:cite:`Hunke2018`. - -.. _paired: - -******************************* -Two-Stage Paired Thickness Test -******************************* - -The first quality check aims to confirm the null hypothesis -:math:`H_0\!:\!\mu_d{=}0` at every model grid point, given the mean -thickness difference :math:`\mu_d` between paired CICE simulations -‘:math:`a`’ and ‘:math:`b`’ that should be identical. :math:`\mu_d` is -approximated as -:math:`\bar{h}_{d}=\tfrac{1}{n}\sum_{i=1}^n (h_{ai}{-}h_{bi})` for -:math:`n` paired samples of ice thickness :math:`h_{ai}` and -:math:`h_{bi}` in each grid cell of the gx-1 mesh. Following -:cite:`Wilks2006`, the associated :math:`t`-statistic -expects a zero mean, and is therefore - -.. 
math:: - t=\frac{\bar{h}_{d}}{\sigma_d/\sqrt{n_{eff}}} - :label: t-distribution - -given variance -:math:`\sigma_d^{\;2}=\frac{1}{n-1}\sum_{i=1}^{n}(h_{di}-\bar{h}_d)^2` -of :math:`h_{di}{=}(h_{ai}{-}h_{bi})` and effective sample size - -.. math:: - n_{eff}{=}n\frac{({1-r_1})}{({1+r_1})} - :label: neff - -for lag-1 autocorrelation: - -.. math:: - r_1=\frac{\sum\limits_{i=1}^{n-1}\big[(h_{di}-\bar{h}_{d1:n-1})(h_{di+1}-\bar{h}_{d2:n})\big]}{\sqrt{\sum\limits_{i=1}^{n-1} (h_{di}-\bar{h}_{d1:n-1})^2 \sum\limits_{i=2}^{n} (h_{di}-\bar{h}_{d2:n})^2 }}. - :label: r1 - -Here, :math:`\bar{h}_{d1:n-1}` is the mean of all samples except the -last, and :math:`\bar{h}_{d2:n}` is the mean of samples except the -first, and both differ from the overall mean :math:`\bar{h}_d` in -equations (:eq:`t-distribution`). That is: - -.. math:: - \bar{h}_{d1:n-1}=\frac{1}{n{-}1} \sum \limits_{i=1}^{n-1} h_{di},\quad - \bar{h}_{d2:n}=\frac{1}{n{-}1} \sum \limits_{i=2}^{n} h_{di},\quad - \bar{h}_d=\frac{1}{n} \sum \limits_{i=1}^{n} {h}_{di} - :label: short-means - -Following :cite:`Zwiers1995`, the effective sample size is -limited to :math:`n_{eff}\in[2,n]`. This definition of :math:`n_{eff}` -assumes ice thickness evolves as an AR(1) process -:cite:`VonStorch1999`, which can be justified by analyzing -the spectral density of daily samples of ice thickness from 5-year -records in CICE Consortium member models :cite:`Hunke2018`. -The AR(1) approximation is inadmissible for paired velocity samples, -because ice drift possesses periodicity from inertia and tides -:cite:`Hibler2006,Lepparanta2012,Roberts2015`. Conversely, -tests of paired ice concentration samples may be less sensitive to ice -drift than ice thickness. In short, ice thickness is the best variable -for CICE Consortium quality control (QC), and for the test of the mean -in particular. - -Care is required in analyzing mean sea ice thickness changes using -(:eq:`t-distribution`) with -:math:`N{=}n_{eff}{-}1` degrees of freedom. 
:cite:`Zwiers1995` demonstrate that the :math:`t`-test in -(:eq:`t-distribution`) becomes conservative when -:math:`n_{eff} < 30`, meaning that :math:`H_0` may be erroneously -confirmed for highly auto-correlated series. Strong autocorrelation -frequently occurs in modeled sea ice thickness, and :math:`r_1>0.99` is -possible in parts of the gx-1 domain for the five-year QC simulations. -In the event that :math:`H_0` is confirmed but :math:`2\leq n_{eff}<30`, -the :math:`t`-test progresses to the ‘Table Lookup Test’ of -:cite:`Zwiers1995`, to check that the first-stage test -using (:eq:`t-distribution`) was not -conservative. The Table Lookup Test chooses critical :math:`t` values -:math:`|t|`. -There are two options for posting CICE results to CDash: 1) the automated -script, or 2) the manual method. - -***************** -Automatic Script -***************** - -To automatically run the CICE tests and post the results to the CICE CDash dashboard, -users need to copy and run the ``run.suite`` script: - -.. code-block:: bash - - cp configuration/scripts/run.suite . - ./run.suite -m <machine> -testid <testid> -bc <baseline_to_compare> -bg <baseline_to_generate> - -The run.suite script does the following: - -- Creates a fresh clone of the CICE-Consortium repository -- ``cd`` to the cloned repo -- Runs ``create.case`` to generate the base_suite directories. The output - is piped to ``log.suite`` -- Running ``create.case`` submits each individual job to the queue. -- ``run.suite`` monitors the queue manager to determine when all jobs have - finished (pings the queue manager once every 5 minutes).
- -- Once all jobs complete, ``cd`` to the base_suite directory and run ``./results.csh`` -- Run ``./run_ctest.csh`` in order to post the test results to the CDash dashboard - -***************** -Manual Method -***************** - -To manually run the CICE tests and post the results to the CICE CDash dashboard, -users need to perform the steps that ``run.suite`` automates, detailed below: - -- Pass the ``-report`` flag to create.case when running the ``base_suite`` test suite. - The ``-report`` flag copies the required CTest / CDash scripts to the suite - directory. -- ``create.case`` compiles the CICE code, and submits all of the jobs to the - queue manager. -- After every job has been submitted and completed, ``cd`` to the suite directory. -- Parse the results by running ``./results.csh``. -- Run the CTest / CDash script ``./run_ctest.csh``. - -If the ``run_ctest.csh`` script is unable to post the testing results to the CDash -server, a message will be printed to the screen detailing instructions on how to attempt -to post the results from another server. If ``run_ctest.csh`` fails to submit the results, -it will generate a tarball ``cice_ctest.tgz`` that contains the necessary files for -submission. Copy this file to another server (CMake version 2.8+ required), extract the -archive, and run ``./run_ctest.csh -submit``. - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -End-To-End Testing Procedure -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Below is an example of a step-by-step procedure for testing a code change that produces -non-bit-for-bit results: - -.. 
code-block:: bash - - # Create a baseline dataset (only necessary if no baseline exists on the system) - ./create.case -m onyx -ts base_suite -testid base0 -bg cicev6.0.0 -a - - # Check out the updated code, or clone from a pull request - - # Run the test with the new code - ./create.case -m onyx -ts base_suite -testid test0 -bc cicev6.0.0 -a - - # Check the results - cd base_suite.test0 - ./results.csh - - #### If the BFB tests fail, perform the compliance testing #### - # Create a QC baseline - ./create.case -m onyx -t smoke -g gx1 -p 44x1 -testid qc_base -s qc,medium -a - cd onyx_smoke_gx1_44x1_medium_qc.qc_base - ./cice.build - ./cice.submit - - # Check out the updated code or clone from a pull request - - # Create the t-test testing data - ./create.case -m onyx -t smoke -g gx1 -p 44x1 -testid qc_test -s qc,medium -a - cd onyx_smoke_gx1_44x1_medium_qc.qc_test - ./cice.build - ./cice.submit - - # Wait for runs to finish - - # Perform the QC test - cp configuration/scripts/tests/QC/cice.t-test.py . - ./cice.t-test.py /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_base \ - /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_test - - # Example output: - INFO:__main__:Number of files: 1825 - INFO:__main__:Two-Stage Test Passed - INFO:__main__:Quadratic Skill Test Passed for Northern Hemisphere - INFO:__main__:Quadratic Skill Test Passed for Southern Hemisphere - INFO:__main__: - INFO:__main__:Quality Control Test PASSED - -.. _tabnamelist: - -------------------------- -Table of namelist options -------------------------- - -.. _tab-namelist: - -.. 
csv-table:: Table 8 - :header: "variable", "options/format", "description", "recommended value" - :widths: 15, 15, 30, 15 - - "*setup_nml*", " ", " ", " " - "", "", "*Time, Diagnostics*", "" - "``days_per_year``", "``360`` or ``365``", "number of days in a model year", "365" - "``use_leap_years``", "true/false", "if true, include leap days", "" - "``year_init``", "yyyy", "the initial year, if not using restart", "" - "``istep0``", "integer", "initial time step number", "0" - "``dt``", "seconds", "thermodynamics time step length", "3600." - "``npt``", "integer", "total number of time steps to take", "" - "``ndtd``", "integer", "number of dynamics/advection/ridging steps per thermo timestep", "1" - "", "", "*Initialization/Restarting*", "" - "``runtype``", "``initial``", "start from ``ice_ic``", "" - "", "``continue``", "restart using ``pointer_file``", "" - "``ice_ic``", "``default``", "latitude and sst dependent", "default" - "", "``none``", "no ice", "" - "", "path/file", "restart file name", "" - "``restart``", "true/false", "initialize using restart file", "``.true.``" - "``use_restart_time``", "true/false", "set initial date using restart file", "``.true.``" - "``restart_format``", "nc", "read/write netCDF restart files (use with PIO)", "" - "", "bin", "read/write binary restart files", "" - "``lcdf64``", "true/false", "if true, use 64-bit netCDF format", "" - "``restart_dir``", "path/", "path to restart directory", "" - "``restart_ext``", "true/false", "read/write halo cells in restart files", "" - "``restart_file``", "filename prefix", "output file for restart dump", "‘iced’" - "``pointer_file``", "pointer filename", "contains restart filename", "" - "``dumpfreq``", "``y``", "write restart every ``dumpfreq_n`` years", "y" - "", "``m``", "write restart every ``dumpfreq_n`` months", "" - "", "``d``", "write restart every ``dumpfreq_n`` days", "" - "``dumpfreq_n``", "integer", "frequency restart data is written", "1" - "``dump_last``", "true/false", "if true, write restart 
on last time step of simulation", "" - "", "", "*Model Output*", "" - "``bfbflag``", "true/false", "for bit-for-bit diagnostic output", "" - "``diagfreq``", "integer", "frequency of diagnostic output in ``dt``", "24" - "", "*e.g.*, 10", "once every 10 time steps", "" - "``diag_type``", "``stdout``", "write diagnostic output to stdout", "" - "", "``file``", "write diagnostic output to file", "" - "``diag_file``", "filename", "diagnostic output file (script may reset)", "" - "``print_global``", "true/false", "print diagnostic data, global sums", "``.false.``" - "``print_points``", "true/false", "print diagnostic data for two grid points", "``.false.``" - "``latpnt``", "real", "latitude of (2) diagnostic points", "" - "``lonpnt``", "real", "longitude of (2) diagnostic points", "" - "``dbug``", "true/false", "if true, write extra diagnostics", "``.false.``" - "``histfreq``", "string array", "defines output frequencies", "" - "", "``y``", "write history every ``histfreq_n`` years", "" - "", "``m``", "write history every ``histfreq_n`` months", "" - "", "``d``", "write history every ``histfreq_n`` days", "" - "", "``h``", "write history every ``histfreq_n`` hours", "" - "", "``1``", "write history every time step", "" - "", "``x``", "unused frequency stream (not written)", "" - "``histfreq_n``", "integer array", "frequency history output is written", "" - "", "0", "do not write to history", "" - "``hist_avg``", "true", "write time-averaged data", "``.true.``" - "", "false", "write snapshots of data", "" - "``history_dir``", "path/", "path to history output directory", "" - "``history_file``", "filename prefix", "output file for history", "‘iceh’" - "``write_ic``", "true/false", "write initial condition", "" - "``incond_dir``", "path/", "path to initial condition directory", "" - "``incond_file``", "filename prefix", "output file for initial condition", "‘iceh’" - "``runid``", "string", "label for run (currently CESM only)", "" - "", "", "", "" - "*grid_nml*", "", "", "" 
- "", "", "*Grid*", "" - "``grid_format``", "``nc``", "read  grid and kmt files", "‘bin’" - "", "``bin``", "read direct access, binary file", "" - "``grid_type``", "``rectangular``", "defined in *rectgrid*", "" - "", "``displaced_pole``", "read from file in *popgrid*", "" - "", "``tripole``", "read from file in *popgrid*", "" - "", "``regional``", "read from file in *popgrid*", "" - "``grid_file``", "filename", "name of grid file to be read", "‘grid’" - "``kmt_file``", "filename", "name of land mask file to be read", "‘kmt’" - "``gridcpl_file``", "filename", "input file for coupling grid info", "" - "``kcatbound``", "``0``", "original category boundary formula", "0" - "", "``1``", "new formula with round numbers", "" - "", "``2``", "WMO standard categories", "" - "", "``-1``", "one category", "" - "", "", "", "" - "*domain_nml*", "", "", "" - "", "", "*Domain*", "" - "``nprocs``", "integer", "number of processors to use", "" - "``processor_shape``", "``slenderX1``", "1 processor in the y direction (tall, thin)", "" - "", "``slenderX2``", "2 processors in the y direction (thin)", "" - "", "``square-ice``", "more processors in x than y, :math:`\sim` square", "" - "", "``square-pop``", "more processors in y than x, :math:`\sim` square", "" - "``distribution_type``", "``cartesian``", "distribute blocks in 2D Cartesian array", "" - "", "``roundrobin``", "1 block per proc until blocks are used", "" - "", "``sectcart``", "blocks distributed to domain quadrants", "" - "", "``sectrobin``", "several blocks per proc until used", "" - "", "``rake``", "redistribute blocks among neighbors", "" - "", "``spacecurve``", "distribute blocks via space-filling curves", "" - "``distribution_weight``", "``block``", "full block size sets ``work_per_block``", "" - "", "``latitude``", "latitude/ocean sets ``work_per_block``", "" - "``ew_boundary_type``", "``cyclic``", "periodic boundary conditions in x-direction", "" - "", "``open``", "Dirichlet boundary conditions in x", "" - 
"``ns_boundary_type``", "``cyclic``", "periodic boundary conditions in y-direction", "" - "", "``open``", "Dirichlet boundary conditions in y", "" - "", "``tripole``", "U-fold tripole boundary conditions in y", "" - "", "``tripoleT``", "T-fold tripole boundary conditions in y", "" - "``maskhalo_dyn``", "true/false", "mask unused halo cells for dynamics", "" - "``maskhalo_remap``", "true/false", "mask unused halo cells for transport", "" - "``maskhalo_bound``", "true/false", "mask unused halo cells for boundary updates", "" - "", "", "", "" - "*tracer_nml*", "", "", "" - "", "", "*Tracers*", "" - "``tr_iage``", "true/false", "ice age", "" - "``restart_age``", "true/false", "restart tracer values from file", "" - "``tr_FY``", "true/false", "first-year ice area", "" - "``restart_FY``", "true/false", "restart tracer values from file", "" - "``tr_lvl``", "true/false", "level ice area and volume", "" - "``restart_lvl``", "true/false", "restart tracer values from file", "" - "``tr_pond_cesm``", "true/false", "CESM melt ponds", "" - "``restart_pond_cesm``", "true/false", "restart tracer values from file", "" - "``tr_pond_topo``", "true/false", "topo melt ponds", "" - "``restart_pond_topo``", "true/false", "restart tracer values from file", "" - "``tr_pond_lvl``", "true/false", "level-ice melt ponds", "" - "``restart_pond_lvl``", "true/false", "restart tracer values from file", "" - "``tr_aero``", "true/false", "aerosols", "" - "``restart_aero``", "true/false", "restart tracer values from file", "" - "*thermo_nml*", "", "", "" - "", "", "*Thermodynamics*", "" - "``kitd``", "``0``", "delta function ITD approximation", "1" - "", "``1``", "linear remapping ITD approximation", "" - "``ktherm``", "``0``", "zero-layer thermodynamic model", "" - "", "``1``", "Bitz and Lipscomb thermodynamic model", "" - "", "``2``", "mushy-layer thermodynamic model", "" - "``conduct``", "``MU71``", "conductivity :cite:`MU71`", "" - "", "``bubbly``", "conductivity :cite:`PETB07`", "" - 
"``a_rapid_mode``", "real", "brine channel diameter", "0.5x10 :math:`^{-3}` m" - "``Rac_rapid_mode``", "real", "critical Rayleigh number", "10" - "``aspect_rapid_mode``", "real", "brine convection aspect ratio", "1" - "``dSdt_slow_mode``", "real", "drainage strength parameter", "-1.5x10 :math:`^{-7}` m/s/K" - "``phi_c_slow_mode``", ":math:`0<\phi_c < 1`", "critical liquid fraction", "0.05" - "``phi_i_mushy``", ":math:`0<\phi_i < 1`", "solid fraction at lower boundary", "0.85" - "", "", "", "" - "*dynamics_nml*", "", "", "" - "", "", "*Dynamics*", "" - "``kdyn``", "``0``", "dynamics OFF", "1" - "", "``1``", "EVP dynamics", "" - "", "``2``", "EAP dynamics", "" - "``revised_evp``", "true/false", "use revised EVP formulation", "" - "``ndte``", "integer", "number of EVP subcycles", "120" - "``advection``", "``remap``", "linear remapping advection", "‘remap’" - "", "``upwind``", "donor cell advection", "" - "``kstrength``", "``0``", "ice strength formulation :cite:`Hibler79`", "1" - "", "``1``", "ice strength formulation :cite:`Rothrock75`", "" - "``krdg_partic``", "``0``", "old ridging participation function", "1" - "", "``1``", "new ridging participation function", "" - "``krdg_redist``", "``0``", "old ridging redistribution function", "1" - "", "``1``", "new ridging redistribution function", "" - "``mu_rdg``", "real", "e-folding scale of ridged ice", "" - "``Cf``", "real", "ratio of ridging work to PE change in ridging", "17." 
- "", "", "", "" - "*shortwave_nml*", "", "", "" - "", "", "*Shortwave*", "" - "``shortwave``", "``default``", "NCAR CCSM3 distribution method", "" - "", "``dEdd``", "Delta-Eddington method", "" - "``albedo_type``", "``default``", "NCAR CCSM3 albedos", "‘default’" - "", "``constant``", "four constant albedos", "" - "``albicev``", ":math:`0<\alpha <1`", "visible ice albedo for thicker ice", "" - "``albicei``", ":math:`0<\alpha <1`", "near infrared ice albedo for thicker ice", "" - "``albsnowv``", ":math:`0<\alpha <1`", "visible, cold snow albedo", "" - "``albsnowi``", ":math:`0<\alpha <1`", "near infrared, cold snow albedo", "" - "``ahmax``", "real", "albedo is constant above this thickness", "0.3 m" - "``R_ice``", "real", "tuning parameter for sea ice albedo from Delta-Eddington shortwave", "" - "``R_pnd``", "real", "... for ponded sea ice albedo …", "" - "``R_snw``", "real", "... for snow (broadband albedo) …", "" - "``dT_mlt``", "real", ":math:`\Delta` temperature per :math:`\Delta` snow grain radius", "" - "``rsnw_mlt``", "real", "maximum melting snow grain radius", "" - "``kalg``", "real", "absorption coefficient for algae", "" - "", "", "", "" - "*ponds_nml*", "", "", "" - "", "", "*Melt Ponds*", "" - "``hp1``", "real", "critical ice lid thickness for topo ponds", "0.01 m" - "``hs0``", "real", "snow depth of transition to bare sea ice", "0.03 m" - "``hs1``", "real", "snow depth of transition to pond ice", "0.03 m" - "``dpscale``", "real", "time scale for flushing in permeable ice", ":math:`1\times 10^{-3}`" - "``frzpnd``", "``hlid``", "Stefan refreezing with pond ice thickness", "‘hlid’" - "", "``cesm``", "CESM refreezing empirical formula", "" - "``rfracmin``", ":math:`0 \le r_{min} \le 1`", "minimum melt water added to ponds", "0.15" - "``rfracmax``", ":math:`0 \le r_{max} \le 1`", "maximum melt water added to ponds", "1.0" - "``pndaspect``", "real", "aspect ratio of pond changes (depth:area)", "0.8" - "", "", "", "" - "*zbgc_nml*", "", "", "" - "", "", 
"*Biogeochemistry*", "" - "``tr_brine``", "true/false", "brine height tracer", "" - "``tr_zaero``", "true/false", "vertical aerosol tracers", "" - "``modal_aero``", "true/false", "modal aersols", "" - "``restore_bgc``", "true/false", "restore bgc to data", "" - "``solve_zsal`", "true/false", "update salinity tracer profile", "" - "``bgc_data_dir``", "path/", "data directory for bgc", "" - "``skl_bgc``", "true/false", "biogeochemistry", "" - "``sil_data_type``", "``default``", "default forcing value for silicate", "" - "", "``clim``", "silicate forcing from ocean climatology :cite:`GLBA06`", "" - "``nit_data_type``", "``default``", "default forcing value for nitrate", "" - "", "``clim``", "nitrate forcing from ocean climatology :cite:`GLBA06`", "" - "", "``sss``", "nitrate forcing equals salinity", "" - "``fe_data_type``", "``default``", "default forcing value for iron", "" - "", "``clim``", "iron forcing from ocean climatology", "" - "``bgc_flux_type``", "``Jin2006``", "ice–ocean flux velocity of :cite:`JDWSTWLG06`", "" - "", "``constant``", "constant ice–ocean flux velocity", "" - "``restart_bgc``", "true/false", "restart tracer values from file", "" - "``tr_bgc_C_sk``", "true/false", "algal carbon tracer", "" - "``tr_bgc_chl_sk``", "true/false", "algal chlorophyll tracer", "" - "``tr_bgc_Am_sk``", "true/false", "ammonium tracer", "" - "``tr_bgc_Sil_sk``", "true/false", "silicate tracer", "" - "``tr_bgc_DMSPp_sk``", "true/false", "particulate DMSP tracer", "" - "``tr_bgc_DMSPd_sk``", "true/false", "dissolved DMSP tracer", "" - "``tr_bgc_DMS_sk``", "true/false", "DMS tracer", "" - "``phi_snow``", "real", "snow porosity for brine height tracer", "" - "", "", "", "" - "*forcing_nml*", "", "", "" - "", "", "*Forcing*", "" - "``formdrag``", "true/false", "calculate form drag", "" - "``atmbndy``", "``default``", "stability-based boundary layer", "‘default’" - "", "``constant``", "bulk transfer coefficients", "" - "``fyear_init``", "yyyy", "first year of atmospheric 
forcing data", "" - "``ycycle``", "integer", "number of years in forcing data cycle", "" - "``atm_data_format``", "``nc``", "read  atmo forcing files", "" - "", "``bin``", "read direct access, binary files", "" - "``atm_data_type``", "``default``", "constant values defined in the code", "" - "", "``LYq``", "AOMIP/Large-Yeager forcing data", "" - "", "``monthly``", "monthly forcing data", "" - "", "``ncar``", "NCAR bulk forcing data", "" - "", "``oned``", "column forcing data", "" - "``atm_data_dir``", "path/", "path to atmospheric forcing data directory", "" - "``calc_strair``", "true", "calculate wind stress and speed", "" - "", "false", "read wind stress and speed from files", "" - "``highfreq``", "true/false", "high-frequency atmo coupling", "" - "``natmiter``", "integer", "number of atmo boundary layer iterations", "" - "``calc_Tsfc``", "true/false", "calculate surface temperature", "``.true.``" - "``precip_units``", "``mks``", "liquid precipitation data units", "" - "", "``mm_per_month``", "", "" - "", "``mm_per_sec``", "(same as MKS units)", "" - "``tfrz_option``", "``minus1p8``", "constant ocean freezing temperature (:math:`-1.8^{\circ} C`)", "" - "", "``linear_salt``", "linear function of salinity (ktherm=1)", "" - "", "``mushy_layer``", "matches mushy-layer thermo (ktherm=2)", "" - "``ustar_min``", "real", "minimum value of ocean friction velocity", "0.0005 m/s" - "``fbot_xfer_type``", "``constant``", "constant ocean heat transfer coefficient", "" - "", "``Cdn_ocn``", "variable ocean heat transfer coefficient", "" - "``update_ocn_f``", "true", "include frazil water/salt fluxes in ocn fluxes", "" - "", "false", "do not include (when coupling with POP)", "" - "``l_mpond_fresh``", "true", "retain (topo) pond water until ponds drain", "" - "", "false", "release (topo) pond water immediately to ocean", "" - "``oceanmixed_ice``", "true/false", "active ocean mixed layer calculation", "``.true.`` (if uncoupled)" - "``ocn_data_format``", "``nc``", "read  ocean 
forcing files", "" - "", "``bin``", "read direct access, binary files", "" - "``sss_data_type``", "``default``", "constant values defined in the code", "" - "", "``clim``", "climatological data", "" - "", "``near``", "POP ocean forcing data", "" - "``sst_data_type``", "``default``", "constant values defined in the code", "" - "", "``clim``", "climatological data", "" - "", "``ncar``", "POP ocean forcing data", "" - "``ocn_data_dir``", "path/", "path to oceanic forcing data directory", "" - "``oceanmixed_file``", "filename", "data file containing ocean forcing data", "" - "``restore_sst``", "true/false", "restore sst to data", "" - "``trestore``", "integer", "sst restoring time scale (days)", "" - "``restore_ice``", "true/false", "restore ice state along lateral boundaries", "" - "", "", "", "" - "*icefields_tracer_nml*", "", "", "" - "", "", "*History Fields*", "" - "``f_``", "string", "frequency units for writing ```` to history", "" - "", "``y``", "write history every ``histfreq_n`` years", "" - "", "``m``", "write history every ``histfreq_n`` months", "" - "", "``d``", "write history every ``histfreq_n`` days", "" - "", "``h``", "write history every ``histfreq_n`` hours", "" - "", "``1``", "write history every time step", "" - "", "``x``", "do not write ```` to history", "" - "", "``md``", "*e.g.,* write both monthly and daily files", "" - "``f__ai``", "", "grid cell average of ```` (:math:`\times a_i`)", "" - diff --git a/doc/source/cice_4_index.rst b/doc/source/cice_index.rst similarity index 99% rename from doc/source/cice_4_index.rst rename to doc/source/cice_index.rst index fc1abd123..1a05b4fd4 100644 --- a/doc/source/cice_4_index.rst +++ b/doc/source/cice_index.rst @@ -5,17 +5,13 @@ Index of primary variables and parameters ========================================== -This index defines many of the symbols used frequently in the ice model +This index defines many of the symbols used frequently in the CICE model code. 
Values appearing in this list are fixed or recommended; most namelist parameters are indicated ( :math:`E_\circ`) with their default -values. For other namelist options, see Section :ref:`tab-namelist`. All +values. For other namelist options, see Section :ref:`tabnamelist`. All quantities in the code are expressed in MKS units (temperatures may take either Celsius or Kelvin units). -================================ -Comprehensive Alphabetical Index -================================ - .. csv-table:: Alphabetical Index :header: " ", " ", " " :widths: 15, 30, 15, 1 diff --git a/doc/source/conf.py b/doc/source/conf.py index 674b348f7..bd4dc5c90 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -62,9 +62,9 @@ # built documents. # # The short X.Y version. -version = u'6.0.0' +version = u'6.0.dev' # The full version, including alpha/beta/rc tags. -release = u'6.0.0.alpha' +release = u'6.0.dev' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/doc/source/developer_guide/dg_about.rst b/doc/source/developer_guide/dg_about.rst new file mode 100755 index 000000000..62f1ec484 --- /dev/null +++ b/doc/source/developer_guide/dg_about.rst @@ -0,0 +1,18 @@ +:tocdepth: 3 + +.. _dev_about: + +About Development +================== + +The CICE model consists of four parts: the CICE dynamics and supporting infrastructure, +the CICE driver code, the Icepack column physics code, and the scripts. Development of each of these +pieces is described separately. + +Guiding principles for the creation of CICE include the following: + - CICE can be run in stand-alone or coupled modes. A top layer driver, coupling layer, + or model cap can be used to drive the CICE model. + - The Icepack column physics modules are independent, consist of methods that operate + on individual gridcells, and contain no underlying infrastructure.
CICE must call + into Icepack using interfaces and approaches specified by Icepack. + diff --git a/doc/source/developer_guide/dg_documentation.rst b/doc/source/developer_guide/dg_documentation.rst new file mode 100755 index 000000000..e5c46a2c4 --- /dev/null +++ b/doc/source/developer_guide/dg_documentation.rst @@ -0,0 +1,259 @@ +:tocdepth: 3 + +.. _doc: + +Documentation System +==================== + +With CICE development, corresponding updates or modifications to the CICE +documentation are required. Whenever you modify the model you should update the +documentation. CICE uses `readthedocs.org `_ to create +online HTML and PDF documentation. + +FAQs +---- + +1) What is reStructuredText (RST)? + + The CICE and Icepack documentation is written using the reStructuredText (RST) markup language. + ReStructuredText is a markup language, like HTML, markdown, or LaTeX. + `readthedocs.org `_ is a tool for publishing RST documents in other formats + such as HTML and PDF. Additional information about using RST and `readthedocs.org `_ + is found in the sections below. + +2) What is expected of *me* when changing the documentation? + + Updated static PDF documentation will be generated for each new CICE code release. However, + the online "master" version of HTML or PDF documentation is considered a living document and + will be updated regularly as part of the regular code development workflow. + + We expect that if you need to add or modify documentation, you will be able to modify the + RST source files and generate HTML in order to review the HTML documentation. We + will review the RST and HTML during a Pull Request to verify it is working properly and is consistent + with the rest of the CICE-Consortium documentation format. Then we will trigger a new documentation build + on `CICE's documentation page `_ when + the Pull Request is successful. The new documentation build will create the HTML and PDF versions of + CICE's documentation along with your updates.
+ + In particular, it is important that you test out tables, equations, section references, figures, and/or citations + in your contributed documentation as these can be particularly fiddly to get right. + + +3) Where are the documentation files kept? + + The RST source files for generating HTML and PDF are stored in the master branch of the repository under /doc/source/. + + The HTML and PDF versions of the documentation are available at `CICE's + documentation page `_. + HTML documentation for the current "master" branch as well as static documentation for releases of CICE + will be available on the `Versions page `_ + while corresponding PDF documentation is available on the `Downloads page + `_. The CICE-Consortium team will trigger + builds of both HTML and PDF documentation with each pull request. + +.. _moddocs: + +Steps for Modifying Documentation +--------------------------------- + +Setting up readthedocs.org +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The CICE-Consortium recommends that developers use `readthedocs.org `_ to generate and test +their contributions to the CICE documentation. This tool does not require external libraries to be built +on each developer's personal computer and is free and easy to use. You can follow the steps below and also +reference the `Getting Started `_ guide available from `readthedocs.org `_. + +1. Sign up for a free account at `readthedocs.org `_ + + Select a username and password. These do not have to match your GitHub username and password, but having + the same username can be simpler if the user chooses to do this. Below, + USERNAME is a placeholder - you would need to replace this with your personal username. + +2. Connect your GitHub account + + Click on your username in the upper right hand corner and select 'Settings' and then select 'Connected + Services' to connect your GitHub account.
This process will ask you to authorize a connection to + readthedocs.org that allows for reading of information about and cloning of your repositories. + +3. Import your projects + + Click on your username in the upper right hand corner and select 'My Projects'. Then click the 'Import + a Project' green button. This will generate a list of repositories you are able to import. To add a + repository click the + button. Once added and imported to readthedocs, the icon will change to an + upward pointing arrow. + +4. Modify the project settings + + Click on the project you are editing then click on the 'Admin' tab on the far right. The CICE-Consortium + has found the following settings to be important for proper building. + + Under 'Settings' modify and save the following: + - Name: USERNAME CICE (this is the local name of the repository on readthedocs.org) + - Repository URL: https://github.com/USERNAME/CICE.git + - Repository type: Git + - Description: (Add anything that would be useful to you to describe your fork.) + - Documentation type: Sphinx html + - Language: English + - Programming Language: Only Words + + Under 'Advanced Settings' modify the following: + - Install Project: Unchecked box + - Requirements file: doc/requirements.txt (*VERY IMPORTANT, see below*) + - Default branch: readthedocs (whatever branch you are working on for development. If not set, this will default to master.) + - Default version: latest (what your documentation build will be called) + - Enable PDF build: Checked box + - Enable EPUB build: Checked box + - Privacy Level: Public (this is useful to keep public if you want to point to the tested documentation as part of a Pull Request) + - Python Interpreter: Python 2.x + + +Model sandbox and documentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Follow the general `CICE-Consortium Git Workflow and Developer's guide `_ +to clone the repository and create your personal fork for model modifications. 
Whenever you modify the model +you should update documentation. You can update the documentation on the same branch of your fork on which +you test code, or you can create a separate branch called 'readthedocs' to test only the RST and HTML documentation. + +There are some important files you will need in order to correctly build the documentation. These should all be included automatically when you fork from the CICE-Consortium repositories: + + - /doc/requirements.txt : This file is necessary to get the references and citations working properly by using sphinxcontrib-bibtex. This file should *not* need to be modified by developers generally. + - /doc/source/conf.py : Basic documentation information for the Consortium including template, etc. This file should *not* need to be modified by developers generally. + - /doc/source/zreferences.rst : required for the references to link properly. This file should *not* need to be modified by developers generally. + - /doc/source/master_list.bib : the master list of references cited in the documentation. This file *may need* to be modified by developers with documentation updates. This file is currently ordered sequentially from oldest to newest and alphabetically within a given year. To add references for your documentation, edit the master_list.bib file using the Articles and/or Books entries as examples for your addition(s). Please follow the format for ordering the date/alphabetization as well as including a URL with the document's DOI. + + +Editing RST files +~~~~~~~~~~~~~~~~~~ + +Open the RST file using a text editor and make the changes necessary. Note that from the User's Guide documentation (see link above) there is a hyperlink called "Show Source" on the left hand column that will show you the RST source code for the HTML you are viewing. This is a good way to see the syntax for tables, equations, linking references, labeling tables or figures, and correctly identifying documentation sections or subsections. 
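+For example, a small RST fragment combining a labeled section, a comma-separated table, and a +cross-reference (all names below are purely illustrative, not taken from the CICE docs) looks like:: + +   .. _example-label: + +   Example Section +   --------------- + +   .. csv-table:: Example Table +      :header: "Variable", "Value" +      :widths: 15, 15 + +      "``dt``", "3600" + +   See Section :ref:`example-label` for details.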
+ +Here are some resources for using RST files: + +* `RST Primer1 `_ + +* `RST Primer2 `_ + +* `RST Syntax `_ + +* `RST tables `_ - Note that tables can be tricky in Sphinx and we prefer using `comma separated tables `_ whenever possible. + +Building documentation +~~~~~~~~~~~~~~~~~~~~~~ + +Once you've committed and pushed changes to the documentation *.rst files on your personal development fork, +go to your readthedocs.org site and then select your project "Overview". Whenever you commit to your fork +the documents will automatically build. There is also an option to "Build a Version". Choose "latest" +and then click the green "Build version" button. + +You will automatically be taken to the "Builds" page with a list of recent documentation builds. +The documentation build you just started will be listed as "Triggered" and then "Building". +If the build is successful, the status will change to "Passed"; if the build is not successful +then the status will change to "Failed". You can click on the "Passed" or "Failed" text to get +information about the build and what might be problematic. The time of the build attempt is also +listed, with the most recent build appearing at the top of the list. + +To see the HTML you just successfully built, go to "Overview" and click on "latest" under versions. To see the PDF you just successfully built, go to "Downloads" and click on "latest PDF". + + +Push changes back to the repository +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When you're happy with the documentation you've generated, follow the standard CICE-Consortium +`Git Workflow and Developer's guide `_ +to do a Pull Request and make sure to note in the Pull Request Template that documentation has also +been updated. We will test the HTML and PDF as part of the Pull Request before it is merged to the repository.
+It can be particularly helpful if you include the link to your successfully built documentation that is +part of the Pull Request, and in order to do this you must ensure that your settings in readthedocs.org +are set to "Public". + + +Other Tips and Tricks +--------------------- + +Converting LaTeX to RST +~~~~~~~~~~~~~~~~~~~~~~~ + +If you start from a LaTeX (``*.tex``) document, you will need to convert this to the RST format that Sphinx +requires. A handy tool to do this is `Pandoc `_, which you +can install quickly and run from the command line. + +Once Pandoc is installed, the basic command line syntax to convert a file is :: + + $ pandoc NAMEIN.tex -f latex -t rst -s -o NAMEOUT.rst + +The NAMEOUT.rst file can be directly edited for Sphinx. Pandoc does a beautiful job of converting the text, +equations, and many tables. However, equation numbering, section linking, references, figures, and some +tables require more hands-on care to be sure they render correctly. + +Pandoc requires that the ``*.tex`` files be in utf-8 encoding. To do this easily, open the ``*.tex`` +document in Emacs, then do ``ctrl-x ctrl-m f`` and you will be prompted to enter the encoding type. Just +type in ``utf-8`` and hit enter. Then save with ``ctrl-x ctrl-s`` . You are done and the document can be +converted with Pandoc. + +Using Sphinx +~~~~~~~~~~~~ + +We recommend that you use `readthedocs.org `_ to test documentation +(see :ref:`moddocs`). However, it is also possible to use Sphinx to build and test documentation. +If you choose to follow this workflow, below are some tips for using Sphinx. + +Installing Sphinx +````````````````` + +Sphinx must be installed once on each platform. See `Sphinx `_ or +`Installing Sphinx `_ for details. Below are the +commands for installing Sphinx on a mac laptop at the command line. +Other platforms may require other steps.
:: + + $ sudo pip install --ignore-installed sphinx + $ sudo pip install --ignore-installed sphinxcontrib-bibtex + +The CICE Consortium has used the following software to get successful Sphinx HTML builds, including linked +references: + +* python 2.7.11 + +* Sphinx (1.6.3) + +* sphinx-rtd-theme (0.1.9) + +* sphinxcontrib-bibtex (0.3.5) + +* sphinxcontrib-websupport (1.0.1) + +As mentioned above, you will need the conf.py, zreferences.rst, and master_list.bib files that are part of the +master branch and automatically included in your checkout. To use linked references you will need to have the sphinxcontrib-bibtex package as well. + +Building HTML +````````````` + +Move into the /doc/ directory of your sandbox. Then execute the following command:: + + $ make clean + +to get rid of old HTML files. Then execute:: + + $ make html + +to build HTML into /build/html/ directory. It will also give you errors if there is a problem with the build that will help you figure out how you need to modify your RST files for a successful HTML build. Finally :: + + $ open /build/html/FILE.html + +Open the HTML on your browser for testing. + + +Converting RST to PDF +````````````````````` + +Generating a PDF is more complex and currently requires a two-step process. The generation will require +recent versions of both LaTeX and Sphinx. From the /doc/ directory do the following:: + + $ make latex + $ cd build/latex + $ make + +Then search for the ``*.pdf`` document created. + + diff --git a/doc/source/developer_guide/dg_driver.rst b/doc/source/developer_guide/dg_driver.rst new file mode 100755 index 000000000..076c77252 --- /dev/null +++ b/doc/source/developer_guide/dg_driver.rst @@ -0,0 +1,90 @@ +:tocdepth: 3 + +.. _dev_driver: + + +Driver and Coupling Implementation +==================================== + +The driver and coupling layer is found in **cicecore/drivers/**. 
The standalone driver is found +under **cicecore/drivers/cice/** and other high level coupling layers are found in other directories. +In general, CICE will build with only one of these drivers, depending how the model is run and +coupled. Within the **cicecore/drivers/cice/** directory, the following files are found, + +**CICE.F90** is the top level program file and that calls CICE_Initialize, CICE_Run, and CICE_Finalize methods. +**CICE_InitMod.F90** contains the CICE_Initialize method and other next level source code. +**CICE_RunMod.F90** contains the CICE_Run method and other next level source code. +**CICE_FinalMod.F90 ** contains the CICE_Finalize method and other next level source code. + +Other **cicecore/drivers/** directories are similarly implemented with a top level coupling layer, +that is largely specified by an external coupled system and then some version of the **CICE_InitMod.F90**, +**CICE_RunMod.F90**, and **CICE_FinalMod.F90** files. + + +Calling Sequence +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The initialize calling sequence looks something like:: + + call init_communicate ! initial setup for message passing + call init_fileunits ! unit numbers + call icepack_configure() ! initialize icepack + call input_data ! namelist variables + call init_zbgc ! vertical biogeochemistry namelist + call init_domain_blocks ! set up block decomposition + call init_grid1 ! domain distribution + call init_ice_timers ! initialize all timers + call init_grid2 ! grid variables + call init_calendar ! initialize some calendar stuff + call init_hist (dt) ! initialize output history file + if (kdyn == 2) then + call init_eap (dt_dyn) ! define eap dynamics parameters, variables + else ! for both kdyn = 0 or 1 + call init_evp (dt_dyn) ! define evp dynamics parameters, variables + endif + call init_coupler_flux ! initialize fluxes exchanged with coupler + call init_thermo_vertical ! initialize vertical thermodynamics + call icepack_init_itd(ncat, hin_max) ! 
ice thickness distribution + call calendar(time) ! determine the initial date + call init_forcing_ocn(dt) ! initialize sss and sst from data + call init_state ! initialize the ice state + call init_transport ! initialize horizontal transport + call ice_HaloRestore_init ! restored boundary conditions + call init_restart ! initialize restart variables + call init_diags ! initialize diagnostic output points + call init_history_therm ! initialize thermo history variables + call init_history_dyn ! initialize dynamic history variables + call init_shortwave ! initialize radiative transfer + call init_forcing_atmo ! initialize atmospheric forcing (standalone) + +See a **CICE_InitMod.F90** file for the latest. + +The run sequence within a time loop looks something like:: + + call init_mass_diags ! diagnostics per timestep + call init_history_therm + call init_history_bgc + + do iblk = 1, nblocks + if (calc_Tsfc) call prep_radiation (dt, iblk) + call step_therm1 (dt, iblk) ! vertical thermodynamics + call biogeochemistry (dt, iblk) ! biogeochemistry + call step_therm2 (dt, iblk) ! ice thickness distribution thermo + enddo ! iblk + + call update_state (dt, daidtt, dvidtt, dagedtt, offset) + + do k = 1, ndtd + call step_dyn_horiz (dt_dyn) + do iblk = 1, nblocks + call step_dyn_ridge (dt_dyn, ndtd, iblk) + enddo + call update_state (dt_dyn, daidtd, dvidtd, dagedtd, offset) + enddo + + do iblk = 1, nblocks + call step_radiation (dt, iblk) + call coupling_prep (iblk) + enddo ! iblk + +See a **CICE_RunMod.F90** file for the latest. \ No newline at end of file diff --git a/doc/source/developer_guide/dg_dynamics.rst b/doc/source/developer_guide/dg_dynamics.rst new file mode 100755 index 000000000..d86b9145f --- /dev/null +++ b/doc/source/developer_guide/dg_dynamics.rst @@ -0,0 +1,130 @@ +:tocdepth: 3 + +.. 
_dev_dynamics:
+
+
+Dynamics and Infrastructure Implementation
+================================================
+
+The CICE **cicecore/** directory consists of the non-Icepack source code. Within that
+directory there are the following subdirectories:
+
+**cicecore/cicedynB/analysis** contains higher level history and diagnostic routines.
+
+**cicecore/cicedynB/dynamics** contains all the dynamical evp, eap, and transport routines.
+
+**cicecore/cicedynB/general** contains routines associated with forcing, flux calculation,
+initialization, and model timestepping.
+
+**cicecore/cicedynB/infrastructure** contains most of the low-level infrastructure associated
+with communication (halo updates, gather, scatter, global sums, etc.) and I/O reading and writing
+binary and netcdf files.
+
+**cicecore/drivers/** contains subdirectories that support stand-alone drivers and other high level
+coupling layers.
+
+**cicecore/shared/** contains some basic methods related to grid decomposition, time managers, constants,
+kinds, and restart capabilities.
+
+
+Dynamics
+~~~~~~~~~~~~~~
+
+Dynamical Solvers
+************************
+
+The dynamics solvers are found in **cicecore/cicedynB/dynamics/**. Several different solvers are
+available, including EVP, revised EVP, and EAP. The dynamics solver is specified in namelist with the
+``kdyn`` variable. ``kdyn=1`` is EVP, ``kdyn=2`` is EAP, and revised EVP requires the ``revised_evp``
+namelist flag be set to true.
+
+
+Transport
+**************
+
+The transport (advection) methods are found in **cicecore/cicedynB/dynamics/**. Two methods are supported:
+upwind and remap. These are set in namelist via the ``advection`` variable.
+
+
+Infrastructure
+~~~~~~~~~~~~~~~~~~~~
+
+Kinds
+*********
+
+**cicecore/shared/ice_kinds_mod.F90** defines the kind datatypes used in CICE. These kinds are
+used throughout CICE code to define variable types. The CICE kinds are adopted from the kinds
+defined in Icepack for consistency in interfaces.
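The design intent of a central kinds module, one place that defines the precisions the rest of the code imports, can be illustrated outside of Fortran. Below is a minimal Python sketch of that pattern; it is an analogy, not CICE code, and the names only mirror the Fortran kind names:

```python
import struct

# Central "kinds" module analogue: one place defines the precisions the
# rest of the code uses (mirroring cicecore/shared/ice_kinds_mod.F90).
# The struct format codes stand in for Fortran kind parameters.
int_kind = "i"    # 4-byte integer
real_kind = "f"   # 4-byte real
dbl_kind = "d"    # 8-byte real, the default working precision

def kind_bytes(kind):
    """Return the storage size, in bytes, of a given kind."""
    return struct.calcsize(kind)

print(kind_bytes(dbl_kind))  # 8
```

Because every other module refers to the shared aliases rather than hardcoding a precision, changing the model's working precision is a one-line edit in the kinds module.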
+
+Constants
+*************
+
+**cicecore/shared/ice_constants.F90** defines several model constants. Some are hardwired parameters
+while others have internal defaults and can be set through namelist.
+
+Static Array Allocation
+**************************
+
+CICE is implemented using mainly static arrays. CICE arrays tend to be defined based on the
+block size and tracer count. Those block sizes and tracer counts are defined in the case
+**cice.settings** file and are passed to the build script as CPP flags, like
+::
+
+  ftn -c ... -DNXGLOB=100 -DNYGLOB=116 -DBLCKX=25 -DBLCKY=29 -DMXBLCKS=4 -DNICELYR=7 -DNSNWLYR=1 -DNICECAT=5 -DTRAGE=1 -DTRFY=1 -DTRLVL=1 -DTRPND=1 -DTRBRI=0 -DNTRAERO=1 -DTRZS=0 -DNBGCLYR=7 -DTRALG=0 -DTRBGCZ=0 -DTRDOC=0 -DTRDIC=0 -DTRDON=0 -DTRFED=0 -DTRFEP=0 -DTRZAERO=0 -DTRBGCS=0 -DNUMIN=11 -DNUMAX=99 ...
+
+and then those CPP flags replace variable strings in the file **cicecore/shared/ice_domain_size.F90**.
+Data in **ice_domain_size.F90** provides parameters for static array allocations throughout CICE.
+
+Time Manager
+****************
+
+Time manager data is module data in **cicecore/shared/ice_calendar.F90**. Much of the time manager
+data is public and operated on during the model timestepping. The model timestepping actually takes
+place in the **CICE_RunMod.F90** file, which is part of the driver code, and tends to look like this::
+
+ call ice_step
+ istep = istep + 1 ! update time step counters
+ istep1 = istep1 + 1
+ time = time + dt ! determine the time and date
+
+
+
+Communication
+********************
+
+Two low-level communication packages, mpi and serial, are provided as part of CICE. This software
+provides a middle layer between the model and the underlying libraries. Only the CICE mpi or
+serial directories are compiled with CICE, not both.
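The middle-layer idea can be sketched schematically: the model calls one generic halo-update interface, and either an MPI-backed or a serial implementation satisfies it. The sketch below is illustrative Python, not the model's Fortran; it shows a serial halo update that fills a one-cell halo of a block with periodic (wraparound) neighbor values:

```python
# Schematic serial halo update (not CICE code): fill the one-cell halo
# of a 2-D block, periodic in both directions. An MPI implementation
# would present the same interface but exchange edges between tasks.

def halo_update_serial(block):
    """Fill the one-cell halo of a 2-D list-of-lists, periodic in x and y."""
    ny, nx = len(block), len(block[0])
    for j in range(ny):
        block[j][0] = block[j][nx - 2]   # west halo <- east interior edge
        block[j][nx - 1] = block[j][1]   # east halo <- west interior edge
    block[0] = list(block[ny - 2])       # south halo <- north interior row
    block[ny - 1] = list(block[1])       # north halo <- south interior row
    return block

# 4x4 block: interior is rows/cols 1..2, halo is the outer ring
blk = [[0] * 4 for _ in range(4)]
blk[1][1], blk[1][2], blk[2][1], blk[2][2] = 1, 2, 3, 4
halo_update_serial(blk)
print(blk[1][0], blk[1][3])  # 2 1  (periodic wrap in x)
```

Isolating this exchange behind one interface is what lets the rest of the model stay identical whether the mpi or serial directory is compiled in.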
+
+**cicedynB/infrastructure/comm/mpi/**
+is based on MPI and provides various methods to do halo updates, global sums, gather/scatter, broadcasts,
+and similar operations using some fairly generic interfaces to isolate the MPI calls in the code.
+
+**cicedynB/infrastructure/comm/serial/** supports the same interfaces but operates
+in shared memory mode with no MPI. The serial library is used by default in the CICE scripts
+if the number of MPI tasks is set to 1. The serial library allows the model to be run on a single
+core or with OpenMP parallelism only, without requiring an MPI library.
+
+I/O
+***********
+
+There are three low-level IO packages in CICE: io_netcdf, io_binary, and io_pio. This software
+provides a middle layer between the model and the underlying IO libraries.
+Only one of the three IO directories can be built with CICE. The CICE scripts will build with io_netcdf
+by default, but other options can be selected by setting ``ICE_IOTYPE`` in **cice.settings** in the
+case. This has to be set before CICE is built.
+
+**cicedynB/infrastructure/io/io_netcdf/** is the
+default for the standalone CICE model, and it supports writing history and restart files in netcdf
+format using standard netcdf calls. It does this by reading and writing on the root task and
+gathering and scattering fields to and from the root task to support model parallelism.
+
+**cicedynB/infrastructure/io/io_binary/** supports files in binary format using a gather/scatter
+approach, reading and writing on the root task.
+
+**cicedynB/infrastructure/io/io_pio/** supports reading and writing through the pio interface. pio
+is a parallel io library (https://github.com/NCAR/ParallelIO) that supports reading and writing of
+binary and netcdf files through various interfaces including netcdf and pnetcdf.
pio is generally
+more parallel in memory even when using serial netcdf than the standard gather/scatter methods,
+and it provides parallel read/write capabilities by optionally linking and using pnetcdf.
+
diff --git a/doc/source/developer_guide/dg_icepack.rst b/doc/source/developer_guide/dg_icepack.rst
new file mode 100755
index 000000000..7d71ed1ca
--- /dev/null
+++ b/doc/source/developer_guide/dg_icepack.rst
@@ -0,0 +1,18 @@
+:tocdepth: 3
+
+.. _dev_icepack:
+
+Icepack
+==================
+
+The CICE model calls the Icepack columnphysics source code. The Icepack model is documented
+separately; see https://github.com/cice-consortium/icepack.
+
+More specifically, the CICE model uses methods defined in **icepack_intfc.F90**. It uses
+the init, query, and write methods to set, get, and document icepack values. It also follows
+the icepack_warnings methodology, where icepack_warnings_aborted is checked and
+icepack_warnings_print is called after every call to an icepack method. It does not directly
+"use" icepack data and accesses icepack data only through interfaces.
+
+
+
diff --git a/doc/source/developer_guide/dg_other.rst b/doc/source/developer_guide/dg_other.rst
new file mode 100644
index 000000000..17c4fec53
--- /dev/null
+++ b/doc/source/developer_guide/dg_other.rst
@@ -0,0 +1,161 @@
+:tocdepth: 3
+
+.. _adding:
+
+Other things
+=============
+
+
+Reproducible Sums
+----------------------
+
+The ‘reproducible’ option (`DITTO`) makes diagnostics bit-for-bit when
+varying the number of processors. (The simulation results are
+bit-for-bit regardless, because they do not require global sums or
+max/mins as do the diagnostics.) This was done mainly by increasing the
+precision for the global reduction calculations, except for regular
+double-precision (r8) calculations involving MPI; MPI cannot handle
+MPI\_REAL16 on some architectures.
Instead, these cases perform sums or +max/min calculations across the global block structure, so that the +results are bit-for-bit as long as the block distribution is the same +(the number of processors can be different). + +A more flexible option is available for double-precision MPI +calculations, using the namelist variable `bfbflag`. When true, this flag +produces bit-for-bit identical diagnostics with different tasks, +threads, blocks and grid decompositions. + + +.. _addtimer: + +Adding Timers +----------------- + +Timing any section of code, or multiple sections, consists of defining +the timer and then wrapping the code with start and stop commands for +that timer. Printing of the timer output is done simultaneously for all +timers. To add a timer, first declare it (`timer\_[tmr]`) at the top of +**ice\_timers.F90** (we recommend doing this in both the **mpi/** and +**serial/** directories), then add a call to *get\_ice\_timer* in the +subroutine *init\_ice\_timers*. In the module containing the code to be +timed, `call ice\_timer\_start`(`timer\_[tmr]`) at the beginning of the +section to be timed, and a similar call to `ice\_timer\_stop` at the end. +A use `ice\_timers` statement may need to be added to the subroutine being +modified. Be careful not to have one command outside of a loop and the +other command inside. Timers can be run for individual blocks, if +desired, by including the block ID in the timer calls. + +.. _addhist: + +Adding History fields +------------------------- + +To add a variable to be printed in the history output, search for +‘example’ in **ice\_history\_shared.F90**: + +#. add a frequency flag for the new field + +#. add the flag to the namelist (here and also in **ice\_in**) + +#. add an index number + +and in **ice\_history.F90**: + +#. broadcast the flag + +#. add a call to `define\_hist\_field` + +#. 
add a call to `accum\_hist\_field` + +The example is for a standard, two-dimensional (horizontal) field; for +other array sizes, choose another history variable with a similar shape +as an example. Some history variables, especially tracers, are grouped +in other files according to their purpose (bgc, melt ponds, etc.). + +To add an output frequency for an existing variable, see +section :ref:`history`. + +.. _addtrcr: + +Adding Tracers +--------------------- + +A number of optional tracers are available in the code, including ice +age, first-year ice area, melt pond area and volume, brine height, +aerosols, and level ice area and volume (from which ridged ice +quantities are derived). Salinity, enthalpies, age, aerosols, level-ice +volume, brine height and most melt pond quantities are volume-weighted +tracers, while first-year area, pond area, level-ice area and all of the +biogeochemistry tracers in this release are area-weighted tracers. In +the absence of sources and sinks, the total mass of a volume-weighted +tracer such as aerosol (kg) is conserved under transport in horizontal +and thickness space (the mass in a given grid cell will change), whereas +the aerosol concentration (kg/m) is unchanged following the motion, and +in particular, the concentration is unchanged when there is surface or +basal melting. The proper units for a volume-weighted mass tracer in the +tracer array are kg/m. + +In several places in the code, tracer computations must be performed on +the conserved “tracer volume" rather than the tracer itself; for +example, the conserved quantity is :math:`h_{pnd}a_{pnd}a_{lvl}a_{i}`, +not :math:`h_{pnd}`. Conserved quantities are thus computed according to +the tracer dependencies, and code must be included to account for new +dependencies (e.g., :math:`a_{lvl}` and :math:`a_{pnd}` in +**ice\_itd.F90** and **ice\_mechred.F90**). + +To add a tracer, follow these steps using one of the existing tracers as +a pattern. 
+
+- **ice\_domain\_size.F90**: increase `max\_ntrcr` via CPPs in the build.
+
+- **ice\_state.F90**: declare `nt\_[tracer]` and `tr\_[tracer]`
+
+- create initialization, physics, and restart routines. The restart and history
+  routines will be in CICE. The physics will be in Icepack.
+
+- **ice\_fileunits.F90**: add new dump and restart file units
+
+- to control the new tracer
+
+  - add new module and `tr\_[tracer]` to list of used modules and
+    variables
+
+  - add logical namelist variable `tr\_[tracer]`
+
+  - initialize namelist variable
+
+  - broadcast namelist variable
+
+  - print namelist variable to diagnostic output file
+
+  - increment number of tracers in use based on namelist input (`ntrcr`)
+
+  - define tracer types (`trcr\_depend` = 0 for ice area tracers, 1 for
+    ice volume, 2 for snow volume, 2+nt\_[tracer] for dependence on
+    other tracers)
+
+- **ice\_itd.F90**, **ice\_mechred.F90**: Account for new dependencies
+  if needed.
+
+- **CICE\_InitMod.F90**: initialize tracer (includes reading restart
+  file)
+
+- **CICE\_RunMod.F90**, **ice\_step\_mod.F90**:
+
+  - call routine to write tracer restart data
+
+  - call physics routines as needed (often called from
+    **ice\_step\_mod.F90**)
+
+- **ice\_restart.F90**: define restart variables (for binary, netcdf and PIO)
+
+- **ice\_history\_[tracer].F90**: add history variables
+  (Section :ref:`addhist`)
+
+- **ice\_in**: add namelist variables to *tracer\_nml* and
+  *icefields\_nml*
+
+- If strict conservation is necessary, add diagnostics as noted for
+  topo ponds in Section :ref:`ponds`.
+
+See also the Icepack documentation.
diff --git a/doc/source/developer_guide/dg_scripts.rst b/doc/source/developer_guide/dg_scripts.rst
new file mode 100755
index 000000000..e73ee3d28
--- /dev/null
+++ b/doc/source/developer_guide/dg_scripts.rst
@@ -0,0 +1,146 @@
+:tocdepth: 3
+
+.. _dev_scripts:
+
+Scripts Implementation
+========================
+
+The scripts are the third part of the cice package.
They support setting up
+cases, building, and running the cice stand-alone model.
+
+File List
+--------------
+
+The directory structure under **configuration/scripts** is as follows.
+
+| **configuration/scripts/**
+| **Makefile** primary makefile
+| **cice.batch.csh** creates batch scripts for particular machines
+| **cice.build** compiles the code
+| **cice.decomp.csh** computes a decomposition given a grid and task/thread count
+| **cice.launch.csh** creates script logic that runs the executable
+| **cice.run.setup.csh** sets up the run scripts
+| **cice.settings** defines environment, model configuration and run settings
+| **cice.test.setup.csh** creates configurations for testing the model
+| **ice_in** namelist input data
+| **machines/** machine specific files to set env and Macros
+| **makdep.c** determines module dependencies
+| **options/** other namelist configurations available from the cice.setup command line
+| **parse_namelist.sh** replaces namelist with command-line configuration
+| **parse_namelist_from_settings.sh** replaces namelist with values from cice.settings
+| **parse_settings.sh** replaces settings with command-line configuration
+| **setup_run_dirs.csh** creates the case run directories
+| **set_version_number.csh** updates the model version number from the **cice.setup** command line
+| **tests/** scripts for configuring and running basic tests
+
+.. _dev_strategy:
+
+Strategy
+-----------
+
+The cice scripts are implemented such that everything is resolved after
+**cice.setup** is called. This is done by both copying specific files
+into the case directory and running scripts as part of the **cice.setup**
+command line to set up various files.
+
+**cice.setup** drives the case setup. It is written in csh. All supporting
+scripts are relatively simple csh or sh scripts. See :ref:`scripts` for additional
+details.
+
+The file **cice.settings** specifies a set of env defaults for the case.
The file
+**ice_in** defines the namelist input for the cice driver.
+
+
+.. _dev_options:
+
+Preset Case Options
+---------------------
+
+The ``cice.setup --set`` option allows the user to choose some predetermined cice
+settings and namelist values. Those options are defined in **configuration/scripts/options/**
+and the files are prefixed by either set_env or set_nml. When **cice.setup**
+is executed, the appropriate files are read from **configuration/scripts/options/**
+and the **cice.settings** and/or **ice_in** files are updated in the case directory
+based on the values in those files.
+
+The filename suffix determines the name of the -s option. So, for instance,
+
+  ``cice.setup -s diag1,debug,bgcISPOL``
+
+will search for option files with suffixes of diag1, debug, and bgcISPOL and then
+apply those settings.
+
+**parse_namelist.sh**, **parse_settings.sh**, and **parse_namelist_from_settings.sh**
+are the three scripts that modify **ice_in** and **cice.settings**.
+
+To add new options, just add new files to the **configuration/scripts/options/** directory
+with appropriate names and syntax. The set_nml file syntax is the same as namelist
+syntax and the set_env files are consistent with csh setenv syntax. See other files for
+examples of the syntax.
+
+.. _dev_machines:
+
+Machines
+-----------
+
+Machine specific information is contained in **configuration/scripts/machines**. That
+directory contains a Macros file and an env file for each supported machine.
+One other file will need to be
+changed to support a port: **configuration/scripts/cice.batch.csh**.
+To port to a new machine, see :ref:`porting`.
+
+.. _dev_test_options:
+
+Test Options
+---------------
+
+Values that are associated with the `--sets` option of cice.setup are defined in
+**configuration/scripts/options**. Those files are text files and cice.setup
+uses the values in those files to modify the `cice.settings` and `ice_in` files
+in the case as the case is created.
Files named `set_env.$option` are associated
+with values in the `cice.settings` file. Files named `set_nml.$option` are associated
+with values in `ice_in`. These files contain simple keyword-value pairs, one per line.
+A line starting with # is a comment. File names that start with `test_`
+are used specifically for tests.
+
+That directory also contains files named `set_files.$option`. This provides an
+extra layer on top of the individual setting files that allows settings to be
+defined based on groups of other settings. The `set_files.$option` files
+contain a list of `--sets` options to be applied.
+
+The $option part of the filename is the argument to the `--sets` argument in `cice.setup`.
+Multiple options can be specified by creating a comma delimited list. In the case
+where settings contradict each other, the last defined is used.
+
+.. _dev_testing:
+
+Test scripts
+-------------
+
+Under **configuration/scripts/tests** are several files, including the scripts to
+set up the various tests, such as smoke and restart tests (**test_smoke.script**, **test_restart.script**),
+and the files that describe which options files are needed for each test (e.g. **test_smoke.files**, **test_restart.files**).
+A baseline test script (**baseline.script**) is also there to set up the general regression
+and comparison testing. That directory also contains the preset test suites
+(e.g. **base_suite.ts**) and a file that supports post-processing on the model
+output (**timeseries.csh**). There is also a script **report_results.csh** that pushes results
+from test suites back to the CICE-Consortium test results wiki page.
+
+The directory **configuration/scripts/tests/QC** contains scripts related to the non bit-for-bit
+compliance testing described in :ref:`compliance`.
+
+To add a new test (for example newtest), several files may be needed:
+
+- **configuration/scripts/tests/test_newtest.script** defines how to run the test.
This chunk
+  of script will be incorporated into the case test script
+- **configuration/scripts/tests/test_newtest.files** lists the set of options files found in
+  **configuration/scripts/options/** needed to
+  run this test. Those files will be copied into the test directory when the test is invoked
+  so they are available for the **test_newtest.script** to use.
+- some new files may be needed in **configuration/scripts/options/**. These could be relatively
+  generic **set_nml** or **set_env** files, or they could be test-specific files typically carrying
+  a prefix of **test_nml**.
+
+Generating a new test, particularly the **test_newtest.script**, usually takes some iteration before
+it works properly.
+
diff --git a/doc/source/developer_guide/index.rst b/doc/source/developer_guide/index.rst
new file mode 100755
index 000000000..1b2c2c916
--- /dev/null
+++ b/doc/source/developer_guide/index.rst
@@ -0,0 +1,21 @@
+.. CICE-Consortium documentation master file, created by
+   sphinx-quickstart on Thu Jun 29 13:47:09 2017.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+.. _developer_guide:
+
+Developer Guide
+-----------------
+
+.. toctree::
+   :maxdepth: 3
+
+   dg_about.rst
+   dg_dynamics.rst
+   dg_driver.rst
+   dg_icepack.rst
+   dg_scripts.rst
+   dg_documentation.rst
+   dg_other.rst
+
diff --git a/doc/source/figures/.DS_Store b/doc/source/figures/.DS_Store
deleted file mode 100644
index 173aa9379..000000000
Binary files a/doc/source/figures/.DS_Store and /dev/null differ
diff --git a/doc/source/index.rst b/doc/source/index.rst
index c6f42e413..7f4629bc3 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -11,13 +11,14 @@ Table of Contents:
------------------
..
toctree::
- :maxdepth: 5
+ :maxdepth: 2
 :numbered:
- cice_1_introduction.rst
- cice_2_science_guide.rst
- cice_3_user_guide.rst
- cice_4_index.rst
+ intro/index.rst
+ science_guide/index.rst
+ user_guide/index.rst
+ developer_guide/index.rst
+ cice_index.rst
 zreferences.rst
Useful tools
diff --git a/doc/source/intro/about.rst b/doc/source/intro/about.rst
new file mode 100755
index 000000000..366f4c6bf
--- /dev/null
+++ b/doc/source/intro/about.rst
@@ -0,0 +1,41 @@
+:tocdepth: 3
+
+.. _about:
+
+About CICE
+=============
+
+The Los Alamos sea ice model (CICE) is the result of an effort to
+develop a computationally efficient sea ice component for a fully
+coupled atmosphere--land global climate model. It was
+designed to be compatible with the Parallel Ocean Program
+(POP), an ocean circulation model developed at
+Los Alamos National Laboratory for use on massively parallel computers
+:cite:`SDM92,DSM93,DSM94`. The current version of the
+model has been enhanced greatly through collaborations with members of
+the community.
+
+CICE has several interacting components: a thermodynamic model that
+computes local growth rates of snow and ice due to vertical conductive,
+radiative and turbulent fluxes, along with snowfall; a model of ice
+dynamics, which predicts the velocity field of the ice pack based on
+a model of the material strength of the ice; a transport model that
+describes advection of the areal concentration, ice volumes and other
+state variables; and a ridging parameterization that transfers ice among
+thickness categories based on energetic balances and
+rates of strain. External routines would prepare and execute data exchanges with an
+external "flux coupler," which then passes the data to other climate
+model components such as POP.
+
+Details about this model release and a list of major changes are found
+in :ref:`updates` and the model code
+is available from https://github.com/CICE-Consortium/CICE.
+
+Please cite any use of the CICE code.
More information can be found at :ref:`citing`. + +This document uses the following text conventions: +Variable names used in the code are ``typewritten``. +Subroutine names are given in *italic*. +File and directory names are in **boldface**. +A comprehensive :ref:`index`, including glossary of symbols with many of their values, appears +at the end of this guide. diff --git a/doc/source/intro/acknowledgements.rst b/doc/source/intro/acknowledgements.rst new file mode 100755 index 000000000..f2f38d8ea --- /dev/null +++ b/doc/source/intro/acknowledgements.rst @@ -0,0 +1,36 @@ +:tocdepth: 3 + +.. _acknowledgements: + +Acknowledgements +============================= + +This work has been completed through the CICE Consortium and its members with funding +through the +Department of Energy, +Department of Defense (Navy), +Department of Commerce (NOAA), +National Science Foundation +and Environment and Climate Change Canada. +Special thanks are due to the following people: + +- Elizabeth Hunke, Nicole Jeffery, Adrian Turner and Chris Newman at Los Alamos National Laboratory + +- David Bailey, Alice DuVivier and Marika Holland at the National Center for Atmospheric Research + +- Rick Allard, Matt Turner and David Hebert at the Naval Research Laboratory, Stennis Space Center, + +- Andrew Roberts of the Naval Postgraduate School, + +- Michael Winton and Anders Damsgaard of the Geophysical Fluid Dynamics Laboratory, + +- Jean-Francois Lemieux and Frederic Dupont of Environment and Climate Change Canada, + +- Tony Craig and his supporters at the National Center for Atmospheric Research, the Naval Postgraduate School, and NOAA National Weather Service, + +- Jessie Carman and Robert Grumbine of the National Oceanographic and Atmospheric Administration + +- Cecilia Bitz of the University of Washington, for her column forcing data, + +- and many others who contributed to previous versions of CICE. 
+ diff --git a/doc/source/intro/citing.rst b/doc/source/intro/citing.rst new file mode 100644 index 000000000..cb4c648ff --- /dev/null +++ b/doc/source/intro/citing.rst @@ -0,0 +1,18 @@ +:tocdepth: 3 + +.. _citing: + +Citing the CICE code +==================== + +If you use the CICE code, please cite the version you are using with the CICE +Digital Object Identifier (DOI): + +DOI:10.5281/zenodo.1205674 (https://zenodo.org/record/1205674) + +This DOI can be used to cite all CICE versions and the URL will default to the most recent version. +However, each released version of CICE will also receive its own, unique DOI that can be +used for citations as well. + +Please also make the CICE-Consortium aware of any publications and model use. + diff --git a/doc/source/intro/copyright.rst b/doc/source/intro/copyright.rst new file mode 100755 index 000000000..dce37f8b6 --- /dev/null +++ b/doc/source/intro/copyright.rst @@ -0,0 +1,41 @@ +:tocdepth: 3 + +.. _copyright: + +Copyright +============================= + +© Copyright 2018, Los Alamos National Security LLC. All rights reserved. +This software was produced under U.S. Government contract +DE-AC52-06NA25396 for Los Alamos National Laboratory (LANL), which is +operated by Los Alamos National Security, LLC for the U.S. Department +of Energy. The U.S. Government has rights to use, reproduce, and distribute +this software. NEITHER THE GOVERNMENT NOR LOS ALAMOS NATIONAL SECURITY, LLC +MAKES ANY WARRANTY, EXPRESS OR IMPLIED, OR ASSUMES ANY LIABILITY FOR THE USE +OF THIS SOFTWARE. If software is modified to produce derivative works, such +modified software should be clearly marked, so as not to confuse it with the +version available from LANL. 
+ +Additionally, redistribution and use in source and binary forms, with or +without modification, are permitted provided that the following conditions +are met: + +- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. + +- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. + +- Neither the name of Los Alamos National Security, LLC, Los Alamos National Laboratory, LANL, the U.S. Government, nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY LOS ALAMOS NATIONAL SECURITY, LLC AND +CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT +NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL LOS ALAMOS NATIONAL +SECURITY, LLC OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + diff --git a/doc/source/intro/index.rst b/doc/source/intro/index.rst new file mode 100755 index 000000000..32d943e50 --- /dev/null +++ b/doc/source/intro/index.rst @@ -0,0 +1,20 @@ +.. CICE-Consortium documentation master file, created by + sphinx-quickstart on Thu Jun 29 13:47:09 2017. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +.. 
_introduction: + +Introduction - CICE +----------------------- + +.. toctree:: + :maxdepth: 3 + + about.rst + quickstart.rst + major_updates.rst + acknowledgements.rst + citing.rst + copyright.rst + diff --git a/doc/source/intro/major_updates.rst b/doc/source/intro/major_updates.rst new file mode 100755 index 000000000..1d2d92d2d --- /dev/null +++ b/doc/source/intro/major_updates.rst @@ -0,0 +1,64 @@ +:tocdepth: 3 + +.. _updates: + + +Major CICE updates +============================================ + +This model release is CICE version 6.0.0.alpha. + +Please cite any use of the CICE code. More information can be found at :ref:`citing`. + +~~~~~~~~~~~~~~~~~ +CICE V6.0.0.alpha +~~~~~~~~~~~~~~~~~ +Major changes: + +- A new fast-ice parameterization +- Full vertical biogeochemistry +- Independent column physics package Icepack implemented as a git submodule +- A flexible, extensible, robust interface between the column physics modules and the driver +- A warning package that captures diagnostic and error information from within the column physics, for printing by the driver +- Restructured code and forcing data directories +- An entirely new scripting system +- A comprehensive test suite of various configuration options, with quality control and compliance tests +- Automated testing using Travis CI +- Automated test reporting organized by hash, version, machine and branch, for both the primary Consortium repository and user forks +- Online documentation +- See also updates in Icepack releases and recent changes + +Enhancements: + +- Change use of ULAT to TLAT to determine what latitudes initial ice is present in set_state_var [r970] +- Add 4d fields to history (categories, vertical ice) r1076 +- Update PIO; Universal large file support [r1094] +- Remove calendar_type from namelist options and initialize it based on the namelist flag use_leap_years. 
[r1098] +- Add fbot to history output [r1107] +- Add shortwave diagnostics [r1108] +- Modifications to enable ocean and ice biogeochemical coupling [r1111, r1200] +- Remove the computational overhead of coupling BGC when it is not being used [r1123] +- Change reference to char_len in stop_label [r1143] +- Add grounding scheme and tensile strength #52 +- Add new namelist options for dynamics parameters #52 +- Update Icepack version in CICE (Icepack v1.0.0 #81) +- Modifications to stress diagnostics, including principal stress normalization and internal pressure #99 + +Bug fixes: + +- Properly read and rotate ocean currents from 3D gx1 netcdf data r959 +- Correct diagnostic output 'avg salinity' [r1022] +- Bug fix for padded domains. r1031 +- Use VGRD instead of VGRDi for 3D [r1037] +- change shortwave calculation to depend on the net shortwave sum instead of cosine of the zenith angle (not BFB: in addition to the different shortwave calculation, albedo output in history is different). r1076 +- Correct available history fields. [r1082] +- Fix coupled restart bug; initialize coszen; adjust calendar_type implementation [r1094] +- Port miscellaneous changes from the various column package branches back to the trunk. BFB in the standard configuration, but the initializations and conditional changes for coszen could change the answers in other configurations. Also the flux calculation change in ice_therm_itd.F90 could change the answers in coupled simulations. 1102 +- Ensure fractions of snow, ponds and bare ice add to one r1120 +- Zero out thin-pond fraction for radiation in cesm, topo pond schemes (not BFB), and set albedo=1 where/when there is no incoming shortwave (changes the average-albedo diagnostic), and fix thin (cesm) ponds overlapping snow. [r1126, r1132] +- Fix padding when using the extended-grid functionality, to prevent arrays out of bounds. 
[r1128] +- Change dynamics halo update mask from icetmask to iceumask (fixes occasional exact restart problem and error in halo update) [r1133] +- Add surface flooding and surface runoff terms which increase with open water area in surface condition for update_hbrine, z_salinity, z_biogeochemistry [r1161] +- Set all tracer values to c0 over land after initialization #16 +- Remove OpenMP directives for loops that do not appear to be thread safe #25 +- Remove iblk from timer starts #98 diff --git a/doc/source/intro/quickstart.rst b/doc/source/intro/quickstart.rst new file mode 100755 index 000000000..123be8aed --- /dev/null +++ b/doc/source/intro/quickstart.rst @@ -0,0 +1,121 @@ +:tocdepth: 3 + + +.. _quickstart: + +Quick Start +=========== + +Download the model from the CICE-Consortium repository, + https://github.com/CICE-Consortium/CICE + +Instructions for working in GitHub with CICE (and Icepack) can be +found in the `CICE Git and Workflow Guide `_. + +You will probably have to download some inputdata; see the `CICE wiki `_ or :ref:`force`. + +From your main CICE directory, execute:: + + ./cice.setup -c ~/mycase1 -g gx3 -m testmachine -s diag1,thread -p 8x1 + cd ~/mycase1 + ./cice.build + ./cice.submit + + +``testmachine`` is a generic machine name included with the cice scripts. +The local machine name will have to be substituted for ``testmachine``, and +there are working ports for several different machines. However, it may be necessary +to port the model to a new machine. See :ref:`porting` for +more information about how to port and :ref:`scripts` for more information about +how to use the cice.setup script. + +Please cite any use of the CICE code. More information can be found at :ref:`citing`. + +~~~~~~~~~~~~ +More Details +~~~~~~~~~~~~ + +``cice.setup -h`` will provide the latest information about how to use the tool. +``cice.setup --help`` will provide an extended version of the help.
+There are three usage modes: + +* ``--case`` or ``-c`` creates individual stand-alone cases. +* ``--test`` creates individual tests. Tests are just cases that have some extra automation in order to carry out particular tests such as exact restart. +* ``--suite`` creates a test suite. Test suites are predefined sets of tests and ``--suite`` provides the ability to quickly set up, build, and run a full suite of tests. + +All modes require use of ``--mach`` or ``-m`` to specify the machine, and case and test modes +can use ``--set`` or ``-s`` to define specific options. ``--test`` and ``--suite`` require ``--testid`` to be set, +and both of the test modes can use ``--bdir``, ``--bgen``, ``--bcmp``, and ``--diff`` to generate (save) results and compare them with prior results. +Testing is described in greater detail in the :ref:`testing` section. + +Again, ``cice.setup --help`` will show the latest usage information including +the available ``--set`` options, the currently ported machines, and the test choices. + +To create a case, run **cice.setup**:: + + cice.setup -c mycase -m machine + cd mycase + +Once a case/test is created, several files are placed in the case directory: + + - env.${machine} defines the environment + + - cice.settings defines many variables associated with building and running the model + + - makdep.c is a tool that will automatically generate the make dependencies + + - Macros.${machine} defines the Makefile Macros + + - Makefile is the makefile used to build the model + + - cice.build is a script that builds the model + + - ice_in is the namelist file + + - cice.run is a batch run script + + - cice.submit is a simple script that submits the cice.run script + +Once the case is created, all scripts and namelists are fully resolved. Users can edit any +of the files in the case directory manually to change the model configuration. The file +dependency is indicated in the above list.
For instance, if any of the files before +cice.build in the list are edited, cice.build should be rerun. + +The casescripts directory holds scripts used to create the case and can largely be ignored. + +In general, when cice.build is executed, the model will build from scratch due to the large +dependence on cpps. To change this behavior, edit the env variable ``ICE_CLEANBUILD`` in +cice.settings. + +The cice.submit script just submits the cice.run script. You can use cice.submit or just +submit the cice.run script on the command line. + +The model will run in the directory defined by the env variable ``ICE_RUNDIR`` in cice.settings. +Build and run logs will be copied into the case logs directory when complete. + +To port, env.machine and Macros.machine files have to be added to scripts/machines and the cice.run.setup.csh file needs to be modified. + - cd to consortium/scripts/machines + - Copy an existing env and Macros file to new names for your new machine + - Edit the env and Macros files + - cd to consortium/scripts + - Edit the cice.run.setup.csh script to add a section for your machine for the batch settings and for the job launch settings + - Download and untar the 1997 dataset to the location defined by ``ICE_MACHINE_INPUTDATA`` in the env file + - Create a file in your home directory called .cice_proj and add your preferred account name to the first line. + - You can now create a case and test. If there are problems, you can manually edit the env, Macros, and cice.run files in the case directory until things are working properly. Then you can copy the env and Macros files back to consortium/scripts/machines. You will have to manually modify the cice.run.setup.csh script if there are any changes needed there. + +~~~~~~~~~~~~ +Forcing data +~~~~~~~~~~~~ + +The code is currently configured to run in standalone mode on a 3 degree grid using +atmospheric data from 1997, available as detailed on the `wiki `_.
+These data files are designed only for testing the code, not for use in production +runs or as observational data. Please do not publish results based on these data +sets. Module cicecore/dynamics/cicedynB/ice_forcing.F90 can be modified to change the +forcing data. + +As currently configured, the model runs on 4 processors. MPI is used for message passing +between processors, and OpenMP threading is available. The grid provided here is too +small for the code to scale well beyond about 8 processors. A 1 degree grid is also provided, +and details about this grid can be found on the `wiki `_. + diff --git a/doc/source/master_list.bib b/doc/source/master_list.bib index 5ce71f703..0e2f96cb5 100644 --- a/doc/source/master_list.bib +++ b/doc/source/master_list.bib @@ -754,6 +754,28 @@ @Article{TFSFFKLB14 pages = {1329-1353}, url = {https://doi.org/10.1175/JPO-D-13-0215.1} } + +@Article{KH2010, + author = "C. Konig Beatty and D.M. Holland", + title = "{Modeling landfast ice by adding tensile strength}", + journal = JPO, + year = "2010", + volume = {40}, + pages = {185-198}, + url = {https://doi.org/10.1175/2009JPO4105.1} +} + +@Article{Lemieux2016, + author = "J.F. Lemieux and F. Dupont and P. Blain and F. Roy and G.C. Smith and G.M.
Flato", + title = "{Improving the simulation of landfast ice by combining tensile strength and a parameterization for +grounded ridges,}", + journal = JGRO, + year = "2016", + volume = {121}, + pages = {}, + url = {https://doi.org/10.1002/2016JC012006} +} + @article{Hunke2018, author = {Hunke, Elizabeth and Roberts, Andrew and Allard, Richard and Lemieux, Jean-Fran{\c{c}}ois and Turner, Matthew and Craig, Tony and DuVivier, Alice and Bailey, David and Holland, Marika and Winton, Michael and Dupont, Frederic and Grumbine, Robert}, title = {{The CICE Consortium Sea Ice Modeling Suite}}, diff --git a/doc/source/figures/EAP.png b/doc/source/science_guide/figures/EAP.png similarity index 100% rename from doc/source/figures/EAP.png rename to doc/source/science_guide/figures/EAP.png diff --git a/doc/source/figures/albedo.png b/doc/source/science_guide/figures/albedo.png similarity index 100% rename from doc/source/figures/albedo.png rename to doc/source/science_guide/figures/albedo.png diff --git a/doc/source/figures/deparr.png b/doc/source/science_guide/figures/deparr.png similarity index 100% rename from doc/source/figures/deparr.png rename to doc/source/science_guide/figures/deparr.png diff --git a/doc/source/figures/gplot.png b/doc/source/science_guide/figures/gplot.png similarity index 100% rename from doc/source/figures/gplot.png rename to doc/source/science_guide/figures/gplot.png diff --git a/doc/source/figures/topo.png b/doc/source/science_guide/figures/topo.png similarity index 100% rename from doc/source/figures/topo.png rename to doc/source/science_guide/figures/topo.png diff --git a/doc/source/figures/tracergraphic.png b/doc/source/science_guide/figures/tracergraphic.png similarity index 100% rename from doc/source/figures/tracergraphic.png rename to doc/source/science_guide/figures/tracergraphic.png diff --git a/doc/source/figures/triangles.png b/doc/source/science_guide/figures/triangles.png similarity index 100% rename from doc/source/figures/triangles.png 
rename to doc/source/science_guide/figures/triangles.png diff --git a/doc/source/science_guide/index.rst b/doc/source/science_guide/index.rst new file mode 100755 index 000000000..d576e56de --- /dev/null +++ b/doc/source/science_guide/index.rst @@ -0,0 +1,16 @@ +.. CICE-Consortium documentation master file, created by + sphinx-quickstart on Thu Jun 29 13:47:09 2017. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +.. _science_guide: + +Science Guide +----------------- + +.. toctree:: + :maxdepth: 3 + + sg_coupling.rst + sg_modelcomps.rst + diff --git a/doc/source/science_guide/sg_coupling.rst b/doc/source/science_guide/sg_coupling.rst new file mode 100644 index 000000000..11108ed5c --- /dev/null +++ b/doc/source/science_guide/sg_coupling.rst @@ -0,0 +1,564 @@ +:tocdepth: 3 + +.. _coupl: + +Coupling with other climate model components +============================================ + +The sea ice model exchanges information with the other model components +via a flux coupler. CICE has been coupled into numerous climate models +with a variety of coupling techniques. This document is oriented +primarily toward the CESM Flux Coupler :cite:`KL02` +from NCAR, the first major climate model to incorporate CICE. The flux +coupler was originally intended to gather state variables from the +component models, compute fluxes at the model interfaces, and return +these fluxes to the component models for use in the next integration +period, maintaining conservation of momentum, heat, and fresh water. +However, several of these fluxes are now computed in the ice model +itself and provided to the flux coupler for distribution to the other +components, for two reasons. First, some of the fluxes depend strongly +on the state of the ice, and vice versa, implying that an implicit, +simultaneous determination of the ice state and the surface fluxes is +necessary for consistency and stability. 
Second, given the various ice +types in a single grid cell, it is more efficient for the ice model to +determine the net ice characteristics of the grid cell and provide the +resulting fluxes, rather than passing several values of the state +variables for each cell. These considerations are explained in more +detail below. + +The fluxes and state variables passed between the sea ice model and the +CESM flux coupler are listed in :ref:`tab-flux-cpl`. By convention, +directional fluxes are positive downward. In CESM, the sea ice model may +exchange coupling fluxes using a different grid than the computational +grid. This functionality is activated using the namelist variable +``gridcpl_file``. Another namelist variable, ``highfreq``, enables the +high-frequency coupling procedure implemented in the Regional Arctic +System Model (RASM). In particular, the relative atmosphere-ice velocity +(:math:`\vec{U}_a-\vec{u}`) is used instead of the full atmospheric +velocity for computing turbulent fluxes in the atmospheric boundary +layer. + +:ref:`tab-flux-cpl`: *Data exchanged between the CESM flux coupler and the sea ice model* + +.. _tab-flux-cpl: + +..
table:: Table 1 + + =========================== ====================================== ======================================================================================= + Variable Description Interaction with flux coupler + =========================== ====================================== ======================================================================================= + :math:`z_o` Atmosphere level height From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`\vec{U}_a` Wind velocity From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`Q_a` Specific humidity From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`\rho_a` Air density From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`\Theta_a` Air potential temperature From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`T_a` Air temperature From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`F_{sw\downarrow}` Incoming shortwave radiation From *atmosphere model* via flux coupler **to** *sea ice model* + (4 bands) + + :math:`F_{L\downarrow}` Incoming longwave radiation From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`F_{rain}` Rainfall rate From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`F_{snow}` Snowfall rate From *atmosphere model* via flux coupler **to** *sea ice model* + + :math:`F_{frzmlt}` Freezing/melting potential From *ocean model* via flux coupler **to** *sea ice model* + + :math:`T_w` Sea surface temperature From *ocean model* via flux coupler **to** *sea ice model* + + :math:`S` Sea surface salinity From *ocean model* via flux coupler **to** *sea ice model* + + :math:`\nabla H_o` Sea surface slope From *ocean model* via flux coupler **to** *sea ice model* + + :math:`\vec{U}_w` Surface ocean currents From *ocean model* via flux coupler **to** *sea ice model* + + :math:`\vec{\tau}_a` Wind stress From *sea ice 
model* via flux coupler **to** *atmosphere model* + + :math:`F_s` Sensible heat flux From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`F_l` Latent heat flux From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`F_{L\uparrow}` Outgoing longwave radiation From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`F_{evap}` Evaporated water From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`\alpha` Surface albedo (4 bands) From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`T_{sfc}` Surface temperature From *sea ice model* via flux coupler **to** *atmosphere model* + + :math:`F_{sw\Downarrow}` Penetrating shortwave radiation From *sea ice model* via flux coupler **to** *ocean model* + + :math:`F_{water}` Fresh water flux From *sea ice model* via flux coupler **to** *ocean model* + + :math:`F_{hocn}` Net heat flux to ocean From *sea ice model* via flux coupler **to** *ocean model* + + :math:`F_{salt}` Salt flux From *sea ice model* via flux coupler **to** *ocean model* + + :math:`\vec{\tau}_w` Ice-ocean stress From *sea ice model* via flux coupler **to** *ocean model* + + :math:`F_{bio}` Biogeochemical fluxes From *sea ice model* via flux coupler **to** *ocean model* + + :math:`a_{i}` Ice fraction From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* + + :math:`T^{ref}_{a}` 2m reference temperature (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* + + :math:`Q^{ref}_{a}` 2m reference humidity (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* + + :math:`F_{swabs}` Absorbed shortwave (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* + =========================== ====================================== ======================================================================================= + +The ice fraction 
:math:`a_i` (aice) is the total fractional ice +coverage of a grid cell. That is, in each cell, + +.. math:: + \begin{array}{cl} + a_{i}=0 & \mbox{if there is no ice} \\ + a_{i}=1 & \mbox{if there is no open water} \\ + 0 0 + :label: swflux + +where :math:`\cos Z` is the cosine of the solar zenith angle. + +.. _ocean: + +~~~~~ +Ocean +~~~~~ + +New sea ice forms when the ocean temperature drops below its freezing +temperature. In the Bitz and Lipscomb thermodynamics, +:cite:`BL99` :math:`T_f=-\mu S`, where :math:`S` is the +seawater salinity and :math:`\mu=0.054 \ ^\circ`/ppt is the ratio of the +freezing temperature of brine to its salinity (linear liquidus +approximation). For the mushy thermodynamics, :math:`T_f` is given by a +piecewise linear liquidus relation. The ocean model calculates the new +ice formation; if the freezing/melting potential +:math:`F_{frzmlt}` is positive, its value represents a certain +amount of frazil ice that has formed in one or more layers of the ocean +and floated to the surface. (The ocean model assumes that the amount of +new ice implied by the freezing potential actually forms.) + +If :math:`F_{frzmlt}` is negative, it is used to heat already +existing ice from below. In particular, the sea surface temperature and +salinity are used to compute an oceanic heat flux :math:`F_w` +(:math:`\left|F_w\right| \leq \left|F_{frzmlt}\right|`) which +is applied at the bottom of the ice. The portion of the melting +potential actually used to melt ice is returned to the coupler in +:math:`F_{hocn}`. The ocean model adjusts its own heat budget +with this quantity, assuming that the rest of the flux remained in the +ocean. + +In addition to runoff from rain and melted snow, the fresh water flux +:math:`F_{water}` includes ice melt water from the top surface +and water frozen (a negative flux) or melted at the bottom surface of +the ice. 
This flux is computed as the net change of fresh water in the +ice and snow volume over the coupling time step, excluding frazil ice +formation and newly accumulated snow. Setting the namelist option +update\_ocn\_f to true causes frazil ice to be included in the fresh +water and salt fluxes. + +There is a flux of salt into the ocean under melting conditions, and a +(negative) flux when sea water is freezing. However, melting sea ice +ultimately freshens the top ocean layer, since the ocean is much more +saline than the ice. The ice model passes the net flux of salt +:math:`F_{salt}` to the flux coupler, based on the net change +in salt for ice in all categories. In the present configuration, +ice\_ref\_salinity is used for computing the salt flux, although the ice +salinity used in the thermodynamic calculation has differing values in +the ice layers. + +A fraction of the incoming shortwave :math:`F_{sw\Downarrow}` +penetrates the snow and ice layers and passes into the ocean, as +described in Section :ref:`sfc-forcing`. + +Many ice models compute the sea surface slope :math:`\nabla H_\circ` +from geostrophic ocean currents provided by an ocean model or other data +source. In our case, the sea surface height :math:`H_\circ` is a +prognostic variable in POP—the flux coupler can provide the surface +slope directly, rather than inferring it from the currents. (The option +of computing it from the currents is provided in subroutine +*evp\_prep*.) The sea ice model uses the surface layer currents +:math:`\vec{U}_w` to determine the stress between the ocean and the ice, +and subsequently the ice velocity :math:`\vec{u}`. This stress, relative +to the ice, + +.. math:: + \begin{aligned} + \vec{\tau}_w&=&c_w\rho_w\left|{\vec{U}_w-\vec{u}}\right|\left[\left(\vec{U}_w-\vec{u}\right)\cos\theta + +\hat{k}\times\left(\vec{U}_w-\vec{u}\right)\sin\theta\right] \end{aligned} + :label: tauw + +is then passed to the flux coupler (relative to the ocean) for use by +the ocean model. 
Here, :math:`\theta` is the turning angle between +geostrophic and surface currents, :math:`c_w` is the ocean drag +coefficient, :math:`\rho_w` is the density of seawater, and +:math:`\hat{k}` is the vertical unit vector. The turning angle is +necessary if the top ocean model layers are not able to resolve the +Ekman spiral in the boundary layer. If the top layer is sufficiently +thin compared to the typical depth of the Ekman spiral, then +:math:`\theta=0` is a good approximation. Here we assume that the top +layer is thin enough. + +For CICE run in stand-alone mode (i.e., uncoupled), a thermodynamic slab +ocean mixed-layer parameterization is available in **ice\_ocean.F90**. +The turbulent fluxes are computed above the water surface using the same +parameterizations as for sea ice, but with parameters appropriate for +the ocean. The surface flux balance takes into account the turbulent +fluxes, oceanic heat fluxes from below the mixed layer, and shortwave +and longwave radiation, including that passing through the sea ice into +the ocean. If the resulting sea surface temperature falls below the +salinity-dependent freezing point, then new ice (frazil) forms. +Otherwise, heat is made available for melting the ice. + +.. _formdrag: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Variable exchange coefficients +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the default CICE setup, atmospheric and oceanic neutral drag +coefficients (:math:`c_u` and :math:`c_w`) are assumed constant in time +and space. These constants are chosen to reflect friction associated +with an effective sea ice surface roughness at the ice–atmosphere and +ice–ocean interfaces. Sea ice (in both Arctic and Antarctic) contains +pressure ridges as well as floe and melt pond edges that act as discrete +obstructions to the flow of air or water past the ice, and are a source +of form drag. 
Following :cite:`TFSFFKLB14` and based on +recent theoretical developments :cite:`LGHA12,LLCL11`, the +neutral drag coefficients can now be estimated from properties of the +ice cover such as ice concentration, vertical extent and area of the +ridges, freeboard and floe draft, and size of floes and melt ponds. The +new parameterization allows the drag coefficients to be coupled to the +sea ice state and therefore to evolve spatially and temporally. This +parameterization is contained in the subroutine *neutral\_drag\_coeffs* +and is accessed by setting `formdrag` = true in the namelist. + +Following :cite:`TFSFFKLB14`, consider the general case of +fluid flow obstructed by N randomly oriented obstacles of height +:math:`H` and transverse length :math:`L_y`, distributed on a domain +surface area :math:`S_T`. Under the assumption of a logarithmic fluid +velocity profile, the general formulation of the form drag coefficient +can be expressed as + +.. math:: + C_d=\frac{N c S_c^2 \gamma L_y H}{2 S_T}\left[\frac{\ln(H/z_0)}{\ln(z_{ref}/z_0)}\right]^2, + :label: formdrag + +where :math:`z_0` is a roughness length parameter at the top or bottom +surface of the ice, :math:`\gamma` is a geometric factor, :math:`c` is +the resistance coefficient of a single obstacle, and :math:`S_c` is a +sheltering function that takes into account the shielding effect of the +obstacle, + +.. math:: + S_{c}=\left(1-\exp(-s_l D/H)\right)^{1/2}, + :label: shelter + +with :math:`D` the distance between two obstacles and :math:`s_l` an +attenuation parameter. + +As in the original drag formulation in CICE (sections :ref:`atmo` and +:ref:`ocean`), :math:`c_u` and :math:`c_w` along with the transfer +coefficients for sensible heat, :math:`c_{\theta}`, and latent heat, +:math:`c_{q}`, are initialized to a situation corresponding to neutral +atmosphere–ice and ocean–ice boundary layers. 
The corresponding neutral +exchange coefficients are then replaced by coefficients that explicitly +account for form drag, expressed in terms of various contributions as + +.. math:: + \tt{Cdn\_atm} = \tt{Cdn\_atm\_rdg} + \tt{Cdn\_atm\_floe} + \tt{Cdn\_atm\_skin} + \tt{Cdn\_atm\_pond} , + :label: Cda + +.. math:: + \tt{Cdn\_ocn} = \tt{Cdn\_ocn\_rdg} + \tt{Cdn\_ocn\_floe} + \tt{Cdn\_ocn\_skin}. + :label: Cdw + +The contributions to form drag from ridges (and keels underneath the +ice), floe edges and melt pond edges can be expressed using the general +formulation of equation :eq:`formdrag` (see :cite:`TFSFFKLB14` for +details). Individual terms in equation :eq:`Cdw` are fully described in +:cite:`TFSFFKLB14`. Following :cite:`Arya75` +the skin drag coefficient is parametrized as + +.. math:: + { \tt{Cdn\_(atm/ocn)\_skin}}=a_{i} \left(1-m_{(s/k)} \frac{H_{(s/k)}}{D_{(s/k)}}\right)c_{s(s/k)}, \mbox{ if $\displaystyle\frac{H_{(s/k)}}{D_{(s/k)}}\ge\frac{1}{m_{(s/k)}}$,} + :label: skindrag + +where :math:`m_s` (:math:`m_k`) is a sheltering parameter that depends +on the average sail (keel) height, :math:`H_s` (:math:`H_k`), but is +often assumed constant, :math:`D_s` (:math:`D_k`) is the average +distance between sails (keels), and :math:`c_{ss}` (:math:`c_{sk}`) is +the unobstructed atmospheric (oceanic) skin drag that would be attained +in the absence of sails (keels) and with complete ice coverage, +:math:`a_{ice}=1`. + +Calculation of equations :eq:`formdrag` – :eq:`skindrag` requires that small-scale geometrical +properties of the ice cover be related to average grid cell quantities +already computed in the sea ice model. These intermediate quantities are +briefly presented here and described in more detail in +:cite:`TFSFFKLB14`. The sail height is given by + +.. 
math:: + H_{s} = \displaystyle 2\frac{v_{rdg}}{a_{rdg}}\left(\frac{\alpha\tan \alpha_{k} R_d+\beta \tan \alpha_{s} R_h}{\phi_r\tan \alpha_{k} R_d+\phi_k \tan \alpha_{s} R_h^2}\right), + :label: Hs + +and the distance between sails\ + +.. math:: + D_{s} = \displaystyle 2 H_s\frac{a_{i}}{a_{rdg}} \left(\frac{\alpha}{\tan \alpha_s}+\frac{\beta}{\tan \alpha_k}\frac{R_h}{R_d}\right), + :label: Ds + +where :math:`0<\alpha<1` and :math:`0<\beta<1` are weight functions, +:math:`\alpha_{s}` and :math:`\alpha_{k}` are the sail and keel slope, +:math:`\phi_s` and :math:`\phi_k` are constant porosities for the sails +and keels, and we assume constant ratios for the average keel depth and +sail height (:math:`H_k/H_s=R_h`) and for the average distances between +keels and between sails (:math:`D_k/D_s=R_d`). With the assumption of +hydrostatic equilibrium, the effective ice plus snow freeboard is +:math:`H_{f}=\bar{h_i}(1-\rho_i/\rho_w)+\bar{h_s}(1-\rho_s/\rho_w)`, +where :math:`\rho_i`, :math:`\rho_w` and :math:`\rho_s` are +respectively the densities of sea ice, water and snow, :math:`\bar{h_i}` +is the mean ice thickness and :math:`\bar{h_s}` is the mean snow +thickness (means taken over the ice covered regions). For the melt pond +edge elevation we assume that the melt pond surface is at the same level +as the ocean surface surrounding the floes +:cite:`FF07,FFT10,FSFH12` and use the simplification +:math:`H_p = H_f`. Finally to estimate the typical floe size +:math:`L_A`, distance between floes, :math:`D_F`, and melt pond size, +:math:`L_P` we use the parameterizations of :cite:`LGHA12` +to relate these quantities to the ice and pond concentrations. All of +these intermediate quantities are available as history output, along +with `Cdn\_atm`, `Cdn\_ocn` and the ratio `Cdn\_atm\_ratio\_n` between the +total atmospheric drag and the atmospheric neutral drag coefficient. 
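The general obstacle formulation above lends itself to a direct numerical sketch. The following is illustrative only, not CICE source code: it evaluates equations :eq:`formdrag` and :eq:`shelter` for a single obstacle class, and every parameter value shown is a placeholder assumption rather than a CICE default.

```python
import math

def sheltering(D, H, s_l):
    """Sheltering function S_c of eq. (shelter): obstacle spacing D,
    obstacle height H, attenuation parameter s_l."""
    return math.sqrt(1.0 - math.exp(-s_l * D / H))

def form_drag(N, c, gamma, Ly, H, ST, z0, zref, D, s_l):
    """General form drag coefficient C_d of eq. (formdrag): N obstacles
    of height H and transverse length Ly on a domain surface area ST,
    with roughness length z0 and reference height zref."""
    Sc = sheltering(D, H, s_l)
    log_ratio = math.log(H / z0) / math.log(zref / z0)
    return N * c * Sc**2 * gamma * Ly * H / (2.0 * ST) * log_ratio**2

# Placeholder numbers purely to exercise the formula:
Cd = form_drag(N=10, c=0.2, gamma=0.5, Ly=10.0, H=1.0, ST=1.0e4,
               z0=1.0e-4, zref=10.0, D=100.0, s_l=0.18)
```

Increasing the spacing ``D`` relative to the obstacle height ``H`` drives the sheltering function toward one (no shielding), while closely packed obstacles attenuate each other's contribution to the total drag.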
+ +We assume that the total neutral drag coefficients are thickness +category independent, but through their dependence on the diagnostic +variables described above, they vary both spatially and temporally. The +total drag coefficients and heat transfer coefficients will also depend +on the type of stratification of the atmosphere and the ocean, and we +use the parameterization described in section :ref:`atmo` that accounts +for both stable and unstable atmosphere–ice boundary layers. In contrast +to the neutral drag coefficients, the stability effect of the atmospheric +boundary layer is calculated separately for each ice thickness category. + +The transfer coefficient for oceanic heat flux to the bottom of the ice +may be varied based on form drag considerations by setting the namelist +variable `fbot\_xfer\_type` to `Cdn\_ocn`; this is recommended when using +the form drag parameterization. The default value of the transfer +coefficient is 0.006 (`fbot\_xfer\_type = 'constant'`). \ No newline at end of file diff --git a/doc/source/cice_2_science_guide.rst b/doc/source/science_guide/sg_modelcomps.rst similarity index 86% rename from doc/source/cice_2_science_guide.rst rename to doc/source/science_guide/sg_modelcomps.rst index 86efea57d..5accb11ae 100644 --- a/doc/source/cice_2_science_guide.rst +++ b/doc/source/science_guide/sg_modelcomps.rst @@ -1,576 +1,9 @@ :tocdepth: 3 -Science Guide -================ - -.. _coupl: - --------------------------------------------- -Coupling with other climate model components --------------------------------------------- - -The sea ice model exchanges information with the other model components -via a flux coupler. CICE has been coupled into numerous climate models -with a variety of coupling techniques. This document is oriented -primarily toward the CESM Flux Coupler :cite:`KL02` -from NCAR, the first major climate model to incorporate CICE.
The flux -coupler was originally intended to gather state variables from the -component models, compute fluxes at the model interfaces, and return -these fluxes to the component models for use in the next integration -period, maintaining conservation of momentum, heat, and fresh water. -However, several of these fluxes are now computed in the ice model -itself and provided to the flux coupler for distribution to the other -components, for two reasons. First, some of the fluxes depend strongly -on the state of the ice, and vice versa, implying that an implicit, -simultaneous determination of the ice state and the surface fluxes is -necessary for consistency and stability. Second, given the various ice -types in a single grid cell, it is more efficient for the ice model to -determine the net ice characteristics of the grid cell and provide the -resulting fluxes, rather than passing several values of the state -variables for each cell. These considerations are explained in more -detail below. - -The fluxes and state variables passed between the sea ice model and the -CESM flux coupler are listed in :ref:`tab-flux-cpl`. By convention, -directional fluxes are positive downward. In CESM, the sea ice model may -exchange coupling fluxes using a different grid than the computational -grid. This functionality is activated using the namelist variable -``gridcpl_file``. Another namelist variable ``highfreq``, allows the -high-frequency coupling procedure implemented in the Regional Arctic -System Model (RASM). In particular, the relative atmosphere-ice velocity -(:math:`\vec{U}_a-\vec{u}`) is used instead of the full atmospheric -velocity for computing turbulent fluxes in the atmospheric boundary -layer. - -:ref:`tab-flux-cpl`: *Data exchanged between the CESM flux coupler and the sea ice model* - -.. _tab-flux-cpl: - -.. 
table:: Table 1 - - =========================== ====================================== ======================================================================================= - Variable Description Interaction with flux coupler - =========================== ====================================== ======================================================================================= - :math:`z_o` Atmosphere level height From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`\vec{U}_a` Wind velocity From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`Q_a` Specific humidity From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`\rho_a` Air density From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`\Theta_a` Air potential temperature From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`T_a` Air temperature From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`F_{sw\downarrow}` Incoming shortwave radiation From *atmosphere model* via flux coupler **to** *sea ice model* - (4 bands) - - :math:`F_{L\downarrow}` Incoming longwave radiation From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`F_{rain}` Rainfall rate From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`F_{snow}` Snowfall rate From *atmosphere model* via flux coupler **to** *sea ice model* - - :math:`F_{frzmlt}` Freezing/melting potential From *ocean model* via flux coupler **to** *sea ice model* - - :math:`T_w` Sea surface temperature From *ocean model* via flux coupler **to** *sea ice model* - - :math:`S` Sea surface salinity From *ocean model* via flux coupler **to** *sea ice model* - - :math:`\nabla H_o` Sea surface slope From *ocean model* via flux coupler **to** *sea ice model* - - :math:`\vec{U}_w` Surface ocean currents From *ocean model* via flux coupler **to** *sea ice model* - - :math:`\vec{\tau}_a` Wind stress From *sea ice 
model* via flux coupler **to** *atmosphere model* - - :math:`F_s` Sensible heat flux From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`F_l` Latent heat flux From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`F_{L\uparrow}` Outgoing longwave radiation From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`F_{evap}` Evaporated water From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`\alpha` Surface albedo (4 bands) From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`T_{sfc}` Surface temperature From *sea ice model* via flux coupler **to** *atmosphere model* - - :math:`F_{sw\Downarrow}` Penetrating shortwave radiation From *sea ice model* via flux coupler **to** *ocean model* - - :math:`F_{water}` Fresh water flux From *sea ice model* via flux coupler **to** *ocean model* - - :math:`F_{hocn}` Net heat flux to ocean From *sea ice model* via flux coupler **to** *ocean model* - - :math:`F_{salt}` Salt flux From *sea ice model* via flux coupler **to** *ocean model* - - :math:`\vec{\tau}_w` Ice-ocean stress From *sea ice model* via flux coupler **to** *ocean model* - - :math:`F_{bio}` Biogeochemical fluxes From *sea ice model* via flux coupler **to** *ocean model* - - :math:`a_{i}` Ice fraction From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* - - :math:`T^{ref}_{a}` 2m reference temperature (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* - - :math:`Q^{ref}_{a}` 2m reference humidity (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* - - :math:`F_{swabs}` Absorbed shortwave (diagnostic) From *sea ice model* via flux coupler **to** both *ocean and atmosphere models* - =========================== ====================================== ======================================================================================= - -The ice fraction 
:math:`a_i` (aice) is the total fractional ice -coverage of a grid cell. That is, in each cell, - -.. math:: - \begin{array}{cl} - a_{i}=0 & \mbox{if there is no ice} \\ - a_{i}=1 & \mbox{if there is no open water} \\ - 0 0 - :label: swflux - -where :math:`\cos Z` is the cosine of the solar zenith angle. - -.. _ocean: - -~~~~~ -Ocean -~~~~~ - -New sea ice forms when the ocean temperature drops below its freezing -temperature. In the Bitz and Lipscomb thermodynamics, -:cite:`BL99` :math:`T_f=-\mu S`, where :math:`S` is the -seawater salinity and :math:`\mu=0.054 \ ^\circ`/ppt is the ratio of the -freezing temperature of brine to its salinity (linear liquidus -approximation). For the mushy thermodynamics, :math:`T_f` is given by a -piecewise linear liquidus relation. The ocean model calculates the new -ice formation; if the freezing/melting potential -:math:`F_{frzmlt}` is positive, its value represents a certain -amount of frazil ice that has formed in one or more layers of the ocean -and floated to the surface. (The ocean model assumes that the amount of -new ice implied by the freezing potential actually forms.) - -If :math:`F_{frzmlt}` is negative, it is used to heat already -existing ice from below. In particular, the sea surface temperature and -salinity are used to compute an oceanic heat flux :math:`F_w` -(:math:`\left|F_w\right| \leq \left|F_{frzmlt}\right|`) which -is applied at the bottom of the ice. The portion of the melting -potential actually used to melt ice is returned to the coupler in -:math:`F_{hocn}`. The ocean model adjusts its own heat budget -with this quantity, assuming that the rest of the flux remained in the -ocean. - -In addition to runoff from rain and melted snow, the fresh water flux -:math:`F_{water}` includes ice melt water from the top surface -and water frozen (a negative flux) or melted at the bottom surface of -the ice. 
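As a quick illustration of the linear liquidus approximation above, here is a minimal sketch (not CICE source; the model implements this in Fortran, and the salinity in the usage note is just an example value):

```python
# Sketch of the linear liquidus approximation of Bitz and Lipscomb (1999):
# T_f = -mu * S, where mu = 0.054 degC/ppt is the ratio of the freezing
# temperature of brine to its salinity.
def freezing_temperature(salinity_ppt, mu=0.054):
    """Freezing temperature (degC) of seawater with salinity in ppt."""
    return -mu * salinity_ppt
```

For a typical surface salinity of 34 ppt this gives :math:`T_f \approx -1.84\,^\circ`C; the mushy thermodynamics replaces this single slope with a piecewise linear liquidus.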
This flux is computed as the net change of fresh water in the -ice and snow volume over the coupling time step, excluding frazil ice -formation and newly accumulated snow. Setting the namelist option -update\_ocn\_f to true causes frazil ice to be included in the fresh -water and salt fluxes. - -There is a flux of salt into the ocean under melting conditions, and a -(negative) flux when sea water is freezing. However, melting sea ice -ultimately freshens the top ocean layer, since the ocean is much more -saline than the ice. The ice model passes the net flux of salt -:math:`F_{salt}` to the flux coupler, based on the net change -in salt for ice in all categories. In the present configuration, -ice\_ref\_salinity is used for computing the salt flux, although the ice -salinity used in the thermodynamic calculation has differing values in -the ice layers. - -A fraction of the incoming shortwave :math:`F_{sw\Downarrow}` -penetrates the snow and ice layers and passes into the ocean, as -described in Section :ref:`sfc-forcing`. - -Many ice models compute the sea surface slope :math:`\nabla H_\circ` -from geostrophic ocean currents provided by an ocean model or other data -source. In our case, the sea surface height :math:`H_\circ` is a -prognostic variable in POP—the flux coupler can provide the surface -slope directly, rather than inferring it from the currents. (The option -of computing it from the currents is provided in subroutine -*evp\_prep*.) The sea ice model uses the surface layer currents -:math:`\vec{U}_w` to determine the stress between the ocean and the ice, -and subsequently the ice velocity :math:`\vec{u}`. This stress, relative -to the ice, - -.. math:: - \begin{aligned} - \vec{\tau}_w&=&c_w\rho_w\left|{\vec{U}_w-\vec{u}}\right|\left[\left(\vec{U}_w-\vec{u}\right)\cos\theta - +\hat{k}\times\left(\vec{U}_w-\vec{u}\right)\sin\theta\right] \end{aligned} - :label: tauw - -is then passed to the flux coupler (relative to the ocean) for use by -the ocean model. 
Here, :math:`\theta` is the turning angle between -geostrophic and surface currents, :math:`c_w` is the ocean drag -coefficient, :math:`\rho_w` is the density of seawater, and -:math:`\hat{k}` is the vertical unit vector. The turning angle is -necessary if the top ocean model layers are not able to resolve the -Ekman spiral in the boundary layer. If the top layer is sufficiently -thin compared to the typical depth of the Ekman spiral, then -:math:`\theta=0` is a good approximation. Here we assume that the top -layer is thin enough. - -For CICE run in stand-alone mode (i.e., uncoupled), a thermodynamic slab -ocean mixed-layer parameterization is available in **ice\_ocean.F90**. -The turbulent fluxes are computed above the water surface using the same -parameterizations as for sea ice, but with parameters appropriate for -the ocean. The surface flux balance takes into account the turbulent -fluxes, oceanic heat fluxes from below the mixed layer, and shortwave -and longwave radiation, including that passing through the sea ice into -the ocean. If the resulting sea surface temperature falls below the -salinity-dependent freezing point, then new ice (frazil) forms. -Otherwise, heat is made available for melting the ice. - -.. _formdrag: - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Variable exchange coefficients -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In the default CICE setup, atmospheric and oceanic neutral drag -coefficients (:math:`c_u` and :math:`c_w`) are assumed constant in time -and space. These constants are chosen to reflect friction associated -with an effective sea ice surface roughness at the ice–atmosphere and -ice–ocean interfaces. Sea ice (in both Arctic and Antarctic) contains -pressure ridges as well as floe and melt pond edges that act as discrete -obstructions to the flow of air or water past the ice, and are a source -of form drag. 
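The ice–ocean stress of equation :eq:`tauw` can be sketched as follows (not CICE source; the drag coefficient and seawater density used as defaults here are illustrative stand-ins, not namelist values):

```python
import math

def ice_ocean_stress(uw, vw, u, v, cw=0.00536, rho_w=1026.0, theta=0.0):
    """Components (tau_x, tau_y) of
    c_w * rho_w * |Uw - u| * [(Uw - u) cos(theta) + k-hat x (Uw - u) sin(theta)],
    with the turning angle theta in radians."""
    du, dv = uw - u, vw - v              # relative ocean-ice velocity
    speed = math.hypot(du, dv)
    ct, st = math.cos(theta), math.sin(theta)
    # k-hat x (du, dv, 0) = (-dv, du, 0)
    taux = cw * rho_w * speed * (du * ct - dv * st)
    tauy = cw * rho_w * speed * (dv * ct + du * st)
    return taux, tauy
```

With :math:`\theta=0` the stress is aligned with the relative velocity; a nonzero turning angle rotates it to mimic an unresolved Ekman spiral.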
Following :cite:`TFSFFKLB14` and based on -recent theoretical developments :cite:`LGHA12,LLCL11`, the -neutral drag coefficients can now be estimated from properties of the -ice cover such as ice concentration, vertical extent and area of the -ridges, freeboard and floe draft, and size of floes and melt ponds. The -new parameterization allows the drag coefficients to be coupled to the -sea ice state and therefore to evolve spatially and temporally. This -parameterization is contained in the subroutine *neutral\_drag\_coeffs* -and is accessed by setting `formdrag` = true in the namelist. - -Following :cite:`TFSFFKLB14`, consider the general case of -fluid flow obstructed by N randomly oriented obstacles of height -:math:`H` and transverse length :math:`L_y`, distributed on a domain -surface area :math:`S_T`. Under the assumption of a logarithmic fluid -velocity profile, the general formulation of the form drag coefficient -can be expressed as - -.. math:: - C_d=\frac{N c S_c^2 \gamma L_y H}{2 S_T}\left[\frac{\ln(H/z_0)}{\ln(z_{ref}/z_0)}\right]^2, - :label: formdrag - -where :math:`z_0` is a roughness length parameter at the top or bottom -surface of the ice, :math:`\gamma` is a geometric factor, :math:`c` is -the resistance coefficient of a single obstacle, and :math:`S_c` is a -sheltering function that takes into account the shielding effect of the -obstacle, - -.. math:: - S_{c}=\left(1-\exp(-s_l D/H)\right)^{1/2}, - :label: shelter - -with :math:`D` the distance between two obstacles and :math:`s_l` an -attenuation parameter. - -As in the original drag formulation in CICE (sections :ref:`atmo` and -:ref:`ocean`), :math:`c_u` and :math:`c_w` along with the transfer -coefficients for sensible heat, :math:`c_{\theta}`, and latent heat, -:math:`c_{q}`, are initialized to a situation corresponding to neutral -atmosphere–ice and ocean–ice boundary layers. 
The corresponding neutral -exchange coefficients are then replaced by coefficients that explicitly -account for form drag, expressed in terms of various contributions as - -.. math:: - \tt{Cdn\_atm} = \tt{Cdn\_atm\_rdg} + \tt{Cdn\_atm\_floe} + \tt{Cdn\_atm\_skin} + \tt{Cdn\_atm\_pond} , - :label: Cda - -.. math:: - \tt{Cdn\_ocn} = \tt{Cdn\_ocn\_rdg} + \tt{Cdn\_ocn\_floe} + \tt{Cdn\_ocn\_skin}. - :label: Cdw - -The contributions to form drag from ridges (and keels underneath the -ice), floe edges and melt pond edges can be expressed using the general -formulation of equation :eq:`formdrag` (see :cite:`TFSFFKLB14` for -details). Individual terms in equation :eq:`Cdw` are fully described in -:cite:`TFSFFKLB14`. Following :cite:`Arya75` -the skin drag coefficient is parametrized as - -.. math:: - { \tt{Cdn\_(atm/ocn)\_skin}}=a_{i} \left(1-m_{(s/k)} \frac{H_{(s/k)}}{D_{(s/k)}}\right)c_{s(s/k)}, \mbox{ if $\displaystyle\frac{H_{(s/k)}}{D_{(s/k)}}\ge\frac{1}{m_{(s/k)}}$,} - :label: skindrag - -where :math:`m_s` (:math:`m_k`) is a sheltering parameter that depends -on the average sail (keel) height, :math:`H_s` (:math:`H_k`), but is -often assumed constant, :math:`D_s` (:math:`D_k`) is the average -distance between sails (keels), and :math:`c_{ss}` (:math:`c_{sk}`) is -the unobstructed atmospheric (oceanic) skin drag that would be attained -in the absence of sails (keels) and with complete ice coverage, -:math:`a_{ice}=1`. - -Calculation of equations :eq:`formdrag` – :eq:`skindrag` requires that small-scale geometrical -properties of the ice cover be related to average grid cell quantities -already computed in the sea ice model. These intermediate quantities are -briefly presented here and described in more detail in -:cite:`TFSFFKLB14`. The sail height is given by - -.. 
math:: - H_{s} = \displaystyle 2\frac{v_{rdg}}{a_{rdg}}\left(\frac{\alpha\tan \alpha_{k} R_d+\beta \tan \alpha_{s} R_h}{\phi_r\tan \alpha_{k} R_d+\phi_k \tan \alpha_{s} R_h^2}\right), - :label: Hs - -and the distance between sails\ - -.. math:: - D_{s} = \displaystyle 2 H_s\frac{a_{i}}{a_{rdg}} \left(\frac{\alpha}{\tan \alpha_s}+\frac{\beta}{\tan \alpha_k}\frac{R_h}{R_d}\right), - :label: Ds - -where :math:`0<\alpha<1` and :math:`0<\beta<1` are weight functions, -:math:`\alpha_{s}` and :math:`\alpha_{k}` are the sail and keel slope, -:math:`\phi_s` and :math:`\phi_k` are constant porosities for the sails -and keels, and we assume constant ratios for the average keel depth and -sail height (:math:`H_k/H_s=R_h`) and for the average distances between -keels and between sails (:math:`D_k/D_s=R_d`). With the assumption of -hydrostatic equilibrium, the effective ice plus snow freeboard is -:math:`H_{f}=\bar{h_i}(1-\rho_i/\rho_w)+\bar{h_s}(1-\rho_s/\rho_w)`, -where :math:`\rho_i`, :math:`\rho_w` and :math:`\rho_s` are -respectively the densities of sea ice, water and snow, :math:`\bar{h_i}` -is the mean ice thickness and :math:`\bar{h_s}` is the mean snow -thickness (means taken over the ice covered regions). For the melt pond -edge elevation we assume that the melt pond surface is at the same level -as the ocean surface surrounding the floes -:cite:`FF07,FFT10,FSFH12` and use the simplification -:math:`H_p = H_f`. Finally to estimate the typical floe size -:math:`L_A`, distance between floes, :math:`D_F`, and melt pond size, -:math:`L_P` we use the parameterizations of :cite:`LGHA12` -to relate these quantities to the ice and pond concentrations. All of -these intermediate quantities are available as history output, along -with `Cdn\_atm`, `Cdn\_ocn` and the ratio `Cdn\_atm\_ratio\_n` between the -total atmospheric drag and the atmospheric neutral drag coefficient. 
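The general form-drag formulation of equations :eq:`formdrag` and :eq:`shelter` can be sketched numerically as follows (not CICE source; the attenuation parameter ``s_l`` and reference height ``zref`` defaults are illustrative assumptions):

```python
import math

def sheltering(D, H, s_l=0.18):
    """Sheltering function S_c = (1 - exp(-s_l * D / H))**0.5 (eq. shelter),
    for obstacles of height H separated by distance D."""
    return (1.0 - math.exp(-s_l * D / H)) ** 0.5

def form_drag(N, c, gamma, L_y, H, S_T, D, z0, zref=10.0, s_l=0.18):
    """General form-drag coefficient (eq. formdrag) for N obstacles of
    height H and transverse length L_y on a domain surface area S_T,
    assuming a logarithmic velocity profile with roughness length z0."""
    S_c = sheltering(D, H, s_l)
    log_ratio = math.log(H / z0) / math.log(zref / z0)
    return N * c * S_c ** 2 * gamma * L_y * H / (2.0 * S_T) * log_ratio ** 2
```

Widely spaced obstacles shelter each other less (:math:`S_c \rightarrow 1` as :math:`D/H` grows), so the drag contribution per obstacle increases with spacing, all else equal.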
- -We assume that the total neutral drag coefficients are thickness -category independent, but through their dependance on the diagnostic -variables described above, they vary both spatially and temporally. The -total drag coefficients and heat transfer coefficients will also depend -on the type of stratification of the atmosphere and the ocean, and we -use the parameterization described in section :ref:`atmo` that accounts -for both stable and unstable atmosphere–ice boundary layers. In contrast -to the neutral drag coefficients the stability effect of the atmospheric -boundary layer is calculated separately for each ice thickness category. - -The transfer coefficient for oceanic heat flux to the bottom of the ice -may be varied based on form drag considerations by setting the namelist -variable `fbot\_xfer\_type` to `Cdn\_ocn`; this is recommended when using -the form drag parameterization. Its default value of the transfer -coefficient is 0.006 (`fbot\_xfer\_type = ’constant’`). - - ----------------- Model components ----------------- +================ The Arctic and Antarctic sea ice packs are mixtures of open water, thin first-year ice, thicker multiyear ice, and thick pressure ridges. The @@ -2001,12 +1434,14 @@ vertical direction: .. math:: m{\partial {\bf u}\over\partial t} = \nabla\cdot{\bf \sigma} - + \vec{\tau}_a+\vec{\tau}_w - \hat{k}\times mf{\bf u} - mg\nabla H_\circ, + + \vec{\tau}_a+\vec{\tau}_w + \vec{\tau}_b - \hat{k}\times mf{\bf u} - mg\nabla H_\circ, :label: vpmom where :math:`m` is the combined mass of ice and snow per unit area and :math:`\vec{\tau}_a` and :math:`\vec{\tau}_w` are wind and ocean -stresses, respectively. The strength of the ice is represented by the +stresses, respectively. The term :math:`\vec{\tau}_b` is a +seabed stress (also referred to as basal stress) that represents the grounding of pressure +ridges in shallow water :cite:`Lemieux2016`. 
The strength of the ice is represented by the internal stress tensor :math:`\sigma_{ij}`, and the other two terms on the right hand side are stresses due to Coriolis effects and the sea surface slope. The parameterization for the wind and ice–ocean stress @@ -2024,20 +1459,21 @@ EVP approach. First, for clarity, the two components of Equation :eq:`vpmom` are m{\partial u\over\partial t} &=& {\partial\sigma_{1j}\over\partial x_j} + \tau_{ax} + a_i c_w \rho_w \left|{\bf U}_w - {\bf u}\right| \left[\left(U_w-u\right)\cos\theta - \left(V_w-v\right)\sin\theta\right] - +mfv - mg{\partial H_\circ\over\partial x}, \\ + -C_bu +mfv - mg{\partial H_\circ\over\partial x}, \\ m{\partial v\over\partial t} &=& {\partial\sigma_{2j}\over\partial x_j} + \tau_{ay} + a_i c_w \rho_w \left|{\bf U}_w - {\bf u}\right| \left[\left(U_w-u\right)\sin\theta - \left(V_w-v\right)\cos\theta\right] - -mfu - mg{\partial H_\circ\over\partial y}. \end{aligned} + -C_bv-mfu - mg{\partial H_\circ\over\partial y}. \end{aligned} In the code, -:math:`{\tt vrel}=a_i c_w \rho_w\left|{\bf U}_w - {\bf u}^k\right|`, +:math:`{\tt vrel}=a_i c_w \rho_w\left|{\bf U}_w - {\bf u}^k\right|` and +:math:`C_b=T_b \left( \sqrt{(u^k)^2+(v^k)^2}+u_0 \right)`, where :math:`k` denotes the subcycling step. The following equations illustrate the time discretization and define some of the other variables used in the code. .. math:: - \underbrace{\left({m\over\Delta t_e}+{\tt vrel} \cos\theta\right)}_{\tt cca} u^{k+1} + \underbrace{\left({m\over\Delta t_e}+{\tt vrel} \cos\theta\ + C_b \right)}_{\tt cca} u^{k+1} - \underbrace{\left(mf+{\tt vrel}\sin\theta\right)}_{\tt ccb}v^{k+1} = \underbrace{{\partial\sigma_{1j}^{k+1}\over\partial x_j}}_{\tt strintx} + \underbrace{\tau_{ax} - mg{\partial H_\circ\over\partial x} }_{\tt forcex} @@ -2046,7 +1482,7 @@ variables used in the code. .. 
math:: \underbrace{\left(mf+{\tt vrel}\sin\theta\right)}_{\tt ccb} u^{k+1} - + \underbrace{\left({m\over\Delta t_e}+{\tt vrel} \cos\theta\right)}_{\tt cca}v^{k+1} + + \underbrace{\left({m\over\Delta t_e}+{\tt vrel} \cos\theta + C_b \right)}_{\tt cca}v^{k+1} = \underbrace{{\partial\sigma_{2j}^{k+1}\over\partial x_j}}_{\tt strinty} + \underbrace{\tau_{ay} - mg{\partial H_\circ\over\partial y} }_{\tt forcey} + {\tt vrel}\underbrace{\left(U_w\sin\theta+V_w\cos\theta\right)}_{\tt watery} + {m\over\Delta t_e}v^k, @@ -2069,8 +1505,8 @@ where :math:`{\bf F} = \nabla\cdot\sigma^{k+1}`. Then .. math:: \begin{aligned} - \left({m\over\Delta t_e} +{\tt vrel}\cos\theta\right)u^{k+1} - \left(mf + {\tt vrel}\sin\theta\right) v^{k+1} &=& \hat{u} \\ - \left(mf + {\tt vrel}\sin\theta\right) u^{k+1} + \left({m\over\Delta t_e} +{\tt vrel}\cos\theta\right)v^{k+1} &=& \hat{v}.\end{aligned} + \left({m\over\Delta t_e} +{\tt vrel}\cos\theta\ + C_b \right)u^{k+1} - \left(mf + {\tt vrel}\sin\theta\right) v^{k+1} &=& \hat{u} \\ + \left(mf + {\tt vrel}\sin\theta\right) u^{k+1} + \left({m\over\Delta t_e} +{\tt vrel}\cos\theta + C_b \right)v^{k+1} &=& \hat{v}.\end{aligned} Solving simultaneously for :math:`u^{k+1}` and :math:`v^{k+1}`, @@ -2082,7 +1518,7 @@ Solving simultaneously for :math:`u^{k+1}` and :math:`v^{k+1}`, where .. math:: - a = {m\over\Delta t_e} + {\tt vrel}\cos\theta \\ + a = {m\over\Delta t_e} + {\tt vrel}\cos\theta + C_b \\ :label: cevpa .. math:: @@ -2097,6 +1533,46 @@ The Hibler-Bryan form for the ice-ocean stress :cite:`HB87` is included in **ice\_dyn\_shared.F90** but is currently commented out, pending further testing. +.. _seabed-stress: + +*************** +Seabed stress +*************** + +The parameterization for the seabed stress is described in :cite:`Lemieux2016`. The components of the basal seabed stress are +:math:`\tau_{bx}=C_bu` and :math:`\tau_{by}=C_bv`, where :math:`C_b` is a coefficient expressed as + +.. 
math:: + C_b= k_2 \max [0,(h_u - h_{cu})] e^{-\alpha_b (1 - a_u)} (\sqrt{u^2+v^2}+u_0)^{-1}, \\ + :label: Cb + +where :math:`k_2` determines the maximum seabed stress that can be sustained by the grounded parameterized ridge(s), :math:`u_0` +is a small residual velocity and :math:`\alpha_b=20` is a parameter that ensures the seabed stress drops quickly when +the ice concentration is smaller than 1. In the code, :math:`k_2 \max [0,(h_u - h_{cu})] e^{-\alpha_b (1 - a_u)}` is defined as +:math:`T_b`. The quantities :math:`h_u`, :math:`a_{u}` and :math:`h_{cu}` are calculated at +the 'u' point based on local ice conditions (surrounding tracer points). They are respectively given by + +.. math:: + h_u=\max[v_i(i,j),v_i(i+1,j),v_i(i,j+1),v_i(i+1,j+1)], \\ + :label: hu + +.. math:: + a_u=\max[a_i(i,j),a_i(i+1,j),a_i(i,j+1),a_i(i+1,j+1)]. \\ + :label: au + +.. math:: + h_{cu}=a_u h_{wu} / k_1, \\ + :label: hcu + +where :math:`a_i` and :math:`v_i` are the total ice concentrations and ice volumes around the :math:`u` point :math:`i,j` and +:math:`k_1` is a parameter that defines the critical ice thickness :math:`h_{cu}` at which the parameterized +ridge(s) reaches the seafloor for a water depth :math:`h_{wu}=\min[h_w(i,j),h_w(i+1,j),h_w(i,j+1),h_w(i+1,j+1)]`. + +Given the formulation of :math:`C_b` in equation :eq:`Cb`, the seabed stress components are non-zero only when :math:`h_u > h_{cu}`, which means +that the parameterized ridge is thick enough to reach the seafloor. The maximum seabed stress depends on the weight of the ridge +above hydrostatic balance and the value of :math:`k_2`. Note that the user must provide a bathymetry field to use this grounding +scheme. + .. _internal-stress: + *************** Internal stress *************** @@ -2109,14 +1585,21 @@ terms of :math:`\sigma_1=\sigma_{11}+\sigma_{22}`, divergence, :math:`D_D`, and the horizontal tension and shearing strain rates, :math:`D_T` and :math:`D_S` respectively.
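The grounding scheme of equations :eq:`Cb`–:eq:`hcu` above can be sketched as follows (not CICE source; the values of ``k1``, ``k2`` and ``u0`` are illustrative stand-ins for the corresponding parameters, and the max/min stencils over surrounding tracer points are assumed already reduced to scalars at one 'u' point):

```python
import math

def seabed_stress_coefficient(h_u, a_u, h_wu, u, v,
                              k1=8.0, k2=15.0, alpha_b=20.0, u0=5e-5):
    """C_b such that tau_bx = C_b*u and tau_by = C_b*v (eq. Cb).
    h_u, a_u: max ice volume and concentration around the u point;
    h_wu: min water depth around the u point."""
    h_cu = a_u * h_wu / k1                          # critical thickness (eq. hcu)
    T_b = k2 * max(0.0, h_u - h_cu) * math.exp(-alpha_b * (1.0 - a_u))
    return T_b / (math.hypot(u, v) + u0)
```

As in the text, :math:`C_b` vanishes whenever :math:`h_u \le h_{cu}`, i.e. when the parameterized ridge is not thick enough to reach the seafloor.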
+CICE now outputs the internal ice pressure, an important field for supporting navigation in ice-infested waters. +The internal ice pressure (``sigP``) is the average of the normal stresses multiplied by :math:`-1` and +is therefore equal to :math:`-\sigma_1/2`. + *Elastic-Viscous-Plastic* In the EVP model the internal stress tensor is determined from a -regularized version of the VP constitutive law, +regularized version of the VP constitutive law. Following the approach of :cite:`KH2010` (see also :cite:`Lemieux2016`), the +elliptical yield curve can be modified such that the ice has isotropic tensile strength. +The tensile strength :math:`T_p` is expressed as a fraction of the ice strength :math:`P`, that is :math:`T_p=k_t P` +where :math:`k_t` should be set to a value between 0 and 1. The constitutive law is therefore .. math:: {1\over E}{\partial\sigma_1\over\partial t} + {\sigma_1\over 2\zeta} - + {P\over 2\zeta} = D_D, \\ + + {P_R(1-k_t)\over 2\zeta} = D_D, \\ :label: sig1 .. math:: @@ -2143,19 +1626,19 @@ where \dot{\epsilon}_{ij} = {1\over 2}\left({{\partial u_i}\over{\partial x_j}} + {{\partial u_j}\over{\partial x_i}}\right), .. math:: - \zeta = {P\over 2\Delta}, + \zeta = {P(1+k_t)\over 2\Delta}, .. math:: - \eta = {P\over {2\Delta e^2}}, + \eta = {P(1+k_t)\over {2\Delta e^2}}, .. math:: \Delta = \left[D_D^2 + {1\over e^2}\left(D_T^2 + D_S^2\right)\right]^{1/2}, -and :math:`P` is a function of the ice thickness and concentration, -described in Section :ref:`mech-red`. The dynamics component -employs a “replacement pressure” (see :cite:`GHA98`, for +and :math:`P_R` is a “replacement pressure” (see :cite:`GHA98`, for example), which serves to prevent residual ice motion due to spatial -variations of :math:`P` when the rates of strain are exactly zero.
The ice strength :math:`P` +is a function of the ice thickness and concentration +as described in Section :ref:`mech-red`. Viscosities are updated during the subcycling, so that the entire dynamics component is subcycled within the time step, and the elastic @@ -2172,15 +1655,14 @@ become .. math:: \begin{aligned} {\partial\sigma_1\over\partial t} + {\sigma_1\over 2T} - + {P\over 2T} &=& {P\over 2T\Delta} D_D, \\ + + {P_R(1-k_t)\over 2T} &=& {P(1+k_t)\over 2T\Delta} D_D, \\ {\partial\sigma_2\over\partial t} + {e^2\sigma_2\over 2T} &=& {P\over + {\partial\sigma_2\over\partial t} + {e^2\sigma_2\over 2T} &=& {P(1+k_t)\over 2T\Delta} D_T,\\ {\partial\sigma_{12}\over\partial t} + {e^2\sigma_{12}\over 2T} &=& - {P\over 4T\Delta}D_S.\end{aligned} + {P(1+k_t)\over 4T\Delta}D_S.\end{aligned} All coefficients on the left-hand side are constant except for -:math:`P`, which changes only on the longer time step :math:`\Delta t`. -This modification compensates for the decreased efficiency of including +:math:`P_R`. This modification compensates for the decreased efficiency of including the viscosity terms in the subcycling. (Note that the viscosities do not appear explicitly.)
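The implicit velocity update performed at each subcycle, using the coefficients :math:`a` (``cca``) and :math:`b` (``ccb``) defined earlier with :math:`a = m/\Delta t_e + {\tt vrel}\cos\theta + C_b` and :math:`b = mf + {\tt vrel}\sin\theta`, reduces to a 2x2 linear solve that can be sketched as (not CICE source):

```python
def evp_velocity_update(a, b, u_hat, v_hat):
    """Solve  a*u - b*v = u_hat,  b*u + a*v = v_hat  for (u, v).
    The 2x2 system matrix [[a, -b], [b, a]] has determinant a**2 + b**2,
    which is positive whenever a is nonzero, so the solve never fails."""
    det = a * a + b * b
    u = (a * u_hat + b * v_hat) / det
    v = (a * v_hat - b * u_hat) / det
    return u, v
```

Because the seabed stress coefficient :math:`C_b` enters only through :math:`a`, grounding increases the diagonal of the system and damps the resulting velocities.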
Choices of the parameters used to define :math:`E`, :math:`T` and :math:`\Delta t_e` are discussed in diff --git a/doc/source/figures/distrb.png b/doc/source/user_guide/figures/distrb.png similarity index 100% rename from doc/source/figures/distrb.png rename to doc/source/user_guide/figures/distrb.png diff --git a/doc/source/figures/extra/distrb_cart_X1_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_cart_X1_20x24_16.png similarity index 100% rename from doc/source/figures/extra/distrb_cart_X1_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_cart_X1_20x24_16.png diff --git a/doc/source/figures/extra/distrb_cart_X2_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_cart_X2_20x24_16.png similarity index 100% rename from doc/source/figures/extra/distrb_cart_X2_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_cart_X2_20x24_16.png diff --git a/doc/source/figures/extra/distrb_cart_sqr_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_cart_sqr_20x24_16.png similarity index 100% rename from doc/source/figures/extra/distrb_cart_sqr_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_cart_sqr_20x24_16.png diff --git a/doc/source/figures/extra/distrb_rake_block_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_rake_block_20x24_16.png similarity index 100% rename from doc/source/figures/extra/distrb_rake_block_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_rake_block_20x24_16.png diff --git a/doc/source/figures/extra/distrb_rake_lat_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_rake_lat_20x24_16.png similarity index 100% rename from doc/source/figures/extra/distrb_rake_lat_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_rake_lat_20x24_16.png diff --git a/doc/source/figures/extra/distrb_sfc_lat_20x24_16.png b/doc/source/user_guide/figures/extra/distrb_sfc_lat_20x24_16.png similarity index 100% rename from 
doc/source/figures/extra/distrb_sfc_lat_20x24_16.png rename to doc/source/user_guide/figures/extra/distrb_sfc_lat_20x24_16.png diff --git a/doc/source/figures/extra/topo2.png b/doc/source/user_guide/figures/extra/topo2.png similarity index 100% rename from doc/source/figures/extra/topo2.png rename to doc/source/user_guide/figures/extra/topo2.png diff --git a/doc/source/figures/extra/topo3.png b/doc/source/user_guide/figures/extra/topo3.png similarity index 100% rename from doc/source/figures/extra/topo3.png rename to doc/source/user_guide/figures/extra/topo3.png diff --git a/doc/source/figures/grid.png b/doc/source/user_guide/figures/grid.png similarity index 100% rename from doc/source/figures/grid.png rename to doc/source/user_guide/figures/grid.png diff --git a/doc/source/figures/histograms.png b/doc/source/user_guide/figures/histograms.png similarity index 100% rename from doc/source/figures/histograms.png rename to doc/source/user_guide/figures/histograms.png diff --git a/doc/source/figures/pdf/EAP.pdf b/doc/source/user_guide/figures/pdf/EAP.pdf similarity index 100% rename from doc/source/figures/pdf/EAP.pdf rename to doc/source/user_guide/figures/pdf/EAP.pdf diff --git a/doc/source/figures/pdf/albedo.pdf b/doc/source/user_guide/figures/pdf/albedo.pdf similarity index 100% rename from doc/source/figures/pdf/albedo.pdf rename to doc/source/user_guide/figures/pdf/albedo.pdf diff --git a/doc/source/figures/pdf/craig_ciceperf_ehunke.pdf b/doc/source/user_guide/figures/pdf/craig_ciceperf_ehunke.pdf similarity index 100% rename from doc/source/figures/pdf/craig_ciceperf_ehunke.pdf rename to doc/source/user_guide/figures/pdf/craig_ciceperf_ehunke.pdf diff --git a/doc/source/figures/pdf/deparr.pdf b/doc/source/user_guide/figures/pdf/deparr.pdf similarity index 100% rename from doc/source/figures/pdf/deparr.pdf rename to doc/source/user_guide/figures/pdf/deparr.pdf diff --git a/doc/source/figures/pdf/distrb_cart_X1_20x24_16.pdf 
b/doc/source/user_guide/figures/pdf/distrb_cart_X1_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_cart_X1_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_cart_X1_20x24_16.pdf diff --git a/doc/source/figures/pdf/distrb_cart_X2_20x24_16.pdf b/doc/source/user_guide/figures/pdf/distrb_cart_X2_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_cart_X2_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_cart_X2_20x24_16.pdf diff --git a/doc/source/figures/pdf/distrb_cart_sqr_20x24_16.pdf b/doc/source/user_guide/figures/pdf/distrb_cart_sqr_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_cart_sqr_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_cart_sqr_20x24_16.pdf diff --git a/doc/source/figures/pdf/distrb_rake_block_20x24_16.pdf b/doc/source/user_guide/figures/pdf/distrb_rake_block_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_rake_block_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_rake_block_20x24_16.pdf diff --git a/doc/source/figures/pdf/distrb_rake_lat_20x24_16.pdf b/doc/source/user_guide/figures/pdf/distrb_rake_lat_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_rake_lat_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_rake_lat_20x24_16.pdf diff --git a/doc/source/figures/pdf/distrb_sfc_lat_20x24_16.pdf b/doc/source/user_guide/figures/pdf/distrb_sfc_lat_20x24_16.pdf similarity index 100% rename from doc/source/figures/pdf/distrb_sfc_lat_20x24_16.pdf rename to doc/source/user_guide/figures/pdf/distrb_sfc_lat_20x24_16.pdf diff --git a/doc/source/figures/pdf/gplot.pdf b/doc/source/user_guide/figures/pdf/gplot.pdf similarity index 100% rename from doc/source/figures/pdf/gplot.pdf rename to doc/source/user_guide/figures/pdf/gplot.pdf diff --git a/doc/source/figures/pdf/grid.pdf b/doc/source/user_guide/figures/pdf/grid.pdf similarity index 100% rename 
from doc/source/figures/pdf/grid.pdf rename to doc/source/user_guide/figures/pdf/grid.pdf diff --git a/doc/source/figures/pdf/histograms.pdf b/doc/source/user_guide/figures/pdf/histograms.pdf similarity index 100% rename from doc/source/figures/pdf/histograms.pdf rename to doc/source/user_guide/figures/pdf/histograms.pdf diff --git a/doc/source/figures/pdf/timings.pdf b/doc/source/user_guide/figures/pdf/timings.pdf similarity index 100% rename from doc/source/figures/pdf/timings.pdf rename to doc/source/user_guide/figures/pdf/timings.pdf diff --git a/doc/source/figures/pdf/topo2.pdf b/doc/source/user_guide/figures/pdf/topo2.pdf similarity index 100% rename from doc/source/figures/pdf/topo2.pdf rename to doc/source/user_guide/figures/pdf/topo2.pdf diff --git a/doc/source/figures/pdf/topo3.pdf b/doc/source/user_guide/figures/pdf/topo3.pdf similarity index 100% rename from doc/source/figures/pdf/topo3.pdf rename to doc/source/user_guide/figures/pdf/topo3.pdf diff --git a/doc/source/figures/pdf/tracergraphic.pdf b/doc/source/user_guide/figures/pdf/tracergraphic.pdf similarity index 100% rename from doc/source/figures/pdf/tracergraphic.pdf rename to doc/source/user_guide/figures/pdf/tracergraphic.pdf diff --git a/doc/source/figures/pdf/triangles.pdf b/doc/source/user_guide/figures/pdf/triangles.pdf similarity index 100% rename from doc/source/figures/pdf/triangles.pdf rename to doc/source/user_guide/figures/pdf/triangles.pdf diff --git a/doc/source/figures/scorecard.png b/doc/source/user_guide/figures/scorecard.png similarity index 100% rename from doc/source/figures/scorecard.png rename to doc/source/user_guide/figures/scorecard.png diff --git a/doc/source/user_guide/index.rst b/doc/source/user_guide/index.rst new file mode 100644 index 000000000..0b9656e20 --- /dev/null +++ b/doc/source/user_guide/index.rst @@ -0,0 +1,19 @@ +.. CICE-Consortium documentation master file, created by + sphinx-quickstart on Thu Jun 29 13:47:09 2017. 
+ You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +.. _user_guide: + +User Guide +----------------- + +.. toctree:: + :maxdepth: 3 + + ug_implementation.rst + ug_running.rst + ug_testing.rst + ug_case_settings.rst + ug_troubleshooting.rst + diff --git a/doc/source/user_guide/ug_case_settings.rst b/doc/source/user_guide/ug_case_settings.rst new file mode 100755 index 000000000..c3280e475 --- /dev/null +++ b/doc/source/user_guide/ug_case_settings.rst @@ -0,0 +1,422 @@ +:tocdepth: 3 + +.. _case_settings: + +Case Settings +===================== + +There are two important files that define the case, **cice.settings** and +**ice_in**. **cice.settings** is a list of env variables that define many +values used to setup, build and run the case. **ice_in** is the input namelist file +for CICE. Variables in both files are described below. + +.. _tabsettings: + +Table of CICE Settings +-------------------------- + +The **cice.settings** file is reasonably well self documented. Several of +the variables defined in the file are not used in CICE. They exist +to support the CICE model. + +.. 
csv-table:: *CICE settings* + :header: "variable", "options/format", "description", "recommended value" + :widths: 15, 15, 25, 20 + + "ICE_CASENAME", " ", "case name", "set by cice.setup" + "ICE_SANDBOX", " ", "sandbox directory", "set by cice.setup" + "ICE_MACHINE", " ", "machine name", "set by cice.setup" + "ICE_COMPILER", " ", "environment name", "set by cice.setup" + "ICE_MACHCOMP", " ", "machine_environment name", "set by cice.setup" + "ICE_SCRIPTS", " ", "scripts directory", "set by cice.setup" + "ICE_CASEDIR", " ", "case directory", "set by cice.setup" + "ICE_RUNDIR", " ", "run directory", "set by cice.setup" + "ICE_OBJDIR", " ", "compile directory", "${ICE_RUNDIR}/compile" + "ICE_RSTDIR", " ", "unused", "${ICE_RUNDIR}/restart" + "ICE_HSTDIR", " ", "unused", "${ICE_RUNDIR}/history" + "ICE_LOGDIR", " ", "log directory", "${ICE_CASEDIR}/logs" + "ICE_DRVOPT", " ", "unused", "cice" + "ICE_CONSTOPT", " ", "unused", "cice" + "ICE_IOTYPE", " ", "unused", "netcdf" + "ICE_CLEANBUILD", "true,false", "automatically clean before building", "true" + "ICE_GRID", "col", "grid", "set by cice.setup" + "ICE_NXGLOB", "integer", "number of gridcells", "set by cice.setup" + "ICE_NTASKS", "integer", "number of tasks, must be set to 1", "set by cice.setup" + "ICE_NTHRDS", "integer", "number of threads per task, must be set to 1", "set by cice.setup" + "ICE_BLCKX", "integer", "block size in x direction", "set by cice.setup" + "ICE_BLCKY", "integer", "block size in y direction", "set by cice.setup" + "ICE_MXBLCKS", "integer", "maximum number of blocks per task", "set by cice.setup" + "ICE_TEST", " ", "test setting if using a test", "set by cice.setup" + "ICE_TESTNAME", " ", "test name if using a test", "set by cice.setup" + "ICE_BASELINE", " ", "baseline directory name, associated with cice.setup -bdir ", "set by cice.setup" + "ICE_BASEGEN", " ", "baseline directory name for regression generation, associated with cice.setup -bgen ", "set by cice.setup" + "ICE_BASECOM", " ", 
"baseline directory name for regression comparison, associated with cice.setup -bcmp ", "set by cice.setup" + "ICE_BFBCOMP", " ", "location of case for comparison, associated with cice.setup -td", "set by cice.setup" + "ICE_SPVAL", " ", "special value for cice.settings strings", "set by cice.setup" + "ICE_RUNLENGTH", " ", "batch run length default", "set by cice.setup" + "ICE_ACCOUNT", " ", "batch account number", "set by cice.setup or by default" + "ICE_THREADED", "true,false", "force threading in compile, will always compile threaded if NTHRDS is gt 1", "false" + "NICELYR", "integer", "number of vertical layers in the ice", "7" + "NSNWLYR", "integer", "number of vertical layers in the snow", "1" + "NICECAT", "integer", "number of ice thickness categories", "5" + "TRAGE", "0,1", "ice age tracer", "1" + "TRFY", "0,1", "first-year ice area tracer", "1" + "TRLVL", "0,1", "deformed ice tracer", "1" + "TRPND", "0,1", "melt pond tracer", "1" + "NTRAERO", "integer", "number of aerosol tracers", "1" + "TRBRI", "0,1", "brine height tracer", "0" + "TRZS", "0,1", "zsalinity tracer, needs TRBRI=1", "0" + "TRBGCS", "0,1", "skeletal layer tracer, needs TRBGCZ=0", "0" + "TRBGCZ", "0,1", "zbgc tracers, needs TRBGCS=0 and TRBRI=1", "0" + "NBGCLYR", "integer", "number of zbgc layers", "7" + "TRZAERO", "0-6", "number of z aerosol tracers", "0" + "TRALG", "0,1,2,3", "number of algal tracers", "0" + "TRDOC", "0,1,2,3", "number of dissolved organic carbon tracers", "0" + "TRDIC", "0,1", "number of dissolved inorganic carbon tracers", "0" + "TRDON", "0,1", "number of dissolved organic nitrogen tracers", "0" + "TRFEP", "0,1,2", "number of particulate iron tracers", "0" + "TRFED", "0,1,2", "number of dissolved iron tracers", "0" + "CAM_ICE", " ", "unused", "no" + "DITTO", "no,yes", "turn on bit-for-bit global sums via real16", "no" + "BARRIERS", "no,yes", "turn on barriers between global scatters and gathers", "no" + "ICE_BLDDEBUG", "true,false", "turn on compile debug flags", "false" + "NUMIN", "integer", 
"smallest unit number assigned to CICE files", "11" + "NUMAX", "integer", "largest unit number assigned to CICE files", "99" + + + +.. _tabnamelist: + + +Table of namelist options +------------------------------- + +.. csv-table:: Table 8 + :header: "variable", "options/format", "description", "recommended value" + :widths: 15, 15, 30, 15 + + "*setup_nml*", "", "", "" + "", "", "*Time, Diagnostics*", "" + "``days_per_year``", "``360`` or ``365``", "number of days in a model year", "365" + "``use_leap_years``", "true/false", "if true, include leap days", "" + "``year_init``", "yyyy", "the initial year, if not using restart", "" + "``istep0``", "integer", "initial time step number", "0" + "``dt``", "seconds", "thermodynamics time step length", "3600." + "``npt``", "integer", "total number of time steps to take", "" + "``ndtd``", "integer", "number of dynamics/advection/ridging steps per thermo timestep", "1" + "", "", "*Initialization/Restarting*", "" + "``runtype``", "``initial``", "start from ``ice_ic``", "" + "", "``continue``", "restart using ``pointer_file``", "" + "``ice_ic``", "``default``", "latitude and sst dependent", "default" + "", "``none``", "no ice", "" + "", "path/file", "restart file name", "" + "``restart``", "true/false", "initialize using restart file", "``.true.``" + "``use_restart_time``", "true/false", "set initial date using restart file", "``.true.``" + "``restart_format``", "nc", "read/write netCDF restart files (use with PIO)", "" + "", "bin", "read/write binary restart files", "" + "``lcdf64``", "true/false", "if true, use 64-bit netCDF format", "" + "``restart_dir``", "path/", "path to restart directory", "" + "``restart_ext``", "true/false", "read/write halo cells in restart files", "" + "``restart_file``", "filename prefix", "output file for restart dump", "‘iced’" + "``pointer_file``", "pointer filename", "contains restart filename", "" + "``dumpfreq``", "``y``", "write restart every ``dumpfreq_n`` years", "y" + "", "``m``", "write restart every 
``dumpfreq_n`` months", "" + "", "``d``", "write restart every ``dumpfreq_n`` days", "" + "``dumpfreq_n``", "integer", "frequency restart data is written", "1" + "``dump_last``", "true/false", "if true, write restart on last time step of simulation", "" + "", "", "*Model Output*", "" + "``bfbflag``", "true/false", "for bit-for-bit diagnostic output", "" + "``diagfreq``", "integer", "frequency of diagnostic output in ``dt``", "24" + "", "*e.g.*, 10", "once every 10 time steps", "" + "``diag_type``", "``stdout``", "write diagnostic output to stdout", "" + "", "``file``", "write diagnostic output to file", "" + "``diag_file``", "filename", "diagnostic output file (script may reset)", "" + "``print_global``", "true/false", "print diagnostic data, global sums", "``.false.``" + "``print_points``", "true/false", "print diagnostic data for two grid points", "``.false.``" + "``latpnt``", "real", "latitude of (2) diagnostic points", "" + "``lonpnt``", "real", "longitude of (2) diagnostic points", "" + "``dbug``", "true/false", "if true, write extra diagnostics", "``.false.``" + "``histfreq``", "string array", "defines output frequencies", "" + "", "``y``", "write history every ``histfreq_n`` years", "" + "", "``m``", "write history every ``histfreq_n`` months", "" + "", "``d``", "write history every ``histfreq_n`` days", "" + "", "``h``", "write history every ``histfreq_n`` hours", "" + "", "``1``", "write history every time step", "" + "", "``x``", "unused frequency stream (not written)", "" + "``histfreq_n``", "integer array", "frequency history output is written", "" + "", "0", "do not write to history", "" + "``hist_avg``", "true", "write time-averaged data", "``.true.``" + "", "false", "write snapshots of data", "" + "``history_dir``", "path/", "path to history output directory", "" + "``history_file``", "filename prefix", "output file for history", "‘iceh’" + "``write_ic``", "true/false", "write initial condition", "" + "``incond_dir``", "path/", "path to initial 
condition directory", "" + "``incond_file``", "filename prefix", "output file for initial condition", "‘iceh’" + "``runid``", "string", "label for run (currently CESM only)", "" + "", "", "", "" + "*grid_nml*", "", "", "" + "", "", "*Grid*", "" + "``grid_format``", "``nc``", "read netCDF grid and kmt files", "‘bin’" + "", "``bin``", "read direct access, binary file", "" + "``grid_type``", "``rectangular``", "defined in *rectgrid*", "" + "", "``displaced_pole``", "read from file in *popgrid*", "" + "", "``tripole``", "read from file in *popgrid*", "" + "", "``regional``", "read from file in *popgrid*", "" + "``grid_file``", "filename", "name of grid file to be read", "‘grid’" + "``kmt_file``", "filename", "name of land mask file to be read", "‘kmt’" + "``gridcpl_file``", "filename", "input file for coupling grid info", "" + "``kcatbound``", "``0``", "original category boundary formula", "0" + "", "``1``", "new formula with round numbers", "" + "", "``2``", "WMO standard categories", "" + "", "``-1``", "one category", "" + "", "", "", "" + "*domain_nml*", "", "", "" + "", "", "*Domain*", "" + "``nprocs``", "integer", "number of processors to use", "" + "``processor_shape``", "``slenderX1``", "1 processor in the y direction (tall, thin)", "" + "", "``slenderX2``", "2 processors in the y direction (thin)", "" + "", "``square-ice``", "more processors in x than y, :math:`\sim` square", "" + "", "``square-pop``", "more processors in y than x, :math:`\sim` square", "" + "``distribution_type``", "``cartesian``", "distribute blocks in 2D Cartesian array", "" + "", "``roundrobin``", "1 block per proc until blocks are used", "" + "", "``sectcart``", "blocks distributed to domain quadrants", "" + "", "``sectrobin``", "several blocks per proc until used", "" + "", "``rake``", "redistribute blocks among neighbors", "" + "", "``spacecurve``", "distribute blocks via space-filling curves", "" + "``distribution_weight``", "``block``", "full block size sets ``work_per_block``", "" + "", 
"``latitude``", "latitude/ocean sets ``work_per_block``", "" + "``ew_boundary_type``", "``cyclic``", "periodic boundary conditions in x-direction", "" + "", "``open``", "Dirichlet boundary conditions in x", "" + "``ns_boundary_type``", "``cyclic``", "periodic boundary conditions in y-direction", "" + "", "``open``", "Dirichlet boundary conditions in y", "" + "", "``tripole``", "U-fold tripole boundary conditions in y", "" + "", "``tripoleT``", "T-fold tripole boundary conditions in y", "" + "``maskhalo_dyn``", "true/false", "mask unused halo cells for dynamics", "" + "``maskhalo_remap``", "true/false", "mask unused halo cells for transport", "" + "``maskhalo_bound``", "true/false", "mask unused halo cells for boundary updates", "" + "", "", "", "" + "*tracer_nml*", "", "", "" + "", "", "*Tracers*", "" + "``tr_iage``", "true/false", "ice age", "" + "``restart_age``", "true/false", "restart tracer values from file", "" + "``tr_FY``", "true/false", "first-year ice area", "" + "``restart_FY``", "true/false", "restart tracer values from file", "" + "``tr_lvl``", "true/false", "level ice area and volume", "" + "``restart_lvl``", "true/false", "restart tracer values from file", "" + "``tr_pond_cesm``", "true/false", "CESM melt ponds", "" + "``restart_pond_cesm``", "true/false", "restart tracer values from file", "" + "``tr_pond_topo``", "true/false", "topo melt ponds", "" + "``restart_pond_topo``", "true/false", "restart tracer values from file", "" + "``tr_pond_lvl``", "true/false", "level-ice melt ponds", "" + "``restart_pond_lvl``", "true/false", "restart tracer values from file", "" + "``tr_aero``", "true/false", "aerosols", "" + "``restart_aero``", "true/false", "restart tracer values from file", "" + "*thermo_nml*", "", "", "" + "", "", "*Thermodynamics*", "" + "``kitd``", "``0``", "delta function ITD approximation", "1" + "", "``1``", "linear remapping ITD approximation", "" + "``ktherm``", "``0``", "zero-layer thermodynamic model", "" + "", "``1``", "Bitz and 
Lipscomb thermodynamic model", "" + "", "``2``", "mushy-layer thermodynamic model", "" + "``conduct``", "``MU71``", "conductivity :cite:`MU71`", "" + "", "``bubbly``", "conductivity :cite:`PETB07`", "" + "``a_rapid_mode``", "real", "brine channel diameter", "0.5x10 :math:`^{-3}` m" + "``Rac_rapid_mode``", "real", "critical Rayleigh number", "10" + "``aspect_rapid_mode``", "real", "brine convection aspect ratio", "1" + "``dSdt_slow_mode``", "real", "drainage strength parameter", "-1.5x10 :math:`^{-7}` m/s/K" + "``phi_c_slow_mode``", ":math:`0<\phi_c < 1`", "critical liquid fraction", "0.05" + "``phi_i_mushy``", ":math:`0<\phi_i < 1`", "solid fraction at lower boundary", "0.85" + "", "", "", "" + "*dynamics_nml*", "", "", "" + "", "", "*Dynamics*", "" + "``kdyn``", "``0``", "dynamics OFF", "1" + "", "``1``", "EVP dynamics", "" + "", "``2``", "EAP dynamics", "" + "``revised_evp``", "true/false", "use revised EVP formulation", "" + "``ndte``", "integer", "number of EVP subcycles", "120" + "``advection``", "``remap``", "linear remapping advection", "‘remap’" + "", "``upwind``", "donor cell advection", "" + "``kstrength``", "``0``", "ice strength formulation :cite:`Hibler79`", "1" + "", "``1``", "ice strength formulation :cite:`Rothrock75`", "" + "``krdg_partic``", "``0``", "old ridging participation function", "1" + "", "``1``", "new ridging participation function", "" + "``krdg_redist``", "``0``", "old ridging redistribution function", "1" + "", "``1``", "new ridging redistribution function", "" + "``mu_rdg``", "real", "e-folding scale of ridged ice", "" + "``Cf``", "real", "ratio of ridging work to PE change in ridging", "17." 
+ "", "", "", "" + "*shortwave_nml*", "", "", "" + "", "", "*Shortwave*", "" + "``shortwave``", "``default``", "NCAR CCSM3 distribution method", "" + "", "``dEdd``", "Delta-Eddington method", "" + "``albedo_type``", "``default``", "NCAR CCSM3 albedos", "‘default’" + "", "``constant``", "four constant albedos", "" + "``albicev``", ":math:`0<\alpha <1`", "visible ice albedo for thicker ice", "" + "``albicei``", ":math:`0<\alpha <1`", "near infrared ice albedo for thicker ice", "" + "``albsnowv``", ":math:`0<\alpha <1`", "visible, cold snow albedo", "" + "``albsnowi``", ":math:`0<\alpha <1`", "near infrared, cold snow albedo", "" + "``ahmax``", "real", "albedo is constant above this thickness", "0.3 m" + "``R_ice``", "real", "tuning parameter for sea ice albedo from Delta-Eddington shortwave", "" + "``R_pnd``", "real", "... for ponded sea ice albedo …", "" + "``R_snw``", "real", "... for snow (broadband albedo) …", "" + "``dT_mlt``", "real", ":math:`\Delta` temperature per :math:`\Delta` snow grain radius", "" + "``rsnw_mlt``", "real", "maximum melting snow grain radius", "" + "``kalg``", "real", "absorption coefficient for algae", "" + "", "", "", "" + "*ponds_nml*", "", "", "" + "", "", "*Melt Ponds*", "" + "``hp1``", "real", "critical ice lid thickness for topo ponds", "0.01 m" + "``hs0``", "real", "snow depth of transition to bare sea ice", "0.03 m" + "``hs1``", "real", "snow depth of transition to pond ice", "0.03 m" + "``dpscale``", "real", "time scale for flushing in permeable ice", ":math:`1\times 10^{-3}`" + "``frzpnd``", "``hlid``", "Stefan refreezing with pond ice thickness", "‘hlid’" + "", "``cesm``", "CESM refreezing empirical formula", "" + "``rfracmin``", ":math:`0 \le r_{min} \le 1`", "minimum melt water added to ponds", "0.15" + "``rfracmax``", ":math:`0 \le r_{max} \le 1`", "maximum melt water added to ponds", "1.0" + "``pndaspect``", "real", "aspect ratio of pond changes (depth:area)", "0.8" + "", "", "", "" + "*zbgc_nml*", "", "", "" + "", "", 
"*Biogeochemistry*", "" + "``tr_brine``", "true/false", "brine height tracer", "" + "``tr_zaero``", "true/false", "vertical aerosol tracers", "" + "``modal_aero``", "true/false", "modal aerosols", "" + "``restore_bgc``", "true/false", "restore bgc to data", "" + "``solve_zsal``", "true/false", "update salinity tracer profile", "" + "``bgc_data_dir``", "path/", "data directory for bgc", "" + "``skl_bgc``", "true/false", "biogeochemistry", "" + "``sil_data_type``", "``default``", "default forcing value for silicate", "" + "", "``clim``", "silicate forcing from ocean climatology :cite:`GLBA06`", "" + "``nit_data_type``", "``default``", "default forcing value for nitrate", "" + "", "``clim``", "nitrate forcing from ocean climatology :cite:`GLBA06`", "" + "", "``sss``", "nitrate forcing equals salinity", "" + "``fe_data_type``", "``default``", "default forcing value for iron", "" + "", "``clim``", "iron forcing from ocean climatology", "" + "``bgc_flux_type``", "``Jin2006``", "ice–ocean flux velocity of :cite:`JDWSTWLG06`", "" + "", "``constant``", "constant ice–ocean flux velocity", "" + "``restart_bgc``", "true/false", "restart tracer values from file", "" + "``tr_bgc_C_sk``", "true/false", "algal carbon tracer", "" + "``tr_bgc_chl_sk``", "true/false", "algal chlorophyll tracer", "" + "``tr_bgc_Am_sk``", "true/false", "ammonium tracer", "" + "``tr_bgc_Sil_sk``", "true/false", "silicate tracer", "" + "``tr_bgc_DMSPp_sk``", "true/false", "particulate DMSP tracer", "" + "``tr_bgc_DMSPd_sk``", "true/false", "dissolved DMSP tracer", "" + "``tr_bgc_DMS_sk``", "true/false", "DMS tracer", "" + "``phi_snow``", "real", "snow porosity for brine height tracer", "" + "", "", "", "" + "*forcing_nml*", "", "", "" + "", "", "*Forcing*", "" + "``formdrag``", "true/false", "calculate form drag", "" + "``atmbndy``", "``default``", "stability-based boundary layer", "‘default’" + "", "``constant``", "bulk transfer coefficients", "" + "``fyear_init``", "yyyy", "first year of atmospheric 
forcing data", "" + "``ycycle``", "integer", "number of years in forcing data cycle", "" + "``atm_data_format``", "``nc``", "read netCDF atmo forcing files", "" + "", "``bin``", "read direct access, binary files", "" + "``atm_data_type``", "``default``", "constant values defined in the code", "" + "", "``LYq``", "AOMIP/Large-Yeager forcing data", "" + "", "``monthly``", "monthly forcing data", "" + "", "``ncar``", "NCAR bulk forcing data", "" + "", "``oned``", "column forcing data", "" + "``atm_data_dir``", "path/", "path to atmospheric forcing data directory", "" + "``calc_strair``", "true", "calculate wind stress and speed", "" + "", "false", "read wind stress and speed from files", "" + "``highfreq``", "true/false", "high-frequency atmo coupling", "" + "``natmiter``", "integer", "number of atmo boundary layer iterations", "" + "``calc_Tsfc``", "true/false", "calculate surface temperature", "``.true.``" + "``precip_units``", "``mks``", "liquid precipitation data units", "" + "", "``mm_per_month``", "", "" + "", "``mm_per_sec``", "(same as MKS units)", "" + "``tfrz_option``", "``minus1p8``", "constant ocean freezing temperature (:math:`-1.8^{\circ} C`)", "" + "", "``linear_salt``", "linear function of salinity (ktherm=1)", "" + "", "``mushy_layer``", "matches mushy-layer thermo (ktherm=2)", "" + "``ustar_min``", "real", "minimum value of ocean friction velocity", "0.0005 m/s" + "``fbot_xfer_type``", "``constant``", "constant ocean heat transfer coefficient", "" + "", "``Cdn_ocn``", "variable ocean heat transfer coefficient", "" + "``update_ocn_f``", "true", "include frazil water/salt fluxes in ocn fluxes", "" + "", "false", "do not include (when coupling with POP)", "" + "``l_mpond_fresh``", "true", "retain (topo) pond water until ponds drain", "" + "", "false", "release (topo) pond water immediately to ocean", "" + "``oceanmixed_ice``", "true/false", "active ocean mixed layer calculation", "``.true.`` (if uncoupled)" + "``ocn_data_format``", "``nc``", "read netCDF ocean 
forcing files", "" + "", "``bin``", "read direct access, binary files", "" + "``sss_data_type``", "``default``", "constant values defined in the code", "" + "", "``clim``", "climatological data", "" + "", "``ncar``", "POP ocean forcing data", "" + "``sst_data_type``", "``default``", "constant values defined in the code", "" + "", "``clim``", "climatological data", "" + "", "``ncar``", "POP ocean forcing data", "" + "``ocn_data_dir``", "path/", "path to oceanic forcing data directory", "" + "``oceanmixed_file``", "filename", "data file containing ocean forcing data", "" + "``restore_sst``", "true/false", "restore sst to data", "" + "``trestore``", "integer", "sst restoring time scale (days)", "" + "``restore_ice``", "true/false", "restore ice state along lateral boundaries", "" + "", "", "", "" + "*icefields_tracer_nml*", "", "", "" + "", "", "*History Fields*", "" + "``f_<var>``", "string", "frequency units for writing ``<var>`` to history", "" + "", "``y``", "write history every ``histfreq_n`` years", "" + "", "``m``", "write history every ``histfreq_n`` months", "" + "", "``d``", "write history every ``histfreq_n`` days", "" + "", "``h``", "write history every ``histfreq_n`` hours", "" + "", "``1``", "write history every time step", "" + "", "``x``", "do not write ``<var>`` to history", "" + "", "``md``", "*e.g.,* write both monthly and daily files", "" + "``f_<var>_ai``", "", "grid cell average of ``<var>`` (:math:`\times a_i`)", "" + + + +.. _tuning: + +BGC Tuning Parameters +------------------------ + +Biogeochemical tuning parameters are specified as namelist options in +**ice\_in**. Table :ref:`tab-bio-tracers2` provides a list of parameters +used in the reaction equations, their representation in the code, a +short description of each and the default values. Please keep in mind +that there has only been minimal tuning of the model. + +.. _tab-bio-tracers2: + +.. 
csv-table:: *Biogeochemical Reaction Parameters* + :header: "Text Variable", "Variable in code", "Description", "Value", "units" + :widths: 7, 20, 15, 15, 15 + + ":math:`f_{graze}`", "fr\_graze(1:3)", "fraction of growth grazed", "0, 0.1, 0.1", "1" + ":math:`f_{res}`", "fr\_resp", "fraction of growth respired", "0.05", "1" + ":math:`l_{max}`", "max\_loss", "maximum tracer loss fraction", "0.9", "1" + ":math:`m_{pre}`", "mort\_pre(1:3)", "maximum mortality rate", "0.007, 0.007, 0.007", "day\ :math:`^{-1}`" + ":math:`m_{T}`", "mort\_Tdep(1:3)", "mortality temperature decay", "0.03, 0.03, 0.03", ":math:`^o`\ C\ :math:`^{-1}`" + ":math:`T_{max}`", "T\_max", "maximum brine temperature", "0", ":math:`^o`\ C" + ":math:`k_{nitr}`", "k\_nitrif", "nitrification rate", "0", "day\ :math:`^{-1}`" + ":math:`f_{ng}`", "fr\_graze\_e", "fraction of grazing excreted", "0.5", "1" + ":math:`f_{gs}`", "fr\_graze\_s", "fraction of grazing spilled", "0.5", "1" + ":math:`f_{nm}`", "fr\_mort2min", "fraction of mortality to :math:`{\mbox{NH$_4$}}`", "0.5", "1" + ":math:`f_{dg}`", "f\_don", "frac. spilled grazing to :math:`{\mbox{DON}}`", "0.6", "1" + ":math:`k_{nb}`", "kn\_bac :math:`^a`", "bacterial degradation of :math:`{\mbox{DON}}`", "0.03", "day\ :math:`^{-1}`" + ":math:`f_{cg}`", "f\_doc(1:3)", "fraction of mortality to :math:`{\mbox{DOC}}`", "0.4, 0.4, 0.2", "1" + ":math:`R_{c:n}^c`", "R\_C2N(1:3)", "algal carbon to nitrogen ratio", "7.0, 7.0, 7.0", "mol/mol" + ":math:`k_{cb}`", "k\_bac(1:3)\ :math:`^a`", "bacterial degradation of DOC", "0.03, 0.03, 0.03", "day\ :math:`^{-1}`" + ":math:`\tau_{fe}`", "t\_iron\_conv", "conversion time pFe :math:`\leftrightarrow` dFe", "3065.0", "day" + ":math:`r^{max}_{fed:doc}`", "max\_dfe\_doc1", "max ratio of dFe to saccharides", "0.1852", "nM Fe\ :math:`/\mu`\ M C" + ":math:`f_{fa}`", "fr\_dFe", "fraction of remin. 
N to dFe", "0.3", "1" + ":math:`R_{fe:n}`", "R\_Fe2N(1:3)", "algal Fe to N ratio", "0.023, 0.023, 0.7", "mmol/mol" + ":math:`R_{s:n}`", "R\_S2N(1:3)", "algal S to N ratio", "0.03, 0.03, 0.03", "mol/mol" + ":math:`f_{sr}`", "fr\_resp\_s", "resp. loss as DMSPd", "0.75", "1" + ":math:`\tau_{dmsp}`", "t\_sk\_conv", "Stefels rate", "3.0", "day" + ":math:`\tau_{dms}`", "t\_sk\_ox", "DMS oxidation rate", "10.0", "day" + ":math:`y_{dms}`", "y\_sk\_DMS", "yield for DMS conversion", "0.5", "1" + ":math:`K_{{\mbox{NO$_3$}}}`", "K\_Nit(1:3)", ":math:`{\mbox{NO$_3$}}` half saturation constant", "1,1,1", "mmol/m\ :math:`^{3}`" + ":math:`K_{{\mbox{NH$_4$}}}`", "K\_Am(1:3)", ":math:`{\mbox{NH$_4$}}` half saturation constant", "0.3, 0.3, 0.3", "mmol/m\ :math:`^{3}`" + ":math:`K_{{\mbox{SiO$_3$}}}`", "K\_Sil(1:3)", "silicate half saturation constant", "4.0, 0, 0", "mmol/m\ :math:`^{3}`" + ":math:`K_{{\mbox{fed}}}`", "K\_Fe(1:3)", "iron half saturation constant", "1.0, 0.2, 0.1", ":math:`\mu`\ mol/m\ :math:`^{3}`" + ":math:`op_{min}`", "op\_dep\_min", "boundary for light attenuation", "0.1", "1" + ":math:`chlabs`", "chlabs(1:3)", "light absorption length per chla conc.", "0.03, 0.01, 0.05", "1\ :math:`/`\ m\ :math:`/`\ (mg\ :math:`/`\ m\ :math:`^{3}`)" + ":math:`\alpha`", "alpha2max\_low(1:3)", "light limitation factor", "0.25, 0.25, 0.25", "m\ :math:`^2`/W" + ":math:`\beta`", "beta2max(1:3)", "light inhibition factor", "0.018, 0.0025, 0.01", "m\ :math:`^2`/W" + ":math:`\mu_{max}`", "mu\_max(1:3)", "maximum algal growth rate", "1.44, 0.851, 0.851", "day\ :math:`^{-1}`" + ":math:`\mu_T`", "grow\_Tdep(1:3)", "temperature growth factor", "0.06, 0.06, 0.06", "day\ :math:`^{-1}`" + ":math:`f_{sal}`", "fsal", "salinity growth factor", "1", "1" + ":math:`R_{si:n}`", "R\_Si2N(1:3)", "algal silicate to nitrogen ratio", "1.8, 0, 0", "mol/mol" + +:math:`^a` only (1:2) of the DOC and DON parameters have physical meaning diff --git a/doc/source/user_guide/ug_implementation.rst 
b/doc/source/user_guide/ug_implementation.rst new file mode 100644 index 000000000..113794a52 --- /dev/null +++ b/doc/source/user_guide/ug_implementation.rst @@ -0,0 +1,936 @@ +:tocdepth: 3 + + +Implementation +======================== + +CICE is written in FORTRAN90 and runs on platforms using UNIX, LINUX, +and other operating systems. The code is based on a two-dimensional +horizontal orthogonal grid that is broken into two-dimensional horizontal +blocks and parallelized over blocks +with MPI and OpenMP threads. The code also includes some optimizations +for vector architectures. + +CICE consists of source code under the **cicecore/** directory that supports +model dynamics and top-level control. The column physics source code is +under the Icepack directory, which is implemented as a git submodule +pulled in from a separate repository (Icepack). +There is also a **configuration/** directory that includes scripts +for configuring CICE cases. + +.. _coupling: + +.. _dirstructure: + +~~~~~~~~~~~~~~~~~~~ +Directory structure +~~~~~~~~~~~~~~~~~~~ + +The present code distribution includes source code and scripts. Forcing +data is available from the ftp site. The directory structure of CICE is +as follows: + +**LICENSE.pdf** + license and policy for using and sharing the code + +**DistributionPolicy.pdf** + license and policy for using and sharing the code + +**README.md** + basic information and pointers + +**icepack/** + subdirectory for the Icepack model. The Icepack subdirectory includes Icepack-specific scripts, drivers, and documentation. CICE only uses the columnphysics source code under **icepack/columnphysics/**. + +**cicecore/** + directory for CICE source code. + +**cicecore/cicedynB/** + directory for routines associated with the dynamics core. + +**cicecore/driver/** + directory for top level CICE drivers and coupling layers. + +**cicecore/shared/** + directory for CICE source code that is independent of the dynamical core. 
+ +**cicecore/version.txt** + file that indicates the CICE model version. + +**configuration/scripts/** + directory of support scripts, see :ref:`dev_scripts` + +**doc/** + documentation + +**cice.setup** + main CICE script for creating cases + +A case (compile) directory is created upon initial execution of the script +**cice.setup** at the user-specified location provided after the -c flag. +Executing the command ``./cice.setup -h`` provides helpful information for +this tool. + + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Grid, boundary conditions and masks +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The spatial discretization is specialized for a generalized orthogonal +B-grid as in :cite:`Murray96` or +:cite:`SKM95`. The ice and snow area, volume and energy are +given at the center of the cell, velocity is defined at the corners, and +the internal ice stress tensor takes four different values within a grid +cell; bilinear approximations are used for the stress tensor and the ice +velocity across the cell, as described in :cite:`HD02`. +This tends to avoid the grid decoupling problems associated with the +B-grid. EVP is available on the C-grid through the MITgcm code +distribution, http://mitgcm.org/viewvc/MITgcm/MITgcm/pkg/seaice/. + +Since ice thickness and thermodynamic variables such as temperature are given +in the center of each cell, the grid cells are referred to as “T cells.” +We also occasionally refer to “U cells,” which are centered on the +northeast corner of the corresponding T cells and have velocity in the +center of each. The velocity components are aligned along grid lines. + +The user has several choices of grid routines: *popgrid* reads grid +lengths and other parameters for a nonuniform grid (including tripole +and regional grids), and *rectgrid* creates a regular rectangular grid, +including that used for the column configuration. 
The input files +**global\_gx3.grid** and **global\_gx3.kmt** contain the +:math:`\left<3^\circ\right>` POP grid and land mask; +**global\_gx1.grid** and **global\_gx1.kmt** contain the +:math:`\left<1^\circ\right>` grid and land mask. These are binary +unformatted, direct access files produced on an SGI (Big Endian). If you +are using an incompatible (Little Endian) architecture, choose +`rectangular` instead of `displaced\_pole` in **ice\_in**, or follow +procedures as for conejo +(:math:`\langle`\ **OS**\ :math:`\rangle.\langle`\ **SITE**\ :math:`\rangle.\langle`\ **machine**\ :math:`\rangle` += Linux.LANL.conejo). There are netCDF versions of the gx3 grid files +available. + +In CESM, the sea ice model may exchange coupling fluxes using a +different grid than the computational grid. This functionality is +activated using the namelist variable `gridcpl\_file`. + +*********************** +Grid domains and blocks +*********************** + +In general, the global gridded domain is +`nx\_global` :math:`\times`\ `ny\_global`, while the subdomains used in the +block distribution are `nx\_block` :math:`\times`\ `ny\_block`. The +physical portion of a subdomain is indexed as [`ilo:ihi`, `jlo:jhi`], with +nghost “ghost” or “halo” cells outside the domain used for boundary +conditions. These parameters are illustrated in :ref:`fig-grid` in one +dimension. The routines *global\_scatter* and *global\_gather* +distribute information from the global domain to the local domains and +back, respectively. If MPI is not being used for grid decomposition in +the ice model, these routines simply adjust the indexing on the global +domain to the single, local domain index coordinates. Although we +recommend that the user choose the local domains so that the global +domain is evenly divided, if this is not possible then the furthest east +and/or north blocks will contain nonphysical points (“padding”). 
These +points are excluded from the computation domain and have little effect +on model performance. + +.. _fig-grid: + +.. figure:: ./figures/grid.png + :align: center + :scale: 20% + + Figure 8 + +:ref:`fig-grid` : Grid parameters for a sample one-dimensional, 20-cell +global domain decomposed into four local subdomains. Each local +domain has one ghost (halo) cell on each side, and the physical +portion of the local domains are labeled `ilo:ihi`. The parameter +`nx\_block` is the total number of cells in the local domain, including +ghost cells, and the same numbering system is applied to each of the +four subdomains. + +The user chooses a block size `BLCKX` :math:`\times`\ `BLCKY` and the +number of processors `NTASK` in **comp\_ice**. Parameters in the +*domain\_nml* namelist in **ice\_in** determine how the blocks are +distributed across the processors, and how the processors are +distributed across the grid domain. Recommended combinations of these +parameters for best performance are given in Section :ref:`performance`. +The script **comp\_ice** computes the maximum number of blocks on each +processor for typical Cartesian distributions, but for non-Cartesian +cases `MXBLCKS` may need to be set in the script. The code will print this +information to the log file before aborting, and the user will need to +adjust `MXBLCKS` in **comp\_ice** and recompile. The code will also print +a warning if the maximum number of blocks is too large. Although this is +not fatal, it does require excess memory. + +A loop at the end of routine *create\_blocks* in module +**ice\_blocks.F90** will print the locations for all of the blocks on +the global grid if dbug is set to be true. Likewise, a similar loop at +the end of routine *create\_local\_block\_ids* in module +**ice\_distribution.F90** will print the processor and local block +number for each block. With this information, the grid decomposition +into processors and blocks can be ascertained. 
The dbug flag must be +manually set in the code in each case (independently of the dbug flag in +**ice\_in**), as there may be hundreds or thousands of blocks to print +and this information should be needed only rarely. This information is +much easier to look at using a debugger such as Totalview. + +Alternatively, a new variable is provided in the history files, `blkmask`, +which labels the blocks in the grid decomposition according to `blkmask` = +`my\_task` + `iblk/100`. + +************* +Tripole grids +************* + +The tripole grid is a device for constructing a global grid with a +normal south pole and southern boundary condition, which avoids placing +a physical boundary or grid singularity in the Arctic Ocean. Instead of +a single north pole, it has two “poles” in the north, both located on +land, with a line of grid points between them. This line of points is +called the “fold,” and it is the “top row” of the physical grid. One +pole is at the left-hand end of the top row, and the other is in the +middle of the row. The grid is constructed by “folding” the top row, so +that the left-hand half and the right-hand half of it coincide. Two +choices for constructing the tripole grid are available. The one first +introduced to CICE is called “U-fold”, which means that the poles and +the grid cells between them are U cells on the grid. Alternatively the +poles and the cells between them can be grid T cells, making a “T-fold.” +Both of these options are also supported by the OPA/NEMO ocean model, +which calls the U-fold an “f-fold” (because it uses the Arakawa C-grid +in which U cells are on T-rows). The choice of tripole grid is given by +the namelist variable `ns\_boundary\_type`, ‘tripole’ for the U-fold and +‘tripoleT’ for the T-fold grid. + +In the U-fold tripole grid, the poles have U-index +:math:`{\tt nx\_global}/2` and `nx\_global` on the top U-row of the +physical grid, and points with U-index i and :math:`{\tt nx\_global-i}` +are coincident. 
Let the fold have U-row index :math:`n` on the global
+grid; this will also be the T-row index of the T-row to the south of the
+fold. There are ghost (halo) T- and U-rows to the north, beyond the
+fold, on the logical grid. The point with index i along the ghost T-row
+of index :math:`n+1` physically coincides with point
+:math:`{\tt nx\_global}-{\tt i}+1` on the T-row of index :math:`n`. The
+ghost U-row of index :math:`n+1` physically coincides with the U-row of
+index :math:`n-1`.
+
+In the T-fold tripole grid, the poles have T-index 1 and
+:math:`{\tt nx\_global}/2+1` on the top T-row of the physical grid, and
+points with T-index i and :math:`{\tt nx\_global}-{\tt i}+2` are
+coincident. Let the fold have T-row index :math:`n` on the global grid.
+It is usual for the northernmost row of the physical domain to be a
+U-row, but in the case of the T-fold, the U-row of index :math:`n` is
+“beyond” the fold; although it is not a ghost row, it is not physically
+independent, because it coincides with U-row :math:`n-1`, and it
+therefore has to be treated like a ghost row. Point i on U-row
+:math:`n` coincides with point :math:`{\tt nx\_global}-{\tt i}+1` on U-row
+:math:`n-1`. There are still ghost T- and U-rows :math:`n+1` to the
+north of U-row :math:`n`. Ghost T-row :math:`n+1` coincides with T-row
+:math:`n-1`, and ghost U-row :math:`n+1` coincides with U-row
+:math:`n-2`.
+
+The tripole grid thus requires two special kinds of treatment for
+certain rows, arranged by the halo-update routines. First, within rows
+along the fold, coincident points must always have the same value. This
+is achieved by averaging them in pairs. Second, values for ghost rows
+and the “quasi-ghost” U-row on the T-fold grid are reflected copies of
+the coincident physical rows. Both operations involve the tripole
+buffer, which is used to assemble the data for the affected rows.
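
The top-row coincidence rules above can be checked with a small sketch (1-based indices as in Fortran; these index maps are illustrative, not CICE code). The poles appear as the fixed points of each mapping, with the second pole wrapping around at the row's end:

```python
def ufold_pair(i, nx_global):
    """U-fold top row: U-index i coincides with nx_global - i."""
    return nx_global - i

def tfold_pair(i, nx_global):
    """T-fold top row: T-index i coincides with nx_global - i + 2."""
    return nx_global - i + 2

nx = 8  # toy global size
# The mid-row poles are self-coincident points of the mappings:
print(ufold_pair(nx // 2, nx))      # 4, i.e. nx/2, the mid-row U-fold pole
print(tfold_pair(nx // 2 + 1, nx))  # 5, i.e. nx/2 + 1, the mid-row T-fold pole
# Each map is its own inverse, so averaging coincident pairs is symmetric:
print(ufold_pair(ufold_pair(3, nx), nx))   # 3
```
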
+
+Special treatment is also required in the scattering routine, and when
+computing global sums one of each pair of coincident points has to be
+excluded.
+
+.. _bio-grid:
+
+********
+Bio-grid
+********
+
+The bio-grid is a vertical grid used for solving the brine height
+variable :math:`h_b`. In the future, it will also be used for
+discretizing the vertical transport equations of biogeochemical tracers.
+The bio-grid is a non-dimensional vertical grid which takes the value
+zero at :math:`h_b` and one at the ice–ocean interface. The number of
+grid levels is specified during compilation in **comp\_ice** by setting
+the variable `NBGCLYR` equal to an integer (:math:`n_b`).
+
+Ice tracers and microstructural properties defined on the bio-grid are
+referenced in two ways: as `bgrid` :math:`=n_b+2` points and as
+`igrid` :math:`=n_b+1` points. For both bgrid and igrid, the first and
+last points reference :math:`h_b` and the ice–ocean interface,
+respectively, and so take the values :math:`0` and :math:`1`,
+respectively. For bgrid, the interior points :math:`[2, n_b+1]` are
+spaced at :math:`1/n_b` intervals beginning with
+`bgrid(2)` :math:`= 1/(2n_b)`. The `igrid` interior points :math:`[2, n_b]` are also
+equidistant with the same spacing, but physically coincide with points
+midway between those of `bgrid`.
+
+********************
+Column configuration
+********************
+
+A column modeling capability is available. Because of the boundary
+conditions and other spatial assumptions in the model, this is not a
+single column, but a small array of columns (minimum grid size is 5x5).
+However, the code is set up so that only the single, central column is
+used (all other columns are designated as land). The column is located
+near Barrow (71.35N, 156.5W). Options for choosing the column
+configuration are given in **comp\_ice** (choose `RES col`) and in the
+namelist file, **input\_templates/col/ice\_in**.
Here, `istep0` and the
+initial conditions are set such that the run begins September 1 with no
+ice. The grid type is rectangular, dynamics are turned off (`kdyn` = 0) and
+one processor is used.
+
+History variables available for column output are ice and snow
+temperature, `Tinz` and `Tsnz`. These variables also include thickness
+category as a fourth dimension.
+
+*******************
+Boundary conditions
+*******************
+
+Much of the infrastructure used in CICE, including the boundary
+routines, is adopted from POP. The boundary routines perform boundary
+communications among processors when MPI is in use and among blocks
+whenever there is more than one block per processor.
+
+Open/cyclic boundary conditions are the default in CICE; the physical
+domain can still be closed using the land mask. In our bipolar,
+displaced-pole grids, one row of grid cells along the north and south
+boundaries is located on land, and along east/west domain boundaries not
+masked by land, periodic conditions wrap the domain around the globe.
+CICE can be run on regional grids with open boundary conditions; except
+for variables describing grid lengths, non-land halo cells along the
+grid edge must be filled by restoring them to specified values. The
+namelist variable `restore\_ice` turns this functionality on and off; the
+restoring timescale `trestore` may be used (it is also used for restoring
+ocean sea surface temperature in stand-alone ice runs). This
+implementation is only intended to provide the “hooks” for a more
+sophisticated treatment; the rectangular grid option can be used to test
+this configuration. The ‘displaced\_pole’ grid option should not be used
+unless the regional grid contains land all along the north and south
+boundaries. The current form of the boundary condition routines does not
+allow Neumann boundary conditions, which must be set explicitly. This
+has been done in an unreleased branch of the code; contact Elizabeth for
+more information.
+ +For exact restarts using restoring, set `restart\_ext` = true in namelist +to use the extended-grid subroutines. + +On tripole grids, the order of operations used for calculating elements +of the stress tensor can differ on either side of the fold, leading to +round-off differences. Although restarts using the extended grid +routines are exact for a given run, the solution will differ from +another run in which restarts are written at different times. For this +reason, explicit halo updates of the stress tensor are implemented for +the tripole grid, both within the dynamics calculation and for restarts. +This has not been implemented yet for tripoleT grids, pending further +testing. + +***** +Masks +***** + +A land mask hm (:math:`M_h`) is specified in the cell centers, with 0 +representing land and 1 representing ocean cells. A corresponding mask +uvm (:math:`M_u`) for velocity and other corner quantities is given by + +.. math:: + M_u(i,j)=\min\{M_h(l),\,l=(i,j),\,(i+1,j),\,(i,j+1),\,(i+1,j+1)\}. + +The logical masks `tmask` and `umask` (which correspond to the real masks +`hm` and `uvm`, respectively) are useful in conditional statements. + +In addition to the land masks, two other masks are implemented in +*evp\_prep* in order to reduce the dynamics component’s work on a global +grid. At each time step the logical masks `ice\_tmask` and `ice\_umask` are +determined from the current ice extent, such that they have the value +“true” wherever ice exists. They also include a border of cells around +the ice pack for numerical purposes. These masks are used in the +dynamics component to prevent unnecessary calculations on grid points +where there is no ice. They are not used in the thermodynamics +component, so that ice may form in previously ice-free cells. Like the +land masks `hm` and `uvm`, the ice extent masks `ice\_tmask` and `ice\_umask` +are for T cells and U cells, respectively. 
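
The corner-mask formula above translates directly into code. This is a plain-Python sketch on nested lists (a hypothetical helper with 0-based indexing, not the model's Fortran):

```python
def uvm_from_hm(hm):
    """Corner (U-cell) mask from the center (T-cell) land mask, following
    M_u(i,j) = min of M_h at (i,j), (i+1,j), (i,j+1), (i+1,j+1)."""
    ni, nj = len(hm), len(hm[0])
    return [[min(hm[i][j], hm[i + 1][j], hm[i][j + 1], hm[i + 1][j + 1])
             for j in range(nj - 1)] for i in range(ni - 1)]

hm = [[1, 1, 0],      # 0 = land, 1 = ocean
      [1, 1, 1],
      [1, 1, 1]]
print(uvm_from_hm(hm))   # [[1, 0], [1, 1]]
```

Any corner touching a land cell is masked out, which is why velocities next to land are zeroed even though the adjacent T cells may be ocean.
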
+
+Improved parallel performance may result from utilizing halo masks for
+boundary updates of the full ice state, incremental remapping transport,
+or for EVP or EAP dynamics. These options are accessed through the
+logical namelist flags `maskhalo\_bound`, `maskhalo\_remap`, and
+`maskhalo\_dyn`, respectively. Only the halo cells containing needed
+information are communicated.
+
+Two additional masks are created for the user’s convenience: `lmask\_n`
+and `lmask\_s` can be used to compute or write data only for the northern
+or southern hemispheres, respectively. Special constants (`spval` and
+`spval\_dbl`, each equal to :math:`10^{30}`) are used to indicate land
+points in the history files and diagnostics.
+
+
+.. _performance:
+
+***************
+Performance
+***************
+
+Namelist options (*domain\_nml*) provide considerable flexibility for
+finding the most efficient processor and block configuration. Some of
+these choices are illustrated in :ref:`fig-distrb`. `processor\_shape`
+chooses between tall, thin processor domains (`slenderX1` or `slenderX2`,
+often better for sea ice simulations on global grids where nearly all of
+the work is at the top and bottom of the grid with little to do in
+between) and close-to-square domains, which maximize the volume to
+surface ratio (and therefore on-processor computations to message
+passing, if there were ice in every grid cell). In cases where the
+number of processors is not a perfect square (4, 9, 16...), the
+`processor\_shape` namelist variable allows the user to choose how the
+processors are arranged. Here again, it is better in the sea ice model
+to have more processors in x than in y, for example, 8 processors
+arranged 4x2 (`square-ice`) rather than 2x4 (`square-pop`). The latter
+option is offered for direct-communication compatibility with POP, in
+which this is the default.
+
+The user provides the total number of processors and the block
+dimensions in the setup script (**comp\_ice**).
When moving toward +smaller, more numerous blocks, there is a point where the code becomes +less efficient; blocks should not have fewer than about 20 grid cells in +each direction. Squarish blocks optimize the volume-to-surface ratio for +communications. + +.. _fig-distrb: + +.. figure:: ./figures/distrb.png + :scale: 50% + + Figure 9 + +:ref:`fig-distrb` : Distribution of 256 blocks across 16 processors, +represented by colors, on the gx1 grid: (a) cartesian, slenderX1, (b) +cartesian, slenderX2, (c) cartesian, square-ice (square-pop is +equivalent here), (d) rake with block weighting, (e) rake with +latitude weighting, (f) spacecurve. Each block consists of 20x24 grid +cells, and white blocks consist entirely of land cells. + +The `distribution\_type` options allow standard Cartesian distribution of +blocks, redistribution via a ‘rake’ algorithm for improved load +balancing across processors, and redistribution based on space-filling +curves. There are also three additional distribution types +(‘roundrobin,’ ‘sectrobin,’ ‘sectcart’) that improve land-block +elimination rates and also allow more flexibility in the number of +processors used. The rake and space-filling curve algorithms are +primarily helpful when using squarish processor domains where some +processors (located near the equator) would otherwise have little work +to do. Processor domains need not be rectangular, however. + +`distribution\_wght` chooses how the work-per-block estimates are +weighted. The ‘block’ option is the default in POP, which uses a lot of +array syntax requiring calculations over entire blocks (whether or not +land is present), and is provided here for direct-communication +compatibility with POP. The ‘latitude’ option weights the blocks based +on latitude and the number of ocean grid cells they contain. + +The rake distribution type is initialized as a standard, Cartesian +distribution. 
Using the work-per-block estimates, blocks are “raked" +onto neighboring processors as needed to improve load balancing +characteristics among processors, first in the x direction and then in +y. + +Space-filling curves reduce a multi-dimensional space (2D, in our case) +to one dimension. The curve is composed of a string of blocks that is +snipped into sections, again based on the work per processor, and each +piece is placed on a processor for optimal load balancing. This option +requires that the block size be chosen such that the number of blocks in +the x direction equals the number of blocks in the y direction, and that +number must be factorable as :math:`2^n 3^m 5^p` where :math:`n, m, p` +are integers. For example, a 16x16 array of blocks, each containing +20x24 grid cells, fills the gx1 grid (:math:`n=4, m=p=0`). If either of +these conditions is not met, a Cartesian distribution is used instead. + +While the Cartesian distribution groups sets of blocks by processor, the +‘roundrobin’ distribution loops through the blocks and processors +together, putting one block on each processor until the blocks are gone. +This provides good load balancing but poor communication characteristics +due to the number of neighbors and the amount of data needed to +communicate. The ‘sectrobin’ and ‘sectcart’ algorithms loop similarly, +but put groups of blocks on each processor to improve the communication +characteristics. In the ‘sectcart’ case, the domain is divided into two +(east-west) halves and the loops are done over each, sequentially. +:ref:`fig-distribscorecard` provides an overview of the pros and cons +for the distribution types. + +.. _fig-distribscorecard: + +.. figure:: ./figures/scorecard.png + :scale: 20% + + Figure 10 + +:ref:`fig-distribscorecard` : Scorecard for block distribution choices in +CICE, courtesy T. Craig. 
For more information, see +http://www.cesm.ucar.edu/events/ws.2012/Presentations/SEWG2/craig.pdf + +The `maskhalo` options in the namelist improve performance by removing +unnecessary halo communications where there is no ice. There is some +overhead in setting up the halo masks, which is done during the +timestepping procedure as the ice area changes, but this option +usually improves timings even for relatively small processor counts. +T. Craig has found that performance improved by more than 20% for +combinations of updated decompositions and masked haloes, in CESM’s +version of CICE. A practical guide for choosing a CICE grid +decomposition, based on experience in CESM, is available: +http://oceans11.lanl.gov/drupal/CICE/DecompositionGuide + +Throughout the code, (i, j) loops have been combined into a single loop, +often over just ocean cells or those containing sea ice. This was done +to reduce unnecessary operations and to improve vector performance. + +:ref:`fig-timings` illustrates the computational expense of various +options, relative to the total time (excluding initialization) of a +7-layer configuration using BL99 thermodynamics, EVP dynamics, and the +‘ccsm3’ shortwave parameterization on the gx1 grid, run for one year +from a no-ice initial condition. The block distribution consisted of +20 \ :math:`\times` 192 blocks spread over 32 processors (‘slenderX2’) +with no threads and -O2 optimization. Timings varied by about +:math:`\pm3`\ % in identically configured runs due to machine load. +Extra time required for tracers has two components, that needed to carry +the tracer itself (advection, category conversions) and that needed for +the calculations associated with the particular tracer. The age tracers +(FY and iage) require very little extra calculation, so their timings +represent essentially the time needed just to carry an extra tracer. 
The
+topo melt pond scheme is slightly faster than the others because it
+calculates pond area and volume once per grid cell, while the others
+calculate it for each thickness category.
+
+.. _fig-timings:
+
+.. figure:: ./figures/histograms.png
+   :scale: 20%
+
+   Figure 11
+
+:ref:`fig-timings` : Change in ‘TimeLoop’ timings from the 7-layer
+configuration using BL99 thermodynamics and EVP dynamics. Timings
+were made on a nondedicated machine, with variations of about
+:math:`\pm3`\ % in identically configured runs (light grey). Darker
+grey indicates the time needed for extra required options; the
+Delta-Eddington radiation scheme is required for all melt pond
+schemes and the aerosol tracers, and the level-ice pond
+parameterization additionally requires the level-ice tracers.
+
+
+
+.. _init:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Initialization and coupling
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice model’s parameters and variables are initialized in several
+steps. Many constants and physical parameters are set in
+**ice\_constants.F90**. Namelist variables (:ref:`tabnamelist`),
+whose values can be altered at run time, are handled in *input\_data*
+and other initialization routines. These variables are given default
+values in the code, which may then be changed when the input file
+**ice\_in** is read. Other physical constants, numerical parameters, and
+variables are first set in initialization routines for each ice model
+component or module. Then, if the ice model is being restarted from a
+previous run, core variables are read and reinitialized in
+*restartfile*, while tracer variables needed for specific configurations
+are read in separate restart routines associated with each tracer or
+specialized parameterization. Finally, albedo and other quantities
+dependent on the initial ice state are set. Some of these parameters
+will be described in more detail in :ref:`tabnamelist`.
+
+The restart files supplied with the code release include the core
+variables on the default configuration, that is, with seven vertical
+layers and the ice thickness distribution defined by `kcatbound` = 0.
+Restart information for some tracers is also included in the netCDF
+restart files.
+
+Three namelist variables control model initialization, `ice\_ic`, `runtype`,
+and `restart`, as described in :ref:`tab-ic`. It is possible to do an
+initial run from a file **filename** in two ways: (1) set runtype =
+‘initial’, restart = true and ice\_ic = **filename**, or (2) runtype =
+‘continue’ and pointer\_file = **./restart/ice.restart\_file** where
+**./restart/ice.restart\_file** contains the line
+“./restart/[filename]”. The first option is convenient when repeatedly
+starting from a given file when subsequent restart files have been
+written. With this arrangement, the tracer restart flags can be set to
+true or false, depending on whether the tracer restart data exist. With
+the second option, tracer restart flags are set to ‘continue’ for all
+active tracers.
+
+An additional namelist option, `restart\_ext`, specifies whether halo cells
+are included in the restart files. This option is useful for tripole and
+regional grids, but cannot be used with PIO.
+
+MPI is initialized in *init\_communicate* for both coupled and
+stand-alone MPI runs. The ice component communicates with a flux coupler
+or other climate components via external routines that handle the
+variables listed in :ref:`tab-flux-cpl`. For stand-alone runs,
+routines in **ice\_forcing.F90** read and interpolate data from files,
+and are intended merely to provide guidance for the user to write his or
+her own routines. Whether the code is to be run in stand-alone or
+coupled mode is determined at compile time, as described below.
+
+:ref:`tab-ic` : *Ice initial state resulting from combinations of*
+`ice\_ic`, `runtype` and `restart`.
:math:`^a`\ *If false, restart is reset to +true.* :math:`^b`\ *restart is reset to false.* :math:`^c`\ ice\_ic *is +reset to ‘none.’* + +.. _tab-ic: + +.. table:: Table 4 + + +----------------+--------------------------+--------------------------------------+----------------------------------------+ + | ice\_ic | | | | + +================+==========================+======================================+========================================+ + | | initial/false | initial/true | continue/true (or false\ :math:`^a`) | + +----------------+--------------------------+--------------------------------------+----------------------------------------+ + | none | no ice | no ice\ :math:`^b` | restart using **pointer\_file** | + +----------------+--------------------------+--------------------------------------+----------------------------------------+ + | default | SST/latitude dependent | SST/latitude dependent\ :math:`^b` | restart using **pointer\_file** | + +----------------+--------------------------+--------------------------------------+----------------------------------------+ + | **filename** | no ice\ :math:`^c` | start from **filename** | restart using **pointer\_file** | + +----------------+--------------------------+--------------------------------------+----------------------------------------+ + +.. _parameters: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Choosing an appropriate time step +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The time step is chosen based on stability of the transport component +(both horizontal and in thickness space) and on resolution of the +physical forcing. CICE allows the dynamics, advection and ridging +portion of the code to be run with a shorter timestep, +:math:`\Delta t_{dyn}` (`dt\_dyn`), than the thermodynamics timestep +:math:`\Delta t` (`dt`). In this case, `dt` and the integer ndtd are +specified, and `dt\_dyn` = `dt/ndtd`. 
+ +A conservative estimate of the horizontal transport time step bound, or +CFL condition, under remapping yields + +.. math:: + \Delta t_{dyn} < {\min\left(\Delta x, \Delta y\right)\over 2\max\left(u, v\right)}. + +Numerical estimates for this bound for several POP grids, assuming +:math:`\max(u, v)=0.5` m/s, are as follows: + +.. csv-table:: + :widths: 20,40,40,40,40 + + grid label,N pole singularity,dimensions,min :math:`\sqrt{\Delta x\cdot\Delta y}`,max :math:`\Delta t_{dyn}` + gx3,Greenland,:math:`100\times 116`,:math:`39\times 10^3` m,10.8hr + gx1,Greenland,:math:`320\times 384`,:math:`18\times 10^3` m,5.0hr + p4,Canada,:math:`900\times 600`,:math:`6.5\times 10^3` m,1.8hr + +As discussed in section :ref:`mech-red` and +:cite:`LHMJ07`, the maximum time step in practice is +usually determined by the time scale for large changes in the ice +strength (which depends in part on wind strength). Using the strength +parameterization of :cite:`Rothrock75`, as in +Equation :eq:`roth-strength0`, limits the time step to :math:`\sim`\ 30 +minutes for the old ridging scheme (`krdg\_partic` = 0), and to +:math:`\sim`\ 2 hours for the new scheme (`krdg\_partic` = 1), assuming +:math:`\Delta x` = 10 km. Practical limits may be somewhat less, +depending on the strength of the atmospheric winds. + +Transport in thickness space imposes a similar restraint on the time +step, given by the ice growth/melt rate and the smallest range of +thickness among the categories, +:math:`\Delta t<\min(\Delta H)/2\max(f)`, where :math:`\Delta H` is the +distance between category boundaries and :math:`f` is the thermodynamic +growth rate. For the 5-category ice thickness distribution used as the +default in this distribution, this is not a stringent limitation: +:math:`\Delta t < 19.4` hr, assuming :math:`\max(f) = 40` cm/day. 
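
The two time-step bounds above are easy to reproduce. The helper names below and the ~0.65 m narrowest-category width are illustrative assumptions, not values taken from the code:

```python
def dt_dyn_bound_hours(min_dx_m, max_uv_ms=0.5):
    """CFL-style bound: dt_dyn < min(dx, dy) / (2 max(u, v))."""
    return min_dx_m / (2.0 * max_uv_ms) / 3600.0

# Reproducing the table's estimates (min sqrt(dx*dy) stands in for min(dx, dy)):
for label, dx in [("gx3", 39e3), ("gx1", 18e3), ("p4", 6.5e3)]:
    print(label, round(dt_dyn_bound_hours(dx), 1), "hr")
# gx3 10.8 hr, gx1 5.0 hr, p4 1.8 hr

def dt_itd_bound_hours(min_dH_m, max_f_m_per_day):
    """Thickness-space bound: dt < min(dH) / (2 max(f))."""
    f_ms = max_f_m_per_day / 86400.0          # growth rate in m/s
    return min_dH_m / (2.0 * f_ms) / 3600.0

# Assuming the narrowest default category is ~0.65 m wide and f = 40 cm/day:
print(round(dt_itd_bound_hours(0.65, 0.40), 1), "hr")   # 19.5 hr
```
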
+ +In the classic EVP or EAP approach (`kdyn` = 1 or 2, `revised\_evp` = false), +the dynamics component is subcycled ndte (:math:`N`) times per dynamics +time step so that the elastic waves essentially disappear before the +next time step. The subcycling time step (:math:`\Delta +t_e`) is thus + +.. math:: + dte = dt\_dyn/ndte. + +A second parameter, :math:`E_\circ` (`eyc`), defines the elastic wave +damping timescale :math:`T`, described in Section :ref:`dynam`, as +`eyc`\ * `dt\_dyn`. The forcing terms are not updated during the subcycling. +Given the small step (`dte`) at which the EVP dynamics model is subcycled, +the elastic parameter :math:`E` is also limited by stability +constraints, as discussed in :cite:`HD97`. Linear stability +analysis for the dynamics component shows that the numerical method is +stable as long as the subcycling time step :math:`\Delta t_e` +sufficiently resolves the damping timescale :math:`T`. For the stability +analysis we had to make several simplifications of the problem; hence +the location of the boundary between stable and unstable regions is +merely an estimate. In practice, the ratio +:math:`\Delta t_e ~:~ T ~:~ \Delta t`  = 1 : 40 : 120 provides both +stability and acceptable efficiency for time steps (:math:`\Delta t`) on +the order of 1 hour. + +For the revised EVP approach (`kdyn` = 1, `revised\_evp` = true), the +relaxation parameter `arlx1i` effectively sets the damping timescale in +the problem, and `brlx` represents the effective subcycling +:cite:`BFLM13`. In practice the parameters :math:`S_e>0.5` +and :math:`\xi<1` are set, along with an estimate of the ice strength +per unit mass, and the damping and subcycling parameters are then +calculated. With the addition of the revised EVP approach to CICE, the +code now uses these parameters internally for both classic and revised +EVP configurations (see Section :ref:`revp`). 
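
The classic EVP bookkeeping above can be sketched as follows; `eyc` = 0.36 is used here only as an illustrative value, and the function is a hypothetical helper, not CICE code:

```python
def evp_subcycling(dt, ndtd=1, ndte=120, eyc=0.36):
    """Classic EVP time-step bookkeeping: dte = dt_dyn / ndte and the
    elastic damping timescale T = eyc * dt_dyn."""
    dt_dyn = dt / ndtd
    dte = dt_dyn / ndte
    T = eyc * dt_dyn
    return dte, T

dte, T = evp_subcycling(dt=3600.0)   # one-hour thermodynamic step
print(dte, T)                        # 30.0 1296.0
# dte : T : dt is roughly 1 : 43 : 120, close to the recommended 1 : 40 : 120
```
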
+
+Note that only :math:`T` and :math:`\Delta t_e` figure into the
+stability of the dynamics component; :math:`\Delta t` does not. Although
+the time step may not be tightly limited by stability considerations,
+large time steps (*e.g.,* :math:`\Delta t=1` day, given daily forcing)
+do not produce accurate results in the dynamics component. The reasons
+for this error are discussed in :cite:`HD97`; see
+:cite:`HZ99` for its practical effects. The thermodynamics
+component is stable for any time step, as long as the surface
+temperature :math:`T_{sfc}` is computed internally. The
+numerical constraint on the thermodynamics time step is associated with
+the transport scheme rather than the thermodynamic solver.
+
+~~~~~~~~~~~~
+Model output
+~~~~~~~~~~~~
+
+.. _history:
+
+*************
+History files
+*************
+
+Model output data is averaged over the period(s) given by `histfreq` and
+`histfreq\_n`, and written to binary or netCDF files prepended by `history\_file`
+in **ice\_in**. That is, if `history\_file` = ‘iceh’ then the filenames
+will have the form **iceh.[timeID].nc** or **iceh.[timeID].da**,
+depending on the output file format chosen in **comp\_ice** (set
+`IO\_TYPE`). The netCDF history files are CF-compliant; header information for
+data contained in the netCDF files is displayed with the command `ncdump -h
+filename.nc`. Parallel netCDF output is available using the PIO library; the
+attribute `io\_flavor` distinguishes output files written with PIO from
+those written with standard netCDF. With binary files, a separate header
+file is written with equivalent information. Standard fields are output
+according to settings in the **icefields\_nml** namelist in **ice\_in**.
+The user may add (or subtract) variables not already available in the
+namelist by following the instructions in section :ref:`addhist`.
+
+With this release, the history module has been divided into several
+modules based on the desired formatting and on the variables
+themselves.
Parameters, variables and routines needed by multiple
+modules are in **ice\_history\_shared.F90**, while the primary routines
+for initializing and accumulating all of the history variables are in
+**ice\_history.F90**. These routines call format-specific code in the
+**io\_binary**, **io\_netcdf** and **io\_pio** directories. History
+variables specific to certain components or parameterizations are
+collected in their own history modules (**ice\_history\_bgc.F90**,
+**ice\_history\_drag.F90**, **ice\_history\_mechred.F90**,
+**ice\_history\_pond.F90**).
+
+The history modules allow output at different frequencies. Five output
+frequencies (1, `h`, `d`, `m`, `y`) are available simultaneously during a run.
+The same variable can be output at different frequencies (say daily and
+monthly) via its namelist flag, `f\_` :math:`\left<{var}\right>`, which
+is now a character string corresponding to `histfreq` or ‘x’ for none.
+(Grid variable flags are still logicals, since they are written to all
+files, no matter what the frequency is.) If there are no namelist flags
+with a given `histfreq` value, or if an element of `histfreq\_n` is 0, then
+no file will be written at that frequency. The output period can be
+discerned from the filenames.
+
+For example, in namelist:
+
+::
+
+   histfreq = '1', 'h', 'd', 'm', 'y'
+   histfreq_n = 1, 6, 0, 1, 1
+   f_hi = '1'
+   f_hs = 'h'
+   f_Tsfc = 'd'
+   f_aice = 'm'
+   f_meltb = 'mh'
+   f_iage = 'x'
+
+Here, `hi` will be written to a file on every timestep, `hs` will be
+written once every 6 hours, `aice` once a month, `meltb` once a month AND
+once every 6 hours, and `Tsfc` and `iage` will not be written.
+
+From an efficiency standpoint, it is best to set unused frequencies in
+`histfreq` to ‘x’. Having output at all 5 frequencies takes nearly 5 times
+as long as for a single frequency. If you only want monthly output, the
+most efficient setting is `histfreq` = ’m’,’x’,’x’,’x’,’x’.
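
The namelist example above can be mimicked to see which streams each variable lands in. This is a hypothetical re-creation of the bookkeeping, not the CICE implementation:

```python
# Mirror the namelist example: histfreq, histfreq_n, and per-variable flags
histfreq   = ['1', 'h', 'd', 'm', 'y']
histfreq_n = [1, 6, 0, 1, 1]
flags = {'hi': '1', 'hs': 'h', 'Tsfc': 'd', 'aice': 'm',
         'meltb': 'mh', 'iage': 'x'}

period = dict(zip(histfreq, histfreq_n))
# A variable is written to a stream if its flag names that stream ('x' = none)
# and the stream's interval histfreq_n is nonzero:
written = {var: [f for f in freqs if f != 'x' and period.get(f, 0) > 0]
           for var, freqs in flags.items()}
nstreams = len({f for freqs in written.values() for f in freqs})

print(written)
# {'hi': ['1'], 'hs': ['h'], 'Tsfc': [], 'aice': ['m'],
#  'meltb': ['m', 'h'], 'iage': []}
print(nstreams)   # 3
```

Note that `Tsfc` drops out because the daily stream's `histfreq_n` element is 0, and `meltb` appears in two streams, matching the behavior described in the text.
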
The code counts +the number of desired streams (`nstreams`) based on `histfreq`. + +The history variable names must be unique for netcdf, so in cases where +a variable is written at more than one frequency, the variable name is +appended with the frequency in files after the first one. In the example +above, `meltb` is called `meltb` in the monthly file (for backward +compatibility with the default configuration) and `meltb\_h` in the +6-hourly file. + +Using the same frequency twice in `histfreq` will have unexpected +consequences and currently will cause the code to abort. It is not +possible at the moment to output averages once a month and also once +every 3 months, for example. + +If `write\_ic` is set to true in **ice\_in**, a snapshot of the same set +of history fields at the start of the run will be written to the history +directory in **iceh\_ic.[timeID].nc(da)**. Several history variables are +hard-coded for instantaneous output regardless of the averaging flag, at +the frequency given by their namelist flag. + +The normalized principal components of internal ice stress are computed +in *principal\_stress* and written to the history file. This calculation +is not necessary for the simulation; principal stresses are merely +computed for diagnostic purposes and included here for the user’s +convenience. + +Several history variables are available in two forms, a value +representing an average over the sea ice fraction of the grid cell, and +another that is multiplied by :math:`a_i`, representing an average over +the grid cell area. Our naming convention attaches the suffix “\_ai" to +the grid-cell-mean variable names. + +**************** +Diagnostic files +**************** + +Like `histfreq`, the parameter `diagfreq` can be used to regulate how often +output is written to a log file. The log file unit to which diagnostic +output is written is set in **ice\_fileunits.F90**. 
If `diag\_type` = +‘stdout’, then it is written to standard out (or to **ice.log.[ID]** if +you redirect standard out as in **run\_ice**); otherwise it is written +to the file given by `diag\_file`. In addition to the standard diagnostic +output (maximum area-averaged thickness, velocity, average albedo, total +ice area, and total ice and snow volumes), the namelist options +`print\_points` and `print\_global` cause additional diagnostic information +to be computed and written. `print\_global` outputs global sums that are +useful for checking global conservation of mass and energy. +`print\_points` writes data for two specific grid points. Currently, one +point is near the North Pole and the other is in the Weddell Sea; these +may be changed in **ice\_in**. + +Timers are declared and initialized in **ice\_timers.F90**, and the code +to be timed is wrapped with calls to *ice\_timer\_start* and +*ice\_timer\_stop*. Finally, *ice\_timer\_print* writes the results to +the log file. The optional “stats" argument (true/false) prints +additional statistics. Calling *ice\_timer\_print\_all* prints all of +the timings at once, rather than having to call each individually. +Currently, the timers are set up as in :ref:`timers`. +Section :ref:`addtimer` contains instructions for adding timers. + +The timings provided by these timers are not mutually exclusive. For +example, the column timer (5) includes the timings from 6–10, and +subroutine *bound* (timer 15) is called from many different places in +the code, including the dynamics and advection routines. + +The timers use *MPI\_WTIME* for parallel runs and the F90 intrinsic +*system\_clock* for single-processor runs. + +:ref:`timers` : *CICE timers* + +.. _timers: + +.. 
table:: Table 5 + + +--------------+-------------+----------------------------------------------------+ + | **Timer** | | | + +--------------+-------------+----------------------------------------------------+ + | **Index** | **Label** | | + +--------------+-------------+----------------------------------------------------+ + | 1 | Total | the entire run | + +--------------+-------------+----------------------------------------------------+ + | 2 | Step | total minus initialization and exit | + +--------------+-------------+----------------------------------------------------+ + | 3 | Dynamics | EVP | + +--------------+-------------+----------------------------------------------------+ + | 4 | Advection | horizontal transport | + +--------------+-------------+----------------------------------------------------+ + | 5 | Column | all vertical (column) processes | + +--------------+-------------+----------------------------------------------------+ + | 6 | Thermo | vertical thermodynamics | + +--------------+-------------+----------------------------------------------------+ + | 7 | Shortwave | SW radiation and albedo | + +--------------+-------------+----------------------------------------------------+ + | 8 | Meltponds | melt ponds | + +--------------+-------------+----------------------------------------------------+ + | 9 | Ridging | mechanical redistribution | + +--------------+-------------+----------------------------------------------------+ + | 10 | Cat Conv | transport in thickness space | + +--------------+-------------+----------------------------------------------------+ + | 11 | Coupling | sending/receiving coupler messages | + +--------------+-------------+----------------------------------------------------+ + | 12 | ReadWrite | reading/writing files | + +--------------+-------------+----------------------------------------------------+ + | 13 | Diags | diagnostics (log file) | + 
+--------------+-------------+----------------------------------------------------+
+ | 14 | History | history output |
+ +--------------+-------------+----------------------------------------------------+
+ | 15 | Bound | boundary conditions and subdomain communications |
+ +--------------+-------------+----------------------------------------------------+
+ | 16 | BGC | biogeochemistry |
+ +--------------+-------------+----------------------------------------------------+
+
+*************
+Restart files
+*************
+
+CICE now provides restart data in binary unformatted or netCDF formats, via
+the `IO\_TYPE` flag in **comp\_ice** and the namelist variable
+`restart\_format`. Restart and history files must use the same format. As
+with the history output, there is also an option for writing parallel
+restart files using PIO.
+
+The restart files created by CICE contain all of the variables needed
+for a full, exact restart. The filename begins with the character string
+‘iced.’, and the restart dump frequency is given by the namelist
+variables `dumpfreq` and `dumpfreq\_n`. The pointer to the filename from
+which the restart data is to be read for a continuation run is set in
+`pointer\_file`. The code assumes that auxiliary binary tracer restart
+files will be identified using the same pointer and file name prefix,
+but with an additional character string in the file name that is
+associated with each tracer set. All variables are included in netCDF
+restart files.
+
+Additional namelist flags provide further control of restart behavior.
+`dump\_last` = true causes a set of restart files to be written at the end
+of a run when it is otherwise not scheduled to occur. The flag
+`use\_restart\_time` enables the user to choose to use the model date
+provided in the restart files. If `use\_restart\_time` = false then the
+initial model date stamp is determined from the namelist parameters.
+`lcdf64` = true sets 64-bit netCDF output, allowing larger file sizes with
+netCDF version 3. 
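As an illustration, the restart controls described above might be grouped together in **ice\_in** as follows (the namelist group and the values shown are examples only, not recommended defaults):

::

    &setup_nml
      dumpfreq         = 'm'               ! monthly restart dumps
      dumpfreq_n       = 1
      dump_last        = .true.            ! extra restart set at the end of the run
      use_restart_time = .false.           ! take the model date from the namelist
      pointer_file     = 'ice.restart_file'
      restart_format   = 'nc'              ! must match the history format
      lcdf64           = .false.
    /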
+
+Routines for gathering, scattering and (unformatted) reading and writing
+of the “extended” global grid, including the physical domain and ghost
+(halo) cells around the outer edges, allow exact restarts on regional
+grids with open boundary conditions, and they will also simplify
+restarts on the various tripole grids. They are accessed by setting
+`restart\_ext` = true in namelist. Extended grid restarts are not
+available when using PIO; in this case extra halo update calls fill
+ghost cells for tripole grids (do not use PIO for regional grids).
+
+Two restart files are included with the CICE v5 code distribution, for
+the gx3 and gx1 grids. They were created using the default model
+configuration (settings as in **comp\_ice** and **ice\_in**), but
+initialized with no ice. The gx3 case was run for 1 year using the 1997
+forcing data provided with the code. The gx1 case was run for 20 years,
+so that the date of restart in the file is 1978-01-01. Note that the
+restart dates provided in the restart files can be overridden using the
+namelist variables `use\_restart\_time`, `year\_init` and `istep0`. The
+forcing time can also be overridden using `fyear\_init`.
+
+Several changes in CICE v5 have made restarting from v4.1 restart files
+difficult. First, the ice and snow enthalpy state variables are now
+carried as tracers instead of separate arrays, and salinity has been
+added as a necessary restart field. Second, the default number of ice
+layers has been increased from 4 to 7. Third, netCDF format is now used
+for all I/O; it is no longer possible to have history output as netCDF
+and restart output in binary format. However, some facilities are included
+with CICE v5 for converting v4.1 restart files to the new file structure
+and format, provided that the same number of ice layers and basic
+physics packages will be used for the new runs. See Section
+:ref:`restarttrouble` for details. 
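The pointer-file mechanism described above can be illustrated with a small stand-alone sketch (plain Python, not CICE code; the file names follow the conventions described in this section):

```python
# Toy illustration of the restart pointer file: each restart dump
# iced.[timeID] is recorded in ice.restart_file, and a continuation run
# follows that pointer to locate the most recently written dump.
import tempfile
from pathlib import Path

def write_restart(run_dir, time_id):
    dump = run_dir / f"iced.{time_id}"          # stand-in for a restart dump
    dump.touch()
    (run_dir / "ice.restart_file").write_text(f"{dump}\n")
    return dump

def latest_restart(run_dir):
    """What a continuation run does: read the pointer to find the dump."""
    return Path((run_dir / "ice.restart_file").read_text().strip())

run_dir = Path(tempfile.mkdtemp())
write_restart(run_dir, "1997-04-01")
write_restart(run_dir, "1997-05-01")            # pointer now names this dump
print(latest_restart(run_dir).name)             # iced.1997-05-01
```

Because only the pointer is consulted, older dumps can be kept or purged without affecting the continuation run.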
\ No newline at end of file diff --git a/doc/source/user_guide/ug_running.rst b/doc/source/user_guide/ug_running.rst new file mode 100644 index 000000000..49cbd11ac --- /dev/null +++ b/doc/source/user_guide/ug_running.rst @@ -0,0 +1,435 @@
+:tocdepth: 3
+
+.. _running_cice:
+
+Running CICE
+====================
+
+Quick-start instructions are provided in the :ref:`quickstart` section.
+
+.. _scripts:
+
+Scripts
+-------
+
+The CICE scripts are written to allow quick setup of cases and tests. Once a case is
+generated, users can manually modify the namelist and other files to custom configure
+the case. Several settings are available via scripts as well.
+
+Overview
+~~~~~~~~
+
+Most of the scripts that configure, build and run CICE are contained in
+the directory **configuration/scripts/**, except for **cice.setup**, which is
+in the main directory. **cice.setup** is the main script that generates a case.
+
+Users may need to port the scripts to their local machine.
+Specific instructions for porting are provided in :ref:`porting`.
+
+``cice.setup -h`` will provide the latest information about how to use the tool.
+``cice.setup --help`` will provide an extended version of the help.
+There are three usage modes,
+
+* ``--case`` or ``-c`` creates individual stand-alone cases.
+* ``--test`` creates individual tests. Tests are just cases that have some extra automation in order to carry out particular tests such as exact restart.
+* ``--suite`` creates a test suite. Test suites are predefined sets of tests and ``--suite`` provides the ability to quickly set up, build, and run a full suite of tests.
+
+All modes require ``--mach`` or ``-m`` to specify the machine; case and test modes
+can use ``--set`` or ``-s`` to define specific options. ``--test`` and ``--suite`` require ``--testid`` to be set,
+and both of the test modes can use ``--bdir``, ``--bgen``, ``--bcmp``, and ``--diff`` to generate (save) results and compare results with prior results. 
+Testing will be described in greater detail in the :ref:`testing` section.
+
+Again, ``cice.setup --help`` will show the latest usage information including
+the available ``--set`` options, the current ported machines, and the test choices.
+
+To create a case, run **cice.setup**::
+
+  cice.setup -c mycase -m machine
+  cd mycase
+
+Once a case/test is created, several files are placed in the case directory:
+
+- **env.[machine]** defines the environment
+- **cice.settings** defines many variables associated with building and running the model
+- **makdep.c** is a tool that will automatically generate the make dependencies
+- **Macros.[machine]** defines the Makefile macros
+- **Makefile** is the makefile used to build the model
+- **cice.build** is a script that builds and compiles the model
+- **ice\_in** is the namelist input file
+- **setup\_run\_dirs.csh** is a script that will create the run directories. This will be called automatically from the **cice.run** script if the user does not invoke it.
+- **cice.run** is a batch run script
+- **cice.submit** is a simple script that submits the cice.run script
+
+All scripts and namelist files are fully resolved in the case. Users can edit any
+of the files in the case directory manually to change the model configuration,
+build options, or batch settings. The file
+dependency is indicated in the above list. For instance, if any of the files before
+**cice.build** in the list are edited, **cice.build** should be rerun.
+
+The **casescripts/** directory holds scripts used to create the case and can
+largely be ignored. Once a case is created, the **cice.build** script should be run
+interactively and then the case should be submitted by executing the
+**cice.submit** script interactively. The **cice.submit** script
+simply submits the **cice.run** script.
+You can also submit the **cice.run** script on the command line. 
+
+Some hints:
+
+- To change the tracer numbers or block sizes required at build time, edit the **cice.settings** file.
+- To change the namelist, manually edit the **ice_in** file
+- To change batch settings, manually edit the top of the **cice.run** or **cice.test** (if running a test) file
+- To turn on the debug compiler flags, set ``ICE_BLDDEBUG`` in **cice.settings** to true
+- To change compiler options, manually edit the Macros file
+- To clean the build before each compile, set ``ICE_CLEANBUILD`` in **cice.settings** to true. To not clean before the build, set ``ICE_CLEANBUILD`` in **cice.settings** to false
+
+To build and run::
+
+  ./cice.build
+  ./cice.submit
+
+The build and run log files will be copied into the logs directory in the case directory.
+Other model output will be in the run directory. The run directory is set in **cice.settings**
+via the ``ICE_RUNDIR`` variable. To modify the case setup, changes should be made in the
+case directory, NOT the run directory.
+
+.. _case_options:
+
+Command Line Options
+~~~~~~~~~~~~~~~~~~~~
+
+``cice.setup -h`` provides a summary of the command line options. There are three different modes, ``--case``, ``--test``, and ``--suite``. This section provides details about the relevant options for setting up cases with examples.
+Testing will be described in greater detail in the :ref:`testing` section.
+
+``--help``, ``-h``
+  prints ``cice.setup`` help information to the terminal and exits.
+
+``--version``
+  prints the CICE version to the terminal and exits.
+
+``--setvers VERSION``
+  updates the CICE version in your sandbox. The version should be something like n.m.p.string.
+
+``--case``, ``-c`` CASE
+  specifies the case name. This can be either a relative path or an absolute path. This cannot be used with --test or --suite. Either ``--case``, ``--test``, or ``--suite`` is required.
+
+``--mach``, ``-m`` MACHINE
+  specifies the machine name. 
This should be consistent with the name defined in the Macros and env files in **configuration/scripts/machines**. This is required in all modes.
+
+``--env``, ``-e`` ENVIRONMENT1,ENVIRONMENT2,ENVIRONMENT3
+  specifies the environment or compiler associated with the machine. This should be consistent with the name defined in the Macros and env files in **configuration/scripts/machines**. Each machine can have multiple supported environments including support for different compilers or other system setups. When used with ``--suite`` or ``--test``, the ENVIRONMENT can be a set of comma-delimited values with no spaces and the tests will then be run for all of those environments. With ``--case``, only one ENVIRONMENT should be specified. (default is intel)
+
+``--pes``, ``-p`` MxN[xBXxBY[xMB]]
+  specifies the number of tasks and threads the case should be run on. This only works with ``--case``. The format is tasks x threads or "M"x"N" where M is tasks and N is threads and both are integers. BX, BY, and MB can also be set via this option where BX is the x-direction blocksize, BY is the y-direction blocksize, and MB is the max-blocks setting. If BX, BY, and MB are not set, they will be computed automatically based on the grid size and the task/thread count. More specifically, this option has three modes, --pes MxN, --pes MxNxBXxBY, and --pes MxNxBXxBYxMB. (default is 4x1)
+
+``--acct`` ACCOUNT
+  specifies a batch account number. This is optional. See :ref:`account` for more information.
+
+``--grid``, ``-g`` GRID
+  specifies the grid. This is a string and for the current CICE driver, gx1 and gx3 are supported. (default = gx3)
+
+``--set``, ``-s`` SET1,SET2,SET3
+  specifies the optional settings for the case. The settings for ``--suite`` are defined in the suite file. Multiple settings can be specified by providing a comma-delimited set of values without spaces between settings. 
The available settings are in **configuration/scripts/options** and ``cice.setup --help`` will also list them. These settings files can change either the namelist values or overall case settings (such as the debug flag).
+
+For CICE, when setting up cases, the ``--case`` and ``--mach`` must be specified.
+It's also recommended that ``--env`` be set explicitly as well.
+``--pes`` and ``--grid`` can be very useful.
+``--acct`` is not normally used. A more convenient method
+is to use the **~/.cice\_proj** file, see :ref:`account`. The ``--set`` option can be
+extremely handy. The ``--set`` options are documented in :ref:`settings`.
+
+.. _settings:
+
+Preset Options
+~~~~~~~~~~~~~~
+
+There are several preset options. These are hardwired in
+**configuration/scripts/options** and are specified for a case or test by
+the ``--set`` command line option. You can see the full list of settings
+by doing ``cice.setup --help``.
+
+The default CICE namelist and CICE settings are specified in the
+files **configuration/scripts/ice_in** and
+**configuration/scripts/cice.settings** respectively. When picking a
+preset setting (option), the set_env.setting and set_nml.setting will be used to
+change the defaults. This is done as part of the ``cice.setup`` and the
+modifications are resolved in the **cice.settings** and **ice_in** file placed in
+the case directory. If multiple options are chosen and conflict, the last
+option chosen takes precedence. Not all options are compatible with each other. 
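The last-option-wins resolution can be sketched as follows (illustrative Python, not the actual script implementation; the option contents shown are hypothetical):

```python
# Sketch of how conflicting --set options resolve: each option contributes
# namelist modifications that are applied in command-line order, so when
# two options set the same variable, the last one chosen takes precedence.
def resolve_settings(defaults, option_mods):
    settings = dict(defaults)
    for mods in option_mods:        # later options overwrite earlier ones
        settings.update(mods)
    return settings

defaults = {"diagfreq": 24, "hist_avg": True}   # stand-in for the default ice_in
diag1 = {"diagfreq": 1}                         # hypothetical set_nml.diag1 content
other = {"diagfreq": 6}                         # hypothetical conflicting option

print(resolve_settings(defaults, [diag1, other]))
# {'diagfreq': 6, 'hist_avg': True}
```

Settings untouched by any option keep their defaults; only the conflicting variable reflects the last option chosen.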
+
+Some of the options are
+
+``debug`` which turns on the compiler debug flags
+
+``short``, ``medium``, ``long`` which change the batch time limit
+
+``gx3`` and ``gx1`` which are associated with grid-specific settings
+
+``diag1`` which turns on diagnostics each timestep
+
+``leap`` which turns on the leap year
+
+``pondcesm``, ``pondlvl``, ``pondtopo`` which turn on the various pond schemes
+
+``run10day``, ``run1year``, etc., which specify a run length
+
+``dslenderX1``, ``droundrobin``, ``dspacecurve``, etc., which specify decomposition options
+
+``dynevp``, ``dyneap``, ``dynoff``, and ``dynrevp`` which specify dynamics choices
+
+``therm0``, ``thermBL``, and ``thermmushy`` which are thermodynamics settings
+
+``swccsm3`` which turns on the ccsm3 shortwave and albedo computation
+
+``bgc*`` which turn on various bgc configurations
+
+and there are others. These may change as needed. Use ``cice.setup --help`` to see the latest. To add a new option, just add the appropriate file in **configuration/scripts/options**. For more information, see :ref:`dev_options`
+
+Examples
+~~~~~~~~~
+
+The simplest case is just to set up a default configuration, specifying the
+case name, machine, and environment::
+
+  cice.setup --case mycase1 --mach spirit --env intel
+
+To add some optional settings, one might do::
+
+  cice.setup --case mycase2 --mach spirit --env intel --set debug,diag1,run1year,pondtopo
+
+Once the cases are created, users are free to modify the cice.settings and ice_in namelist to further modify their setup.
+
+.. _porting:
+
+Porting
+-------
+
+To port, an **env.[machine]_[environment]** and **Macros.[machine]_[environment]** file have to be added to the
+**configuration/scripts/machines/** directory and the
+**configuration/scripts/cice.batch.csh** file needs to be modified.
+In general, the machine is specified in ``cice.setup`` with ``--mach``
+and the environment (compiler) is specified with ``--env``. 
+
+- cd to **configuration/scripts/machines/**
+
+- Copy an existing env and a Macros file to new names for your new machine
+
+- Edit your env and Macros files
+
+- cd .. to **configuration/scripts/**
+
+- Edit the **cice.batch.csh** script to add a section for your machine
+  with batch settings and job launch settings
+
+- Download and untar a forcing dataset to the location defined by
+  ``ICE_MACHINE_INPUTDATA`` in the env file
+
+This process almost certainly will require some iteration. The easiest way
+to carry it out is to create an initial set of changes as described above, then
+create a case and manually modify the **env.[machine]** file and **Macros.[machine]**
+file until the case can build and run. Then copy the files from the case
+directory back to **configuration/scripts/machines/**, update
+the **configuration/scripts/cice.batch.csh** file, retest,
+and then add and commit the updated machine files to the repository.
+
+.. _account:
+
+Machine Account Settings
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The machine account default is specified by the variable ``ICE_MACHINE_ACCT`` in
+the **env.[machine]** file. The easiest way to change a user's default is to
+create a file in your home directory called **.cice\_proj** and add your
+preferred account name to the first line.
+There is also an option (``--acct``) in **cice.setup** to define the account number.
+The order of precedence is the **cice.setup** command line option, then the
+**.cice\_proj** setting, and then the value in the **env.[machine]** file.
+
+.. _force:
+
+Forcing data
+------------
+
+The input data space is defined on a per-machine basis by the ``ICE_MACHINE_INPUTDATA``
+variable in the **env.[machine]** file. That file space is often shared among multiple
+users, and it can be desirable to use a common file space with group read
+and write permissions such that a set of users can update the inputdata area as
+new datasets are available. 
+ +CICE input datasets are stored on an anonymous ftp server. More information about +how to download the input data can be found at https://github.com/CICE-Consortium/CICE/wiki. +Test forcing datasets are available for various grids at the ftp site. +These data files are designed only for testing the code, not for use in production runs +or as observational data. Please do not publish results based on these data sets. + + +Run Directories +--------------- + +The **cice.setup** script creates a case directory. However, the model +is actually built and run under the ``ICE_OBJDIR`` and ``ICE_RUNDIR`` directories +as defined in the **cice.settings** file. + +Build and run logs will be copied from the run directory into the case **logs/** +directory when complete. + + +Local modifications +------------------- + +Scripts and other case settings can be changed manually in the case directory and +used. Source code can be modified in the main sandbox. When changes are made, the code +should be rebuilt before being resubmitted. It is always recommended that users +modify the scripts and input settings in the case directory, NOT the run directory. +In general, files in the run directory are overwritten by versions in the case +directory when the model is built, submitted, and run. + + + +Old Notes +------------ + +To compile and execute the code: in the source directory, + +#. Download the forcing data used for testing from the CICE-Consortium github page, + https://github.com/CICE-Consortium . + +#. Create **Macros.\*** and **run\_ice.\*** files for your particular + platform, if they do not already exist (type ‘uname -s’ at the prompt + to get :math:`\langle`\ OS\ :math:`\rangle`). + +#. Alter directories in the script **comp\_ice**. + +#. Run **comp\_ice** to set up the run directory and make the executable + ‘**cice**’. + +#. | To clean the compile directory and start fresh, simply execute + ‘/bin/rm -rf compile’ from the run directory. + +In the run directory, + +#. 
Alter `atm\_data\_dir` and `ocn\_data\_dir` in the namelist file
+   **ice\_in**.
+
+#. Alter the script **run\_ice** for your system.
+
+#. Execute **run\_ice**.
+
+If this fails, see Section :ref:`setup`.
+
+This procedure creates the output log file **ice.log.[ID]**, and if
+`npt` is large enough compared with `dumpfreq` and `histfreq`, dump files
+**iced.[timeID]** and netCDF (or binary) history output files
+**iceh\_[timeID].nc (.da)**. Using the :math:`\left<3^\circ\right>`
+grid, the log file should be similar to
+**ice.log.\ :math:`\langle`\ OS\ :math:`\rangle`**, provided for the
+user’s convenience. These log files were created using MPI on 4
+processors on the :math:`\left<3^\circ\right>` grid.
+
+Several options are available in **comp\_ice** for configuring the run,
+shown in :ref:`comp-ice`. If `NTASK` = 1, then the **serial/**
+code is used, otherwise the code in **mpi/** is used. Loops over blocks
+have been threaded throughout the code, so that their work will be
+divided among `OMP\_NUM\_THREADS` if `THRD` is ‘yes’. Note that the value of
+`NTASK` in **comp\_ice** must equal the value of `nprocs` in **ice\_in**.
+Generally the value of `MXBLCKS` computed by **comp\_ice** is sufficient,
+but sometimes it will need to be set explicitly, as discussed in
+Section :ref:`performance`. To conserve memory, match the tracer requests
+in **comp\_ice** with those in **ice\_in**. CESM uses 3 aerosol tracers;
+the number given in **comp\_ice** must be less than or equal to the
+maximum allowed in **ice\_domain\_size.F90**.
+
+The scripts define a number of environment variables, mostly as
+directories that you will need to edit for your own environment.
+`$SYSTEM\_USERDIR`, which on machines at Oak Ridge National Laboratory
+points automatically to scratch space, is intended to be a disk where
+the run directory resides. `SHRDIR` is a path to the CESM shared code.
+
+:ref:`comp-ice` : Configuration options available in **comp_ice**.
+
+.. _comp-ice:
+
+.. 
table:: Table 6 + + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + | variable | options | description | + +=====================+======================================+====================================================================================+ + |RES | col, gx3, gx1 | grid resolution | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NTASK | (integer) | total number of processors | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |BLCKX | (integer) | number of grid cells on each block in the x-direction :math:`^\dagger` | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |BLCKY | (integer) | number of grid cells on each block in the y-direction :math:`^\dagger` | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |MXBLCKS | (integer) | maximum number of blocks per processor | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NICELYR | (integer) | number of vertical layers in the ice | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NSNWLYR | (integer) | number of vertical layers in the snow | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NICECAT | (integer) | number of ice thickness categories | + 
+---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |TRAGE | 0 or 1 | set to 1 for ice age tracer |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |TRFY | 0 or 1 | set to 1 for first-year ice age tracer |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |TRLVL | 0 or 1 | set to 1 for level and deformed ice tracers |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |TRPND | 0 or 1 | set to 1 for melt pond tracers |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |NTRAERO | (integer) | number of aerosol tracers |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |TRBRINE | 0 or 1 | set to 1 for brine height tracer |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |NBGCLYR | (integer) | number of vertical layers for biogeochemical transport |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |IO_TYPE | none/netcdf/pio | use ‘none’ if the netCDF library is unavailable, ‘pio’ for PIO |
+ +---------------------+--------------------------------------+------------------------------------------------------------------------------------+
+ |DITTO | yes/no | for reproducible diagnostics |
+ 
+---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |BARRIERS | yes/no | flushes MPI buffers during global scatters and gathers | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |THRD | yes/no | set to yes for OpenMP threaded parallelism | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |OMP_NUM_THREADS | (integer) | the number of OpenMP threads requested | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NUMIN | (integer) | smallest unit number assigned to CICE files | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + |NUMAX | (integer) | largest unit number assigned to CICE files | + +---------------------+--------------------------------------+------------------------------------------------------------------------------------+ + +The ‘reproducible’ option (`DITTO`) makes diagnostics bit-for-bit when +varying the number of processors. (The simulation results are +bit-for-bit regardless, because they do not require global sums or +max/mins as do the diagnostics.) This was done mainly by increasing the +precision for the global reduction calculations, except for regular +double-precision (r8) calculations involving MPI; MPI can not handle +MPI\_REAL16 on some architectures. Instead, these cases perform sums or +max/min calculations across the global block structure, so that the +results are bit-for-bit as long as the block distribution is the same +(the number of processors can be different). 
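The underlying issue is that floating-point addition is not associative, so a global sum accumulated over a different processor layout can differ in the last bits; a quick stand-alone demonstration (plain Python, unrelated to the CICE source):

```python
import math

# Summation order changes the rounded result in double precision:
vals = [1e16, 1.0, -1e16, 1.0]
print(sum(vals))                      # 1.0  (one of the 1.0s is absorbed by 1e16)
print(sum([-1e16, 1e16, 1.0, 1.0]))   # 2.0  (same numbers, different order)

# A higher-precision (here, exact) reduction gives the same answer
# regardless of order, which is the idea behind the reproducible option:
print(math.fsum(vals))                # 2.0
```

In a parallel model the summation order is set by the block distribution, which is why ordinary reductions are only bit-for-bit when the decomposition is unchanged.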
+
+A more flexible option is available for double-precision MPI
+calculations, using the namelist variable `bfbflag`. When true, this flag
+produces bit-for-bit identical diagnostics with different tasks,
+threads, blocks and grid decompositions.
+
+CICE namelist variables available for changes after compile time appear
+in **ice.log.\*** with values read from the file **ice\_in**; their
+definitions are given in Section :ref:`index`. For example, to run for a
+different length of time, say three days, set `npt` = 72 in **ice\_in**.
+At present, the user supplies the time step `dt`, the number of
+dynamics/advection/ridging subcycles `ndtd`, and for classic EVP, the
+number of EVP subcycles `ndte`; `dte` is then calculated in subroutine
+*init\_evp*. The primary reason for doing it this way is to ensure that
+`ndte` is an integer. (This is done differently for `revised\_evp` = true;
+see Section :ref:`dynam`).
+
+To restart from a previous run, set restart = true in **ice\_in**. There
+are two ways of restarting from a given file. The restart pointer file
+**ice.restart\_file** (created by the previous run) contains the name of
+the last written data file (**iced.[timeID]**). Alternatively, a
+filename can be assigned to `ice\_ic` in **ice\_in**. Consult
+Section :ref:`init` for more details. Restarts are exact for MPI or
+single processor runs.
+ diff --git a/doc/source/user_guide/ug_testing.rst b/doc/source/user_guide/ug_testing.rst new file mode 100644 index 000000000..775811967 --- /dev/null +++ b/doc/source/user_guide/ug_testing.rst @@ -0,0 +1,882 @@
+:tocdepth: 3
+
+.. _testing:
+
+Testing CICE
+================
+
+This section documents primarily how to use the CICE scripts to carry
+out CICE testing. Exactly what to test is a separate question and
+depends on the kinds of code changes being made. Prior to merging
+changes to the CICE Consortium master, changes will be reviewed and
+developers will need to provide a summary of the tests carried out. 
+
+There is a base suite of tests provided by default with CICE, and this
+may be a good starting point for testing.
+
+The testing scripts support several features:
+ - Ability to run individual tests (via ``--test``) or multiple tests (via ``--suite``)
+   using an input file to define the suite
+ - Ability to use test suites defined in the package or test suites defined by the user
+ - Ability to store test results for regression testing (``--bgen``)
+ - Ability to compare results to prior baselines to verify bit-for-bit reproducibility (``--bcmp``)
+ - Ability to define where baseline tests are stored (``--bdir``)
+ - Ability to compare tests against each other (``--diff``)
+
+.. _indtests:
+
+Individual Tests
+----------------
+
+The CICE scripts support both the setup of individual tests and of test suites. Individual
+tests are run from the command line::
+
+ ./cice.setup --test smoke --mach conrad --env cray --set diag1,debug --testid myid
+
+Tests are just like cases but have some additional scripting around them. Individual
+tests can be created and manually modified just like cases.
+Many of the command line arguments for individual tests
+are similar to :ref:`case_options` for ``--case``.
+For individual tests, the following command line options can be set:
+
+``--test`` TESTNAME
+ specifies the test type. This is typically smoke or restart, but see `cice.setup --help` for the latest. This is required instead of ``--case``.
+
+``--testid`` ID
+ specifies the testid, a user-defined string that gives each test a unique case and run directory name. This is required for every use of ``--test`` and ``--suite``.
+
+``--mach`` MACHINE (see :ref:`case_options`)
+
+``--env`` ENVIRONMENT1 (see :ref:`case_options`)
+
+``--set`` SET1,SET2,SET3 (see :ref:`case_options`)
+
+``--acct`` ACCOUNT (see :ref:`case_options`)
+
+``--grid`` GRID (see :ref:`case_options`)
+
+``--pes`` MxNxBXxBYxMB (see :ref:`case_options`)
+
+There are several additional options that come with ``--test`` that are not available
+with ``--case`` for regression and comparison testing:
+
+``--bdir`` DIR
+ specifies the top level location of the baseline results. This is used in conjunction with ``--bgen`` and ``--bcmp``. The default is set by ICE_MACHINE_BASELINE in the env.[machine]_[environment] file.
+
+``--bgen`` DIR
+ specifies the name of the directory under [bdir] where test results will be stored. When this flag is set, it automatically creates that directory and stores results from the test under that directory. If DIR is set to ``default``, then the scripts will automatically generate a directory name based on the CICE hash and the date and time. This can be useful for tracking the baselines by hash.
+
+``--bcmp`` DIR
+ specifies the name of the directory under [bdir] that the current tests will be compared to. When this flag is set, it automatically invokes regression testing and compares results from the current test to those prior results. If DIR is set to ``default``, then the script will automatically select the most recent directory under [bdir]. This can be useful for automated regression testing.
+
+``--diff`` LONG_TESTNAME
+ invokes a comparison against another local test. This allows different tests to be compared to each other for bit-for-bit reproducibility. This is different from ``--bcmp``. ``--bcmp`` is regression testing, comparing identical test results between different model versions. ``--diff`` allows comparison of two different test cases against each other.
For instance, different block sizes, decompositions, and other model features are expected to produce identical results, and ``--diff`` supports that testing. The restrictions for use of ``--diff`` are that the test has to already be completed and the testid has to match. The LONG_TESTNAME string should be of the format [test]_[grid]_[pes]_[sets]. The [machine], [env], and [testid] will be added to that string to complete the testname being compared. (See also :ref:`examplediff`)
+
+The format of the case directory name for a test will always be
+``[machine]_[env]_[test]_[grid]_[pes]_[sets].[testid]``.
+The [sets] will always be sorted alphabetically by the script, so ``--set debug,diag1`` and
+``--set diag1,debug`` produce the same testname and test, with _debug_diag1 in that order.
+
+To build and run a test, the process is the same as for a case: cd to the
+test directory, run the build script, and run the submit script::
+
+ cd [test_case]
+ ./cice.build
+ ./cice.submit
+
+The test results will be generated in a local file called **test_output**.
+To check those results::
+
+ cat test_output
+
+Tests are defined under **configuration/scripts/tests/**. Some tests currently supported are:
+
+- smoke - Runs the model for the default length. The length and options can
+  be set with the ``--set`` command line option. The test passes if the
+  model completes successfully.
+- restart - Runs the model for 10 days, writing a restart file at the end of day 5 and
+  again at the end of the run. Runs the model a second time starting from the
+  day 5 restart and writes a restart at the end of day 10 of the model run.
+  The test passes if both runs complete and
+  if the restart files at the end of day 10 from both runs are bit-for-bit identical.
+- decomp - Runs a set of different decompositions on a given configuration.
+
+Please run ``./cice.setup --help`` for the latest information.
+
+
+Adding a new test
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+See :ref:`dev_testing`
+
+
+Example.
Basic default single test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Define the test, mach, env, and testid.
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --testid t00
+ cd wolf_gnu_smoke_col_1x1.t00
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+
+Example. Simple test with some options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--set``
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --set diag1,debug --testid t00
+ cd wolf_gnu_smoke_col_1x1_debug_diag1.t00
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+
+Example. Single test, generate a baseline dataset
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--bgen``
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --bgen cice.v01 --testid t00 --set diag1
+ cd wolf_gnu_smoke_col_1x1_diag1.t00
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+
+Example. Single test, compare results to a prior baseline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--bcmp``. For this to work,
+the prior baseline must exist and have the exact same base testname
+[machine]_[env]_[test]_[grid]_[pes]_[sets]
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --bcmp cice.v01 --testid t01 --set diag1
+ cd wolf_gnu_smoke_col_1x1_diag1.t01
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+
+Example. Simple test, generate a baseline dataset and compare to a prior baseline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use ``--bgen`` and ``--bcmp``. The prior baseline must exist already.
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --bgen cice.v02 --bcmp cice.v01 --testid t02 --set diag1
+ cd wolf_gnu_smoke_col_1x1_diag1.t02
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+.. _examplediff:
+
+Example.
Simple test, comparison against another test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``--diff`` provides a way to compare tests with each other.
+For this to work, the tests have to be run in a specific order and
+the testids need to match. The test
+is always compared relative to the current case directory.
+
+To run the first test,
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --testid tx01 --set debug
+ cd wolf_gnu_smoke_col_1x1_debug.tx01
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+Then to run the second test and compare to the results from the first test
+::
+
+ ./cice.setup --test smoke --mach wolf --env gnu --testid tx01 --diff smoke_col_1x1_debug
+ cd wolf_gnu_smoke_col_1x1.tx01
+ ./cice.build
+ ./cice.submit
+ cat test_output
+
+The scripts will add a [machine]_[environment] to the beginning of the diff
+argument and the same testid to the end of the diff argument. Then the runs
+will be compared bit-for-bit and a result will be produced in test_output.
+
+
+.. _testsuites:
+
+Test suites
+------------
+
+Test suites support running multiple tests specified via
+an input file. When invoking the test suite option (``--suite``) with **cice.setup**,
+all tests will be created, built, and submitted automatically under
+a local directory called testsuite.[testid] as part of invoking the suite::
+
+ ./cice.setup --suite base_suite --mach wolf --env gnu --testid myid
+
+Like an individual test, the ``--testid`` option must be specified and can be any
+string.
Once the tests are complete, results can be checked by running the
+results.csh script in the testsuite.[testid] directory::
+
+ cd testsuite.[testid]
+ ./results.csh
+
+Multiple suites are supported on the command line as comma-separated arguments::
+
+ ./cice.setup --suite base_suite,decomp_suite --mach wolf --env gnu --testid myid
+
+If a user adds ``--set`` to the suite, all tests in that suite will add that option::
+
+ ./cice.setup --suite base_suite,decomp_suite --mach wolf --env gnu --testid myid -s debug
+
+The option settings defined in the suite take precedence over the command line
+values if there are conflicts.
+
+The predefined test suites are defined under **configuration/scripts/tests** and
+the files defining the suites
+have a suffix of .ts in that directory. The format for the test suite file
+is relatively simple.
+It is a text file with white-space-delimited
+columns that define a handful of values in a specific order.
+The first column is the test name, the second the grid, the third the pe count,
+the fourth column is
+the ``--set`` options, and the fifth column is the ``--diff`` argument.
+The fourth and fifth columns are optional.
+Lines that begin with # or are blank are ignored. For example,
+::
+
+ #Test   Grid  PEs  Sets             Diff
+ smoke   col   1x1  diag1
+ smoke   col   1x1  diag1,run1year   smoke_col_1x1_diag1
+ smoke   col   1x1  debug,run1year
+ restart col   1x1  debug
+ restart col   1x1  diag1
+ restart col   1x1  pondcesm
+ restart col   1x1  pondlvl
+ restart col   1x1  pondtopo
+
+The argument to ``--suite`` defines the test suite (.ts) filename and that argument
+can contain a path.
+**cice.setup**
+will look for the filename in the local directory, in **configuration/scripts/tests/**,
+or in the path defined by the ``--suite`` option.
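To make the column layout concrete, here is a small, hypothetical Python sketch of how such a white-space-delimited suite file could be parsed (this is an illustration of the format described above, not part of the CICE scripts; a line with a fifth column is assumed to also carry a fourth):

```python
# Minimal parser sketch for the .ts test-suite format: whitespace-delimited
# columns (test, grid, pes, optional sets, optional diff); blank lines and
# lines starting with '#' are ignored.
def parse_suite(text):
    tests = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # comments and blank lines are ignored
        cols = line.split()
        tests.append({
            'test': cols[0],
            'grid': cols[1],
            'pes':  cols[2],
            'sets': cols[3] if len(cols) > 3 else None,  # optional --set
            'diff': cols[4] if len(cols) > 4 else None,  # optional --diff
        })
    return tests

example = """
#Test   Grid  PEs  Sets             Diff
smoke   col   1x1  diag1
smoke   col   1x1  diag1,run1year   smoke_col_1x1_diag1
restart col   1x1  debug
"""
suite = parse_suite(example)
print(len(suite))        # 3
print(suite[1]['diff'])  # smoke_col_1x1_diag1
```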
+
+Because many of the command line options are specified in the input file, ONLY the
+following options are valid for suites:
+
+``--suite`` filename
+ required, input filename(s) defining the test suite(s)
+
+``--mach`` MACHINE
+ required
+
+``--env`` ENVIRONMENT1,ENVIRONMENT2
+ strongly recommended
+
+``--set`` SET1,SET2
+ optional
+
+``--acct`` ACCOUNT
+ optional
+
+``--testid`` ID
+ required
+
+``--bdir`` DIR
+ optional, top level baselines directory, defined by default by ICE_MACHINE_BASELINE in **env.[machine]_[environment]**.
+
+``--bgen`` DIR
+ recommended, test output is copied to this directory under [bdir]
+
+``--bcmp`` DIR
+ recommended, test output is compared to prior results in this directory under [bdir]
+
+``--report``
+ This is only used by ``--suite`` and when set, invokes a script that sends the test results to the results page when all tests are complete. Please see :ref:`testreporting` for more information.
+
+Please see :ref:`case_options` and :ref:`indtests` for more details about how these options are used.
+
+
+Example. Basic test suite
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Specify suite, mach, env, testid.
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a
+ cd base_suite.v01a
+ #wait for runs to complete
+ ./results.csh
+
+
+Example. Basic test suite on multiple environments
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Specify multiple envs.
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray,pgi,intel,gnu --testid v01a
+ cd base_suite.v01a
+ #wait for runs to complete
+ ./results.csh
+
+Each env can be run as a separate invocation of `cice.setup`, but if that
+approach is taken, it is recommended that different testids be used.
+
+
+Example.
Basic test suite with generate option defined
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--set``
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env gnu --testid v01b --set diag1
+ cd base_suite.v01b
+ #wait for runs to complete
+ ./results.csh
+
+If there are conflicts between the ``--set`` options in the suite and on the command line,
+the suite settings take precedence.
+
+
+Example. Multiple test suites from a single command line
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add a comma-delimited list of suites
+::
+
+ ./cice.setup --suite base_suite,decomp_suite --mach conrad --env gnu --testid v01c
+ cd base_suite.v01c
+ #wait for runs to complete
+ ./results.csh
+
+If there are redundant tests in multiple suites, the scripts will understand that and only
+create one test.
+
+
+Example. Basic test suite, store baselines in user-defined name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--bgen``
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a --bgen cice.v01a
+ cd base_suite.v01a
+ #wait for runs to complete
+ ./results.csh
+
+This will store the results in the default [bdir] directory under the subdirectory cice.v01a.
+
+Example. Basic test suite, store baselines in user-defined top-level directory
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add ``--bgen`` and ``--bdir``
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a --bgen cice.v01a --bdir /tmp/user/CICE_BASELINES
+ cd base_suite.v01a
+ #wait for runs to complete
+ ./results.csh
+
+This will store the results in /tmp/user/CICE_BASELINES/cice.v01a.
+
+
+Example.
Basic test suite, store baselines in auto-generated directory +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add ``--bgen default`` +:: + + ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a --bgen default + cd base_suite.v01a + #wait for runs to complete + ./results.csh + +This will store the results in the default [bdir] directory under a directory name generated by the script that includes the hash and date. + + +Example. Basic test suite, compare to prior baselines +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add ``--bcmp`` +:: + + ./cice.setup --suite base_suite --mach conrad --env cray --testid v02a --bcmp cice.v01a + cd base_suite.v02a + #wait for runs to complete + ./results.csh + +This will compare to results saved in the baseline [bdir] directory under +the subdirectory cice.v01a. You can use other regression options as well +(``--bdir`` and ``--bgen``) + + +Example. Basic test suite, use of default string in regression testing +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +default is a special argument to ``--bgen`` and ``--bcmp``. When used, the +scripts will automate generation of the directories. In the case of ``--bgen``, +a unique directory name consisting of the hash and a date will be created. +In the case of ``--bcmp``, the latest directory in [bdir] will automatically +be used. This provides a number of useful features + + - the ``--bgen`` directory will be named after the hash automatically + - the ``--bcmp`` will always find the most recent set of baselines + - the ``--bcmp`` reporting will include information about the comparison directory + name which will include hash information + - automation can be invoked easily, especially if ``--bdir`` is used to create separate + baseline directories as needed. + +Imagine the case where the default settings are used and ``--bdir`` is used to +create a unique location. 
You could easily carry out regular builds automatically via, +:: + + set mydate = `date -u "+%Y%m%d"` + git clone https://github.com/myfork/cice cice.$mydate --recursive + cd cice.$mydate + ./cice.setup --suite base_suite --mach conrad --env cray,gnu,intel,pgi --testid $mydate --bcmp default --bgen default --bdir /tmp/work/user/CICE_BASELINES_MASTER + +When this is invoked, a new set of baselines will be generated and compared to the prior +results each time without having to change the arguments. + + +Example. Create and test a custom suite +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Create your own input text file consisting of 5 columns of data, + - Test + - Grid + - pes + - sets (optional) + - diff test (optional) + +such as +:: + + > cat mysuite + smoke col 1x1 diag1,debug + restart col 1x1 + restart col 1x1 diag1,debug restart_col_1x1 + restart col 1x1 mynewoption,diag1,debug + +then use that input file, mysuite +:: + + ./cice.setup --suite mysuite --mach conrad --env cray --testid v01a --bgen default + cd mysuite.v01a + #wait for runs to complete + ./results.csh + +You can use all the standard regression testing options (``--bgen``, ``--bcmp``, +``--bdir``). Make sure any "diff" testing that goes on is on tests that +are created earlier in the test list, as early as possible. Unfortunately, +there is still no absolute guarantee the tests will be completed in the correct +sequence. + + +.. _testreporting: + +Test Reporting +--------------- + +The CICE testing scripts have the capability to post test results +to the official `wiki page `_. +You may need write permission on the wiki. If you are interested in using the +wiki, please contact the consortium. 
+
+To post results, once a test suite is complete, run ``results.csh`` and
+``report_results.csh`` from the suite directory,
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a
+ cd base_suite.v01a
+ #wait for runs to complete
+ ./results.csh
+ ./report_results.csh
+
+The reporting can also be automated by adding ``--report`` to ``cice.setup``
+::
+
+ ./cice.setup --suite base_suite --mach conrad --env cray --testid v01a --report
+
+With ``--report``, the suite will create all the tests, build and submit them,
+wait for all runs to be complete, and run the results and report_results scripts.
+
+
+.. _compliance:
+
+Code Compliance Test (non bit-for-bit validation)
+----------------------------------------------------
+
+A core tenet of CICE dycore and CICE innovations is that they must not change
+the physics and biogeochemistry of existing model configurations, notwithstanding
+obsolete model components. Therefore, alterations to existing CICE Consortium code
+must only fix demonstrable numerical or scientific inaccuracies or bugs, or be
+necessary to introduce new science into the code. New physics and biogeochemistry
+introduced into the model must not change model answers when switched off, and in
+that case CICEcore and CICE must reproduce answers bit-for-bit as compared to
+previous simulations with the same namelist configurations. This bit-for-bit
+requirement is common in Earth System Modeling projects, but often cannot be achieved
+in practice because model additions may require changes to existing code. In this
+circumstance, bit-for-bit reproducibility using one compiler may not be achievable
+on a different computing platform with a different compiler. Therefore, tools for
+scientific testing of CICE code changes have been developed to accompany bit-for-bit
+testing.
These tools exploit the statistical properties of simulated sea ice thickness
+to confirm or deny the null hypothesis, which is that new additions to the CICE dycore
+and CICE have not significantly altered simulated ice volume using previous model
+configurations. Here we describe the CICE testing tools, which are applied to output
+from five-year gx-1 simulations that use the standard CICE atmospheric forcing.
+A scientific justification of the testing is provided in
+:cite:`Hunke2018`.
+
+.. _paired:
+
+
+Two-Stage Paired Thickness Test
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The first quality check aims to confirm the null hypothesis
+:math:`H_0\!:\!\mu_d{=}0` at every model grid point, given the mean
+thickness difference :math:`\mu_d` between paired CICE simulations
+‘:math:`a`’ and ‘:math:`b`’ that should be identical. :math:`\mu_d` is
+approximated as
+:math:`\bar{h}_{d}=\tfrac{1}{n}\sum_{i=1}^n (h_{ai}{-}h_{bi})` for
+:math:`n` paired samples of ice thickness :math:`h_{ai}` and
+:math:`h_{bi}` in each grid cell of the gx-1 mesh. Following
+:cite:`Wilks2006`, the associated :math:`t`-statistic
+expects a zero mean, and is therefore
+
+.. math::
+ t=\frac{\bar{h}_{d}}{\sigma_d/\sqrt{n_{eff}}}
+ :label: t-distribution
+
+given variance
+:math:`\sigma_d^{\;2}=\frac{1}{n-1}\sum_{i=1}^{n}(h_{di}-\bar{h}_d)^2`
+of :math:`h_{di}{=}(h_{ai}{-}h_{bi})` and effective sample size
+
+.. math::
+ n_{eff}{=}n\frac{({1-r_1})}{({1+r_1})}
+ :label: neff
+
+for lag-1 autocorrelation:
+
+.. math::
+ r_1=\frac{\sum\limits_{i=1}^{n-1}\big[(h_{di}-\bar{h}_{d1:n-1})(h_{di+1}-\bar{h}_{d2:n})\big]}{\sqrt{\sum\limits_{i=1}^{n-1} (h_{di}-\bar{h}_{d1:n-1})^2 \sum\limits_{i=2}^{n} (h_{di}-\bar{h}_{d2:n})^2 }}.
+ :label: r1 + +Here, :math:`\bar{h}_{d1:n-1}` is the mean of all samples except the +last, and :math:`\bar{h}_{d2:n}` is the mean of samples except the +first, and both differ from the overall mean :math:`\bar{h}_d` in +equations (:eq:`t-distribution`). That is: + +.. math:: + \bar{h}_{d1:n-1}=\frac{1}{n{-}1} \sum \limits_{i=1}^{n-1} h_{di},\quad + \bar{h}_{d2:n}=\frac{1}{n{-}1} \sum \limits_{i=2}^{n} h_{di},\quad + \bar{h}_d=\frac{1}{n} \sum \limits_{i=1}^{n} {h}_{di} + :label: short-means + +Following :cite:`Zwiers1995`, the effective sample size is +limited to :math:`n_{eff}\in[2,n]`. This definition of :math:`n_{eff}` +assumes ice thickness evolves as an AR(1) process +:cite:`VonStorch1999`, which can be justified by analyzing +the spectral density of daily samples of ice thickness from 5-year +records in CICE Consortium member models :cite:`Hunke2018`. +The AR(1) approximation is inadmissible for paired velocity samples, +because ice drift possesses periodicity from inertia and tides +:cite:`Hibler2006,Lepparanta2012,Roberts2015`. Conversely, +tests of paired ice concentration samples may be less sensitive to ice +drift than ice thickness. In short, ice thickness is the best variable +for CICE Consortium quality control (QC), and for the test of the mean +in particular. + +Care is required in analyzing mean sea ice thickness changes using +(:eq:`t-distribution`) with +:math:`N{=}n_{eff}{-}1` degrees of freedom. +:cite:`Zwiers1995` demonstrate that the :math:`t`-test in +(:eq:`t-distribution`) becomes conservative when +:math:`n_{eff} < 30`, meaning that :math:`H_0` may be erroneously +confirmed for highly auto-correlated series. Strong autocorrelation +frequently occurs in modeled sea ice thickness, and :math:`r_1>0.99` is +possible in parts of the gx-1 domain for the five-year QC simulations. 
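For concreteness, the statistic above can be sketched in plain Python. This is only an illustrative transcription of equations (:eq:`t-distribution`), (:eq:`neff`), (:eq:`r1`) and (:eq:`short-means`), not the Consortium's QC script, and the two input thickness series are invented:

```python
# Sketch of the paired t-statistic with the effective sample size n_eff
# derived from the lag-1 autocorrelation r1, clamped to [2, n].
import math
import random

def lag1_autocorr(d):
    # r_1 from equation (r1), using the two "short" means of (short-means)
    n = len(d)
    m1 = sum(d[:-1]) / (n - 1)   # mean of all samples except the last
    m2 = sum(d[1:]) / (n - 1)    # mean of all samples except the first
    num = sum((d[i] - m1) * (d[i + 1] - m2) for i in range(n - 1))
    den = math.sqrt(sum((x - m1) ** 2 for x in d[:-1]) *
                    sum((x - m2) ** 2 for x in d[1:]))
    return num / den

def paired_t(ha, hb):
    # t from equation (t-distribution), n_eff from (neff) limited to [2, n]
    d = [a - b for a, b in zip(ha, hb)]
    n = len(d)
    dbar = sum(d) / n
    var = sum((x - dbar) ** 2 for x in d) / (n - 1)
    r1 = lag1_autocorr(d)
    n_eff = min(max(n * (1.0 - r1) / (1.0 + r1), 2.0), float(n))
    return dbar / math.sqrt(var / n_eff), n_eff

# Invented series: the "test" run is thicker by about 1 m everywhere, so
# the statistic should be large and H0 would be rejected.
random.seed(3)
hb = [2.0 + 0.05 * random.random() for _ in range(100)]   # baseline thickness
ha = [h + 1.0 + 0.02 * random.random() for h in hb]       # perturbed thickness
t, n_eff = paired_t(ha, hb)
print(t > 10.0)              # a large |t| rejects H0
print(2.0 <= n_eff <= 100.0)
```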
+In the event that :math:`H_0` is confirmed but :math:`2\leq n_{eff}<30`, +the :math:`t`-test progresses to the ‘Table Lookup Test’ of +:cite:`Zwiers1995`, to check that the first-stage test +using (:eq:`t-distribution`) was not +conservative. The Table Lookup Test chooses critical :math:`t` values +:math:`|t| + + # Check out the updated code, or clone from a pull request + + # Run the test with the new code + ./cice.setup -m onyx -ts base_suite -testid test0 -bc cicev6.0.0 -a + + # Check the results + cd base_suite.test0 + ./results.csh + + #### If the BFB tests fail, perform the compliance testing #### + # Create a QC baseline + ./cice.setup -m onyx -t smoke -g gx1 -p 44x1 -testid qc_base -s qc,medium -a + cd onyx_smoke_gx1_44x1_medium_qc.qc_base + ./cice.build + ./cice.submit + + # Check out the updated code or clone from a pull request + + # Create the t-test testing data + ./cice.setup -m onyx -t smoke -g gx1 -p 44x1 -testid qc_test -s qc,medium -a + cd onyx_smoke_gx1_44x1_medium_qc.qc_test + ./cice.build + ./cice.submit + + # Wait for runs to finish + + # Perform the QC test + cp configuration/scripts/tests/QC/cice.t-test.py + ./cice.t-test.py /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_base \ + /p/work/turner/CICE_RUNS/onyx_smoke_gx1_44x1_medium_qc.qc_test + + # Example output: + INFO:__main__:Number of files: 1825 + INFO:__main__:Two-Stage Test Passed + INFO:__main__:Quadratic Skill Test Passed for Northern Hemisphere + INFO:__main__:Quadratic Skill Test Passed for Southern Hemisphere + INFO:__main__: + INFO:__main__:Quality Control Test PASSED + + +.. _testplotting: + +Test Plotting +---------------- + +The CICE scripts include a script (``timeseries.csh``) that will generate a timeseries +figure from the diagnostic output file. +When running a test suite, the ``timeseries.csh`` script is automatically copied to the suite directory. 
+If the ``timeseries.csh`` script is to be used on a test or case that is not part of a test suite,
+users will need to run the ``timeseries.csh`` script from the tests directory
+(``./configuration/scripts/tests/timeseries.csh``), or copy it to a local directory and run it
+locally (``cp configuration/scripts/tests/timeseries.csh .`` followed by
+``./timeseries.csh /path/to/ice_diag.full_ITD``). The plotting script can be run
+on any of the output files: icefree, slab, full_ITD, or land. To generate the figure,
+run the ``timeseries.csh`` script and pass the full path to the ice_diag file as an argument.
+
+For example:
+
+Run the test suite. ::
+
+$ ./cice.setup -m conrad -e intel --suite base_suite -acct ACCOUNT --testid t00
+
+Wait for the suite to finish, then go to the directory. ::
+
+$ cd base_suite.t00
+
+Run the timeseries script on the desired case. ::
+
+$ ./timeseries.csh /p/work1/turner/CICE_RUNS/conrad_intel_smoke_col_1x1_diag1_run1year.t00/ice_diag.full_ITD
+
+The output figures are placed in the directory where the ice_diag file is located.
+
+This plotting script can be used to plot the following variables:
+
+ - area fraction
+ - average ice thickness (m)
+ - average snow depth (m)
+ - air temperature (C)
+ - shortwave radiation (:math:`W/m^2`)
+ - longwave radiation (:math:`W/m^2`)
+ - snowfall
+ - average salinity (ppt)
+ - surface temperature (C)
+ - outward longwave flux (:math:`W/m^2`)
+ - sensible heat flux (:math:`W/m^2`)
+ - latent heat flux (:math:`W/m^2`)
+ - top melt (m)
+ - bottom melt (m)
+ - lateral melt (m)
+ - new ice (m)
+ - congelation (m)
+ - snow-ice (m)
+ - initial energy change (:math:`W/m^2`)
+
diff --git a/doc/source/user_guide/ug_troubleshooting.rst b/doc/source/user_guide/ug_troubleshooting.rst
new file mode 100644
index 000000000..3ab5d7b40
--- /dev/null
+++ b/doc/source/user_guide/ug_troubleshooting.rst
@@ -0,0 +1,263 @@
+:tocdepth: 3
+
+..
_troubleshooting:
+
+Troubleshooting
+===============
+
+Check the FAQ: https://github.com/CICE-Consortium/Icepack/wiki
+
+.. _setup:
+
+Initial setup
+-------------
+
+If there are problems, you can manually edit
+the env, Macros, and **cice.run** files in the case directory until things are
+working properly. Then you can copy the env and Macros files back to
+**configuration/scripts/machines**.
+
+Changes made directly in the run directory, e.g. to the namelist file, will be overwritten
+if scripts in the case directory are run again later.
+
+If changes are needed in the **cice.run.setup.csh** script, it must be manually modified.
+
+Ensure that the block size settings ``ICE_BLCKX``, ``ICE_BLCKY``, and ``ICE_MXBLCKS`` in **cice.settings** are
+compatible with the processor\_shape and other domain options in **ice\_in**.
+
+If using the rake or space-filling curve algorithms for block
+distribution (`distribution\_type` in **ice\_in**), the code will abort
+if `MXBLCKS` is not large enough. The correct value is provided in the
+diagnostic output. Also, the spacecurve setting can only be used with certain
+block sizes that result in the number of blocks in the x and y directions being
+multiples of only 2, 3, or 5.
+
+If starting from a restart file, ensure that kcatbound is the same as
+that used to create the file (`kcatbound` = 0 for the files included in
+this code distribution). Other configuration parameters, such as
+`NICELYR`, must also be consistent between runs.
+
+For stand-alone runs, check that `-Dcoupled` is *not* set in the
+**Macros.\*** file.
+
+For coupled runs, check that `-Dcoupled` and other
+coupled-model-specific (e.g., CESM, popcice or hadgem) preprocessing
+options are set in the **Macros.\*** file.
+
+Set ``ICE_CLEANBUILD`` to true to clean before rebuilding.
+
+
+.. _restarttrouble:
+
+Restarts
+--------------
+
+Manual restart tests require that the path to the restart file be included in the
+namelist file, ``ice_in``.
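As a sketch only, the restart-related entries discussed in this section might be grouped as follows in **ice_in** (the filename is the **iced.[timeID]** placeholder from the restart discussion, and the enclosing namelist group is not shown):

```
 runtype          = 'initial'
 ice_ic           = './restart/iced.[timeID]'
 restart          = .true.
 use_restart_time = .true.
```

Replace **iced.[timeID]** with the actual restart file name recorded in the restart pointer file.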
+
+Ensure that ``kcatbound`` is the same as that used to create the restart file.
+Other configuration parameters, such as ``NICELYR``, must also be consistent between runs.
+
+..
+ this is commented out now
+ Underflows
+ -----------
+ - Tests using a debug flag that traps underflows will fail unless a "flush-to-zero" flag
+ is set in the Macros file. This is due to very small exponential values in the delta-Eddington
+ radiation scheme.
+
+CICE version 5 introduces a new model configuration that makes
+restarting from older simulations difficult. In particular, the number
+of ice categories, the category boundaries, and the number of vertical
+layers within each category must be the same in the restart file and in
+the run restarting from that file. Moreover, significant differences in
+the physics, such as the salinity profile, may cause the code to fail
+upon restart. Therefore, new model configurations may need to be started
+using `runtype` = ‘initial’. Binary restart files that were provided with
+CICE v4.1 were made using the BL99 thermodynamics with 4 layers and 5
+thickness categories (`kcatbound` = 0) and therefore cannot be used for
+the default CICE v5 configuration (7 layers). In addition, CICE’s
+default restart file format is now netCDF instead of binary.
+
+Restarting a run using `runtype` = ‘continue’ requires restart data for
+all tracers used in the new run. If tracer restart data is not
+available, use `runtype` = ‘initial’, setting `ice\_ic` to the name of the
+core restart file and setting the namelist restart flags to true for
+each tracer that is available. The unavailable tracers will be
+initialized to their default settings.
+
+On tripole grids, use `restart\_ext` = true when using either binary or
+regular (non-PIO) netcdf.
+
+Provided that the same number of ice layers (default: 4) will be used
+for the new runs, it is possible to convert v4.1 restart files to the
+new file structure and then to netCDF format.
If the same physical +parameterizations are used, the code should be able to execute from +these files. However if different physics is used (for instance, mushy +thermo instead of BL99), the code may still fail. To convert a v4.1 +restart file: + +- Edit the code **input\_templates/convert\_restarts.f90** for your + model configuration and path names. Compile and run this code to + create a binary restart file that can be read using v5. Copy the + resulting file to the **restart/** subdirectory in your working + directory. + +- In your working directory, turn off all tracer restart flags in + **ice\_in** and set the following: + + - runtype = ‘initial’ + + - ice\_ic = ‘./restart/[your binary file name]’ + + - restart = .true. + + - use\_restart\_time = .true. + +- In **CICE\_InitMod.F90**, comment out the call to + restartfile(ice\_ic) and uncomment the call to + restartfile\_v4(ice\_ic) immediately below it. This will read the + v4.1 binary file and write a v5  file containing the same + information. + +If restart files are taking a long time to be written serially (i.e., +not using PIO), see the next section. + + +Slow execution +-------------------- + +On some architectures, underflows (:math:`10^{-300}` for example) are +not flushed to zero automatically. Usually a compiler flag is available +to do this, but if not, try uncommenting the block of code at the end of +subroutine *stress* in **ice\_dyn\_evp.F90** or **ice\_dyn\_eap.F90**. +You will take a hit for the extra computations, but it will not be as +bad as running with the underflows. + +In some configurations, multiple calls to scatter or gather global +variables may overfill MPI’s buffers, causing the code to slow down +(particularly when writing large output files such as restarts). To +remedy this problem, set `BARRIERS yes` in **comp\_ice**. This +synchronizes MPI messages, keeping the buffers in check. 
+ + +Debugging hints +----------------------- + +Several utilities are available that can be helpful when debugging the +code. Not all of these will work everywhere in the code, due to possible +conflicts in module dependencies. + +*debug\_ice* (**CICE.F90**) + A wrapper for *print\_state* that is easily called from numerous + points during the timestepping loop (see + **CICE\_RunMod.F90\_debug**, which can be substituted for + **CICE\_RunMod.F90**). + +*print\_state* (**ice\_diagnostics.F90**) + Print the ice state and forcing fields for a given grid cell. + +`dbug` = true (**ice\_in**) + Print numerous diagnostic quantities. + +`print\_global` (**ice\_in**) + If true, compute and print numerous global sums for energy and mass + balance analysis. This option can significantly degrade code + efficiency. + +`print\_points` (**ice\_in**) + If true, print numerous diagnostic quantities for two grid cells, + one near the north pole and one in the Weddell Sea. This utility + also provides the local grid indices and block and processor numbers + (`ip`, `jp`, `iblkp`, `mtask`) for these points, which can be used in + conjunction with `check\_step` to call *print\_state*. These flags + are set in **ice\_diagnostics.F90**. This option can be fairly slow, + due to gathering data from processors. + +*global\_minval, global\_maxval, global\_sum* (**ice\_global\_reductions.F90**) + Compute and print the minimum and maximum values for an individual + real array, or its global sum. + + +Known bugs +-------------- + +- Fluxes sent to the CESM coupler may have incorrect values in grid + cells that change from an ice-free state to having ice during the + given time step, or vice versa, due to scaling by the ice area. The + authors of the CESM flux coupler insist on the area scaling so that + the ice and land models are treated consistently in the coupler (but + note that the land area does not suddenly become zero in a grid cell, + as does the ice area). 
+ +- With the old CCSM radiative scheme (`shortwave` = ‘default’ or + ‘ccsm3’), a sizable fraction (more than 10%) of the total shortwave + radiation is absorbed at the surface but should be penetrating into + the ice interior instead. This is due to use of the aggregated, + effective albedo rather than the bare ice albedo when + `snowpatch` :math:`< 1`. + +- The date-of-onset diagnostic variables, `melt\_onset` and `frz\_onset`, + are not included in the core restart file, and therefore may be + incorrect for the current year if the run is restarted after Jan 1. + Also, these variables were implemented with the Arctic in mind and + may be incorrect for the Antarctic. + +- The single-processor *system\_clock* time may give erratic results on + some architectures. + +- History files that contain time-averaged data (`hist\_avg` = true in + **ice\_in**) will be incorrect if restarting from midway through an + averaging period. + +- In stand-alone runs, restarts from the end of `ycycle` will not be + exact. + +- Using the same frequency twice in `histfreq` will have unexpected + consequences and cause the code to abort. + +- Latitude and longitude fields in the history output may be wrong when + using padding. + + +Interpretation of albedos +---------------------------------------- + +The snow-and-ice albedo, `albsni`, and diagnostic albedos `albice`, `albsno`, +and `albpnd` are merged over categories but not scaled (divided) by the +total ice area. (This is a change from CICE v4.1 for `albsni`.) The latter +three history variables represent completely bare or completely snow- or +melt-pond-covered ice; that is, they do not take into account the snow +or melt pond fraction (`albsni` does, as does the code itself during +thermodynamic computations). This is to facilitate comparison with +typical values in measurements or other albedo parameterizations. The +melt pond albedo `albpnd` is only computed for the Delta-Eddington +shortwave case. 
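Schematically, the merge over categories is an area-weighted sum with no final division by the aggregate ice area (a hedged sketch; the per-category array names below are assumptions, not the exact names in the code):

```fortran
! Hedged sketch: merge a per-category bare-ice albedo (albicen, assumed
! name) using the category areas aicen as weights, WITHOUT dividing by
! the aggregate ice area, so the result represents fully ice-covered
! conditions for comparison with measurements.
albice(i,j) = c0
do n = 1, ncat
   albice(i,j) = albice(i,j) + albicen(i,j,n)*aicen(i,j,n)
enddo
```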
+ +With the Delta-Eddington parameterization, the albedo depends on the +cosine of the zenith angle (:math:`\cos\varphi`, `coszen`) and is zero if +the sun is below the horizon (:math:`\cos\varphi < 0`). Therefore +time-averaged albedo fields would be low if a diurnal solar cycle is +used, because zero values would be included in the average for half of +each 24-hour period. To rectify this, a separate counter is used for the +averaging that is incremented only when :math:`\cos\varphi > 0`. The +albedos will still be zero in the dark, polar winter hemisphere. + + +Proliferating subprocess parameterizations +------------------------------------------------------- + +With the addition of several alternative parameterizations for sea ice +processes, a number of subprocesses now appear in multiple parts of the +code with differing descriptions. For instance, sea ice porosity and +permeability, along with associated flushing and flooding, are +calculated separately for mushy thermodynamics, topo and level-ice melt +ponds, and for the brine height tracer, each employing its own +equations. Likewise, the BL99 and mushy thermodynamics compute freeboard +and snow–ice formation differently, and the topo and level-ice melt pond +schemes both allow fresh ice to grow atop melt ponds, using slightly +different formulations for Stefan freezing. These various process +parameterizations will be compared and their subprocess descriptions +possibly unified in the future. 
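Returning to the zenith-angle-aware albedo averaging described above: the separate counter can be sketched as follows (a hedged sketch; variable names such as `albcnt` are assumptions based on the description, not verified against the code):

```fortran
! Hedged sketch: accumulate the albedo sum and its counter only while
! the sun is above the horizon, so the dark half of each diurnal cycle
! does not drag the time average toward zero.
if (coszen(i,j) > c0) then
   albsum(i,j) = albsum(i,j) + albsni(i,j)
   albcnt(i,j) = albcnt(i,j) + c1
endif
! at output time, divide by the daylight-only counter:
! avg_albsni(i,j) = albsum(i,j) / max(albcnt(i,j), c1)
```

In the permanently dark winter hemisphere the counter stays at zero, so the averaged albedo there remains zero, as noted above.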
+ + diff --git a/doc/sphinx-documentation-workflow.txt b/doc/sphinx-documentation-workflow.txt deleted file mode 100644 index ffaa4be75..000000000 --- a/doc/sphinx-documentation-workflow.txt +++ /dev/null @@ -1,72 +0,0 @@ -Basic workflow for github/sphinx documentation -Alice DuVivier -July 14, 2017 ----------------------------------------------------------- - -Most of this is adapted from https://github.com/ESMCI/cime/wiki/CIME-Git-Workflow, which has Figure 1 that is **VERY** useful -- Assumes you have sphinx installed on your own, personal machine. This includes the sphinxcontrib.bibtex library. Need to do this first. See the about-sphinx-documentation.txt file for more details. - -1. Editing *.rst files, testing html code, etc. in master branch -* Things to do once: -- Remotely: Create fork on personal GitHub area. Just do this once (usually). Use button on website of original repo to create this personal fork. -- Locally: Switch to your local machine -> cd ~/Documents/Research/github/CICE-Consortium/ —> go to local directory where you want to keep GitHub code, make changes, etc. -> git clone https://github.com/duvivier/CICE.git —> Clone the fork to your local machine from GitHub. Get the URL from which to clone from personal GitHub repository page, green “clone or download” button. -- This has now created a “local” copy of your fork called CICE. From here on you do changes from “local” fork on your local machine and push to your remote repo called “origin”. You can pull from “origin” or the original “upstream” remote repo. Any changes you want to eventually merge need to be pushed to the “origin” before issuing a pull request to “upstream” -> git status —> check that you’re on the master branch and have checked this out. -> git remote —v —> check your remote branches. Default will have just “origin” and it will push to your local fork. -> git remote add upstream https://github.com/CICE-Consortium/CICE —> add the consortium as the ultimate upstream source. 
Will need this for daily updates (see below). -> git remote —v —> check that the “upstream” branch has been added - -* Things you do daily: -> cd ~/Documents/Research/github/CICE-Consortium/CICE/ -> git status —> tells you what branch you are on and any commits that need to be made -> git branch —> tells you what branches are available locally -> git remote —v —> lists the remote sources. want “origin” to point to personal remote repo and “upstream” to point to the original code you forked from. -> git pull upstream master —> will fetch+merge any changes to the local master branch since you last stopped working on it. Need to specify “master” branch. If there are not changes it will tell you that you are already up-to-date can also do > git pull —rebase upstream master and the —rebase tells git to move your commits to the tip of the master branch after synchronizing with changes from central repository. Better to rebase than do a merge commit. Rebasing is unlikely to cause problems unless you’re working on the same code or feature as someone else. To stop this process just execute git rebase —abort. -- Make your local edits to *.rst files, code, etc. Then issue sphinx commands to test these. -> cd ../ (be in the /doc/ directory, not the source directory. Must be one directory up). -> make clean —> gets rid of old html -> make html —> makes new html from sphinx *.rst files. -> cd build/html -> open index.html (or other html code) —> opens html locally to check it quickly -- iterate on the steps above till you’re happy with the html code -- note that sometimes the math doesn't render properly the first time you try this. If this is the case, you should do another >make html and check it. If -that still doesn't work try just touching the *.rst file that isn't rendering properly (open it, save it, close it) and try >make html again. This has worked -in the past to get the math to render properly. At this time we are unsure why this is necessary. 
-> git status —> gives you list of files that are changed but not yet staged in red -> git add *.rst —> add *rst files or whatever else needs to be staged for documentation stuff. -> git status —> now should show list of changes that have been staged in green -> git commit -m “message” —> commit the changes to your local fork. This makes it ready to push to external fork. -> git push origin —> will push the local code changes to your remote “origin” fork. In this case the master fork with the *.rst files. Note that we set this up so that the push will ignore *.html files. -* note you may want to add the path for the documentation from gh-pages to the README.md file (or another file). The path is: https://duvivier.github.io/CICE/ (or use CICE-Consortium instead of duvivier for the consortium repository once the pull request is complete) - -2. Pushing *.html code in gh-pages branch -* Things to do once: -> cd ~/Documents/Research/github/CICE-Consortium/ -> git clone https://github.com/duvivier/CICE.git CICE.gh-pages -- note that this checks out the master branch. So we need to switch to the gh-pages branch. -> git checkout gh-pages -- This switches to the gh-pages branch, which *only* is used for html pages. - -* Things to do daily: -> cd ~/Documents/Research/github/CICE-Consortium/CICE.gh-pages/ -> git status —> check that you are on gh-pages branch with this tag. -> rm -rf . —> remove old html files in here -> cd ~/Documents/Research/github/CICE-Consortium/CICE/doc/ —> change to master branch -> make clean —> clean up old html code -> make html —> make the correct html code for *rst files you just committed to master branch -> cd build/html -> cp -r . ~/Documents/Research/github/CICE-Consortium/CICE.gh-pages/ -> cd ~/Documents/Research/github/CICE-Consortium/CICE.gh-pages/ -> git add . 
—> add the files to those needing a commit to local branch -> git commit -m “updates….” —> commit *.html files to local fork -> git push origin —> will push local changes to remote “origin” fork, which in this case is the gh-pages branch on my personal fork from the consortium. -- Check this online at personal pages to make sure it looks right, is pointing to right path, etc. etc. - -3. Merging with original repository -- Once you’ve checked and tested the documentation on your local fork, it’s time for a pull request to the original repository -- On personal GitHub webpage there is a button on left called “New Pull Request”. Click that. -- It then takes you to original repository (CICE-Consoritum/CICE/) from which you forked. It shows the number of changed files and the differences in green (additions) or red (subtractions) in these files with the files that exist on that branch. If you add a new file then everything is green. -- Once you’ve checked your code, then click the big, green “Create pull request” button and this will send the changes to the administrators of the CICE-Consoritum repository. (Elizabeth, Tony, Alice, others). -- Always issue a pull request to merge with the original repository rather than just merging it yourself. This is the main, well tested branch that we release from so we want multiple eyes to look everything over. This is less crucial for documentation than actual code, but still important. diff --git a/icepack b/icepack index 0fcee3aa2..005df7de8 160000 --- a/icepack +++ b/icepack @@ -1 +1 @@ -Subproject commit 0fcee3aa27e44111cabd2488074806eda5db7edd +Subproject commit 005df7de8e1b351a1c911de63012f19d153a7f15