94 changes: 94 additions & 0 deletions cpl_gray+swamp+ocn/README.md
@@ -0,0 +1,94 @@
# Introduction

This is the README for `MITgcm_contrib/verification_other/cpl_gray+swamp+ocn`, a directory that provides source code and input files for a coupled setup using gray atmospheric physics (O'Gorman and Schneider, JCl, 2008)
plugged into the MITgcm dynamical core, with part of the surface being a slab ocean (swamp or continent) and the rest being a dynamic ocean. This setup uses a standard cubed-sphere grid for both atmosphere and ocean (6 faces, each 32x32), non-uniform pressure levels in the atmosphere (26), and non-uniform depth levels in the ocean. The code runs on 28 nodes: 3 for the ocean (2 faces per node), 24 for the atmosphere (4 nodes per face), and 1 for coupling.

The code here corresponds to the continental configuration of the dynamic-ocean simulation in Tuckman et al., "The Zonal Seasonal Cycle of Tropical Precipitation: Introducing the Indo-Pacific Monsoonal Mode", Journal of Climate, 2024. However, in the version in this directory, the ocean's vertical resolution is higher and the albedo is symmetric.

# Components

This directory consists of:

* `cpl_gray+swamp+ocn/build_atm/` : directory to build atmospheric component executable
* `cpl_gray+swamp+ocn/code_atm/` : specific code for atmosphere
* `cpl_gray+swamp+ocn/input_atm/` : specific input for atmosphere

* `cpl_gray+swamp+ocn/build_ocn/` : directory to build oceanic component executable
* `cpl_gray+swamp+ocn/code_ocn/` : specific code for ocean
* `cpl_gray+swamp+ocn/input_ocn/` : specific input for ocean

* `cpl_gray+swamp+ocn/build_cpl/` : directory to build coupler executable
* `cpl_gray+swamp+ocn/code_cpl/` : specific code for coupler
* `cpl_gray+swamp+ocn/input_cpl/` : specific input for coupler
* `cpl_gray+swamp+ocn/shared_code/` : specific coupling code shared by all 3 components

* `cpl_gray+swamp+ocn/run_dir/` : Folder where you should run the code

The `run_dir` folder is where the code is run and where the output ends up. It contains:

* `cpl_gray+swamp+ocn/run_dir/sgm_run` : Progress tracker
* `cpl_gray+swamp+ocn/run_dir/run_cpl.slurm` : Slurm script to start the run
* `cpl_gray+swamp+ocn/run_dir/rank_0` : The folder which controls the coupling code
* `cpl_gray+swamp+ocn/run_dir/rank_[1-3]` : The folders which control the ocean code. `rank_2` and `rank_3` have soft links to `rank_1`, so if anything is to be changed you only need to change `rank_1`
* `cpl_gray+swamp+ocn/run_dir/rank_[4-28]` : The folders which control the atmosphere code. `rank_[5-28]` have soft links to `rank_4`
* `cpl_gray+swamp+ocn/run_dir/bin` : The folder which contains the executable for each of the model components. There are currently executables in this folder, but they will need to be replaced with ones compiled on the system where you plan to run the code
* `cpl_gray+swamp+ocn/run_dir/Output` : The folder which will contain all the output when the code is done running

# Instructions:

Download MITgcm from the MITgcm repository to `$my_base_dir$/MITgcm` and, in the `MITgcm` directory,
download `cpl_gray+swamp+ocn` from `MITgcm_contrib/verification_other/`.
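
A minimal sketch of this step: the clone URL for the main MITgcm repository is the standard GitHub one, while the exact way to obtain `verification_other/cpl_gray+swamp+ocn` (from `MITgcm_contrib`) depends on your access and is left as a placeholder.

```bash
# Sketch: fetch the main model, then place this experiment inside it.
cd $my_base_dir
git clone https://github.com/MITgcm/MITgcm.git
cd MITgcm
# Obtain MITgcm_contrib/verification_other/cpl_gray+swamp+ocn (by checkout or
# copy, depending on how you access MITgcm_contrib) and place it here.
```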

## Short test run:
One can compile and run a very short test (4 hours of simulated time), using only 3 processors
(one for each component), by running the batch script `MITgcm/tools/run_cpl_test`
following the instructions described in `MITgcm/verification/cpl_aim+ocn/README.md`.
This provides a quick way to check that everything is in place for a longer and more meaningful
simulation as described below.

## Preparing the executables:
1. Go to the node on which you will be running the simulation
2. Load the relevant modules (usually intel and openmpi)
3. In each build directory (atm, ocn, and cpl)
a. `make Clean`
b. `$my_dir$/MITgcm/tools/genmake2 -mpi -rootdir $my_dir$/MITgcm`
c. `make depend`
d. `make`
e. The resulting executable will be named `mitgcmuv` -- rename it to `mitgcmuv.atm`, `mitgcmuv.ocn`, or `mitgcmuv.cpl`
4. Copy each of the three executables to `cpl_gray+swamp+ocn/run_dir/bin/`
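
A minimal sketch of steps 3-4 for one component (the atmosphere is shown; repeat in `build_ocn` and `build_cpl`). Module names and paths are assumptions to adapt to your cluster; `$my_dir` stands for the `$my_dir$` used above.

```bash
# Sketch only: build the atmospheric executable and stage it in run_dir/bin.
module load intel openmpi                                    # step 2 (cluster-specific)
cd cpl_gray+swamp+ocn/build_atm                              # from wherever you placed the experiment
make Clean                                                   # step 3a
$my_dir/MITgcm/tools/genmake2 -mpi -rootdir $my_dir/MITgcm   # step 3b
make depend                                                  # step 3c
make                                                         # step 3d
mv mitgcmuv mitgcmuv.atm                                     # step 3e: rename (use .ocn / .cpl for the others)
cp mitgcmuv.atm ../run_dir/bin/                              # step 4
```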

## Changing runtime parameters:

1. Continental configuration
a. In this setup, the continental configuration is controlled by `bathymetry.bin`, `wall_S.bin`, and `wall_W.bin` (in `rank_1`) for the ocean, and by `mixed_layer_depth.bin` (in `rank_4`) for the atmosphere. All of these are binary files containing 192 x 32 matrices (the six 32 x 32 cubed-sphere faces laid out side by side); a size-check sketch follows this item.
b. `bathymetry.bin` represents the ocean depth. For regions that are meant to be a swamp/slab ocean, set this to zero; otherwise, set it to the ocean depth.
c. `wall_S.bin` and `wall_W.bin` specify whether ocean flow is blocked between the cell in question and its neighbor in the negative y direction (`wall_S.bin`) or the negative x direction (`wall_W.bin`). Note that these directions do not always correspond to geographic south and west. The walls are used to separate basins from each other and have a large impact on ocean dynamics.
d. `mixed_layer_depth.bin` controls the heat capacity of the ocean in regions without ocean dynamics (i.e., where `bathymetry.bin` is zero). The value is a depth of water; the surface heat capacity of that cell (in units of J/(m^2 K)) is that depth multiplied by the density of water and the specific heat of water at constant pressure. For example, a value of 10 m gives roughly 1000 kg/m^3 x 4000 J/(kg K) x 10 m ≈ 4 x 10^7 J/(m^2 K).
e. `freshwater_balance_weights.bin` in `rank_1` controls how the extra water that precipitates over the continent is redistributed over the ocean. This is needed because E-P need not be zero over the continent, while freshwater should be conserved in the simulation.
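
A quick way to sanity-check any of these files is sketched below, assuming they are written as one value per grid cell; whether that value is 4 or 8 bytes depends on how the files were generated, so check both.

```bash
# Sketch: confirm a continental-configuration file holds 192 x 32 = 6144 values.
f=rank_1/bathymetry.bin
size=$(stat -c %s "$f")                 # GNU stat; on macOS use: stat -f %z "$f"
echo "$f: $(( size / 8 )) values if 8-byte reals, $(( size / 4 )) if 4-byte (expect 6144)"
```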

2. Diagnostics to output
a. `rank_4/data.diagnostics` controls the atmosphere output
b. `rank_1/data.diagnostics` controls the ocean output
c. See the MITgcm documentation for details on how to change these. By default, the model outputs monthly averages of many relevant variables.
d. It is possible to change the rate at which diagnostics are output partway through the run. This can be useful if you want higher-resolution output for the last few years of the run (i.e., once it has reached steady state). To do this, create a `data.diagnostics_fast` file in both `rank_1` and `rank_4` to replace `data.diagnostics`, and create links to the appropriate one in all the other `rank_*` folders (see the sketch below). Then uncomment the relevant lines (175-181) in `run_cpl.slurm`. The `FastDiagnosticsNumber` variable controls after how many runs the model switches to the new file.
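
A sketch of the file/link layout this describes. Starting the fast files as copies of the existing `data.diagnostics` is an assumption; edit the output frequencies in them as needed.

```bash
# Sketch: create the 'fast' diagnostics files and link them from the other ranks.
cd run_dir
cp rank_1/data.diagnostics rank_1/data.diagnostics_fast     # ocean
cp rank_4/data.diagnostics rank_4/data.diagnostics_fast     # atmosphere
for r in rank_2 rank_3; do
  ln -s ../rank_1/data.diagnostics_fast "$r"/data.diagnostics_fast
done
for r in rank_{5..28}; do
  ln -s ../rank_4/data.diagnostics_fast "$r"/data.diagnostics_fast
done
# Then uncomment the relevant lines in run_cpl.slurm and set FastDiagnosticsNumber.
```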

3. `sgm_run`
a. `sgm_run` is the progress tracker. The first number is the number of iterations that have finished, and the second is the total number that will be run.

4. `run_cpl.slurm`
a. The Slurm script used to submit the job.
b. Several options here (e.g., the partition) will have to be changed depending on the cluster; a sketch of the kind of header lines to check follows this list.
c. The current configuration is meant to be run on 28 nodes (3 for the ocean, 24 for the atmosphere, 1 for coupling).
d. Make sure the run directory is what you want it to be.
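
The lines below are only a sketch of the sort of Slurm header options that typically need adapting; they are not the actual contents of `run_cpl.slurm`.

```bash
#SBATCH --nodes=28                    # 3 ocean + 24 atmosphere + 1 coupler
#SBATCH --partition=your_partition    # cluster-specific (assumption)
#SBATCH --time=12:00:00               # wall-clock limit per submission (assumption)
```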

## Final checks and running
1. Make sure all files in `rank_2` and `rank_3` match those in `rank_1`, and that all files in `rank_[5-28]` match those in `rank_4`. It is recommended to do this through symbolic links, so that any change to `rank_1` or `rank_4` propagates to all the other ranks. This should already be set up; a quick check is sketched after this list.
2. `sbatch run_cpl.slurm` to begin the run
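
A minimal check that the mirrored ranks really are symbolic links; anything this prints is a regular file that could silently diverge from `rank_1` or `rank_4`.

```bash
# Sketch: list any regular (non-symlink) files in the mirrored rank folders.
cd run_dir
find rank_2 rank_3 -maxdepth 1 -type f
find rank_{5..28} -maxdepth 1 -type f
```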

## Accessing Output:

After the simulation has run, there will be four folders in `run_dir/Output`: one each for the atmosphere and ocean output (`mnc_atm` and `mnc_ocn`) and for the atmosphere and ocean pickup files (`pick_atm` and `pick_ocn`). In the `mnc` folders there will be `Surface` and `Fields` output files for each time segment of the simulation, labeled with the iteration number at which the time segment began, the `rank` folder, and the tile that ran the code. There are 24 of these for the atmosphere and 6 for the ocean. They can be consolidated with the `rdmnc` function in the `utils` folder of the base MITgcm directory.

--------------

2 changes: 2 additions & 0 deletions cpl_gray+swamp+ocn/build_atm/.gitignore
@@ -0,0 +1,2 @@
*
!/genmake_local
16 changes: 16 additions & 0 deletions cpl_gray+swamp+ocn/build_atm/genmake_local
@@ -0,0 +1,16 @@
#!/bin/bash

# This is the local options file for the "new" version of genmake

retv=1
if test "x$OPTFILE" != x ; then
basename $OPTFILE | grep gfortran > /dev/null 2>&1 ; retv=$? ;
fi
if test $retv = 0 ; then
FFLAGS='-fdefault-real-8 -fdefault-double-8'
echo " local gfortran setting: FFLAGS='$FFLAGS'"
else
FFLAGS='-r8'
echo " local default setting: FFLAGS='$FFLAGS'"
fi
MODS="../code_atm ../shared_code"
2 changes: 2 additions & 0 deletions cpl_gray+swamp+ocn/build_cpl/.gitignore
@@ -0,0 +1,2 @@
*
!/genmake_local
9 changes: 9 additions & 0 deletions cpl_gray+swamp+ocn/build_cpl/genmake_local
@@ -0,0 +1,9 @@
#!/bin/bash

# This is the local options file for the "new" version of genmake
#
# EH3 initial version 2003-08

STANDARDDIRS=""

MODS="../code_cpl ../shared_code"
2 changes: 2 additions & 0 deletions cpl_gray+swamp+ocn/build_ocn/.gitignore
@@ -0,0 +1,2 @@
*
!/genmake_local
7 changes: 7 additions & 0 deletions cpl_gray+swamp+ocn/build_ocn/genmake_local
@@ -0,0 +1,7 @@
#!/bin/bash

# This is the local options file for the "new" version of genmake
#
# EH3 initial version 2003-08

MODS="../code_ocn ../shared_code"
156 changes: 156 additions & 0 deletions cpl_gray+swamp+ocn/code_atm/CPP_EEOPTIONS.h
@@ -0,0 +1,156 @@
#ifndef _CPP_EEOPTIONS_H_
#define _CPP_EEOPTIONS_H_

CBOP
C !ROUTINE: CPP_EEOPTIONS.h
C !INTERFACE:
C include "CPP_EEOPTIONS.h"
C
C !DESCRIPTION:
C *==========================================================*
C | CPP\_EEOPTIONS.h |
C *==========================================================*
C | C preprocessor "execution environment" supporting |
C | flags. Use this file to set flags controlling the |
C | execution environment in which a model runs - as opposed |
C | to the dynamical problem the model solves. |
C | Note: Many options are implemented with both compile time|
C | and run-time switches. This allows options to be |
C | removed altogether, made optional at run-time or |
C | to be permanently enabled. This convention helps |
C | with the data-dependence analysis performed by the |
C | adjoint model compiler. This data dependency |
C | analysis can be upset by runtime switches that it |
C | is unable to recognise as being fixed for the |
C | duration of an integration. |
C | A reasonable way to use these flags is to |
C | set all options as selectable at runtime but then |
C | once an experimental configuration has been |
C | identified, rebuild the code with the appropriate |
C | options set at compile time. |
C *==========================================================*
CEOP

C In general the following convention applies:
C ALLOW - indicates a feature will be included but it may
C CAN have a run-time flag to allow it to be switched
C on and off.
C If ALLOW or CAN directives are "undef'd" this generally
C means that the feature will not be available i.e. it
C will not be included in the compiled code and so no
C run-time option to use the feature will be available.
C
C ALWAYS - indicates the choice will be fixed at compile time
C so no run-time option will be present

C=== Macro related options ===
C-- Control storage of floating point operands
C On many systems it improves performance only to use
C 8-byte precision for time stepped variables.
C Constant in time terms ( geometric factors etc.. )
C can use 4-byte precision, reducing memory utilisation and
C boosting performance because of a smaller working set size.
C However, on vector CRAY systems this degrades performance.
C Enable to switch REAL4_IS_SLOW from genmake2 (with LET_RS_BE_REAL4):
#ifdef LET_RS_BE_REAL4
#undef REAL4_IS_SLOW
#else /* LET_RS_BE_REAL4 */
#define REAL4_IS_SLOW
#endif /* LET_RS_BE_REAL4 */

C-- Control use of "double" precision constants.
C Use D0 where it means REAL*8 but not where it means REAL*16
#define D0 d0

C=== IO related options ===
C-- Flag used to indicate whether Fortran formatted write
C and read are threadsafe. On SGI the routines can be thread
C safe, on Sun it is not possible - if you are unsure then
C undef this option.
#undef FMTFTN_IO_THREAD_SAFE

C-- Flag used to indicate whether Binary write to Local file (i.e.,
C a different file for each tile) and read are thread-safe.
#undef LOCBIN_IO_THREAD_SAFE

C-- Flag to turn off the writing of error message to ioUnit zero
#undef DISABLE_WRITE_TO_UNIT_ZERO

C-- Alternative formulation of BYTESWAP, faster than
C compiler flag -byteswapio on the Altix.
#undef FAST_BYTESWAP

C-- Flag to turn on old default of opening scratch files with the
C STATUS='SCRATCH' option. This method, while perfectly FORTRAN-standard,
C caused filename conflicts on some multi-node/multi-processor platforms
C in the past and has been replaced by something (hopefully) more robust.
#undef USE_FORTRAN_SCRATCH_FILES

C-- Flag defined for eeboot_minimal.F, eeset_parms.F and open_copy_data_file.F
C to write STDOUT, STDERR and scratch files from process 0 only.
C WARNING: to use only when absolutely confident that the setup is working
C since any message (error/warning/print) from any proc <> 0 will be lost.
#undef SINGLE_DISK_IO

C=== MPI, EXCH and GLOBAL_SUM related options ===
C-- Flag turns off MPI_SEND ready_to_receive polling in the
C gather_* subroutines to speed up integrations.
#undef DISABLE_MPI_READY_TO_RECEIVE

C-- Control MPI based parallel processing
CXXX We no longer select the use of MPI via this file (CPP_EEOPTIONS.h)
CXXX To use MPI, use an appropriate genmake2 options file or use
CXXX genmake2 -mpi .
CXXX #undef ALLOW_USE_MPI

C-- Control use of communication that might overlap computation.
C Under MPI selects/deselects "non-blocking" sends and receives.
#undef ALLOW_ASYNC_COMMUNICATION
#undef ALWAYS_USE_ASYNC_COMMUNICATION
C-- Control use of communication that is atomic to computation.
C Under MPI selects/deselects "blocking" sends and receives.
#define ALLOW_SYNC_COMMUNICATION
#undef ALWAYS_USE_SYNC_COMMUNICATION

C-- Control XY periodicity in processor to grid mappings
C Note: Model code does not need to know whether a domain is
C periodic because it has overlap regions for every box.
C The model assumes that these values have been
C filled in some way.
#undef ALWAYS_PREVENT_X_PERIODICITY
#undef ALWAYS_PREVENT_Y_PERIODICITY
#define CAN_PREVENT_X_PERIODICITY
#define CAN_PREVENT_Y_PERIODICITY

C-- disconnect tiles (no exchange between tiles, just fill-in edges
C assuming locally periodic subdomain)
#undef DISCONNECTED_TILES

C-- Always cumulate tile local-sum in the same order by applying MPI allreduce
C to array of tiles ; can get slower with large number of tiles (big set-up)
#define GLOBAL_SUM_ORDER_TILES

C-- Alternative way of doing global sum without MPI allreduce call
C but instead, explicit MPI send & recv calls. Expected to be slower.
#undef GLOBAL_SUM_SEND_RECV

C-- Alternative way of doing global sum on a single CPU
C to eliminate tiling-dependent roundoff errors. Note: This is slow.
#undef CG2D_SINGLECPU_SUM

C=== Other options (to add/remove pieces of code) ===
C-- Flag to turn on checking for errors from all threads and procs
C (calling S/R STOP_IF_ERROR) before stopping.
#define USE_ERROR_STOP

C-- Control use of communication with other component:
C allow to import and export from/to Coupler interface.
#define COMPONENT_MODULE

C-- Activate some pieces of code for coupling to GEOS AGCM
#undef HACK_FOR_GMAO_CPL

C=== And define Macros ===
#include "CPP_EEMACROS.h"

#endif /* _CPP_EEOPTIONS_H_ */