Bug fixes for 1d EVP (CICE-Consortium#568)
* Initialization of global HTE and HTN arrays with ghost cells on the master task, to avoid gathering constant arrays when running the EVP 1D implementation. This solves an issue with departure errors resulting from erroneous initialization of full land blocks in the EVP 1D gathering. Also corrects the initialization of the remaining variables for full land blocks in the EVP 1D gathering.
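  A minimal sketch of the idea, assuming hypothetical global array names G_HTE/G_HTN and a guessed call signature for primary_grid_lengths_global_ext; only the extended dimensions (nx_global+2*nghost by ny_global+2*nghost) are taken from this commit:

  ```fortran
  ! Hypothetical sketch, not the actual CICE code: build the global,
  ! ghost-extended HTE/HTN once on the master task instead of gathering
  ! them on every EVP call. Zero-initialize first so that full land
  ! blocks, which no gather ever writes, are well defined.
  if (my_task == master_task) then
     allocate(G_HTE(nx_global+2*nghost, ny_global+2*nghost), &
              G_HTN(nx_global+2*nghost, ny_global+2*nghost))
     G_HTE(:,:) = c0
     G_HTN(:,:) = c0
  endif
  call primary_grid_lengths_global_ext(G_HTE, G_HTN, HTE, HTN) ! guessed signature
  ```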

* Reverted configuration/scripts/cice.batch.csh to resolve a bad auto-merge of the file; machine 'freya' appeared twice in the file.

* Spelling error corrected (CICE-Consortium#568 (comment)).

* Redundant initializations to default values removed (CICE-Consortium#568 (comment)). @mhrib's questions about the default value settings removed.

* Subroutine primary_grid_lengths_global_ext moved from the ice_grid module to the MPI and serial implementations of the ice_boundary modules (CICE-Consortium#568 (comment)). Note that the subroutine is now duplicated.

* Added the 1d solver so that it is fully integrated, and removed the kevp=102 bypass.
General cleanup; changed the namelist parameter kevp to evp_algorithm.
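  For reference, a hedged sketch of selecting the solver via the renamed parameter; the values 'shared_mem_1d' and 'standard_2d' appear in the diff below, while the namelist group name is assumed from the standard CICE ice_in:

  ```fortran
  &dynamics_nml
      evp_algorithm = 'shared_mem_1d'  ! 1d EVP solver; 'standard_2d' is the standard 2d solver
  /
  ```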

* General cleanup of ice_dyn_evp_1d module (CICE-Consortium#568 (comment) and CICE-Consortium#568 (comment))

* Modified the documentation to match the EVP 1d changes.

* Removed the uncommented numainit parallelization and moved domp_init before numainit to ensure correct initialization. Additional cleanup of the code.

* Renamed set_nml.evp_algorithm to set_nml.evp1d and added the test to the base test suite.

* Removed set_nml.kevp102.

* Renamed the variable se to sse, matching the global variable, because the original name was not logical.

* Alignment of NUMA-aware array splitting between threads
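  A minimal sketch of the splitting idea (illustrative only, not the actual domp_init/numainit code): every OpenMP thread computes the same contiguous chunk bounds for every array, so first-touch initialization places each chunk in the memory of the NUMA node that will later work on it.

  ```fortran
  ! Illustrative only: split indices 1..n into identical contiguous
  ! per-thread chunks so all arrays are partitioned the same way.
  subroutine get_chunk(n, lower, upper)
     use omp_lib, only: omp_get_thread_num, omp_get_num_threads
     implicit none
     integer, intent(in)  :: n
     integer, intent(out) :: lower, upper
     integer :: tid, nthreads, chunk
     tid      = omp_get_thread_num()
     nthreads = omp_get_num_threads()
     chunk    = (n + nthreads - 1)/nthreads   ! same-size chunks for every array
     lower    = tid*chunk + 1
     upper    = min(n, (tid + 1)*chunk)
  end subroutine get_chunk
  ```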

* Fixed bad scaling in the calculation of tinyarea; a different order of multiplication causes small differences in the result.
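  A small self-contained illustration of the effect (the numbers are arbitrary, not CICE values): the two groupings round at different intermediate steps, so the products are not guaranteed to be bit-for-bit identical.

  ```fortran
  program mult_order
     use, intrinsic :: iso_fortran_env, only: real64
     implicit none
     real(real64) :: puny, dxt, dyt
     puny = 1.0e-11_real64   ! small factor, as in a tinyarea-style product
     dxt  = 2.53e4_real64    ! illustrative grid lengths
     dyt  = 4.97e4_real64
     ! Difference between the two groupings; may be nonzero in the last bits.
     print *, (puny*dxt)*dyt - puny*(dxt*dyt)
  end program mult_order
  ```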

* Fix evp 1d masking issues

- There are relatively rare cases/grid cells where stress (tmask) and stepu (umask) are advanced
without the other. The 1d EVP implementation was not addressing this issue, so skiptcell was added
to the 1d EVP (similar to skipucell) to mask out cells separately for stress and stepu; see the
sketch after this list. This fixed a few non-bit-for-bit cases in testing.

- Cleaned up some indentation and other minor issues in 1d evp.

- Updated the evp_algorithm log output in ice_init.
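  A minimal sketch of the separate masking (the loop bounds and everything except the skiptcell/skipucell names are illustrative, not the actual 1d kernel code):

  ```fortran
  ! Illustrative 1d loop: stress (T-cell/tmask) and momentum (U-cell/umask)
  ! work are masked independently, since a cell can be active for one
  ! computation but not the other.
  do iw = lower, upper
     if (.not. skiptcell(iw)) then
        ! ... stress tensor update for this T cell ...
     endif
     if (.not. skipucell(iw)) then
        ! ... momentum (stepu) update for this U cell ...
     endif
  enddo
  ```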

* Some minor revisions in the documentation, as suggested by @apcraig (CICE-Consortium#568 (review)), plus some additional minor spelling corrections.

Co-authored-by: Stefan Rethmeier <str@dmi.dk>
Co-authored-by: Till Rasmussen <tar@11glvp2.usr.local>
Co-authored-by: Stefan Rethmeier <stefan@rethmeier.dk>
Co-authored-by: apcraig <anthony.p.craig@gmail.com>
5 people authored Aug 5, 2021
1 parent a35724c commit 0ccdea1
Showing 18 changed files with 2,402 additions and 2,322 deletions.
215 changes: 106 additions & 109 deletions cicecore/cicedynB/dynamics/ice_dyn_evp.F90
@@ -34,7 +34,7 @@
module ice_dyn_evp

use ice_kinds_mod
use ice_communicate, only: my_task
use ice_communicate, only: my_task, master_task
use ice_constants, only: field_loc_center, field_loc_NEcorner, &
field_type_scalar, field_type_vector
use ice_constants, only: c0, p027, p055, p111, p166, &
@@ -88,14 +88,14 @@ subroutine evp (dt)
stress12_1, stress12_2, stress12_3, stress12_4
use ice_grid, only: tmask, umask, dxt, dyt, dxhy, dyhx, cxp, cyp, cxm, cym, &
tarear, uarear, tinyarea, to_ugrid, t2ugrid_vector, u2tgrid_vector, &
grid_type, HTE, HTN
grid_type
use ice_state, only: aice, vice, vsno, uvel, vvel, divu, shear, &
aice_init, aice0, aicen, vicen, strength
use ice_timers, only: timer_dynamics, timer_bound, &
ice_timer_start, ice_timer_stop, timer_evp_1d, timer_evp_2d
use ice_dyn_evp_1d, only: ice_dyn_evp_1d_copyin, ice_dyn_evp_1d_kernel, &
ice_dyn_evp_1d_copyout
use ice_dyn_shared, only: kevp_kernel, stack_velocity_field, unstack_velocity_field
use ice_dyn_shared, only: evp_algorithm, stack_velocity_field, unstack_velocity_field

real (kind=dbl_kind), intent(in) :: &
dt ! time step
@@ -331,7 +331,7 @@ subroutine evp (dt)

if (seabed_stress) then

!$OMP PARALLEL DO PRIVATE(iblk)
!$OMP PARALLEL DO PRIVATE(iblk)
do iblk = 1, nblocks

if ( seabed_stress_method == 'LKD' ) then
@@ -351,118 +351,115 @@ subroutine evp (dt)
hwater(:,:,iblk), Tbu(:,:,iblk))
endif

enddo
enddo
!$OMP END PARALLEL DO
endif

call ice_timer_start(timer_evp_2d)
if (kevp_kernel > 0) then
if (first_time .and. my_task == 0) then
write(nu_diag,'(2a,i6)') subname,' Entering kevp_kernel version ',kevp_kernel
first_time = .false.
endif
if (trim(grid_type) == 'tripole') then
call abort_ice(trim(subname)//' Kernel not tested on tripole grid. Set kevp_kernel=0')
endif
call ice_dyn_evp_1d_copyin( &
nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost, &
HTE,HTN, &
!v1 dxhy,dyhx,cyp,cxp,cym,cxm,tinyarea, &
!v1 waterx,watery, &
icetmask, iceumask, &
cdn_ocn,aiu,uocn,vocn,forcex,forcey,Tbu, &
umassdti,fm,uarear,tarear,strintx,strinty,uvel_init,vvel_init,&
strength,uvel,vvel,dxt,dyt, &
stressp_1 ,stressp_2, stressp_3, stressp_4, &
stressm_1 ,stressm_2, stressm_3, stressm_4, &
stress12_1,stress12_2,stress12_3,stress12_4 )
if (kevp_kernel == 2) then
call ice_timer_start(timer_evp_1d)
call ice_dyn_evp_1d_kernel()
call ice_timer_stop(timer_evp_1d)
!v1 else if (kevp_kernel == 1) then
!v1 call evp_kernel_v1()
else
if (my_task == 0) write(nu_diag,*) subname,' ERROR: kevp_kernel = ',kevp_kernel
call abort_ice(subname//' kevp_kernel not supported.')
endif
call ice_dyn_evp_1d_copyout( &
nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost,&
!strocn uvel,vvel, strocnx,strocny, strintx,strinty, &
uvel,vvel, strintx,strinty, &
stressp_1, stressp_2, stressp_3, stressp_4, &
stressm_1, stressm_2, stressm_3, stressm_4, &
stress12_1,stress12_2,stress12_3,stress12_4, &
divu,rdg_conv,rdg_shear,shear,taubx,tauby )

else ! kevp_kernel == 0 (Standard CICE)

do ksub = 1,ndte ! subcycling

!-----------------------------------------------------------------
! stress tensor equation, total surface stress
!-----------------------------------------------------------------

!$TCXOMP PARALLEL DO PRIVATE(iblk,strtmp)
do iblk = 1, nblocks

! if (trim(yield_curve) == 'ellipse') then
call stress (nx_block, ny_block, &
ksub, icellt(iblk), &
indxti (:,iblk), indxtj (:,iblk), &
uvel (:,:,iblk), vvel (:,:,iblk), &
dxt (:,:,iblk), dyt (:,:,iblk), &
dxhy (:,:,iblk), dyhx (:,:,iblk), &
cxp (:,:,iblk), cyp (:,:,iblk), &
cxm (:,:,iblk), cym (:,:,iblk), &
tarear (:,:,iblk), tinyarea (:,:,iblk), &
strength (:,:,iblk), &
stressp_1 (:,:,iblk), stressp_2 (:,:,iblk), &
stressp_3 (:,:,iblk), stressp_4 (:,:,iblk), &
stressm_1 (:,:,iblk), stressm_2 (:,:,iblk), &
stressm_3 (:,:,iblk), stressm_4 (:,:,iblk), &
stress12_1(:,:,iblk), stress12_2(:,:,iblk), &
stress12_3(:,:,iblk), stress12_4(:,:,iblk), &
shear (:,:,iblk), divu (:,:,iblk), &
rdg_conv (:,:,iblk), rdg_shear (:,:,iblk), &
strtmp (:,:,:) )
! endif ! yield_curve

!-----------------------------------------------------------------
! momentum equation
!-----------------------------------------------------------------

call stepu (nx_block, ny_block, &
icellu (iblk), Cdn_ocn (:,:,iblk), &
indxui (:,iblk), indxuj (:,iblk), &
ksub, &
aiu (:,:,iblk), strtmp (:,:,:), &
uocn (:,:,iblk), vocn (:,:,iblk), &
waterx (:,:,iblk), watery (:,:,iblk), &
forcex (:,:,iblk), forcey (:,:,iblk), &
umassdti (:,:,iblk), fm (:,:,iblk), &
uarear (:,:,iblk), &
strintx (:,:,iblk), strinty (:,:,iblk), &
taubx (:,:,iblk), tauby (:,:,iblk), &
uvel_init(:,:,iblk), vvel_init(:,:,iblk),&
uvel (:,:,iblk), vvel (:,:,iblk), &
Tbu (:,:,iblk))
enddo
!$TCXOMP END PARALLEL DO
if (evp_algorithm == "shared_mem_1d" ) then

call stack_velocity_field(uvel, vvel, fld2)
call ice_timer_start(timer_bound)
if (maskhalo_dyn) then
call ice_HaloUpdate (fld2, halo_info_mask, &
field_loc_NEcorner, field_type_vector)
else
call ice_HaloUpdate (fld2, halo_info, &
field_loc_NEcorner, field_type_vector)
if (first_time .and. my_task == master_task) then
write(nu_diag,'(3a)') subname,' Entering evp_algorithm version ',evp_algorithm
first_time = .false.
endif
call ice_timer_stop(timer_bound)
call unstack_velocity_field(fld2, uvel, vvel)
if (trim(grid_type) == 'tripole') then
call abort_ice(trim(subname)//' &
& Kernel not tested on tripole grid. Set evp_algorithm=standard_2d')
endif

call ice_dyn_evp_1d_copyin( &
nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost, &
icetmask, iceumask, &
cdn_ocn,aiu,uocn,vocn,forcex,forcey,Tbu, &
umassdti,fm,uarear,tarear,strintx,strinty,uvel_init,vvel_init,&
strength,uvel,vvel,dxt,dyt, &
stressp_1 ,stressp_2, stressp_3, stressp_4, &
stressm_1 ,stressm_2, stressm_3, stressm_4, &
stress12_1,stress12_2,stress12_3,stress12_4 )
call ice_timer_start(timer_evp_1d)
call ice_dyn_evp_1d_kernel()
call ice_timer_stop(timer_evp_1d)
call ice_dyn_evp_1d_copyout( &
nx_block,ny_block,nblocks,nx_global+2*nghost,ny_global+2*nghost,&
!strocn uvel,vvel, strocnx,strocny, strintx,strinty, &
uvel,vvel, strintx,strinty, &
stressp_1, stressp_2, stressp_3, stressp_4, &
stressm_1, stressm_2, stressm_3, stressm_4, &
stress12_1,stress12_2,stress12_3,stress12_4, &
divu,rdg_conv,rdg_shear,shear,taubx,tauby )

else ! evp_algorithm == standard_2d (Standard CICE)

do ksub = 1,ndte ! subcycling

!-----------------------------------------------------------------
! stress tensor equation, total surface stress
!-----------------------------------------------------------------

!$TCXOMP PARALLEL DO PRIVATE(iblk,strtmp)
do iblk = 1, nblocks

! if (trim(yield_curve) == 'ellipse') then
call stress (nx_block, ny_block, &
ksub, icellt(iblk), &
indxti (:,iblk), indxtj (:,iblk), &
uvel (:,:,iblk), vvel (:,:,iblk), &
dxt (:,:,iblk), dyt (:,:,iblk), &
dxhy (:,:,iblk), dyhx (:,:,iblk), &
cxp (:,:,iblk), cyp (:,:,iblk), &
cxm (:,:,iblk), cym (:,:,iblk), &
tarear (:,:,iblk), tinyarea (:,:,iblk), &
strength (:,:,iblk), &
stressp_1 (:,:,iblk), stressp_2 (:,:,iblk), &
stressp_3 (:,:,iblk), stressp_4 (:,:,iblk), &
stressm_1 (:,:,iblk), stressm_2 (:,:,iblk), &
stressm_3 (:,:,iblk), stressm_4 (:,:,iblk), &
stress12_1(:,:,iblk), stress12_2(:,:,iblk), &
stress12_3(:,:,iblk), stress12_4(:,:,iblk), &
shear (:,:,iblk), divu (:,:,iblk), &
rdg_conv (:,:,iblk), rdg_shear (:,:,iblk), &
strtmp (:,:,:) )
! endif ! yield_curve

!-----------------------------------------------------------------
! momentum equation
!-----------------------------------------------------------------

call stepu (nx_block, ny_block, &
icellu (iblk), Cdn_ocn (:,:,iblk), &
indxui (:,iblk), indxuj (:,iblk), &
ksub, &
aiu (:,:,iblk), strtmp (:,:,:), &
uocn (:,:,iblk), vocn (:,:,iblk), &
waterx (:,:,iblk), watery (:,:,iblk), &
forcex (:,:,iblk), forcey (:,:,iblk), &
umassdti (:,:,iblk), fm (:,:,iblk), &
uarear (:,:,iblk), &
strintx (:,:,iblk), strinty (:,:,iblk), &
taubx (:,:,iblk), tauby (:,:,iblk), &
uvel_init(:,:,iblk), vvel_init(:,:,iblk),&
uvel (:,:,iblk), vvel (:,:,iblk), &
Tbu (:,:,iblk))

enddo
!$TCXOMP END PARALLEL DO

call stack_velocity_field(uvel, vvel, fld2)
call ice_timer_start(timer_bound)
if (maskhalo_dyn) then
call ice_HaloUpdate (fld2, halo_info_mask, &
field_loc_NEcorner, field_type_vector)
else
call ice_HaloUpdate (fld2, halo_info, &
field_loc_NEcorner, field_type_vector)
endif
call ice_timer_stop(timer_bound)
call unstack_velocity_field(fld2, uvel, vvel)

enddo ! subcycling
endif ! kevp_kernel
enddo ! subcycling
endif ! evp_algorithm

call ice_timer_stop(timer_evp_2d)

deallocate(fld2)
