Conversation

@iuryt (Collaborator) commented Jan 13, 2024

From #2788

@iuryt marked this pull request as draft on January 13, 2024, 13:30
@iuryt (Collaborator, Author) commented Jan 13, 2024

@glwagner and @simone-silvestri

Continuing the discussion from #2788 (reply in thread).

It looks like MPI needs to be initialized before accessing its communication variables.

Thus, even though constructing `Distributed` initializes MPI, we still need to call `Nranks = MPI.Comm_size(comm)` beforehand in order to define the `Partition`:

```julia
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)
```
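For reference, here is a minimal sketch of the ordering that works, with MPI initialized explicitly before the communicator is queried (the `Partition` and `Distributed` calls are illustrative, not prescriptive, and assume the constructor signatures used in this PR):

```julia
using MPI
using Oceananigans

# Explicit initialization: constructing `Distributed` would also call
# MPI.Init(), but that happens too late if Nranks is needed first.
MPI.Init()

comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)

# Illustrative decomposition: split the x-direction across all ranks.
arch = Distributed(CPU(); partition = Partition(Nranks))
```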

Should we initialize MPI when importing DistributedComputations?
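For concreteness, initializing on import would amount to something like the following inside the module — a hypothetical sketch, not what the package currently does:

```julia
module DistributedComputations

using MPI

function __init__()
    # Runs once when the module is loaded. The guard keeps scripts
    # that call MPI.Init() themselves from initializing twice.
    MPI.Initialized() || MPI.Init()
end

end # module
```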

@simone-silvestri (Collaborator)

In my opinion, no; it's fine to have an `MPI.Init()` in the script if it directly uses MPI variables. It is more explicit.

The other option, which avoids using MPI variables altogether, is to hardcode the number of processors.
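A sketch of that alternative, with the same assumed constructors (the partition size is hardcoded and must match the rank count given to the launcher):

```julia
using Oceananigans

# Hardcoded decomposition: 4 ranks in x; no MPI queries in the script.
# Launch with, e.g., `mpiexec -n 4 julia --project script.jl`.
arch = Distributed(CPU(); partition = Partition(4))
```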

@iuryt (Collaborator, Author) commented Jan 15, 2024

I am still working on that and the PR is in draft mode, but I will let you know when it is ready for review.

@iuryt (Collaborator, Author) commented Jan 15, 2024

I just pushed changes for the geostrophic adjustment case. The code seems alright, but it returns the following warning:

```
warning: ~/.julia/packages/KernelAbstractions/GCOhX/src/extras/loopinfo.jl:28:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
```

Then it returns the following error:

Error output (the stack traces from the different MPI ranks are interleaved and each repeats the same enormous type signature; condensed here to a single rank with the type parameters elided):

```
ERROR: LoadError: MethodError: no method matching iterate(::SplitExplicitFreeSurface{
    Field{Center, Center, Face, Nothing,
        ImmersedBoundaryGrid{Float64, FullyConnected, Periodic, Bounded, RectilinearGrid{…},
            GridFittedBottom{…}, Nothing,
            Distributed{CPU, false, Partition{Int64, Int64, Int64}, …}}, …},
    Oceananigans.Models.HydrostaticFreeSurfaceModels.SplitExplicitState{…},
    Oceananigans.Models.HydrostaticFreeSurfaceModels.SplitExplicitAuxiliaryFields{…},
    Float64,
    Oceananigans.Models.HydrostaticFreeSurfaceModels.SplitExplicitSettings{
        Oceananigans.Models.HydrostaticFreeSurfaceModels.FixedSubstepNumber{Float64, Vector{Float64}},
        Oceananigans.Models.HydrostaticFreeSurfaceModels.ForwardBackwardScheme}})
```

The same MethodError appears on every rank, for subdomains with both FullyConnected and LeftConnected topologies; the captured output is truncated mid-type.
Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.ImmersedBoundaries.CenterImmersedCondition}, Nothing, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Field{Center, Face, Nothing, Nothing, ImmersedBoundaryGrid{Float64, LeftConnected, Periodic, Bounded, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, GridFittedBottom{Field{Center, Center, Nothing, Nothing, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, 
BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.ImmersedBoundaries.CenterImmersedCondition}, Nothing, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}}, Oceananigans.Models.HydrostaticFreeSurfaceModels.SplitExplicitAuxiliaryFields{Field{Center, Face, Nothing, Nothing, ImmersedBoundaryGrid{Float64, LeftConnected, Periodic, Bounded, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, GridFittedBottom{Field{Center, Center, Nothing, Nothing, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, 
Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.ImmersedBoundaries.CenterImmersedCondition}, Nothing, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Field{Face, Center, Nothing, Nothing, ImmersedBoundaryGrid{Float64, LeftConnected, Periodic, Bounded, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, GridFittedBottom{Field{Center, Center, Nothing, Nothing, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, 
Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.ImmersedBoundaries.CenterImmersedCondition}, Nothing, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Field{Center, Center, Nothing, Nothing, ImmersedBoundaryGrid{Float64, LeftConnected, Periodic, Bounded, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, GridFittedBottom{Field{Center, Center, Nothing, Nothing, RectilinearGrid{Float64, LeftConnected, Periodic, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, 
BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.ImmersedBoundaries.CenterImmersedCondition}, Nothing, Distributed{CPU, false, Partition{Int64, Int64, Int64}, Tuple{Int64, Int64, Int64}, Int64, Tuple{Int64, Int64, Int64}, Oceananigans.DistributedComputations.RankConnectivity{Int64, Int64, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, MPI.Comm, Vector{MPI.Request}, Base.RefValue{Int64}}}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.DistributedCommunication, Oceananigans.DistributedComputations.HaloCommunicationRanks{Int64, Int64}}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{@NamedTuple{send::Array{Float64, 3}, recv::Array{Float64, 3}}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, Oceananigans.Utils.KernelParameters{(27, 3), (-7, 0)}}, Float64, Oceananigans.Models.HydrostaticFreeSurfaceModels.SplitExplicitSettings{Oceananigans.Models.HydrostaticFreeSurfaceModels.FixedSubstepNumber{Float64, Vector{Float64}}, Oceananigans.Models.HydrostaticFreeSurfaceModels.ForwardBackwardScheme}})

Closest candidates are:
  iterate(!Matched::Base.MethodSpecializations)
   @ Base reflection.jl:1148
  iterate(!Matched::Base.MethodSpecializations, !Matched::Nothing)
   @ Base reflection.jl:1154
  iterate(!Matched::Base.MethodSpecializations, !Matched::Int64)
   @ Base reflection.jl:1155
  ...

Any idea what it could be, @simone-silvestri?

…ustment.jl

Co-authored-by: Simone Silvestri <silvestri.simone0@gmail.com>
…ustment.jl

Co-authored-by: Simone Silvestri <silvestri.simone0@gmail.com>
…ustment.jl

Co-authored-by: Simone Silvestri <silvestri.simone0@gmail.com>
@glwagner
Copy link
Member

@glwagner and @simone-silvestri

Continuing the discussion. #2788 (reply in thread)

It looks like MPI needs to be initialized before accessing its communication variables.

Thus, even though calling Distributed initializes MPI, we need to access Nranks = MPI.Comm_size(comm) before defining Partition.

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)

Should we initialize MPI when importing DistributedComputations?

Either (1) explicitly invoke MPI.Init(), or (2) move those lines to after the grid constructor.
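
For concreteness, option (1) might look like the sketch below (assuming the script also brings Distributed and Partition into scope from Oceananigans.DistributedComputations):

using MPI
MPI.Init()   # explicit initialization, before touching any communicator state

comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
Nranks = MPI.Comm_size(comm)

arch = Distributed(CPU(); partition = Partition(Nranks))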

@glwagner
Copy link
Member

in my opinion no, it's fine to have an MPI.Init() if the script directly uses MPI variables. It is more explicit.

The other option to not use MPI variables is to hardcode the number of processors

This advice seems to contradict the goal of having scripts that can be changed from non-distributed to distributed with a single line.

I don't think user scripts should have to write using MPI. The API should have all the features necessary to support "one-click" distributed simulations. When we see using MPI, it means that the API is incomplete and we have work to do.

@glwagner
Copy link
Member

Just to be clear: our goal is to develop an API for distributed simulations that requires a "minimum" of changes to the same script applied to a non-distributed simulation.

The point of this goal is to make it easy to scale simulations from single-process to multi-process.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 16, 2024

Does it make sense to initialize MPI when importing DistributedComputations?

@simone-silvestri
Copy link
Collaborator

I don't want to initialize MPI when importing the module, because you might import the module but not use MPI routines.
This might lead to errors/confusion. For example, the initialization is implicit when constructing the architecture, which has to use MPI routines (for example MPI.Comm_rank(MPI.COMM_WORLD)).

If you want to explicitly use MPI routines in the script (as in the example above), it makes sense to initialize MPI yourself rather than hide the initialization somewhere else.

If you want, you can introduce an API for calculating the rank and the size that internally initializes MPI, although I don't really deem it necessary to have our own take on MPI, as MPI is already very standardized.
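
Such a helper, if we wanted one, could be as small as the hypothetical sketch below (mpi_rank_and_size is a made-up name, not an existing Oceananigans function):

using MPI

# initialize MPI lazily, then report this process's rank and the total number of ranks
function mpi_rank_and_size(comm = MPI.COMM_WORLD)
    MPI.Initialized() || MPI.Init()
    return MPI.Comm_rank(comm), MPI.Comm_size(comm)
end

rank, Nranks = mpi_rank_and_size()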

@simone-silvestri
Copy link
Collaborator

simone-silvestri commented Jan 17, 2024

in my opinion no, it's fine to have an MPI.Init() if the script directly uses MPI variables. It is more explicit.
The other option to not use MPI variables is to hardcode the number of processors

This advice seems to contradict the goal of having scripts that can be changed from non-distributed to distributed with a single line.

I don't think user scripts should have to write using MPI. The API should have all the features necessary to support "one-click" distributed simulations. When we see using MPI, it means that the API is incomplete and we have work to do.

Just to clarify, this is already possible, just by doing

arch = Distributed()
rank = arch.local_rank

The script, on the other hand, explicitly uses the MPI module before constructing the architecture, for which it makes sense to have using MPI; MPI.Init().
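
Put together, an MPI-free script header could look like this sketch (the grid line is only illustrative):

arch = Distributed()    # constructing the architecture initializes MPI internally
rank = arch.local_rank  # no direct `using MPI` needed in the script

grid = RectilinearGrid(arch; size = (64, 64, 8), extent = (100, 100, 1))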

@glwagner
Copy link
Member

glwagner commented Jan 17, 2024

I don't want to initialize MPI when importing the module, because you might import the module but not use MPI routines. This might lead to errors/confusion. For example, the initialization is implicit when constructing the architecture, which has to use MPI routines (for example MPI.Comm_rank(MPI.COMM_WORLD)).

If you want to explicitly use MPI routines in the script (as in the example above), it makes sense to initialize MPI yourself rather than hide the initialization somewhere else.

If you want, you can introduce an API for calculating the rank and the size that internally initializes MPI, although I don't really deem it necessary to have our own take on MPI, as MPI is already very standardized.

But why do we need to calculate the rank and size? That's the root issue.

I'm not suggesting that we build an API to emit the rank and size. I'm suggesting that we design the code so that kind of calculation is not necessary.

Consider: rank and size do not exist for non-distributed simulations. So no matter how it is done, if your code needs to calculate the rank and size, then it is not "agnostic" to being distributed vs serial. That is distributed-explicit code.

To make it very clear: the objective is to provide a codebase that permits distributed simulations with minimal code changes to serial scripts. If you find that users are constantly writing distributed-specific code, then a rethink is needed.

Agree that we should not initialize when importing the module! It doesn't make sense: DistributedComputations is always imported but rarely used.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 17, 2024

Maybe the best approach is to hardcode the number of ranks instead of using -n from mpiexec?
But then we would need to somehow get the rank id to use it for the output file.

@navidcy
Copy link
Member

navidcy commented Jan 17, 2024

I just pushed changes for the geostrophic adjustment case. The code seems alright, but it returns the following warning:

warning: ~/.julia/packages/KernelAbstractions/GCOhX/src/extras/loopinfo.jl:28:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering

Then it returns the following error:

Error output:
Any idea what it could be, @simone-silvestri?

That's related to #3374, and the warning comes from Julia v1.10 (it wasn't there with Julia v1.9).
The PR #3403 will deal with these warnings.

@simone-silvestri
Copy link
Collaborator

in my opinion no, it's fine to have an MPI.Init() if the script directly uses MPI variables. It is more explicit.
The other option to not use MPI variables is to hardcode the number of processors

This advice seems to contradict the goal of having scripts that can be changed from non-distributed to distributed with a single line.

I don't think user scripts should have to write using MPI. The API should have all the features necessary to support "one-click" distributed simulations. When we see using MPI, it means that the API is incomplete and we have work to do.

Just to clarify, this is already possible, just by doing

arch = Distributed()
rank = arch.local_rank

The script, on the other hand, explicitly uses the MPI module before constructing the architecture, for which it makes sense to have using MPI; MPI.Init().

@iuryt this works already if you don't want to hardcode/import the MPI module.

I would leave the validation case like this, though.

@simone-silvestri
Copy link
Collaborator

Distributed IO is not supported yet and will require a lot of work and refactoring, so for the time being we have to make do with the rank in the output writers.

@glwagner
Copy link
Member

Maybe the best approach is to hardcode the number of ranks instead of using -n from mpiexec? But then we would need to somehow get the rank id to use it for the output file.

We need output features that don't require manually inserting the rank into the filename.

@glwagner
Copy link
Member

glwagner commented Jan 17, 2024

Distributed IO is not supported yet and will require a lot of work and refactoring, so for the time being we have to make do with the rank in the output writers.

Distributed IO is not necessary to solve this problem. We can introduce a convention for filenames whereby the rank number is automatically inserted into the filename for distributed simulations.

Similarly, the convention can be reversed for FieldTimeSeries to make the process seamless for users.
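
Loading could then look something like this sketch (the rank_filename helper, the rank range, and the field name "η" are all assumptions for illustration):

# hypothetical: reassemble per-rank output that was written with a `_rank` suffix
rank_filename(prefix, rank) = string(prefix, "_rank", rank, ".jld2")

ranks = 0:3   # assumed processor layout, e.g. read from a "meta" file
series = [FieldTimeSeries(rank_filename("free_surface", r), "η") for r in ranks]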

There are probably plenty of other possible solutions. The first step is recognizing that a problem or deficiency exists, and having the desire to fix it. Then we can do the fun part, which is to design solutions to an important problem.

Note that distributed IO is unlikely to ever be supported for JLD2. So to provide a seamless experience with JLD2, we really do require a filename convention to solve the problem. (For NetCDF, we could envision providing a choice between file splitting or distributed IO.)

Filename conventions also seem preferred to me for many cases in a world of GPUs where each rank likely needs to hold a substantial portion of the computation for performance. Splitting files by rank will limit the filesize, which makes it easier to transfer data when the simulations are very large --- which will probably often be the case when doing distributed simulations.

@simone-silvestri
Copy link
Collaborator

Sure, that is just how we are doing it now (the difference would be just inserting the rank behind the scenes). @iuryt if you want to have a go at it in this PR, that should be quite simple to implement (the rank is held in the architecture in arch.local_rank); just make sure that the correct partitioning is taken into account (x partitioning vs y partitioning vs x-y).

This API "problem" does not exhaust the IO issue, though. The problem I was referring to is having split files. I still believe that distributed IO is necessary to have a fully functioning distributed code.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 17, 2024

I will work on that

@glwagner
Copy link
Member

glwagner commented Jan 17, 2024

Sure, that is just how we are doing it now (the difference would be just inserting the rank behind the scenes). @iuryt if you want to have a go at it in this PR, that should be quite simple to implement (the rank is held in the architecture in arch.local_rank); just make sure that the correct partitioning is taken into account (x partitioning vs y partitioning vs x-y).

This API "problem" does not exhaust the IO issue, though. The problem I was referring to is having split files. I still believe that distributed IO is necessary to have a fully functioning distributed code.

What's the problem with split files?

Combining data into JLD2 files is not always possible, is it? It would require the entirety of the output field to fit into the memory of one node. It also may not be efficient. Correct me if you think otherwise, but it seems we will always want to support split output, even if we also support combined output for distributed simulations.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 17, 2024

Sorry @glwagner
Maybe I didn't express myself very well.
I don't see any problem with split files and I understand the importance of having them.
What I meant is that we could access the rank id from the Distributed architecture to use it in the output filename, so that we don't need to explicitly import MPI.

@glwagner
Copy link
Member

glwagner commented Jan 17, 2024

Sorry @glwagner Maybe I didn't express myself very well. I don't see any problem with split files and I understand the importance of having them. What I meant is that we could access the rank id from the Distributed architecture to use it in the output filename, so that we don't need to explicitly import MPI.

I was responding to @simone-silvestri !

In terms of user scripts, I certainly do think it's better to access the rank_id from the architecture. I think it's easier to read too, and it allows user scripts to depend on one fewer package.

However, I think that the output writers should automatically change the filename when the simulation is distributed. This should be pretty easy and just involves adding the suffix _rank$(rank_id) to the filename for distributed architectures.
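
Something along the lines of this hypothetical sketch (maybe_rank_suffix is a made-up name; only arch.local_rank exists today):

# serial architectures: leave the filename alone
maybe_rank_suffix(filename, arch) = filename

# distributed architectures: append the local rank
maybe_rank_suffix(filename, arch::Distributed) = string(filename, "_rank", arch.local_rank)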

We should also have an API for callbacks that are intended to run on only one rank (for example, for printing stuff); e.g., a property on_rank=0 which is used when arch isa Distributed.

There are probably other useful things. FieldTimeSeries will have to be modified so that it can combine distributed files, for example. Probably, distributed output should also save some kind of "meta" file that contains information about the processor layout, to be used by FieldTimeSeries.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 18, 2024

I like the idea of automatically adding the suffix. I can work on that too. This is becoming an interesting PR.

For the callbacks, the idea is that Callback could have an on_rank argument that could be an integer or a list. This argument would only be used in the case of a distributed simulation; otherwise it gives a warning.
We add the new argument here
https://github.com/CliMA/Oceananigans.jl/blob/1c2a6f8752b6425bf30d856f8ba0aa681c0ab818/src/Simulations/callback.jl

Then we need to make sure the simulation checks the rank from arch and discards the callback from the list of callbacks on those ranks that should not run it (see the sketch below). What do you think, and where could I make this change?
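
Here is a hypothetical sketch of the dispatch I have in mind (on_rank and should_run_on are made up; only arch.local_rank is real):

# serial architecture: always run the callback
should_run_on(on_rank, arch) = true

# distributed architecture: run only on the requested rank(s);
# `in` handles both a single integer and a list of ranks
should_run_on(on_rank, arch::Distributed) = isnothing(on_rank) || arch.local_rank in on_rank

should_run_on(0, arch)        # e.g. restrict printing to rank 0
should_run_on([0, 1], arch)   # or to a list of ranks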

I am not sure I understand the last paragraph. I understand the part about including the meta information about the processor layout, but I'm not sure about the rest.

@glwagner
Copy link
Member

@iuryt can we do it in a new PR? I can get it started to illustrate, and you can help me by refining the implementation and getting the tests to pass. What do you think?

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 18, 2024

@iuryt can we do it in a new PR? I can get it started to illustrate, and you can help me by refining the implementation and getting the tests to pass. What do you think?

As this new PR will change this current PR, I will work on the new PR first and then come back here.

@navidcy
Copy link
Member

navidcy commented Jan 29, 2024

Perhaps we should also update this docstring in this PR?

"""
    Distributed(child_architecture = CPU();
                topology,
                partition,
                devices = nothing,
                communicator = MPI.COMM_WORLD)

Constructor for a distributed architecture that uses MPI for communications

Positional arguments
====================

- `child_architecture`: Specifies whether the computation is performed on CPUs or GPUs.
                        Default: `child_architecture = CPU()`.

Keyword arguments
=================

- `synchronized_communication`: if true, always use synchronized communication through ranks
- `ranks` (required): A 3-tuple `(Rx, Ry, Rz)` specifying the total processors in the `x`,
                      `y` and `z` direction. NOTE: support for distributed z direction is
                      limited, so `Rz = 1` is strongly suggested.
- `devices`: `GPU` device linked to local rank. The GPU will be assigned based on the
             local node rank as such `devices[node_rank]`. Make sure to run `--ntasks-per-node` <= `--gres=gpu`.
             If `nothing`, the devices will be assigned automatically based on the available resources.
- `communicator`: the MPI communicator, `MPI.COMM_WORLD`. This keyword argument should not be tampered with
                  if not for testing or developing. Change at your own risk!
"""
function Distributed(child_architecture = CPU();
                     communicator = MPI.COMM_WORLD,
                     devices = nothing,
                     synchronized_communication = false,
                     partition = Partition(MPI.Comm_size(communicator)))

The docstring's function signature does not mirror the actual signature, and some explanation of partition would be handy. (#3448 made an attempt to update this docstring.)
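
For the partition explanation, something along these lines might help (a sketch; treating Partition(2, 2) as an x-y split is my assumption about the positional arguments):

# one rank per slab in x (what the default partition = Partition(MPI.Comm_size(communicator)) produces)
arch = Distributed(CPU(); partition = Partition(4))

# a 2 x 2 decomposition in x and y
arch = Distributed(CPU(); partition = Partition(2, 2))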

@glwagner
Copy link
Member

Perhaps we should also update this docstring in this PR?

"""
    Distributed(child_architecture = CPU();
                topology,
                partition,
                devices = nothing,
                communicator = MPI.COMM_WORLD)

Constructor for a distributed architecture that uses MPI for communications

Positional arguments
====================

- `child_architecture`: Specifies whether the computation is performed on CPUs or GPUs.
                        Default: `child_architecture = CPU()`.

Keyword arguments
=================

- `synchronized_communication`: if true, always use synchronized communication through ranks
- `ranks` (required): A 3-tuple `(Rx, Ry, Rz)` specifying the total processors in the `x`,
                      `y` and `z` direction. NOTE: support for distributed z direction is
                      limited, so `Rz = 1` is strongly suggested.
- `devices`: `GPU` device linked to local rank. The GPU will be assigned based on the
             local node rank as such `devices[node_rank]`. Make sure to run `--ntasks-per-node` <= `--gres=gpu`.
             If `nothing`, the devices will be assigned automatically based on the available resources.
- `communicator`: the MPI communicator, `MPI.COMM_WORLD`. This keyword argument should not be tampered with
                  if not for testing or developing. Change at your own risk!
"""
function Distributed(child_architecture = CPU();
                     communicator = MPI.COMM_WORLD,
                     devices = nothing,
                     synchronized_communication = false,
                     partition = Partition(MPI.Comm_size(communicator)))

The docstring's function signature does not mirror the actual signature, and some explanation of partition would be handy. (#3448 made an attempt to update this docstring.)

Definitely fix it if it's not accurate.

@iuryt
Copy link
Collaborator Author

iuryt commented Jan 29, 2024

@iuryt can we do it in a new PR? I can get it started to illustrate, and you can help me by refining the implementation and getting the tests to pass. What do you think?

As this new PR will change this current PR, I will work on the new PR first and then come back here.

@glwagner
Should I work on this PR first and then, once the new implementation is in, update the validation scripts again?

@glwagner
Copy link
Member

@iuryt can we do it in a new PR? I can get it started to illustrate, and you can help me by refining the implementation and getting the tests to pass. What do you think?

As this new PR will change this current PR, I will work on the new PR first and then come back here.

@glwagner Should I work on this PR first and then, once the new implementation is in, update the validation scripts again?

Up to you, but it seems like you will save yourself some effort if you first update the user interface and then update the validation scripts?

@glwagner
Copy link
Member

Is it so hard to update the validation scripts too? Hopefully that's easy, and it doesn't really matter what you do first. We use the validation scripts to test the user interface, so you'll be changing them no matter what, in either case.

…turbulence.jl

Co-authored-by: Simone Silvestri <silvestri.simone0@gmail.com>
@iuryt
Copy link
Collaborator Author

iuryt commented Apr 30, 2024

Sorry for taking so long to come back here.
We should also add a validation case for multi-GPU simulations, right?
If so, what should change? Do we still use MPI as the communicator?
Do you have an example I could use to understand?

@glwagner
Copy link
Member

@simone-silvestri can probably contribute a simple script (maybe something that adapts a baroclinic adjustment case) for Distributed.

@simone-silvestri
Copy link
Collaborator

The GPU validation should be identical. You just need to:

  1. Make sure you have enough GPUs and that the MPI installation is CUDA-aware (MPI.has_cuda() == true).
  2. Switch CPU to GPU in the script.

The rest of the script does not change (see the sketch below).
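
For instance, the distributed-GPU script header might look like this sketch (everything after the architecture line is unchanged from the CPU version):

using MPI
MPI.Init()
@assert MPI.has_cuda()    # (1) the MPI build must be CUDA-aware

arch = Distributed(GPU()) # (2) the only change: CPU() becomes GPU()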

@glwagner
Copy link
Member

glwagner commented May 1, 2024

The GPU validation should be identical. You just need to:

  1. Make sure you have enough GPUs and that the MPI installation is CUDA-aware (MPI.has_cuda() == true).
  2. Switch CPU to GPU in the script.

The rest of the script does not change.

We still need actual, explicit code. That way you don't have to make this comment "it doesn't change" every time.

@simone-silvestri
Copy link
Collaborator

@iuryt can I bring this PR to completion? I think we just need to update it and bring it up to speed with the new Oceananigans version.

@glwagner
Copy link
Member

This PR just updates two validation tests, right?

@iuryt
Copy link
Collaborator Author

iuryt commented Dec 25, 2024

Sorry for not giving updates. Let me know if you need any help. I can also run the examples from my side to test it.

@glwagner
Copy link
Member

I think this is stale, but reopen if needed.

@glwagner glwagner closed this Jun 24, 2025