Implement PML for the outer RZ boundary with PSATD #2211

Merged: 39 commits from RZ_psatd_pml into development, Jan 20, 2022

Commits (39):
7b98da7  Initial version of RZ PSATD PML BCs (dpgrote, Aug 18, 2021)
de5a72b  Cleaned up some bugs (dpgrote, Aug 18, 2021)
30ca30c  Add support of do_pml_in_domain option (dpgrote, Aug 19, 2021)
7bc35ac  Cleaned up stuff for building (dpgrote, Aug 19, 2021)
5f5d4c3  Fix PMLPsatdAlgorithm macro (dpgrote, Aug 19, 2021)
7f48397  Removed unneeded variable from SpectralSolverRZ (dpgrote, Aug 20, 2021)
4d41190  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Aug 24, 2021)
76cdfa3  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Sep 1, 2021)
2519c7e  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Oct 11, 2021)
1b1160f  Change length 3 arrays to length 2 (for 2D) (dpgrote, Oct 12, 2021)
3ec7931  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Oct 13, 2021)
a86192d  Cleanup around DampPML (dpgrote, Oct 13, 2021)
fe3ae84  Added more checks of pml[lev] (dpgrote, Oct 13, 2021)
1f54148  Added CI test for RZ PML (dpgrote, Oct 13, 2021)
0b9105c  Added code to update the corner guard cells (dpgrote, Oct 19, 2021)
47da308  Further updates (dpgrote, Oct 19, 2021)
862e411  Added CI test (dpgrote, Oct 19, 2021)
9e1cfd4  Fixed EOL space (dpgrote, Oct 19, 2021)
2fa0bb1  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Oct 19, 2021)
4747a29  Updated CI benchmarks, removing round off fields (dpgrote, Oct 20, 2021)
a30ba04  Changes to CI missed on previous commit (dpgrote, Oct 20, 2021)
0497dfb  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Oct 20, 2021)
80fbf99  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Nov 16, 2021)
d338b72  Various fixes for clean up (dpgrote, Nov 16, 2021)
81a22c5  More fixes for clean up (dpgrote, Nov 16, 2021)
7011f51  Further cleanup (dpgrote, Nov 17, 2021)
0e55515  Updated benchmark (dpgrote, Nov 17, 2021)
0f0c2c9  Fixed benchmarks file (dpgrote, Nov 19, 2021)
fccf166  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Nov 19, 2021)
576b135  Minor cleanup (dpgrote, Nov 24, 2021)
9c2adcc  Added round off benchmark values (dpgrote, Nov 24, 2021)
a736c35  Fixed testname in analysis_pml_psatd_rz.py (dpgrote, Nov 24, 2021)
9c15106  Update comment in analysis file (dpgrote, Nov 29, 2021)
693f903  Merge branch 'development' into RZ_psatd_pml (RemiLehe, Dec 6, 2021)
47d762f  Put pml_rz code in RZ and PSATD macro blocks (dpgrote, Dec 6, 2021)
aae6a9c  Merge remote-tracking branch 'ECPwarpx/development' into RZ_psatd_pml (dpgrote, Jan 10, 2022)
1824ae2  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Jan 10, 2022)
edd8041  Add geometry.dims input to CI test input file, inputs_rz (dpgrote, Jan 11, 2022)
76d85ea  Cleanup to match recent changes (dpgrote, Jan 12, 2022)
Changes from commit 30ca30c1e5005e3341d233c4269fd56860f9b03f: "Add support of do_pml_in_domain option" (dpgrote, committed Aug 19, 2021)
Source/BoundaryConditions/PML_RZ.H (3 changes: 2 additions, 1 deletion)

@@ -37,7 +37,7 @@ class PML_RZ
 {
 public:
     PML_RZ (const int lev, const amrex::BoxArray& grid_ba, const amrex::DistributionMapping& grid_dm,
-            const amrex::Geometry* geom, const int ncell);
+            const amrex::Geometry* geom, const int ncell, const int do_pml_in_domain);

     void ApplyDamping(const std::array<amrex::MultiFab*,3>& E_fp,
                       const std::array<amrex::MultiFab*,3>& B_fp,
@@ -65,6 +65,7 @@ private:
     bool m_ok;

     const int m_ncell;
+    const int m_do_pml_in_domain;
     const amrex::Geometry* m_geom;

     // Only contains Er and Et, and Br and Bt
Source/BoundaryConditions/PML_RZ.cpp (13 changes: 8 additions, 5 deletions)

@@ -49,8 +49,9 @@
 using namespace amrex;

 PML_RZ::PML_RZ (const int lev, const BoxArray& grid_ba, const DistributionMapping& grid_dm,
-                const Geometry* geom, const int ncell)
+                const Geometry* geom, const int ncell, const int do_pml_in_domain)
     : m_ncell(ncell),
+      m_do_pml_in_domain(do_pml_in_domain),
       m_geom(geom)
 {

@@ -103,16 +104,18 @@ PML_RZ::ApplyDamping(const std::array<amrex::MultiFab*,3>& E_fp,
         // They are all the same, cell centered
         amrex::Box tilebox = amrex::convert((*E_fp[1])[mfi].box(), E_fp[0]->ixType().toIntVect());

-        // Get upper radius of tilebox
-        const int tilebox_bigEnd_r = tilebox.bigEnd(0);
-
         // Box for the whole simulation domain
         amrex::Box const& domain = m_geom->Domain();
         const int nr_domain = domain.bigEnd(0);

         // Set tilebox to only include the upper radial cells
         const int nr_damp = m_ncell;
-        const int nr_damp_min = nr_domain - nr_damp;
+        int nr_damp_min;
+        if (m_do_pml_in_domain) {
+            nr_damp_min = nr_domain - nr_damp;
+        } else {
+            nr_damp_min = nr_domain;
+        }
         tilebox.setSmall(0, nr_damp_min + 1);

         amrex::ParallelFor( tilebox, E_fp[0]->nComp(),
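
To make the new branch concrete, here is a minimal, self-contained sketch of how the outer radial cells end up damped, in the spirit of ApplyDamping above. The function name damp_outer_radial_cells and the exponential profile are illustrative assumptions; the actual damping coefficients live elsewhere in WarpX and are not shown in this hunk.

    #include <cmath>
    #include <vector>

    // Hypothetical stand-in for the damping applied by PML_RZ::ApplyDamping.
    // Only the nr_damp_min selection mirrors the diff; the profile is assumed.
    void damp_outer_radial_cells (std::vector<double>& field, int nr_domain,
                                  int ncell, bool do_pml_in_domain)
    {
        // With do_pml_in_domain, the layer overlaps the last ncell cells of the
        // physical domain; otherwise it starts right at the domain edge, in
        // cells that sit outside of it.
        const int nr_damp_min = do_pml_in_domain ? nr_domain - ncell : nr_domain;

        for (int ir = nr_damp_min + 1; ir < static_cast<int>(field.size()); ++ir) {
            // Depth into the damping layer, normalized to (0, 1]
            const double s = static_cast<double>(ir - nr_damp_min) / ncell;
            field[ir] *= std::exp(-s * s);   // assumed smooth damping profile
        }
    }

When the layer sits outside the physical domain, the damped cells must exist somewhere, which is what the guard-cell and box changes in the following files provide.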
Source/Initialization/WarpXInitData.cpp (2 changes: 1 addition, 1 deletion)

@@ -242,7 +242,7 @@ WarpX::InitPML ()

 #ifdef WARPX_DIM_RZ
     do_pml_Lo_corrected[0] = 0; // no PML at r=0, in cylindrical geometry
-    pml_rz[0] = std::make_unique<PML_RZ>(0, boxArray(0), DistributionMap(0), &Geom(0), pml_ncell);
+    pml_rz[0] = std::make_unique<PML_RZ>(0, boxArray(0), DistributionMap(0), &Geom(0), pml_ncell, do_pml_in_domain);
 #else
     pml[0] = std::make_unique<PML>(0, boxArray(0), DistributionMap(0), &Geom(0), nullptr,
                                    pml_ncell, pml_delta, amrex::IntVect::TheZeroVector(),
Source/Parallelization/GuardCellManager.H (8 changes: 7 additions, 1 deletion)

@@ -36,6 +36,9 @@ public:
  * \param nci_corr_stencil stencil of NCI corrector
  * \param maxwell_solver_id id of the Maxwell solver
  * \param max_level max level of the simulation
+ * \param do_pml whether the PML is turned on (only used by RZ PSATD)
+ * \param do_pml_in_domain whether the PML is inside the domain (only used by RZ PSATD)
+ * \param pml_ncell number of cells in the PML layer (only used by RZ PSATD)
  */
 void Init(
     const amrex::Real dt,
@@ -53,7 +56,10 @@ public:
     const amrex::Array<amrex::Real,3> v_galilean,
     const amrex::Array<amrex::Real,3> v_comoving,
     const bool safe_guard_cells,
-    const int do_electrostatic);
+    const int do_electrostatic,
+    const bool do_pml,
+    const int do_pml_in_domain,
+    const int pml_ncell);

 // Guard cells allocated for MultiFabs E and B
 amrex::IntVect ng_alloc_EB = amrex::IntVect::TheZeroVector();
Source/Parallelization/GuardCellManager.cpp (15 changes: 14 additions, 1 deletion)

@@ -45,7 +45,10 @@ guardCellManager::Init (
     const amrex::Array<amrex::Real,3> v_galilean,
     const amrex::Array<amrex::Real,3> v_comoving,
     const bool safe_guard_cells,
-    const int do_electrostatic)
+    const int do_electrostatic,
+    const bool do_pml,
+    const int do_pml_in_domain,
+    const int pml_ncell)
 {
     // When using subcycling, the particles on the fine level perform two pushes
     // before being redistributed ; therefore, we need one extra guard cell
@@ -174,6 +177,16 @@
     IntVect ngFFT = IntVect(ngFFt_x, ngFFt_z);
 #endif

+#ifdef WARPX_DIM_RZ
+    if (do_pml) {
+        if (!do_pml_in_domain) {
+            ngFFT[0] = std::max(ngFFT[0], pml_ncell);
+        }
+    }
+#else
+    amrex::ignore_unused(do_pml, do_pml_in_domain, pml_ncell);
+#endif
+
     // All boxes should have the same number of guard cells, to avoid temporary parallel copies:
     // thus we take the maximum of the required number of guard cells over all available fields.
     for (int i_dim = 0; i_dim < AMREX_SPACEDIM; i_dim++) {
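
The guard-cell rule added above boils down to a small standalone example. Only the std::max() rule comes from the diff; all numeric values below are hypothetical.

    #include <algorithm>
    #include <cstdio>

    int main ()
    {
        // Hypothetical values, for illustration only.
        int ngFFT_r = 8;           // radial guard cells required by the PSATD stencil
        const int pml_ncell = 10;  // thickness of the PML layer in cells
        const bool do_pml = true;
        const bool do_pml_in_domain = false;

        // As in the hunk above: when the PML sits outside the physical domain,
        // the damped cells live in the guard region, so the radial guard region
        // must be at least pml_ncell wide.
        if (do_pml && !do_pml_in_domain) {
            ngFFT_r = std::max(ngFFT_r, pml_ncell);
        }

        std::printf("radial FFT guard cells: %d\n", ngFFT_r);   // prints 10
        return 0;
    }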
Source/WarpX.cpp (11 changes: 10 additions, 1 deletion)

@@ -742,6 +742,8 @@ WarpX::ReadParameters ()
 #ifdef WARPX_DIM_RZ
     AMREX_ALWAYS_ASSERT_WITH_MESSAGE( isAnyBoundaryPML() == false || maxwell_solver_id == MaxwellSolverAlgo::PSATD,
         "PML are not implemented in RZ geometry with FDTD; please set a different boundary condition using boundary.field_lo and boundary.field_hi.");
+    AMREX_ALWAYS_ASSERT_WITH_MESSAGE( field_boundary_lo[1] != FieldBoundaryType::PML && field_boundary_hi[1] != FieldBoundaryType::PML,
+        "PML are not implemented in RZ geometry along z; please set a different boundary condition using boundary.field_lo and boundary.field_hi.");
 #endif

     if ( (do_pml_j_damping==1)&&(do_pml_in_domain==0) ){
@@ -1338,7 +1340,10 @@ WarpX::AllocLevelData (int lev, const BoxArray& ba, const DistributionMapping& dm)
                     WarpX::m_v_galilean,
                     WarpX::m_v_comoving,
                     safe_guard_cells,
-                    WarpX::do_electrostatic);
+                    WarpX::do_electrostatic,
+                    WarpX::isAnyBoundaryPML(),
+                    WarpX::do_pml_in_domain,
+                    WarpX::pml_ncell);

     if (mypc->nSpeciesDepositOnMainGrid() && n_current_deposition_buffer == 0) {
         n_current_deposition_buffer = 1;
@@ -1559,6 +1564,10 @@ WarpX::AllocLevelMFs (int lev, const BoxArray& ba, const DistributionMapping& dm)
     if ( fft_periodic_single_box == false ) {
         realspace_ba.grow(1, ngE[1]); // add guard cells only in z
     }
+    if (field_boundary_hi[0] == FieldBoundaryType::PML && !do_pml_in_domain) {
+        // Extend region that is solved for the PML
+        realspace_ba.growHi(0, pml_ncell);
+    }
     AllocLevelSpectralSolverRZ(spectral_solver_fp,
                                lev,
                                realspace_ba,
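
Finally, a one-dimensional sketch of the growHi extension in AllocLevelMFs: when the PML lies outside the domain, the real-space box handed to the RZ spectral solver is widened radially so the damped cells are evolved by the same spectral solve. The values below are hypothetical.

    #include <cstdio>

    int main ()
    {
        // 1-D stand-in for amrex::Box::growHi along r; values are hypothetical.
        int r_lo = 0, r_hi = 127;          // cell-index range of the domain in r
        const int pml_ncell = 10;
        const bool pml_at_outer_r = true;  // field_boundary_hi[0] == PML
        const bool do_pml_in_domain = false;

        // Analogous to realspace_ba.growHi(0, pml_ncell) in the hunk above.
        if (pml_at_outer_r && !do_pml_in_domain) {
            r_hi += pml_ncell;
        }

        std::printf("spectral solve covers r cells [%d, %d]\n", r_lo, r_hi);
        return 0;
    }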