
Implement PML for the outer RZ boundary with PSATD #2211

Merged
39 commits merged into ECP-WarpX:development from RZ_psatd_pml on Jan 20, 2022

Conversation

@dpgrote (Member) commented Aug 19, 2021

This uses the same method as FBPIC, which is quite different from what is used for Cartesian geometry. It required very different coding, including a separate class, PML_RZ. The do_pml_in_domain option is supported with the same meaning.

The only thing missing is a CI test.

I changed this to WIP since there was a bug that needed to be tracked down. When running with a moving window, the grid cell at the upper end of the guard region (in both r and z) was not being updated properly. This caused problems since that cell is included in the calculation (the longitudinal FFT includes the z guard cells and the PML is calculated in the radial guard cells). An easy "fix" would have been to not use that last cell. This was instead resolved by adding code to exchange that problematic grid cell.
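
For context, a guard-cell exchange in AMReX is typically a FillBoundary call; here is a minimal sketch of that operation (illustration only, not the exact code added in this PR, and the field name is hypothetical):

```
#include <AMReX_Geometry.H>
#include <AMReX_MultiFab.H>

// Copy valid data into the guard cells of neighboring boxes so that the spectral
// solve, which includes the guard cells, sees up-to-date values there.
void ExchangeGuardCells (amrex::MultiFab& Er, const amrex::Geometry& geom)
{
    Er.FillBoundary(geom.periodicity());
}
```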

@dpgrote added the labels `enhancement` (New feature or request), `component: spectral` (Spectral solvers (PSATD, IGF)), `geometry: RZ` (axisymmetric 2D and quasi-3D), and `component: boundary` (PML, embedded boundaries, et al.) on Aug 19, 2021
@dpgrote requested a review from RemiLehe on August 19, 2021 06:25
@RemiLehe self-assigned this on Aug 23, 2021
@dpgrote changed the title from "Implement PML for the outer RZ boundary with PSATD" to "[WIP]Implement PML for the outer RZ boundary with PSATD" on Sep 11, 2021
@dpgrote changed the title from "[WIP]Implement PML for the outer RZ boundary with PSATD" to "Implement PML for the outer RZ boundary with PSATD" on Oct 21, 2021
@RemiLehe (Member) commented:

@dpgrote It seems that the benchmarks need to be reset after some of the above changes. Is this expected? If yes, could you go ahead and make the change?

@dpgrote (Member, Author) commented Nov 24, 2021

@RemiLehe Yes, I'm working on resetting the benchmarks.

@EZoni (Member) left a comment

A general question that applies to a few files here: When we include some new header files (e.g. #include "BoundaryConditions/PML_RZ.H"), should we do that only if we build WarpX in RZ geometry? Namely, should we wrap those #include directives within a #ifdef WARPX_DIM_RZ ... #endif block?
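
A minimal sketch of the guarded include being asked about, using the existing WARPX_DIM_RZ macro (whether to adopt this pattern is the question above):

```
#ifdef WARPX_DIM_RZ
#   include "BoundaryConditions/PML_RZ.H"
#endif
```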

@EZoni (Member) left a comment

Thank you for making the last changes, Dave! I just wonder if we should double-check what I mentioned in this comment, but otherwise everything looks good to me.

@dpgrote (Member, Author) commented Dec 6, 2021

@EZoni I went through and put all of the RZ PML code inside of blocks checking for RZ and for PSATD (since RZ PML is only implemented for PSATD).
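
A minimal sketch of that pattern, assuming WARPX_USE_PSATD is the PSATD build macro of that era (the guarded body is only a placeholder comment):

```
#if defined(WARPX_DIM_RZ) && defined(WARPX_USE_PSATD)
#   include "BoundaryConditions/PML_RZ.H"
    // ... RZ PML declarations and calls live inside this guard ...
#endif
```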

@EZoni (Member) commented Jan 7, 2022

@dpgrote Would it be possible for you to rebase this PR on development and fix the remaining conflicts? Then hopefully we can merge this in the next few days.

@EZoni self-requested a review on January 13, 2022 16:38
@EZoni (Member) left a comment

It looks like we could merge this now. @RemiLehe Do you approve as well, or did you have more changes to request?

@RemiLehe merged commit c7c8a71 into ECP-WarpX:development on Jan 20, 2022
@RemiLehe (Member) commented:

Thanks a lot for this PR!!
And sorry for taking so long to merge it :(

roelof-groenewald added a commit to ModernElectron/WarpX that referenced this pull request Jan 24, 2022
* Use signed distance instead of imp. func when computing distance to EB. (ECP-WarpX#2682)

* Docs: Add Crusher-OLCF (ECP-WarpX#2741)

Document on how to compile and run on Crusher (OLCF).
This is the new Pre-Frontier MI250X machine at Oak Ridge.

Tested :)

Requires ECP-WarpX#2742

* CI: Reduce Runtime of Some PSATD Tests (ECP-WarpX#2704)

* Run Tests on 2 MPI Procs.

* Reset Benchmarks

* Make pml_psatd_dive_divb_cleaning Smaller

* Field probe line detector (ECP-WarpX#2513)

* FieldProbe using Particle

Update FieldProbe.cpp

Update FieldProbeParticleContainer.H

Updates FieldProbe and FieldProbeParticleContainer

* Make <diag>.integrate optional

The param parser query keeps the default value if no entry is found.
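
A minimal sketch of that ParmParse pattern (the prefix and variable names are illustrative, not the actual FieldProbe code):

```
#include <AMReX_ParmParse.H>

void ParseIntegrateOption ()
{
    amrex::ParmParse pp("my_probe");      // hypothetical reduced-diagnostics prefix
    bool do_integrate = false;            // default, used when the input file has no entry
    pp.query("integrate", do_integrate);  // query() leaves the default untouched if absent
}
```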

* Fixed number particle needed for AddNParticles

* Removing unnecessary type definition

* Added Doxygen-style comments to FieldProbe.cpp
Corrected Poynting calculation by implementing vacuum permeability

* Added Doxygen comments

* Implement virtual function ReducedDiags::AllocData() + comments

* InitData implemented

* Fixed Doxygen commenting.

* Now uses WarpX physics constant for vacuum permeability

* forgotten comments to MultiReducedDiags

* Update Source/Diagnostics/ReducedDiags/FieldProbe.H

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update Source/Diagnostics/ReducedDiags/FieldProbe.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbe.H

* Update FieldProbe.cpp

* Update Source/Diagnostics/ReducedDiags/ReducedDiags.H

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update Source/Diagnostics/ReducedDiags/MultiReducedDiags.H

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update Source/Diagnostics/ReducedDiags/FieldProbe.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbeParticleContainer.H

* Update Source/Diagnostics/ReducedDiags/FieldProbeParticleContainer.H

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbeParticleContainer.cpp

* Update FieldProbe.cpp

* Update FieldProbe.H

* Update Source/Diagnostics/ReducedDiags/FieldProbeParticleContainer.H

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbeParticleContainer.cpp

* Update Source/Diagnostics/ReducedDiags/FieldProbe.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update Source/Diagnostics/ReducedDiags/ReducedDiags.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update Source/Diagnostics/ReducedDiags/FieldProbe.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Changed enumerated class to struct w/ enumeration. Can remove "static_cast<int>"
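
A minimal sketch of the change described above, with hypothetical names (a plain enum nested in a struct converts to int implicitly, unlike an enum class):

```
// Before: every use site needed static_cast<int>(IdxOld::Ex)
enum class IdxOld { Ex = 0, Ey, Ez };

// After: still scoped via the struct name, but implicitly convertible to int
struct Idx { enum { Ex = 0, Ey, Ez }; };

int component = Idx::Ex;  // no static_cast needed
```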

* FieldProbeParticleContainer::iterator implemented

* Cleaned up output += operator, fixed output comments

* style fix

* Replaces Tabs with 4 spaces

* Defined modes and interp order to avoid GPU compilation errors

* 1 more tab fix

* EoL white spaces

* fixed a typo

* Explicitly capturing "this" in parallel for to combat error saying "error #3223-D: Implicit capture of 'this' in extended lambda expression"

* removed unnecessary double define

* moved output out of ParallelFor. temp variable for integrate

* Parse integrate, integrate all time steps, output setup for integrate and regular

* Fixed integrate bug.

* amend header; integrate variable name change.

* Integrate values in input file

* updates to timing for integrate

* Update Source/Diagnostics/ReducedDiags/FieldProbeParticleContainer.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* whitespace

* Update Source/Diagnostics/ReducedDiags/FieldProbeParticleContainer.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Functionality to create 2D line of particles. Input included. No output yet

* amend compiler errors

* Apply suggestions from code review - Style

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update reduce_diag_names

* 2D array setup- not complete

* field_probe_integrate change

* review amends

* Apply suggestions from code review - Style

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Vectors + AddNParticles

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbe.cpp

* Update FieldProbe.H

* bug fix and inputs

* reintroduce raw_fields functionality

* docs update and correction

* whitespaces

* Fix GPU Compile (raw_fields)

* changed f_probe to m_probe appropriately

* Typos

Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Better name for ParticleVal

* used map for observables and units

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Simplified output. Fixed double integration error

* Update FieldProbe.H

Removed unneeded variable

* comments and fixed rawFields

* white spaces

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Update FieldProbe.H

* Guard on write

* Update Source/Diagnostics/ReducedDiags/FieldProbe.cpp

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Fix Syntax Error in Write

* Fix Init: Only 1 Particle (MPI)

Only one MPI rank adds a particle, which we then distribute
into the right rank.

* Fix MPI Deadlock: No Early Return

We just want to skip the write to `m_data`, not the rest
of the logic.
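
A minimal sketch of the deadlock pattern being fixed (illustration only, not the actual FieldProbe code):

```
#include <AMReX_ParallelDescriptor.H>
#include <AMReX_REAL.H>

void GatherAndWrite (bool have_data_on_this_rank, amrex::Real local_value)
{
    // Wrong: an early return here would leave the other ranks blocked
    // in the collective reduction below.
    // if (!have_data_on_this_rank) { return; }

    amrex::Real sum = have_data_on_this_rank ? local_value : amrex::Real(0.0);
    amrex::ParallelDescriptor::ReduceRealSum(sum);  // collective: every rank must call it

    if (amrex::ParallelDescriptor::IOProcessor()) {
        // only the write itself happens here; the collective above ran on all ranks
    }
}
```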

* Vector storage, Add N particle, debugging

* Fix Probe in Domain Logic

General global check, not only on a single rank.

* comments

* Container: Add `const_iterator`

* Fix MPI Comms

* Cleaning

* Remove PrintAll Leftover

* 1-D Output vector

* Reduced Diags: Support LoadBalance

* Cleaning of "Definitions ()"

* Updating inputs for testing Line

* data type specification

* IO

* Update inputs

* error in header. Send to IO CPU

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* moved rank communication out of tile loop

* change m_data_vector. IO particle count

* Fixed input for rename. Gather particle number

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Gather

* Changed data output method to pushing values on vector
MPI Gather and Gatherv for data
Tell Evolve to run Load Balance
Tell InitData to run Load Balance
Define output method by printing valid particles

NOTE! Needs cleaning, commenting, removing some debugging tools

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Suggestions from review

* defensive programming on if / if else
added comments throughout
removed temporary debugging lines

* Documentation for line detector option

* Whitespaces

* MPI_Gather -> amrex::ParallelDescriptor::Gather

* ParallelDescriptor, vectors at the end, no more 1990s malloc for capacity allocation

* whitespaces

* output optimized for CSV

* Python notebook for reference

* Input file for current test on CORI

* 2D plane functionality

* Regression test

* Delete DoubleSlit_2021_11_17.ipynb

* Whitespace fix

* pandas

* Error set to 2.5%, fixed source

* style

* zenodo orcid

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* Review changes and swapped MPI direct call for amrex::ParallelDescriptor

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update WarpX-tests.ini

Fix Regression Test

* Update WarpX-tests.ini

Open PMD in cmakeSetupOpts

* Update dependencies.rst

* Update analysis_field_probe.py

* Update WarpX-tests.ini

* Analysis Script: Executable

```
chmod a+x scriptname.py
```

and explicitly use `python3`

* openPMD: optional for this test

* Inputs: add `geometry.dims = 2`

* Remove: diag1.write_species = 0

- segfaults for plotfiles (bug?)
- not needed, since we have no particles anyway

* Fix: typo in analysis

* test requirements: pandas

* Fix: Types

* as string: `<red_diag>.probe_geometry`

change this to a string, which is more user-friendly

* Python Script: Simplify + Style

* C++: Clean Up

* Azure: Run `apt update`

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Fix ASSERT for Hybrid Scheme & MR (ECP-WarpX#2744)

* CI: Use New "verbose" mode (ECP-WarpX#2747)

repeat captured errors to stderr, e.g., for CI runs

* Set geometry earlier in picmi (ECP-WarpX#2745)

* Set geometry earlier in Python

* Fix comment typos

* CI: Use new "archive_output = 0" mode (ECP-WarpX#2749)

Avoid `.tgz`-ing the output files, so we can interact directly
with plotfiles with our benchmark scripts.

Implementation proposed in:
AMReX-Codes/regression_testing#117

* .editorconfig: add missing newline

pretty sections

* Python: Fix UB in Inputs Passing (ECP-WarpX#2726)

Trying to fix the macOS PyPy 3.7 error seen in conda-forge/warpx-feedstock#37
Testing in conda-forge/warpx-feedstock#38

After googling for a while, the original implementation was likely based on https://code.activestate.com/lists/python-list/704158, which contains bugs.

1) Bug: `create_string_buffer`

Allocating new, null-terminated char arrays with `ctypes.create_string_buffer` leads to scrambled arrays in PyPy 3.7.
As far as I can see, this [should have also worked](https://docs.python.org/3/library/ctypes.html), but maybe there is a bug in the upstream implementation or the original code created some kind of use-after-free on a temporary while the new implementation just shares the existing byte address.

This leads to errors such as the ones here:
conda-forge/warpx-feedstock#38 (comment)

The call `argvC[i] = ctypes.c_char_p(enc_arg)` is equivalent, creating a `NULL`-terminated char array.

2) Bug: Last Argv Argument

The last entry in the array of char arrays `argv` in ANSI C needs to be a plain `NULL` pointer.
Before this PR, it was allocated but never initialized, leading to undefined behavior (read: crashes).

Reference: https://stackoverflow.com/a/39096006/2719194

3) Cleanup: there is a pre-defined `ctypes.c_char_p` we can use for pointer of char.
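
The argv contract from point 2, sketched in C++ (a stand-in `FakeMain` replaces the real entry point; illustration only):

```
#include <string>
#include <vector>

// Hypothetical stand-in for a C main-style entry point such as the one the
// Python bindings call; ANSI C expects argv[argc] == NULL.
int FakeMain (int /*argc*/, char** /*argv*/) { return 0; }

int CallWithArgs (const std::vector<std::string>& args)
{
    std::vector<char*> argv;
    argv.reserve(args.size() + 1);
    for (const auto& a : args) {
        argv.push_back(const_cast<char*>(a.c_str()));  // null-terminated strings
    }
    argv.push_back(nullptr);  // the trailing NULL the commit message refers to
    return FakeMain(static_cast<int>(args.size()), argv.data());
}
```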

* Do Not Read/Use Centering Info if do_nodal=1 (ECP-WarpX#2754)

* Docs: Python Dev Install `--no-deps` (ECP-WarpX#2750)

`--force-reinstall` will also re-install all dependencies, unless
`--no-deps` is also passed. In the case of re-installing developer
builds, this is what we want with pre-configured environments.

Using `--no-build-isolation` with the same flag does not achieve the
same effect.

* Refactor python callback handling (ECP-WarpX#2703)

* added support to uninstall an external Poisson solver and return to using the default MLMG solver; also updated some callbacks.py calls to Python3

* refactor callback handling - use a map to handle all the different callbacks

* warpx_callback_py_map does not need to link to C

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* further suggested changes from code review

* added function ExecutePythonCallback to reduce code duplication

* moved ExecutePythonCallback to WarpX_py

* added function IsPythonCallbackInstalled

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
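
A minimal sketch of the callback-map pattern this series describes (the signatures are assumptions based on the commit messages above, not the actual WarpX_py code):

```
#include <functional>
#include <map>
#include <string>

std::map<std::string, std::function<void()>> warpx_callback_py_map;

bool IsPythonCallbackInstalled (const std::string& name)
{
    return warpx_callback_py_map.count(name) > 0u;
}

void ExecutePythonCallback (const std::string& name)
{
    if (IsPythonCallbackInstalled(name)) {
        warpx_callback_py_map[name]();  // invoke the installed Python callback
    }
}
```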

* RZ: Do Not Add geometry.coord_sys (ECP-WarpX#2759)

* CI: Run `initial_distribution` on 1 MPI Process (ECP-WarpX#2760)

* Fix override default particle tiling (ECP-WarpX#2762)

* Override the default tiling option for particles *before* WarpX is initialized.

* bump AMReX version to see if tests pass.

* fix typo

* style

* use queryAdd

* namespace

* RigidInjection_BTD: Specify H5 Backend (ECP-WarpX#2761)

We default to `.bp` files when available.
This results for this test in:
```
amrex::Abort::0:: Currently BackTransformed diagnostics type does not support species output for ADIOS backend. Please select h5 as openpmd backend !!!
```

* Docs: Reorder HPC Profiles + Batch Scripts (ECP-WarpX#2757)

* Docs: Reorder Summit Files

* Docs: Reorder Spock Files

* Docs: Reorder Cori Files

* Docs: Reorder Perlmutter Files

* Docs: Reorder Juwels Files

* Docs: Reorder Lassen Files

* Docs: Reorder Quartz Files

* Docs: Reorder Ookami Files

* Docs: Also Move Summit Profile Script

* Listing Captions: Location in Source

* Sphinx: Clean Warnings/Formatting (ECP-WarpX#2758)

* Sphinx: Clean Warnings/Formatting

Remove formatting errors in Sphinx that caused warnings/ill-formed
formatting.

* Move `boundary.reflect_all_velocities`

Co-authored-by: Neïl Zaim <49716072+NeilZaim@users.noreply.github.com>

* Fix: character after verbatim

Not allowed and does render broken.

* Fix broken `.. directive::`

Co-authored-by: Neïl Zaim <49716072+NeilZaim@users.noreply.github.com>

* Fix unstable Python_particle_attr_access CI tests (ECP-WarpX#2766)

* explicitly set the numpy random seed in Python_particle_attr_access tests

* also shrink boundaries in which particles are injected for good measure

* also explicitly set the numpy random seed in Python_restart_runtime_components CI test

* Docs: Clang 7+ (ECP-WarpX#2763)

Seen in openPMD/openPMD-api#1164 for
`<variant>`, clang 6 is not to be recommended for C++17 compilation
unless by expert users that know how to change the stdlib.

Thus, let's only recommend Clang 7+.

Ubuntu 18.04 (bionic/oldstable) ships clang 6 by default, but
Ubuntu 20.04 (focal/stable) is already at clang 10.

* Doc: Perlmutter Note `_g` Batch Script (ECP-WarpX#2767)

Add one more note.

* Implement PML for the outer RZ boundary with PSATD (ECP-WarpX#2211)

* Initial version of RZ PSATD PML BCs

* Cleaned up some bugs

* Add support of do_pml_in_domain option

* Cleaned up stuff for building

* Fix PMLPsatdAlgorithm macro

* Removed unneeded variable from SpectralSolverRZ

* Change length 3 arrays to length 2 (for 2D)

* Cleanup around DampPML

* Added more checks of pml[lev]

* Added CI test for RZ PML

* Added code to update the corner guard cells

* Further updates

* Added CI test

* Fixed EOL space

* Updated CI benchmarks, removing round off fields

* Changes to CI missed on previous commit

* Various fixes for clean up

* More fixes for clean up

* Further cleanup

* Updated benchmark

* Fixed benchmarks file

* Minor cleanup

* Added round off benchmark values

* Fixed testname in analysis_pml_psatd_rz.py

* Update comment in analysis file

* Put pml_rz code in RZ and PSATD macro blocks

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add geometry.dims input to CI test input file, inputs_rz

* Cleanup to match recent changes

Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* AMReX: Update latest (ECP-WarpX#2752)

* Access species specific charge density from python (ECP-WarpX#2710)

* added python wrapper function to deposit a specific species density in rho_fp

* added 1D ES input file with MCC that uses the charge deposition functionality

* reset rho_fp[lev] before depositing

* updated documentation

* switch to using simulation.extension in Poisson solver

* Apply suggestion from code review

Co-authored-by: Phil Miller <phil.miller@intensecomputing.com>

* suggested changes from code review

* add comment explaining why a direct Poisson solver is used

* removed direct solver in 1D example since it is actually slower than the MLMG solver

* Apply suggestions from code review

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* added docstring for warpx_depositChargeDensity

* fixed order of imports in new PICMI input file

Co-authored-by: Phil Miller <phil.miller@intensecomputing.com>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* update reference values for CI test; add SyncRho call to deposit rho

Co-authored-by: Andrew Myers <atmyers@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Tiberius Rheaume <35204125+TiberiusRheaume@users.noreply.github.com>
Co-authored-by: David Grote <dpgrote@lbl.gov>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: David Grote <grote1@llnl.gov>
Co-authored-by: Neïl Zaim <49716072+NeilZaim@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Phil Miller <phil.miller@intensecomputing.com>
@dpgrote deleted the RZ_psatd_pml branch on February 5, 2022 01:41