
CFL-limited adaptive timestepping for electrostatic solver #5176

Merged

Conversation


@archermarx commented Aug 27, 2024

This PR implements CFL-limited adaptive timestepping for the electrostatic and magnetostatic solvers. This relies on #5169 (edit: now merged!) and would help fix a lot of the problems in #5113.

The algorithm is pretty simple. At the end of each simulation step, the timestep is updated to CFL * min(dx,dy,dz) / max_v, where max_v is the maximum speed of any particle in the simulation. To maintain accuracy with leapfrog-like methods, where position and velocity are stored at different times, we store both the current and next timestep in dt and dt_next, respectively. During a particle push, the position is pushed by dt and the momentum by dt/2 + dt_next/2. Then dt is set to dt_next, and dt_next is computed using the formula above. I found this approach demonstrated in Chen et al., which included a helpful diagram, although their time centering is offset from ours.

[Figure: time-centering diagram from Chen et al. illustrating the adaptive-timestep leapfrog scheme]
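In Python-style pseudocode (a minimal sketch of the scheme described above, not the actual WarpX C++ implementation; accel, dx_min, and the array layout are illustrative), one step looks roughly like:

import numpy as np

def adaptive_leapfrog_step(x, v, accel, dt, dt_next, cfl, dx_min):
    """One step of the dt/dt_next scheme: positions advance by the current dt,
    velocities by (dt + dt_next)/2 so they stay staggered by half a step."""
    x = x + v * dt                            # push position by dt
    v = v + accel(x) * 0.5 * (dt + dt_next)   # push momentum by dt/2 + dt_next/2
    dt = dt_next                              # the "next" step becomes the current one
    max_v = np.max(np.abs(v))                 # fastest particle speed
    dt_next = cfl * dx_min / max_v            # CFL-limited step for the following push
    return x, v, dt, dt_next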

EDIT: After some feedback from @roelof-groenewald, we now set the timestep at user-defined intervals rather than every step. When the timestep is updated, we first synchronize position and velocity, then set the timestep to CFL * min(dx,dy,dz) / max_v, where max_v is the maximum speed of any particle in the simulation, and finally push the velocity backward one half step and continue with the new timestep.
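A rough Python sketch of that interval-based update (illustrative names, not the WarpX API; accel(x) stands in for the field gather and q E / m):

import numpy as np

def maybe_update_timestep(step, x, v, accel, dt, cfl, dx_min, dt_update_interval, max_dt):
    """At user-defined intervals: synchronize x and v, recompute dt from the
    CFL condition, then push v back half of the new step to restore staggering."""
    if dt_update_interval <= 0 or step % dt_update_interval != 0:
        return v, dt
    v = v + accel(x) * 0.5 * dt               # synchronize: bring v to the same time as x
    max_v = np.max(np.abs(v))
    dt = min(cfl * dx_min / max_v, max_dt)    # new CFL-limited timestep, capped at max_dt
    v = v - accel(x) * 0.5 * dt               # push v back half of the *new* timestep
    return v, dt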

For PICMI, the CFL number for electrostatic simulations is now controlled by the warpx_cfl parameter in the ElectrostaticSolver class. Users may also specify warpx.max_dt and warpx.dt_update_interval. The former specifies the maximum allowable timestep and additionally sets the initial timestep; the latter determines how often the timestep is updated. Adaptive timestepping is disabled if this parameter is <= 0.
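As a usage illustration, a hypothetical PICMI setup might look like the following. warpx_cfl on ElectrostaticSolver is described above; the exact keyword names and placement for the max-dt and update-interval settings (warpx_max_dt and warpx_dt_update_interval below) are assumptions and should be checked against the merged picmi.py:

from pywarpx import picmi

grid = picmi.Cartesian3DGrid(
    number_of_cells=[64, 64, 64],
    lower_bound=[0.0, 0.0, 0.0],
    upper_bound=[0.1, 0.1, 0.1],
    lower_boundary_conditions=['periodic'] * 3,
    upper_boundary_conditions=['periodic'] * 3,
)

solver = picmi.ElectrostaticSolver(
    grid=grid,
    warpx_cfl=0.5,                   # CFL number for the adaptive timestep
    warpx_max_dt=1.0e-9,             # assumed keyword: cap on dt, also the initial dt
    warpx_dt_update_interval=100,    # assumed keyword: recompute dt every 100 steps; <= 0 disables
)

sim = picmi.Simulation(solver=solver, max_steps=1000)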

TODO

  • Determine if this would work for the hybrid solver (no, CFL is not main limiter)
  • Add timestep diagnostic
  • Add version of ES sphere test

@archermarx added the "enhancement" and "component: electrostatic" labels Aug 27, 2024
@archermarx marked this pull request as ready for review September 3, 2024 14:33

archermarx commented Sep 3, 2024

Marking as ready for review. Ran a long simulation over the weekend using this feature at two different CFL numbers and encountered no issues.

@RemiLehe self-requested a review September 4, 2024 18:37
@RemiLehe self-assigned this Sep 4, 2024
Resolved review thread on Python/pywarpx/picmi.py (outdated).
@archermarx

Addressed most of the comments. Remaining open issues concern the hybrid solver. If we think that will work OK with this timestepping method, then we can use it for all non-EM solver cases. I also want to add a test case or two. Would modifying one of the existing ES test cases to not use a constant dt be sufficient, or should I add an extra test?

@roelof-groenewald (Member) left a comment

This is really cool! I do worry a bit that the need to pass dt_next to so many functions for which it usually is not needed tangles things up in a way that could make it hard to maintain in the future, so I wonder if there is a way to avoid this. Said another way, I wonder if this new algorithm can be added without touching as many functions that are used by other algorithms as well.
For example, to get adaptive time stepping in the ES scheme one could run a simulation for a small number of steps and create a checkpoint (during which particle positions and velocities are synchronized), and then restart that simulation with a different timestep. Could we do this in a way that doesn't require actually checkpointing and restarting? The WarpX evolve loop includes a check for whether positions and velocities are synchronized (is_synchronized) and moves the momentum back half a timestep if it is. Could this algorithm instead re-synchronize the positions and velocities every time the timestep is changed (setting is_synchronized = True)? At the next loop the momentum would then be moved back based on the new timestep and the push should work as usual. Does that make sense? What do you think?

Comment on lines 2748 to 2750

warpx_max_dt: float, optional
The maximum allowable timestep when using adaptive timestepping (only available for non-electromagnetic solvers)
Member:

Seeing as this is specific to the Poisson solver I think it would make more sense to put this argument in ElectrostaticSolver.

Contributor Author:
Sure, that makes sense

Member:
I don't think the adaptive time stepping would be particularly useful for the hybrid code since the allowable timestep in that model is set by the frequency of Whistler waves at the smallest wavelength the grid can support rather than a particle CFL condition.

@archermarx

> This is really cool! I do worry a bit that the need to pass dt_next to so many functions for which it usually is not needed tangles things up in a way that could make it hard to maintain in the future [...] Could this algorithm instead re-synchronize the positions and velocities every time the timestep is changed (setting is_synchronized = True)? At the next loop the momentum would then be moved back based on the new timestep and the push should work as usual. Does that make sense? What do you think?

That's a neat idea! I'll see if I can implement that instead.

@archermarx

Ok, I refactored according to your suggestion. I think it's much cleaner now.

@archermarx

I could use a bit of help with the last failing test. It's this one, in cartesian2D:

194 - test_2d_pml_x_psatd_restart.analysis (Failed)

I don't do anything to the restarts at all, so I'm unsure why this would be off. Here's the test output:


tolerance = 1e-12

field: ('boxlib', 'Bx'); error = 0.0
field: ('boxlib', 'divE'); error = 0.0
field: ('boxlib', 'rho'); error = 0.0

ERROR: Benchmark and output file checksum have different value for key [lev=0,divE]
Benchmark: [lev=0,divE] 1.891054630780910e+05
Test file: [lev=0,divE] 1.891054630778933e+05
Absolute error: 1.98e-07
Relative error: 1.05e-12
ERROR: Benchmark and output file checksum have different value for key [lev=0,rho]
Benchmark: [lev=0,rho] 1.673495200001335e-06
Test file: [lev=0,rho] 1.673495199999600e-06
Absolute error: 1.73e-18
Relative error: 1.04e-12

----------------
New file for test_2d_pml_x_psatd:
{
  "lev=0": {
    "Bx": 1.1599355635529139e-08,
    "By": 1.51580110019918e-08,
    "Bz": 8.995705405878174e-09,
    "Ex": 3.936315546012292,
    "Ey": 4.13448272729307,
    "Ez": 3.307082741814466,
    "divE": 189105.46307789328,
    "rho": 1.6734951999995999e-06
  }
}

@roelof-groenewald (Member) left a comment

Thanks for updating the implementation! I just have a few minor comments.

As for the test that is suddenly failing, I'd have to guess it is due to the timestep in that simulation having slightly changed since the logic to set it is subtly (and slightly) different. Before it would have been:

deltat = cfl * std::min(dx[0], dx[1]) / PhysConst::c;

whereas now it is

deltat = cfl / PhysConst::c * std::min({AMREX_D_DECL(dx[0], dx[1], dx[2])});

I'm guessing something in that, probably the order of operations, caused a floating point change in deltat which broke the test.
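A self-contained Python illustration of how the two orderings can differ in the last bits (the numbers are arbitrary, not taken from the failing test):

cfl = 0.9999
dx_min = 3.3e-5
c = 299792458.0

dt_old = cfl * dx_min / c   # old ordering: (cfl * dx) / c
dt_new = cfl / c * dx_min   # new ordering: (cfl / c) * dx
# The two results agree to roughly 15 significant digits but can differ by
# about one ulp; over many steps that can grow into the ~1e-12 relative
# differences seen in the checksums above.
print(dt_old == dt_new, abs(dt_old - dt_new))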

Resolved review threads on Source/Evolve/WarpXComputeDt.cpp (outdated) and Source/WarpX.H.
@roelof-groenewald self-assigned this Sep 16, 2024

archermarx commented Sep 17, 2024

> Thanks for updating the implementation! I just have a few minor comments. As for the test that is suddenly failing, I'd have to guess it is due to the timestep in that simulation having slightly changed since the logic to set it is subtly (and slightly) different. [...] I'm guessing something in that, probably the order of operations, caused a floating point change in deltat which broke the test.

That's what I was thinking, but the weird part is it's only the restart that fails. The regular test_2d_pml_x_psatd test passes normally. I'm not sure how to fix that, since they both use the same checksum. I'll try switching the order of operations and see what happens.

@archermarx

Ok, looks like tests pass now, yay! Also addressed the most recent round of comments.

@roelof-groenewald (Member) left a comment
Looks good! Thanks for adding this feature!

@roelof-groenewald merged commit f8f4e57 into ECP-WarpX:development Sep 18, 2024
37 checks passed