
Conversation

@ZedThree (Member) commented Nov 7, 2025

Includes #3196

Ports the following from the Hermes-3 branch fci-more-fixes-update-more:

  • FV::Div_par_mod
  • FV::Div_par_fvv
  • Div_par_K_Grad_par_mod (not strictly FV!)
  • FV::Superbee

Plus adds MMS tests for these and the existing FV operators:

  • FV::Div_par
  • FV::Div_par_K_Grad_par

with all the available slope limiters, as appropriate.


Because the slope limiters kind of necessarily introduce inaccuracies, I've reduced the expected order of the operators, with Upwind having order 1 and the rest 1.5. This feels like a bit of a fudge, but it's not clear to me how to treat this better.
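
As an illustration of why the expected order drops, here is a minimal sketch of the Superbee limiter in its standard textbook form (an assumption: the ported FV::Superbee may be factored differently, but the behaviour at extrema is the same):

#include <algorithm>

// Textbook Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2)),
// where r is the ratio of consecutive slopes. The scheme is second order in
// smooth, monotonic regions, but phi(r) = 0 for r <= 0, i.e. at local
// extrema, where it degrades to first-order upwinding. An oscillatory
// manufactured solution therefore cannot show clean second-order convergence.
inline double superbee(double r) {
  return std::max({0.0, std::min(2.0 * r, 1.0), std::min(r, 2.0)});
}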


The ported operators will obviously conflict with the Hermes-3 ones, so there are two or three things to do:

  • Remove the ported ones from Hermes-3 when updating the bundled BOUT++
  • Move all the operators here into bout::FV:: namespace
  • Move all the operators there into hermes::FV:: namespace

@github-actions (bot): clang-tidy made some suggestions


@ZedThree (Member, Author) commented Nov 7, 2025

Hmm, the FV::Div_par_fvv MMS test fails for FCI (it gets order ~1.3), but there is a comment on the penalty term:

// Penalty terms. This implementation is very dissipative.

in which case I would perhaps expect oscillations to be damped, but it seems to have the opposite effect (at least in some places):

[image: error zoomed in on one region, for one y slice]
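
For context, the "order ~1.3" quoted above is the observed convergence order from the MMS error norms. Here is a minimal sketch of how that number is obtained from errors at successively doubled resolutions (the helper name is hypothetical, not part of the test suite):

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical helper: given error norms E_i on grids whose spacing is halved
// at each refinement, the observed order between consecutive grids is
// log2(E_i / E_{i+1}).
std::vector<double> observed_orders(const std::vector<double>& errors) {
  std::vector<double> orders;
  for (std::size_t i = 0; i + 1 < errors.size(); ++i) {
    orders.push_back(std::log2(errors[i] / errors[i + 1]));
  }
  return orders;
}

int main() {
  // Example: errors shrinking by ~2.5x per refinement correspond to order ~1.3
  for (double p : observed_orders({1.0e-2, 4.0e-3, 1.6e-3})) {
    std::printf("observed order: %.2f\n", p);
  }
}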

@dschwoerer (Contributor) commented:

I think this operator needs the parallel slices for the metric coefficients to be set. That is only done for Div_par_K_Grad_par_mod, which uses J_up - only available in the f3dwy branch, but not here.

// Div_par operators require B parallel slices:
// Coordinates::geometry doesn't ensure this (yet)
auto& Bxy = mesh->getCoordinates()->Bxy;
auto& J = mesh->getCoordinates()->J;
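
For the simple MMS setups in this PR, one way to make sure those parallel slices exist is to communicate the fields explicitly before calling the Div_par operators. A sketch, assuming mesh->communicate recomputes the yup/ydown slices of the fields it is given; as the next comment points out, this is probably not correct on realistic FCI grids:

// Fill guard cells and recompute the parallel slices of the metric fields.
// Caveat (see below): simply communicating/interpolating J this way is
// likely wrong for any realistic grid; see #3197.
mesh->communicate(Bxy, J);
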
@dschwoerer (Contributor) commented:

Note that communicating J is likely wrong for any realistic grid. You need #3197.
For a simple slab test, that may well work, but not for real grids.

@ZedThree (Member, Author) commented:

What's the issue? And what's required to fix it?

@dschwoerer (Contributor) commented Nov 10, 2025

The parallel slices can have a completely different geometry, so one needs J from the traced parallel slices. The communication interpolates J, which works for lab-based quantities, but not for mesh-based quantities.

One solution is to also write out the parallel metric components in zoidberg, read them in BOUT++, and prevent them from being overwritten:
https://github.com/boutproject/zoidberg/blob/more-6/zoidberg/zoidberg.py#L421
https://github.com/boutproject/BOUT-dev/blob/f3dwy/src/mesh/parallel/fci.cxx#L134

See also:
https://bout.zulipchat.com/#narrow/channel/535664-BSTING/topic/Connection.20between.20J.20and.20B.20in.20FCI/with/554530388

@ZedThree (Member, Author) commented:

I've tried this version of the penalty term for Div_par_fvv as suggested by @dschwoerer, but it has the same behaviour as the version here.

Does anyone have a derivation of this term, or where it comes from?

Here's what the penalty term itself looks like, compared to the solution:

[image: penalty term compared to the solution]

And here's the result and the error:

[image: result and error]

So the penalty term dominates the error, which I guess is expected. But maybe a grid-scale-independent dissipation is not desirable?

@bendudson Any ideas for ways forward here? Is this another case where an oscillatory solution is not realistic, and this method is more suited to monotonic(ish) fields?

We could just set the order for this method to be 1 and be done with it, but that feels less than ideal.

ZedThree and others added 21 commits December 3, 2025 14:25
- Include all relevant headers, remove unused ones, add some forward
  declarations
- Make data `private` instead of `protected`
- Add getter instead of `const` member
- Change member reference to pointer
- Move ctor to .cxx file
- Use `std::array` instead of C array
- Bunch of other small clang-tidy fixes
The Div_par operators use parallel slices of B -- with 2D metrics,
these are just the field itself, in 3D we need the actual
slices. While `Coordinates::geometry` does communicate the fields, it
puts off calculating the parallel slices because that needs the fully
constructed `Coordinates`.

Upcoming changes should fix this and remove the need to explicitly
communicate `Coordinates` members.
Co-authored-by: David Bold <dschwoerer@users.noreply.github.com>
ZedThree requested a review from dschwoerer, December 3, 2025 14:36
@ZedThree (Member, Author) commented Dec 3, 2025

I think the conclusion for FV::Div_par_fvv is that it's just a first-order operator, at least for general oscillatory inputs.
