
Conversation

@bendudson (Contributor)

  • Fix Div_par(f, v) for FCI

When using FCI yup/down fields, each poloidal plane
uses a different coordinate system. Quantities like J
therefore can't be averaged between planes.

Magnetic field strength is a scalar that can be interpolated,
and is used here to calculate the divergence of a parallel flow.
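
In continuous form (my summary; the identity is derived later in this thread), the scheme needs only the scalar B on the neighbouring planes:

$$\nabla\cdot\left(\mathbf{b}\,f\,v\right) = B\,\nabla_\parallel\!\left(\frac{f\,v}{B}\right)$$
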
@bendudson marked this pull request as draft March 15, 2025 22:15

@github-actions bot left a comment


clang-tidy made some suggestions

(fluxRight - fluxLeft) / (coord->dy(i, j, k) * coord->J(i, j, k));
}
Field3D result{emptyFrom(f)};
BOUT_FOR(i, f.getRegion("RGN_NOBNDRY")) {

warning: no header providing "BOUT_FOR" is directly included [misc-include-cleaner]

src/mesh/difops.cxx:27:

- #include <bout/assert.hxx>
+ #include "bout/region.hxx"
+ #include <bout/assert.hxx>


#if CHECK > 0
if (!std::isfinite(result[i])) {
output.write("{} {} {} {}\n", f_up[i], v_up[i], f_down[i], v_down[i]);

warning: no header providing "output" is directly included [misc-include-cleaner]

src/mesh/difops.cxx:27:

- #include <bout/assert.hxx>
+ #include "bout/output.hxx"
+ #include <bout/assert.hxx>

if (!std::isfinite(result[i])) {
output.write("{} {} {} {}\n", f_up[i], v_up[i], f_down[i], v_down[i]);
output.write("{} {} {} {} {}\n", B[i], B_up[i], B_down[i], dy[i], sqrt(g_22[i]));
throw BoutException("Non-finite value in Div_");

warning: no header providing "BoutException" is directly included [misc-include-cleaner]

src/mesh/difops.cxx:25:

- #include "bout/build_defines.hxx"
+ #include "bout/boutexception.hxx"
+ #include "bout/build_defines.hxx"

bendudson and others added 7 commits March 15, 2025 15:21
• Use Bxy rather than J on neighboring slices: B can be compared
  between slices, but J cannot (different coordinate system).
• Each poloidal slice has a different coordinate system, so metrics
  can't be directly compared or averaged. Only Bxy and the parallel
  connection length (dy, g_22) make sense to calculate.
• Don't calculate parallel slices in a derivative: the result is
  unlikely to be correct.
• If the argument to `sqrt` has parallel slices, the slices will be
  operated on and the result will have parallel slices. To avoid
  unnecessary work, discard slices before calling.
• Add `withoutParallelSlices`: returns a shallow copy without parallel
  slices, so the user can avoid performing calculations on slices if
  they are not needed.
• Always perform calculations in the yup/down slices if present in
  BOTH arguments, i.e. automagic is always on for arithmetic operators.
  To avoid unnecessary work, use `withoutParallelSlices` to pass
  arguments without parallel slices.
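
A usage sketch of the behaviour described above (my illustration, not code from this PR's diff):

#include <bout/field3d.hxx>

// Multiply two fields that both carry yup/down parallel slices.
Field3D parallel_product(const Field3D& a, const Field3D& b) {
  // Both operands have slices, so operator* also computes the product
  // on the yup/down slices ("automagic always on").
  Field3D with_slices = a * b;

  // To skip that extra work, strip the slices from one operand first:
  // withoutParallelSlices() returns a shallow copy without slices.
  Field3D bulk_only = a.withoutParallelSlices() * b;

  return bulk_only;
}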

@dschwoerer left a comment


I think we should talk about this; I do not think we want to remove the part about the metric coefficients?

/* Operate on parallel slices */ \
result.splitParallelSlicesAndAllocate(); \
for (size_t i{0}; i != f.numberParallelSlices(); ++i) { \
BOUT_FOR(d, result.getRegion(rgn)) { result.yup(i)[d] = func(f.yup(i)[d]); } \

Contributor

but here we need a shifted region (at least for FCI) ...

Contributor (Author)

My thinking with this was that if we are operating over a given region of a field, then we also want to operate over all the yup/down fields connected to those points. Hence the region indices should be the same for yup/down. I guess you have a different model for how this should work?


/// Returns a shallow copy without parallel slices
Field3D withoutParallelSlices() const {
Field3D result{getMesh(), getLocation(), getDirections(), getRegionID()};

Contributor

Would that not be more general:

Suggested change
- Field3D result{getMesh(), getLocation(), getDirections(), getRegionID()};
+ auto result{emptyFrom(*this)};

- return setName(coords->J / sqrt(coords->g_22) * Grad_par(f_B, outloc, method),
-                "Div_par({:s})", f.name);
+ return setName(Bxy * Grad_par(f_B, outloc, method), "Div_par({:s})", f.name);
  }

Contributor

But this is about the metric coefficients, not about the magnetic field strength.
Only for Clebsch coordinates are the two the same, not for FCI.

Contributor (Author)

It works because the area of a flux tube is inversely proportional to B. If we're mapping along flux tubes from one plane to the next, then the ratio of the flux-tube areas is the inverse of the ratio of B on the two planes.

Comment on lines 270 to 272
result[i] = B[i]
* ((f_up[i] * v_up[i] / B_up[i]) - (f_down[i] * v_down[i] / B_down[i]))
/ (dy[i] * sqrt(g_22[i]));

Contributor

This is only valid in Clebsch

Contributor (Author)

Ah no. This is always valid for all coordinate systems, not just Clebsch. It's just a vector identity from Div(B) = 0

Div(b v) = Div(B v / |B|) = Div(B) v/|B| + B dot Grad(v / |B|)

so the divergence of a parallel vector Div_par(v) is always B * Grad_par( v / B )
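
Spelled out (my LaTeX transcription of the identity above), with $\nabla\cdot\mathbf{B} = 0$ eliminating the first term:

$$\nabla\cdot(\mathbf{b}\,v) = \nabla\cdot\!\left(\frac{\mathbf{B}}{B}\,v\right) = \underbrace{(\nabla\cdot\mathbf{B})}_{=\,0}\,\frac{v}{B} + \mathbf{B}\cdot\nabla\!\left(\frac{v}{B}\right) = B\,\nabla_\parallel\!\left(\frac{v}{B}\right)$$

so Div_par(v) = B * Grad_par(v / B) holds in any coordinate system, not just Clebsch.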

Contributor

Because the mesh is locally field-aligned - I see 👍

Comment on lines 276 to 277
output.write("{} {} {} {}\n", f_up[i], v_up[i], f_down[i], v_down[i]);
output.write("{} {} {} {} {}\n", B[i], B_up[i], B_down[i], dy[i], sqrt(g_22[i]));

Contributor

We probably want to move that to the exception message below, and also add `i` to make it easier to figure out why that value is non-finite ...
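
A sketch of that suggestion (my illustration, not this PR's code; BoutException takes fmt-style format arguments like the output.write calls above, and i.ind is the flat index of a BOUT_FOR index):

#if CHECK > 0
    if (!std::isfinite(result[i])) {
      // Fold the diagnostic values and the index into the exception
      // itself, instead of writing them to output first (sketch only):
      throw BoutException(
          "Non-finite value in Div_ at i={}: f_up={}, v_up={}, f_down={}, "
          "v_down={}, B={}, B_up={}, B_down={}, dy={}, sqrt(g_22)={}",
          i.ind, f_up[i], v_up[i], f_down[i], v_down[i], B[i], B_up[i],
          B_down[i], dy[i], sqrt(g_22[i]));
    }
#endif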

bendudson and others added 10 commits March 17, 2025 21:12
• Added a comment to explain why it's OK to use the yup/down slices of
  the metric components if they are read from the grid file.
• Iterated for `i < localmesh->ystart - abs_offset()`, so when ystart
  is 1 and the boundary offset is 1, no boundary is applied (see the
  sketch after this list).
• Reverted some changes back to David's version: prefer consistent
  behavior, so that code works and can be optimized without breaking.
  The `field.withoutParallelSlices()` method can be used to strip
  slices before calling an operation, if slices are not needed.
• If the field has parallel slices, assign the value also to the
  slices.
• `applyParallelBoundary` can be a no-op for non-FCI.
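
A tiny self-contained illustration of that loop bound (my sketch; abs_offset here is a stand-in constant for the abs_offset() method named in the commit message):

#include <iostream>

int main() {
  const int ystart = 1;     // localmesh->ystart in the commit message
  const int abs_offset = 1; // stand-in for abs_offset()
  int applied = 0;
  // Iterating for i < ystart - abs_offset: with both equal to 1 the
  // upper bound is 0, the body never runs, and no boundary is applied.
  for (int i = 0; i < ystart - abs_offset; ++i) {
    ++applied;
  }
  std::cout << "boundary applications: " << applied << "\n"; // prints 0
  return 0;
}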