Grid virtual casing #217

Open

wants to merge 33 commits into master from grid-virtual-casing2

Changes from all commits · 33 commits
689370a
grid based virtual casing implemented
missing-user Nov 29, 2024
8339136
documentation
missing-user Nov 29, 2024
5d26ac4
accuracy estimates of virtual casing method
missing-user Nov 29, 2024
5502bb3
switch between grid based VC or adaptive VC
missing-user Nov 29, 2024
9ece12b
variable renaming, added to outputlist
missing-user Nov 30, 2024
e89bbcf
add to inputlist copy
missing-user Nov 30, 2024
87b10c4
unused variables
missing-user Nov 30, 2024
ce379ca
variable naming and documentation
missing-user Dec 7, 2024
f3a3fa4
Revert broken WIP code
missing-user Dec 7, 2024
20d9a04
rename to vcNt, vcNz
missing-user Dec 7, 2024
745e4c4
Revert derivative comment deletion
missing-user Dec 7, 2024
57a2d3f
typo prevented compilation
missing-user Dec 7, 2024
b2a3f0d
MPI parallel casinggrid
missing-user Dec 12, 2024
1e8ec20
typo in loop counter
missing-user Dec 12, 2024
544cdd0
logging only on rank0
missing-user Dec 12, 2024
497f187
Merge branch 'PrincetonUniversity:master' into grid-virtual-casing2
missing-user Dec 12, 2024
389b4b1
Exploit symmetries in VirtualCasing
missing-user Dec 12, 2024
dfff9a6
remove NaN check
missing-user Dec 12, 2024
f7adb3c
Fix casinggrid bug
missing-user Dec 13, 2024
e2c430b
documented, benchmarked, increased accuracy casinggrid
missing-user Dec 13, 2024
3e13472
Merge branch 'grid-virtual-casing2' of https://github.com/missing-use…
missing-user Dec 13, 2024
8c80363
correct stride regardless of Nt>Nz or Nt<Nz
missing-user Dec 13, 2024
7491933
max relative error instead of mean
missing-user Dec 14, 2024
e3cb69e
add to tests
missing-user Dec 16, 2024
461ebec
increased testcase resolution, improved logging
missing-user Dec 16, 2024
d8ecd62
missing zero initialization broke MPI, casinggrid can now skip parts …
missing-user Dec 16, 2024
78ff6dc
wrong build.yml xspec path
missing-user Dec 16, 2024
df4b737
reduced required resolution for gridVC
missing-user Dec 16, 2024
90cf04d
Merge branch 'grid-virtual-casing2' of https://github.com/missing-use…
missing-user Dec 16, 2024
0b6ab77
captalize vcNt vcNz
missing-user Dec 17, 2024
2cf26c2
update documentation
missing-user Dec 19, 2024
8148182
Merge remote-tracking branch 'upstream/master' into grid-virtual-casing2
missing-user Jan 31, 2025
a93400e
higher default resolution, vcNz is defined as resolution for one fiel…
missing-user Jan 31, 2025
6 changes: 6 additions & 0 deletions .github/workflows/build.yml
@@ -69,6 +69,12 @@ jobs:
export OMP_NUM_THREADS=1
mpiexec -n 2 --allow-run-as-root ${SPEC_PATH}/xspec G3V08L3Fr.001.sp
python3 -m py_spec.ci.test compare.h5 G3V08L3Fr.001.sp.h5 --tol 1e-10
- name: current_constraint_free_boundary_grid
run: |
cd ${SPEC_PATH}/ci/G3V08L3FrGrid
export OMP_NUM_THREADS=1
mpiexec -n 2 --allow-run-as-root $SPEC_PATH/xspec G3V08L3FrGrid.001.sp
python3 -m py_spec.ci.test compare.h5 G3V08L3FrGrid.001.sp.h5 --tol 1e-10



6 changes: 6 additions & 0 deletions .github/workflows/build_cmake.yml
@@ -75,4 +75,10 @@ jobs:
export OMP_NUM_THREADS=1
mpiexec -n 2 --allow-run-as-root $SPEC_PATH/install/bin/xspec G3V08L3Fr.001.sp
python3 -m py_spec.ci.test compare.h5 G3V08L3Fr.001.sp.h5 --tol 1e-10
- name: current_constraint_free_boundary_grid
run: |
cd ${SPEC_PATH}/ci/G3V08L3FrGrid
export OMP_NUM_THREADS=1
mpiexec -n 2 --allow-run-as-root $SPEC_PATH/install/bin/xspec G3V08L3FrGrid.001.sp
python3 -m py_spec.ci.test compare.h5 G3V08L3FrGrid.001.sp.h5 --tol 1e-10

234 changes: 234 additions & 0 deletions ci/G3V08L3FrGrid/G3V08L3FrGrid.001.sp

Large diffs are not rendered by default.

Binary file added ci/G3V08L3FrGrid/compare.h5
Binary file not shown.
15 changes: 15 additions & 0 deletions spec_refs.bib
@@ -331,6 +331,21 @@ @article{y2017_loizu
language = "en"
}

@article{y2020_hudson,
doi = {10.1088/1361-6587/ab9a61},
url = {https://dx.doi.org/10.1088/1361-6587/ab9a61},
year = {2020},
month = {jul},
publisher = {IOP Publishing},
volume = {62},
number = {8},
pages = {084002},
author = {S R Hudson and J Loizu and C Zhu and Z S Qu and C Nührenberg and S Lazerson and C B Smiet and M J Hole},
title = {Free-boundary MRxMHD equilibrium calculations using the stepped-pressure equilibrium code},
journal = {Plasma Physics and Controlled Fusion},
abstract = {The stepped-pressure equilibrium code (SPEC) (Hudson et al 2012 Phys. Plasmas 19, 112 502) is extended to enable free-boundary multi-region relaxed magnetohydrodynamic (MRxMHD) equilibrium calculations. The vacuum field surrounding the plasma inside an arbitrary 'computational boundary', is computed, and the virtual-casing principle is used iteratively to compute the normal field on so that the equilibrium is consistent with an externally produced magnetic field. Recent modifications to SPEC are described, such as the use of Chebyshev polynomials to describe the radial dependence of the magnetic vector potential, and a variety of free-boundary verification calculations are presented.}
}

@article{y2023_baillod,
title = "{Equilibrium\textit{$\beta$}-limits} dependence on bootstrap
current in classical stellarators",
215 changes: 160 additions & 55 deletions src/bnorml.f90
@@ -31,7 +31,8 @@
!l tex (Note that the computational boundary does not change, so this needs only to be determined once.)
!l tex \item At each point on the computational boundary (i.e., on the discrete grid),
!l tex \link{casing} is used to compute the plasma field using the virtual casing principle.
!l tex I think that \link{casing} returns the field in Cartesian coordinates, i.e., ${\bf B} = B_x {\bf i} + B_y {\bf j} + B_z {\bf k}$.
!l tex \link{casing} returns the normal field ${\bf B} \cdot {\bf n}$ on the computational boundary, with ${\bf n}$ being the unit normal vector to the
!l tex computational boundary. Internally it computes the field in Cartesian coordinates, i.e., ${\bf B} = B_x {\bf i} + B_y {\bf j} + B_z {\bf k}$.
!l tex \item In toroidal geometry, the vector transformation from Cartesian to cylindrical is given by
!l tex \be \begin{array}{cccccccccccccccccccccc}
!l tex B^R & = & & + B_x \cos \z & + & B_y \sin \z & & \\
@@ -51,22 +52,19 @@
!> \ingroup grp_free-boundary
!>
!> **free-boundary constraint**
!> <ul>
!> <ul>
!> <li> The normal field at the computational boundary, \f$\partial {\cal D}\f$, should be equal to
!> \f$\left({\bf B}_P + {\bf B}_C\right)\cdot {\bf e}_\theta \times {\bf e}_\zeta\f$,
!> where \f${\bf B}_P\f$ is the "plasma" field (produced by internal plasma currents) and is computed using virtual casing,
!> and \f${\bf B}_C\f$ is the "vacuum" field (produced by the external coils) and is given on input. </li>
!> <li> The plasma field, \f${\bf B}_P\f$, can only be computed after the equilibrium is determined,
!> but this information is required to compute the equilibrium to begin with; and so there is an iteration involved. </li>
!> but this information is required to compute the equilibrium to begin with; and so there is an iteration involved \cite y2020_hudson . </li>
!> <li> Suggested values of the vacuum field can be self generated; see xspech() for more documentation on this. </li>
!> </ul>
!>
!> **compute the normal field on a regular grid on the computational boundary**
!> <ul>
!> <li> For each point on the compuational boundary, casing() is called to compute the normal field produced by the plasma currents. </li>
!> <li> \todo There is a very clumsy attempt to parallelize this which could be greatly improved.
!>
!> </li>
!> <li> An FFT gives the required Fourier harmonics. </li>
!> </ul>
!> \see casing.f90
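
For orientation, the grid-based variant first precomputes the equivalent surface current on a regular vcNt x vcNz*Nfp grid of the plasma boundary (surfacecurrent) and then evaluates the plasma field at every computational-boundary point by a direct Biot-Savart sum over those sources (casinggrid). A minimal sketch of that quadrature is given below; the subroutine name, the argument layout, and the convention that jxyz carries the area-weighted current K dA are illustrative assumptions, not the casinggrid interface, and constant factors (mu0, SPEC's virtualcasingfactor) are omitted.

! Minimal sketch (not the casinggrid routine): midpoint-rule Biot-Savart sum over a
! precomputed surface-current grid.  Assumes jxyz(i,:) already holds the surface-current
! density times the local area element, K dA, and works in units with mu0 = 1.
subroutine biot_savart_sum( x, npts, pbxyz, jxyz, bfield )
  implicit none
  integer, parameter    :: wp = kind(1.0d0)
  real(wp), parameter   :: pi = 3.141592653589793_wp
  real(wp), intent(in)  :: x(3)            ! evaluation point on the computational boundary
  integer , intent(in)  :: npts            ! number of source points on the plasma boundary
  real(wp), intent(in)  :: pbxyz(npts,3)   ! source positions on the plasma boundary
  real(wp), intent(in)  :: jxyz(npts,3)    ! area-weighted surface currents K dA
  real(wp), intent(out) :: bfield(3)       ! plasma field at x
  integer  :: i
  real(wp) :: r(3), r3
  bfield = 0.0_wp
  do i = 1, npts
    r  = x - pbxyz(i,:)
    r3 = sqrt( sum(r*r) )**3
    ! accumulate K x r / |r|^3
    bfield(1) = bfield(1) + ( jxyz(i,2)*r(3) - jxyz(i,3)*r(2) ) / r3
    bfield(2) = bfield(2) + ( jxyz(i,3)*r(1) - jxyz(i,1)*r(3) ) / r3
    bfield(3) = bfield(3) + ( jxyz(i,1)*r(2) - jxyz(i,2)*r(1) ) / r3
  enddo
  bfield = bfield / ( 4.0_wp * pi )
end subroutine biot_savart_sum
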
@@ -81,11 +79,11 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )

use constants, only : zero, half, one, two, pi, pi2, ten

use numerical, only : small
use numerical, only : small, vsmall

use fileunits, only : ounit, lunit

use inputlist, only : Wmacros, Wbnorml, Igeometry, Lcheck, vcasingtol, vcasingper, Lrad
use inputlist, only : Wmacros, Wbnorml, Igeometry, Lcheck, vcasingtol, vcasingits, vcasingper, Lrad, Lvcgrid, vcNz, vcNt, Nfp

use cputiming, only : Tbnorml

@@ -96,7 +94,8 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
im, in, Ate, Aze, Ato, Azo, &
Nt, Nz, cfmn, sfmn, &
ijreal, ijimag, jireal, jiimag, &
globaljk, tetazeta, virtualcasingfactor, gteta, gzeta, Dxyz, Nxyz
globaljk, virtualcasingfactor, gteta, gzeta, Dxyz, Nxyz, &
Jxyz, Pbxyz, YESstellsym, prevcstride

!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!

@@ -105,12 +104,12 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
INTEGER, intent(in) :: mn, Ntz
REAL , intent(out) :: efmn(1:mn), ofmn(1:mn)

INTEGER :: lvol, Lcurvature, Lparallel, ii, jj, kk, jk, ll, kkmodnp, jkmodnp, ifail, id01daf, nvccalls, icasing, ideriv
REAL :: lss, zeta, teta, cszeta(0:1), tetalow, tetaupp, absacc, gBn
REAL :: Jxyz(1:Ntz,1:3), Bxyz(1:Ntz,1:3), dAt(1:Ntz), dAz(1:Ntz), distance(1:Ntz)
INTEGER :: lvol, Lparallel, ii, jj, kk, jk, ll, kkmodnp, jkmodnp, ifail, id01daf, nvccalls, icasing
REAL :: zeta, teta, gBn
REAL :: Bxyz(1:Ntz,1:3), distance(1:Ntz)
REAL :: absvcerr, relvcerr, resulth, resulth2, resulth4, deltah4h2, deltah2h
INTEGER :: vcstride, Nzwithsym, vcNznfp

!REAL :: vcintegrand, zetalow, zetaupp
! external :: vcintegrand, zetalow, zetaupp

BEGIN(bnorml)

@@ -121,11 +120,9 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!

ijreal(1:Ntz) = zero ! normal plasma field; 15 Oct 12;
!ijimag(1:Ntz) = zero

!jireal(1:Ntz) = zero
!jiimag(1:Ntz) = zero
ijimag(1:Ntz) = zero

vcNznfp = vcNz * nfp
!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!

#ifdef DEBUG
@@ -137,13 +134,128 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
#endif

!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!
! In stellarator-symmetric geometry the virtual-casing field also exhibits a symmetry which we can exploit
if (YESstellsym) then
! we only have to compute approximately half of the points due to symmetry, but including both the middle and the edge points (closed interval [0, pi])
Nzwithsym = min(Nz, (Nz + 3) / 2 )
else
Nzwithsym = Nz
endif
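! For example, Nz = 8 gives Nzwithsym = min(8, 11/2) = 5, so kk = 0..4 covers the first half of the
! zeta grid including both end points; the skipped half is reconstructed from symmetry further below.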

#ifdef COMPARECASING
! When comparing results, both methods should always run
if (.true.) then
#else
if ( Lvcgrid.eq.1 ) then
#endif
! Precompute Jxyz(:,1:3) and the corresponding positions on the high resolution plasma boundary
Pbxyz(:,1:3) = zero
Jxyz(:,1:3) = zero
!$OMP PARALLEL DO SHARED(Pbxyz, Jxyz) PRIVATE(jk, jj, kk, teta, zeta) COLLAPSE(2)
do kk = 0, vcNznfp-1 ;
do jj = 0, vcNt-1 ;
zeta = kk * pi2 / vcNznfp
teta = jj * pi2 / vcNt ;
jk = 1 + jj + kk*vcNt

! Each MPI rank only computes a 1/ncpu share of the surfacecurrent() calls
select case( Lparallel )
case( 0 ) ! Lparallel = 0
if( myid.ne.modulo(kk,ncpu) ) cycle
case( 1 ) ! Lparallel = 1
if( myid.ne.modulo(jk-1,ncpu) ) cycle
case default ! Lparallel
FATAL( bnorml, .true., invalid Lparallel in parallelization loop )
end select

! pbxyz and jxyz are both [out] parameters
call surfacecurrent( teta, zeta, Pbxyz(jk,1:3), Jxyz(jk,1:3) )
enddo
enddo

! MPI reduction to accumulate the positions and currents from all ranks (valid because the arrays were zero-initialized)
! and to broadcast the totals back to every rank in a single Allreduce
call MPI_Allreduce(MPI_IN_PLACE, Pbxyz(:,1:3), 3*vcNt*vcNznfp, MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_SPEC, ierr )
if (ierr.ne.MPI_SUCCESS) then
FATAL( bnorml, .true., error in MPI_Allreduce for Pbxyz )
endif
call MPI_Allreduce(MPI_IN_PLACE, Jxyz(:,1:3), 3*vcNt*vcNznfp, MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_SPEC, ierr )
if (ierr.ne.MPI_SUCCESS) then
FATAL( bnorml, .true., error in MPI_Allreduce for Jxyz )
endif
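! Every rank needs the complete set of source points and currents, because each rank later integrates
! over the whole plasma-boundary grid for its own subset of target points.  MPI_IN_PLACE with MPI_SUM
! is valid here because every element was zero-initialized and is written by exactly one rank.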

! Iterate over resolutions of the virtual-casing grid to obtain an accuracy estimate; the previous (coarser) solution is kept in ijimag for comparison
deltah2h = 1e12
!> The surface currents are precomputed at all points, but we may not have to integrate over the whole grid.
!> Start with a large stride over the plasma boundary (= how many points to skip at each step), then get progressively finer.
!> Do at least vcasingits iterations (sqrt(vcasingits) resolution per dimension) and halve the stride until the requested accuracy is reached.
prevcstride = min(prevcstride, INT(0.5*log(real(vcNt*vcNznfp/vcasingits))/log(2.0)))
prevcstride = max(prevcstride, 1) !> At least one step in the resolution cascade is required to determine an estimate of the current accuracy.
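! Illustrative numbers: for vcNt*vcNznfp = 4096 source points and vcasingits = 256 the cap above
! evaluates to INT(0.5*log(16.0)/log(2.0)) = 2, so the cascade runs strides 2**2 = 4, 2 and 1,
! with the coarsest pass sampling roughly vcasingits source points.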
do vcstride = prevcstride, 0, -1
!$OMP PARALLEL DO SHARED(Dxyz, Nxyz, Pbxyz, Jxyz, ijreal, ijimag) FIRSTPRIVATE(jk, gBn) COLLAPSE(2)
do kk = 0, Nzwithsym-1 ;
do jj = 0, Nt-1 ;
jk = 1 + jj + kk*Nt

! kk = 0 terms are also symmetric (relevant e.g. for tokamaks)
if ((Nz.eq.1) .and. (jj.gt.((Nt+1)/2))) cycle

! Each MPI rank only computes a 1/ncpu share of the casinggrid() calls
! Identical MPI parallelization scheme to the Lvcgrid=0 branch
select case( Lparallel )
case( 0 ) ! Lparallel = 0
if( myid.ne.modulo(kk,ncpu) ) cycle
case( 1 ) ! Lparallel = 1
if( myid.ne.modulo(jk-1,ncpu) ) cycle
case default ! Lparallel;
FATAL( bnorml, .true., invalid Lparallel in parallelization loop )
end select ! end of select case( Lparallel )

gBn = zero
globaljk = jk ! only used to check the precomputed values against dvcfield
call casinggrid( Dxyz(1:3,jk), Nxyz(1:3,jk), Pbxyz, Jxyz, 2**vcstride, gBn)

ijimag(jk) = ijreal(jk) ! previous solution (lower resolution)
ijreal(jk) = gBn ! current solution (higher resolution)
enddo ! end of do jj
enddo ! end of do kk

deltah4h2 = deltah2h
deltah2h = maxval(abs(ijimag - ijreal)) ! max delta between the h and h/2 solutions

! Order of the integration method: log(deltah4h2/deltah2h)/log(2.0) (expected to be ~1)
absvcerr = deltah2h
relvcerr = 2 * deltah2h / maxval(abs(ijreal)) ! overestimate the relative error by a factor of two
if (myid.eq.0) then
write(ounit, '("bnorml : ", 10x ," : relvcerr = ",es13.5," ; absvcerr = ",es13.5," ; resolution reduction = ",i8)') relvcerr, absvcerr, 2**vcstride
! print *, "Convergence order: ", log(deltah4h2/deltah2h)/log(2.0)
endif
! Tolerance already reached, exit the loop. (Different MPI ranks may exit at different strides)
if (absvcerr.lt.vcasingtol .and. relvcerr.lt.vcasingtol) then
prevcstride = vcstride
exit
endif
enddo ! end of do vcstride
#ifdef COMPARECASING
! To compare with the casing() implementation, copy the results into ijimag
if(Lvcgrid.eq.1) then
ijimag(1:Ntz) = ijreal(1:Ntz)
endif
endif
! When comparing results, both methods should always run
if (.true.) then
#else
else ! if not Lvcgrid
#endif

do kk = 0, Nz-1 ; zeta = kk * pi2nfp / Nz
do kk = 0, Nzwithsym-1 ;
zeta = kk * pi2 / Nz

if( Igeometry.eq.3 ) then ; cszeta(0:1) = (/ cos(zeta), sin(zeta) /)
endif
do jj = 0, Nt-1 ;
teta = jj * pi2 / Nt ; jk = 1 + jj + kk*Nt

do jj = 0, Nt-1 ; teta = jj * pi2 / Nt ; jk = 1 + jj + kk*Nt
! kk = 0 terms are also symmetric (relevant e.g. for tokamaks)
if ((Nz.eq.1) .and. (jj.gt.((Nt+1)/2))) cycle

globaljk = jk ! this is global; passed through to vcintegrand & casing;

@@ -156,35 +268,18 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
FATAL( bnorml, .true., invalid Lparallel in parallelization loop )
end select ! end of select case( Lparallel ) ; 09 Mar 17;

tetazeta(1:2) = (/ teta, zeta /) ! this is global; passed through to zetalow & zetaupp; 14 Apr 17;

!#ifdef COMPARECASING
!
! tetalow = tetazeta(1) - vcasingper * pi ; tetaupp = tetazeta(1) + vcasingper * pi ; absacc = vcasingtol
!
! id01daf = 1 ; call D01DAF( tetalow, tetaupp, zetalow, zetaupp, vcintegrand, absacc, gBn, nvccalls, id01daf ) ! 04 May 17;
!
! ijimag(jk) = gBn
!
!#endif

WCALL( bnorml, casing, ( teta, zeta, gBn, icasing ) ) ! tetazeta is global; 26 Apr 17;

WCALL( bnorml, casing, ( teta, zeta, gBn, icasing ) )
ijreal(jk) = gBn

!#ifdef COMPARECASING
! write(ounit,1000) myid, zeta, teta, ijreal(jk), ijimag(jk), ijreal(jk)-ijimag(jk)
!#endif
!
!#ifdef CASING
! write(ounit,1000) myid, zeta, teta, ijreal(jk), ijimag(jk), ijreal(jk)-ijimag(jk)
!#endif

1000 format("bnorml : ", 10x ," : myid=",i3," : \z =",f6.3," ; \t =",f6.3," ; B . x_t x x_z =",2f22.15," ; ":"err =",es13.5," ;")
#ifdef COMPARECASING
! write(ounit,1000) myid, zeta, teta, ijreal(jk), ijimag(jk), ijreal(jk)-ijimag(jk)
write(ounit,1000) myid, zeta, teta, ijreal(jk), ijimag(jk), ijreal(jk)-ijimag(jk)
1000 format("bnorml : ", 10x ," : myid=",i3," : \z =",f6.3," ; \t =",f6.3," ; B . x_t x x_z =",2f22.15," ; ":"err =",es13.5," ;")
#endif

enddo ! end of do jj;
enddo ! end of do kk;

endif ! end of if (Lvcgrid )
!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!

1001 format("bnorml : ", 10x ," : "a1" : (t,z) = ("f8.4","f8.4" ) ; gBn=",f23.15," ; ":" error =",f23.15" ;")
@@ -206,11 +301,6 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
case( 0 ) ! Lparallel = 0 ; 09 Mar 17;

RlBCAST(ijreal(1+kk*Nt:Nt+kk*Nt),Nt,kkmodnp) ! plasma; 03 Apr 13;
!RlBCAST(ijimag(1+kk*Nt:Nt+kk*Nt),Nt,kkmodnp)

!RlBCAST(jireal(1+kk*Nt:Nt+kk*Nt),Nt,kkmodnp)
!RlBCAST(jiimag(1+kk*Nt:Nt+kk*Nt),Nt,kkmodnp)

case( 1 ) ! Lparallel = 1 ; 09 Mar 17;

do jj = 0, Nt-1
@@ -220,10 +310,6 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )
jkmodnp = modulo(jk-1,ncpu)

RlBCAST(ijreal(jk),1,jkmodnp) ! plasma; 03 Apr 13;
!RlBCAST(ijimag(jk),1,jkmodnp)

!RlBCAST(jireal(jk),1,jkmodnp)
!RlBCAST(jiimag(jk),1,jkmodnp)

enddo

@@ -235,6 +321,25 @@ subroutine bnorml( mn, Ntz, efmn, ofmn )

enddo ! 11 Oct 12;

!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!
! In stellarator-symmetric geometry the virtual-casing field also exhibits a symmetry which we can exploit
if (YESstellsym) then
! We are dealing with half-open intervals [0, 2*pi) in theta and zeta, so we need to skip the first row and column when mirroring
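! Under the mirroring, the value at grid point (jj, kk), i.e. at (teta, zeta), fixes the value at
! the mirrored point (Nt-jj, Nz-kk), i.e. at (2*pi - teta, 2*pi - zeta), with opposite sign;
! that is what the index arithmetic below implements (for Nz = 1 only the theta mirroring applies).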
if (Nz.eq.1) then
do jj = 0, Nt/2;
jk = 1 + jj
ijreal(Ntz + 1 - jk) = -ijreal(jk+1)
enddo
endif

do kk = 0, Nzwithsym-2;
do jj = 0, Nt-1;
jk = 1 + jj + kk*Nt

ijreal(Ntz + 1 - jk) = -ijreal(jk+Nt+1)
enddo
enddo
endif
!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!-!

ijreal(1:Ntz) = ijreal(1:Ntz) * virtualcasingfactor