Fix wraparound bug #246

Merged: 14 commits into SunnySuite:main on May 20, 2024

Conversation

Lazersmoke (Contributor)

This PR fixes the wraparound bug by using the 1/(T-|t|) algorithm. It also incidentally allows the edge case nomega = 1, which was not previously an allowed value for dynamical_correlations.

Compared to before, the trajectory is now only computed to half the previous length (the rest is the zero padding that prevents wraparound).

In the interest of keeping PRs small, this PR fixes the same 1/nomega normalization bug as #237 (but in the way that is relevant here), and it fixes the same wraparound bug as #217, this time without computing both (Sx,Sy) and (Sy,Sx).
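
For concreteness, here is a minimal sketch (plain Julia, not the code in this PR) of what the $1/(T-|t|)$ normalization means: a length-$T$ trajectory has only $T-|t|$ overlapping sample pairs at time lag $t$, so the lag-$t$ sum is divided by $T-|t|$ rather than being wrapped periodically.

```julia
# Minimal sketch, not the PR's implementation: unbiased autocorrelation of a
# single trajectory s(1:T), with no periodic wraparound.
function unbiased_autocorrelation(s::AbstractVector)
    T = length(s)
    C = zeros(ComplexF64, T)
    for t in 0:T-1
        acc = zero(ComplexF64)
        for τ in 1:T-t
            acc += conj(s[τ]) * s[τ+t]   # only T - t pairs are available at lag t
        end
        C[t+1] = acc / (T - t)           # the 1/(T - |t|) normalization
    end
    return C
end
```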

@Lazersmoke (Contributor, Author)

Here are a few plots in support of this PR.

In this one, plotted in the time domain, the old behavior is in orange (both new and old are computed from exactly the same trajectories, which is why the old one has half the length). Essentially, the bug is that the left and right portions of the blue line (new behavior) were superimposed and added together to obtain the (old) orange line.

The green line is the new behavior with an additional cosine squared filter applied, to eliminate the sharp feature.

[image: time-domain comparison of the old (orange), new (blue), and cosine-filtered (green) behavior]

On the level of the spectrum, not much has changed in this example (this PR does not necessarily fix any particular artifact, but it is a correctness fix, so it may fix things in situations we don't know about).

[image: the corresponding spectra]

N.B. the old, buggy result also has half the energy resolution, due to the superposition mentioned above.

For this example, I've deliberately gone to low temperature so that the thermal averaging doesn't cause the signal to decay on its own. In NMR terms, the thermal averaging with Langevin at finite temperature (and ImplicitMidpoint at zero temperature) is capable of capturing T2-type decay, but I have suppressed this by using a low temperature. The cosine filter fakes a T1-type decay; a true T1-type decay can also be implemented by using a finite temperature in ImplicitMidpoint.
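
For reference, a sketch of a cosine-squared filter of this kind (illustrative only; the exact window convention used here may differ): the lag-$t$ correlation estimate is multiplied by $\cos^2(\pi t / 2T)$, which equals 1 at $t = 0$ and vanishes quadratically as $|t| \to T$.

```julia
# Illustrative sketch of a cosine-squared filter applied to a correlation
# estimate C(t), t = 0, 1, ..., T-1. The PR's exact convention may differ.
cos2_window(t, T) = cos(π * t / (2T))^2

apply_cos2_filter(C::AbstractVector) =
    [C[t+1] * cos2_window(t, length(C)) for t in 0:length(C)-1]
```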

@kbarros (Member) commented Apr 28, 2024

This is really nice work. Although Sam's original motivation may have been real-time correlations, it looks like this approach will solve the "energy bleeding" problem in $S(q,\omega)$ data that becomes severe at Goldstone modes (the previous mitigation strategy, :symmetrize, would no longer be needed). Attached is an image with four panels. The left column is Sunny#main, while the right column is enabled by the new feature branch. Top-left is default Sunny. Bottom-left is Sunny#main with the option process_trajectory=:symmetrize passed to dynamical_correlations (correcting for a factor-of-2 error). Top-right is this feature branch with defaults (correcting for a missing 1/nω factor), and bottom-right is the feature branch with a cosine-squared filter.

Some smoothing of the bottom-right panel is, I believe, a reasonable behavior given the finite length of the trajectory data, which imposes an energy-resolution cutoff in any case (although I'm not sure whether the current implementation applies this minimal level of smoothing).

[image: four-panel comparison described above]

Plots generated using David's test script in the details below:

using Sunny, GLMakie

latvecs = lattice_vectors(1, 1, 1.2, 90, 90, 90)
cryst = Crystal(latvecs, [[0,0,0]])

units = Sunny.Units.theory
seed = 101
sys_rcs = System(cryst, (10, 10, 1), [SpinInfo(1, S=1, g=1)], :dipole; units, seed)

## Model parameter
J = 1.0
h = 0.5 
D = 0.05

## Set exchange interactions
set_exchange!(sys_rcs, J, Bond(1, 1, [1, 0, 0]))

## Single-ion anisotropy
Ss = spin_operators(sys_rcs, 1)
set_onsite_coupling!(sys_rcs, D*Ss[3]^2, 1)


## External field
set_external_field!(sys_rcs, h*[0,0,3])

##
randomize_spins!(sys_rcs)
minimize_energy!(sys_rcs)
out = minimize_energy!(sys_rcs)
println(out)

Δt = 0.025
kT = 0.2
λ = 0.1
langevin = Langevin(Δt; kT, λ)

for _ in 1:10_000
    step!(sys_rcs, langevin)
end

sc = dynamical_correlations(sys_rcs; Δt=2Δt, nω=100, ωmax=5.0)
add_sample!(sc, sys_rcs)

nsamples = 50
for _ in 1:nsamples
    for _ in 1:1_000
        step!(sys_rcs, langevin)
    end
    add_sample!(sc, sys_rcs)
end

density = 100
qs, xticks = reciprocal_space_path(sc.crystal, [(0.0, 0.5, 0.0), (0.5, 0.5, 0.0), (1.0, 0.5, 0.0)], density)
data = intensities_interpolated(sc, qs, intensity_formula(sc, :trace; kT); interpolation=:round);

formula = intensity_formula(sc, :trace; kT=Inf)
qs = available_wave_vectors(sc)
is = intensities_interpolated(sc, qs, formula; negative_energies=true)
total_weight = sum(is) / *(size(sys_rcs.coherents)...)
println("Total spectral weight (classical): ", total_weight)
heatmap(1:size(data, 1), available_energies(sc), data; axis=(xticks=xticks,), colorrange=(0.0, 0.5))

The core idea of the new approach is that the end of a finite-length dynamical trajectory should not be treated as periodically wrapping around to its beginning. One can still take advantage of the FFT from time to frequency space by appending a zero-buffer to the end of the trajectory. The above-mentioned $1/(T-|t|)$ factor accounts for the reduced number of statistical samples at increasing time shift $|t|$. For all time shifts $|t|$ smaller than $T$, this "bare" approach gives statistically unbiased estimates of the time correlations. Note, however, that there is a hard cutoff in the availability of time-correlation data at the maximum time shift $|t| = T$. Given this, it is advantageous to apply the cutoff smoothly. The cosine-squared filter is one way to do so, and ensures that the estimated time correlations go smoothly to 0 as $|t| \to T$. The quadratic vanishing of the $\cos^2$ window as $|t| \to T$ ensures that the poor statistics near $|t| \sim T$ do not cause problematic divergences in the estimated time correlations.
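
Here is a compact sketch of this pipeline for a single real trajectory, assuming FFTW.jl and omitting overall $\Delta t$ and normalization prefactors; it illustrates the idea only and is not Sunny's internal code:

```julia
using FFTW

# Sketch only: estimate the spectrum of a real trajectory s(1:T) without
# periodic wraparound. Overall prefactors are omitted.
function correlation_spectrum(s::AbstractVector{<:Real})
    T = length(s)
    padded = vcat(ComplexF64.(s), zeros(ComplexF64, T))  # zero buffer of length T
    C = ifft(abs2.(fft(padded)))                         # linear (non-wrapped) correlations
    for t in 1:T-1
        w = cos(π * t / (2T))^2 / (T - t)                # cos² cutoff and 1/(T - |t|) factor
        C[t+1]     *= w                                  # lag +t
        C[end-t+1] *= w                                  # lag -t
    end
    C[1] /= T                                            # lag 0: T samples, window = 1
    return fft(C)                                        # spectrum on the padded frequency grid
end
```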

Details are in the attached note. With some revision, it can evolve into a specification of the Sunny behavior.
causal_correlations.pdf

@ddahlbom (Member) commented Apr 28, 2024

I agree this is very nice work. @Lazersmoke shared a few examples explicitly examining the convergence of LL to LSWT as T → 0. The behavior with respect to satisfying the quantum sum rule by applying the C2Q factor was much the same as before: there is a particular temperature where the procedure works well, and below that temperature the sum is overestimated. In other words, addressing this low-T "problem" seems to be orthogonal to the question of merging this PR.

In addition to having the procedure documented precisely, here are a few practical questions that should be resolved before merging.

  1. If the user gives dynamical_correlations exactly the same parameters as before, how will the experience differ in terms of
    (a) memory requirements
    (b) execution time
    (c) anything else (besides getting more correct results)?
  2. As we talked about at the meeting, what should the interface be for the window option? I believe Sam suggested a required window keyword. Would the initial options be :rectangular and :cosine?
  3. Do we want to keep a process_trajectory keyword? I would suggest moving to a symmetrize keyword that takes a boolean argument.
  4. It looks like all the test failures are due to reference intensities, which are expected to change. So this should just be a matter of a quick update.

@Lazersmoke (Contributor, Author)

Thanks for the list David!

> (a) memory requirements
> (b) execution time

The updated algorithm is a little bit slower, since there's an additional fft/ifft to apply the time-domain rescaling by 1/(T-|t|). As far as I know this is not avoidable. The transform is currently out-of-place, but it could be made in-place as a later optimization without affecting the UI.

The size of SampledCorrelations in memory will actually stay the same, and the integration time is cut in half instead. This is forced because the number of frequencies in the output is fixed by the user's nomega parameter.

> what should the interface be for the window option?

I think Kip wanted :cosine by default ("a reasonable behavior given the finite-length of the trajectory data"), which is fine by me. I only ask that it's possible to disable the filter, since the :cosine filter is a little bit lossy (inverting it requires dividing by small numbers and is ill-conditioned).
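
A quick numerical illustration of that ill-conditioning (a toy calculation, not from the PR):

```julia
# Numbers for scale only: undoing the cosine-squared filter means dividing by
# cos²(π t / 2T), which is tiny at the largest time lags, so any noise there
# gets amplified enormously.
T = 1000
t = T - 1
1 / cos(π * t / (2T))^2   # ≈ 4e5: noise at the last lag is amplified ~400,000×
```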

The windowing, and items 3 & 4, are up to how Kip would like to handle them. This branch is specific to this PR and edits are allowed, so feel free to adjust as you like!

@kbarros (Member) commented May 18, 2024

In a collaborative effort, we have updated this branch in several ways.

With this PR, David's test script linked above now works very nicely. The intensity scale is the same as on main, the classical sum rule is exactly respected, and the artifacts at the Goldstone mode are completely gone.

[image: updated results from David's test script]

@ddahlbom (Member)

Just spent an hour or two running this through its paces (trying different models and combinations of options). Everything looks good from my perspective. I think this will be a big improvement.

Curious users can learn about it from an extensive comment in the source code. For experts, a keyword argument with the same name still exists, but now accepts a function rather than a symbol.
@kbarros (Member) commented May 19, 2024

Slightly revised note specifying the new algorithm:
structure_factor_from_classical_dynamics.pdf

@kbarros (Member) commented May 19, 2024

Here are some more polished results. I've modified David's script from before to run on a 20x20 lattice, and to now collect 1000 samples.

Updated script
using Sunny, GLMakie
Makie.inline!(true)

latvecs = lattice_vectors(1, 1, 1.2, 90, 90, 90)
cryst = Crystal(latvecs, [[0,0,0]])

units = Sunny.Units.theory
seed = 101
sys_rcs = System(cryst, (20, 20, 1), [SpinInfo(1, S=1, g=1)], :dipole; units, seed)

## Model parameter
J = 1.0
h = 0.5 
D = 0.05

## Set exchange interactions
set_exchange!(sys_rcs, J, Bond(1, 1, [1, 0, 0]))

## Single-ion anisotropy
Ss = spin_operators(sys_rcs, 1)
set_onsite_coupling!(sys_rcs, D*Ss[3]^2, 1)


## External field
set_external_field!(sys_rcs, h*[0,0,3])

##
randomize_spins!(sys_rcs)
minimize_energy!(sys_rcs)
out = minimize_energy!(sys_rcs)
println(out)

Δt = 0.025
kT = 0.2
λ = 0.1
langevin = Langevin(Δt; kT, λ)

for _ in 1:10_000
    step!(sys_rcs, langevin)
end

# Add `process_trajectory=:symmetrize` here
sc = dynamical_correlations(sys_rcs; Δt=2Δt, nω=100, ωmax=5.0)
add_sample!(sc, sys_rcs)

nsamples = 1000
for _ in 1:nsamples
    for _ in 1:1_000
        step!(sys_rcs, langevin)
    end
    add_sample!(sc, sys_rcs)
end

density = 100
qs, xticks = reciprocal_space_path(sc.crystal, [(0.0, 0.5, 0.0), (0.5, 0.5, 0.0), (1.0, 0.5, 0.0)], density)
data = intensities_interpolated(sc, qs, intensity_formula(sc, :trace; kT); interpolation=:round);

formula = intensity_formula(sc, :trace; kT=Inf)
qs = available_wave_vectors(sc)
is = intensities_interpolated(sc, qs, formula; negative_energies=true)
total_weight = sum(is) / *(size(sys_rcs.coherents)...)
println("Total spectral weight (classical): ", total_weight)

# Include a factor of 2 for `process_trajectory=:symmetrize`
heatmap(1:size(data, 1), available_energies(sc), data; axis=(xticks=xticks,), colorrange=(0.0, 0.5))

[image: results from the updated script (panels discussed in the takeaways below)]

Some takeaways:

  1. The energy-blurring artifact on 0.5.9 at the zero-energy point (1/2,1/2,0) is remarkably bad.
  2. On 0.5.9, the process_trajectory=:symmetrize option is quite good at reducing the energy-blurring, but not perfect.
  3. This PR seems to completely eliminate the energy-blurring artifact. It is apparent, however, that the energy resolution is reduced by about a factor of 2 (bottom left panel).
  4. A 2x loss of energy resolution is understandable given that this PR reduces the dynamical trajectory length by a factor of 2 at fixed nω and ωmax. Therefore a fairer benchmark is to enlarge nω by a factor of two (bottom-right panel); a rough estimate of this resolution trade-off follows this list. To get the colors on the same scale, a 2x rescaling of intensity is required due to this bug: "The scale of 'broadened intensities' should be independent of calculator (SpinWaveTheory or dynamical_correlations)" #264
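
As a back-of-the-envelope check (the standard FFT resolution relation, not taken from the PR): the energy resolution of the estimated spectrum is set by the integrated trajectory duration,

$$\Delta\omega \sim \frac{2\pi}{T_{\mathrm{traj}}},$$

so halving the integrated trajectory (the second half is now zero padding) roughly doubles $\Delta\omega$, while doubling nω at fixed ωmax restores the original trajectory duration and hence the original resolution.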

Here are the numerical costs to collect 10 samples, for the three different calculation methods (let's assume :symmetrize has negligible overhead). In seconds:

| Prior to this PR | With this PR | With this PR, double nω |
|---|---|---|
| 4.43 s | 2.55 s | 4.70 s |

So this PR offers a considerable speedup at fixed nω (but with some sacrifice of energy resolution), and is comparable in performance even when nω is doubled from 100 to 200, yielding some subjective (to our eyes) improvement in the "true" resolution.

@kbarros merged commit 94c0333 into SunnySuite:main on May 20, 2024 (2 checks passed).