Conversation

@nvaytet nvaytet commented Jan 31, 2026

Some monitors can be placed between choppers and even before the first choppers.
Until now, we only covered a range of distances in the tof lookup table that started after the last chopper in the cascade.

This PR generalizes the mechanism so we can build LUTs for the full beamline range.
For Odin, it now looks as follows:
(Figure: tof lookup table distance range for Odin, now covering the full beamline.)

Note that the SimulationResults class has changed: it is now a dict of event readings at various places along the beamline, where each reading is essentially what SimulationResults used to be before.
Note, however, that despite this change, no updates to the workflow that computes a LUT were necessary (e.g. the notebooks for Odin and Dream).

The only small difference (apart from widening the LtotalRange) is that we will likely need to increase the LookupTableRelativeErrorThreshold parameter in these notebooks, as the mixing of wavelengths is high at small distances, before the first choppers.

lut_wf[time_of_flight.NumberOfSimulatedNeutrons] = 10_000
lut_wf[time_of_flight.LtotalRange] = (
    sc.scalar(0.0, unit="m"),
    sc.scalar(20.0, unit="m"),
)
@nvaytet (Member, Author) commented:

Using 0 here raised the "No simulation reading found for distance -0.2m." error

# according to which component reading to use, we simply loop over distances one
# by one here.
for dist in distances:
# Find the correct simulation reading
@nvaytet (Member, Author) commented:

Could optimize here instead of starting the search from the beginning every time, but not sure the added complexity (probably involving keeping track of an index for the relevant component reading) is worth the (probably minimal) performance gain?
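The optimization floated here could keep a running index into the component readings instead of rescanning from the start for every distance. A hypothetical sketch of that idea (not the actual implementation; `reading_edges` and `assign_readings` are made-up names for illustration):

```python
# Hypothetical sketch of the index-tracking idea discussed above.
# `reading_edges` (made-up name) holds the sorted distances at which each
# component reading starts; `distances` is the sorted LUT distance axis.
def assign_readings(distances, reading_edges):
    """Map each distance to the index of the last edge at or before it."""
    out = []
    i = 0  # running index into reading_edges; never rewinds
    for dist in distances:
        # Advance while the next edge is still at or before this distance.
        while i + 1 < len(reading_edges) and reading_edges[i + 1] <= dist:
            i += 1
        if dist < reading_edges[0]:
            raise ValueError(f"No simulation reading found for distance {dist}m.")
        out.append(i)
    return out
```

Since both axes are sorted, this is a single forward pass over ~200-400 distances, so the saving over restarting the search each time would indeed be minimal.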

@SimonHeybrock (Member) commented:

This loop runs once for every chopper setting, and the loop is over all pixels?

@nvaytet (Member, Author) replied:

> and the loop is over all pixels?

No, the loop is over all distances in the table. Typically, there are about 200-400 points in the distance dimension of the table.

@SimonHeybrock (Member) replied:

👍

@nvaytet nvaytet marked this pull request as ready for review January 31, 2026 23:25
@nvaytet nvaytet requested a review from SimonHeybrock January 31, 2026 23:25
The chopper parameters used in the simulation (if any).
"""

readings: dict[str, BeamlineComponentReading]
@SimonHeybrock (Member) commented:

Unclear what this str is from the docstrings.

@nvaytet (Member, Author) replied:

Updated docstring to be clearer.

@SimonHeybrock (Member) commented:

Some more things I found with Claude:

Potential Issues

1. Hardcoded time_unit = "us"

The old code used the input's unit: time_unit = simulation.time_of_arrival.unit. The new code hardcodes time_unit = "us". This is a silent behavior change that could cause unit mismatches if inputs have different units.

2. Breaking API change not called out

SimulationResults is a public class. The change from direct attributes (time_of_arrival, wavelength, etc.) to a readings dict is breaking. Users creating SimulationResults directly (e.g., from McStas simulations, as mentioned in existing docstrings) will need to update their code. The PR body says "no updates to the workflow that computes a LUT were necessary", but the notebook changes themselves contradict this: they now require sim.readings["t0"] instead of just sim.

4. Deleted chunking test

test_lut_workflow_computes_table_in_chunks was removed. Was this testing important behavior that's now lost? The comment said "Lots of neutrons activates chunking". The new code has no equivalent test for large-dataset behavior.

Minor observations

  • The error message "No simulation reading found for distance X m. It is likely lower than the simulation reading closest to the source." uses internal terminology ("simulation reading"). "Position is before the neutron source" might be clearer.

  • _to_component_reading handles both source (uses birth_time) and choppers (uses toa) via if "toa" in events.coords. A comment explaining this would help.

  • The workflow test change from 0.0 m to 20.0 m in test_GenericTofWorkflow_assigns_Ltotal_coordinate silently narrows test coverage. The comment explains it raised an error, but is that expected behavior? Or should the workflow handle requesting distances before the source more gracefully?
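The source-vs-chopper branching mentioned above could be documented along these lines. This is only an illustrative sketch using plain dicts; the real `_to_component_reading` operates on scipp data, and the names and structure here are assumptions:

```python
# Illustrative sketch of the source-vs-chopper branching described above.
# The real code operates on scipp data; plain dicts are used here to show
# the idea, and all names are assumptions.
def to_component_reading(events: dict) -> dict:
    # A source reading records when each neutron was born ("birth_time");
    # a chopper reading records its time-of-arrival at the chopper ("toa").
    # Both are mapped onto a common "time_of_arrival" field so downstream
    # code can treat every beamline component uniformly.
    if "toa" in events["coords"]:
        time = events["coords"]["toa"]  # chopper: arrival time at component
    else:
        time = events["coords"]["birth_time"]  # source: emission time
    return {"time_of_arrival": time, "wavelength": events["coords"]["wavelength"]}
```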

@nvaytet (Member, Author) commented Feb 2, 2026

> The old code used the input's unit: time_unit = simulation.time_of_arrival.unit. The new code hardcodes time_unit = "us". This is a silent behavior change that could cause unit mismatches if inputs have different units.

We ensure unit conversions are done at the relevant places in the downstream code that uses the table (in eto_to_tof.py).
This is a way to standardize simulation inputs from different codes (tof, McStas, ...) and I think makes reading the code simpler (knowing what units are used throughout).
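The standardize-on-ingestion principle described here could be sketched as follows. This is a pure-Python illustration only; the actual code uses scipp's unit handling, and the names and conversion table here are assumptions:

```python
# Illustrative sketch: normalizing simulation inputs from different codes
# (tof, McStas, ...) to a single time unit ("us") on ingestion, so the
# downstream table code never has to guess units. All names here are
# assumptions for the sketch.
_TO_US = {"us": 1.0, "ms": 1.0e3, "s": 1.0e6}

def standardize_time(values: list[float], unit: str) -> list[float]:
    """Convert times in `unit` to microseconds, the table's fixed unit."""
    try:
        factor = _TO_US[unit]
    except KeyError:
        raise ValueError(f"Unsupported time unit: {unit!r}")
    return [v * factor for v in values]
```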

> SimulationResults is a public class. The change from direct attributes (time_of_arrival, wavelength, etc.) to a readings dict is breaking. Users creating SimulationResults directly (e.g., from McStas simulations, as mentioned in existing docstrings) will need to update their code.

True, but so far there is no other code out there doing this. Only tables built with tof exist (apart from some notebooks in my personal space, and I am aware of this particular change there). Generalizing the SimulationResults type in place avoids breaking the existing notebooks even more than the alternative of leaving that type untouched, creating a new type, and changing the pipeline that generates the LUT.

> The PR body says "no updates to the workflow that computes a LUT were necessary" but the notebook changes themselves contradict this - they now require sim.readings["t0"] instead of just sim.

This is a detail in the docs to create educational figures, and not something the users will ever do themselves. So I think we can live with that change.

> test_lut_workflow_computes_table_in_chunks was removed. Was this testing important behavior that's now lost? The comment said "Lots of neutrons activates chunking". The new code has no equivalent test for large-dataset behavior.

This was just testing that running with more neutrons did not raise an error; it never actually checked any details of the chunked processing, so it was not a good test (it was the same as the previous test, only with more neutrons in the simulation). The new code processes the data one distance at a time, which is effectively even finer chunking (before, chunks were roughly 20 distances at a time). I see no use in keeping a test for this, as there is no longer any branching in the code: chunking is always used.

@SimonHeybrock (Member) left a comment:

👍

@nvaytet nvaytet merged commit 474a847 into main Feb 2, 2026
4 checks passed
@nvaytet nvaytet deleted the full-beamline-lut branch February 2, 2026 09:26