# 4. Use `scipp`

## Status

Current

## Context

We need to choose a library that helps us to transform "raw" neutron or muon data from the DAE into processed
quantities that we scan over.

Desirable features include:

- Uncertainty propagation, following standard uncertainty-propagation rules. While this could apply to any data in
  principle, it is especially relevant for neutron/muon counts data.
- Unit handling & conversions:
  - Simple unit conversions, like microns to millimetres.
  - Neutron-specific unit conversions, like time-of-flight to wavelength.
- Ability to handle the typical types of data we acquire from the DAE and process as part of a scan:
  - Histograms of neutron/muon counts
  - N-dimensional arrays
  - Event-mode data (in future)
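To make the neutron-specific conversion concrete, here is a minimal `numpy` sketch of time-of-flight to wavelength
via the de Broglie relation. This is illustrative only: the function name, the example flight path, and the rounded
physical constant are assumptions for this sketch, not values taken from this ADR.

```python
import numpy as np

# Planck constant divided by the neutron mass, in m^2/s (rounded).
H_OVER_M_N = 3.956e-7

def tof_to_wavelength(tof_s: np.ndarray, flight_path_m: float) -> np.ndarray:
    """Convert neutron time-of-flight (seconds) to wavelength (angstroms).

    Uses lambda = (h / m_n) * t / L for an assumed total flight path L.
    """
    wavelength_m = H_OVER_M_N * tof_s / flight_path_m
    return wavelength_m * 1e10  # metres -> angstroms

# A 5 ms time-of-flight over an assumed 10 m flight path is roughly a
# 2 angstrom neutron.
print(tof_to_wavelength(np.array([5e-3]), flight_path_m=10.0))
```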

Candidate solutions include:

- `mantid`
- `scipp`
- `uncertainties`
- `numpy` + home-grown uncertainty propagation

## Decision

- Default to using `scipp` for most cases.
- Explore using `mantid` via autoreduction APIs where we need to do more complex reductions.

## Justification & Consequences

### `numpy`

Using `numpy` by itself is ruled out because we would need to write our own uncertainty-propagation code,
which is error-prone.

`numpy` on its own may still be used in places where uncertainty propagation is not needed.
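As a sketch of what we would otherwise have to write and maintain by hand, here is the sort of home-grown
propagation code this option implies: normalising detector counts by monitor counts, combining relative errors in
quadrature under the usual assumptions of Poisson (sqrt(N)) statistics and uncorrelated inputs. The function name
and numbers are hypothetical.

```python
import numpy as np

def normalise(counts: np.ndarray, monitor: np.ndarray):
    """Normalise counts by monitor, propagating Poisson errors by hand.

    For a ratio r = c / m of uncorrelated values:
    (sigma_r / r)^2 = (sigma_c / c)^2 + (sigma_m / m)^2,
    and with sigma = sqrt(N) each relative-variance term reduces to 1/N.
    """
    ratio = counts / monitor
    rel_err = np.sqrt(1.0 / counts + 1.0 / monitor)
    return ratio, ratio * rel_err

ratio, err = normalise(np.array([100.0]), np.array([400.0]))
print(ratio, err)  # ratio 0.25 with absolute error ~0.028
```

Every such formula must be derived, implemented, and tested per operation, which is exactly the error-prone surface
area we want a library to own.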

### `uncertainties`

The `uncertainties` package tracks correlations between values, so it may scale badly on "large" arrays such as
counts data from the DAE, where the correlation matrices can become very large. It would need to be combined with
another library, e.g. `pint`, to also support physical units. It has no neutron-specific functionality.
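A back-of-envelope estimate (our own, illustrative; the bin count is an assumption, not a measured DAE figure)
shows why dense correlation tracking scales badly: a full correlation/covariance matrix over n values needs on the
order of n * n entries.

```python
# Hypothetical spectrum size: one 100,000-bin time-of-flight histogram.
n_bins = 100_000

# A dense n x n matrix of float64 correlations/covariances:
bytes_needed = n_bins * n_bins * 8  # 8 bytes per float64
print(f"{bytes_needed / 1e9:.0f} GB")  # 80 GB for the full matrix
```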

### `mantid`

`mantid` is not easily installable at present (e.g. via `pip`).

While we have a way to call out to `mantid` via a REST API, initial tests have shown that the latency of this
approach is around 15 seconds. This makes it unsuitable for many types of scan, for example alignment scans, where
count times are far shorter than 15 seconds.

However, for complex reductions, we should still consider passing data out to `mantid`. This is especially true if
a reduction depends significantly on instrument geometry, on instrument-specific corrections, or on other details
with which `mantid` is best equipped to deal.

Calling out to `mantid` via an API should also be considered if a reduction step may use significant compute
resources.

### `scipp`

`scipp` will be our default way of taking raw data from the DAE and processing it into a scanned-over quantity.

However, where a reduction is expensive in compute terms, or complex (in implementation, or in the required
knowledge of geometry and instrument-specific corrections), we should consider using `mantid` via the
autoreduction API instead.