Feature Request
During the HAWC collaboration meeting in Puerto Vallarta, it was noted that a 3ML analysis was much slower than the corresponding Gammapy analysis. Quentin looked into possible bottlenecks in the HAL code. Here is a summary (from him):
- The PSF for extended sources is taken at the center of the ROI and not at the position of each source, which could be a problem for large ROIs:
https://github.com/search?q=repo%3AthreeML/hawc_hal%20point_source_image&type=code
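A minimal sketch of the per-source idea, with hypothetical stand-ins (`psf_at`, `make_convolutor` are illustrative names, not the actual hawc_hal API): build and cache one PSF kernel per source position instead of a single kernel at the ROI center.

```python
# Sketch: one PSF kernel per source position instead of a single kernel
# at the ROI center.  psf_at / make_convolutor are hypothetical stand-ins
# for the hawc_hal response/PSF machinery.

def psf_at(lon, lat):
    # placeholder: would interpolate the detector-response PSF at this position
    return ("psf", round(lon, 2), round(lat, 2))

def make_convolutor(psf):
    # placeholder: would build the actual convolution kernel
    return ("convolutor", psf)

_convolutors = {}

def convolutor_for_source(lon, lat):
    # round the key so nearby sources share a kernel;
    # the PSF varies slowly across the sky
    key = (round(lon, 2), round(lat, 2))
    if key not in _convolutors:
        _convolutors[key] = make_convolutor(psf_at(lon, lat))
    return _convolutors[key]
```

Rounding the cache key keeps the number of kernels bounded even for a large ROI with many sources.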
- The PSF convolutor for extended sources uses the default `psf_integration_method='exact'`, while for point sources it is set to `'fast'` (line 209 and lines 215 to 217 in ce74038; `hawc_hal/hawc_hal/psf_fast/psf_convolutor.py`, lines 24 to 25 in ce74038)
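A sketch of threading that knob through to the extended-source path, so it can default to the cheaper mode the point-source path already uses (the class below is a hypothetical stand-in, not the real hawc_hal convolutor):

```python
# Hypothetical sketch: expose psf_integration_method on the
# extended-source convolutor, defaulting to the cheaper "fast" mode
# as the point-source path already does.

class PSFConvolutor:  # stand-in, not the real hawc_hal class
    def __init__(self, psf, psf_integration_method="fast"):
        if psf_integration_method not in ("exact", "fast"):
            raise ValueError("psf_integration_method must be 'exact' or 'fast'")
        self.psf = psf
        self.method = psf_integration_method

# callers wanting the current behavior would opt back in explicitly:
conv = PSFConvolutor(psf=None, psf_integration_method="exact")
```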
- For extended sources the PSF convolution is not cached, so it is recomputed even when the source parameters do not change (lines 999 to 1030 in ce74038)
- The following should be moved to a function that returns npred for each source, so it can be cached when that source's parameters do not change; it seems that only the PSF interpolation is currently cached (lines 974 to 992 in ce74038)
- Many functions use this to compute npred, but there is no caching (line 966 in ce74038)
- The likelihood could be evaluated on the flat geometry without the need to reproject to HEALPix, which could help if the reprojection is slow (lines 1032 to 1045 in ce74038)
- The loops at line 844 and line 972 in ce74038 could be parallelized
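A minimal sketch of parallelizing such a per-source loop with the standard library (`process_source` is a placeholder for the actual loop body; for NumPy-heavy work threads can already help because NumPy releases the GIL, otherwise a `ProcessPoolExecutor` is the usual choice):

```python
# Sketch: parallelize a per-source loop with concurrent.futures.
from concurrent.futures import ThreadPoolExecutor

def process_source(source):
    # placeholder for the per-source work done in the loop body
    return source * 2

sources = [1, 2, 3, 4]

# pool.map preserves input order, so results line up with sources
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_source, sources))
```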
- It is not entirely clear how the parameter-change property is used, but it could differentiate between spatial and spectral parameters: if only spectral parameters change, it is not necessary to repeat the PSF convolution; the cached value can be rescaled with a different weight in each energy bin.
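The rescaling idea in the last point can be sketched as follows (shapes and names are illustrative): cache the PSF-convolved spatial template once per energy bin, then a spectral-only parameter change reduces to a cheap per-bin multiplication instead of a re-convolution.

```python
# Sketch: cache the PSF-convolved spatial template per energy bin,
# then apply only a per-bin spectral weight on spectral-only changes.
import numpy as np

n_bins, npix = 3, 5
rng = np.random.default_rng(0)

# cached once: depends on spatial parameters only
convolved = rng.random((n_bins, npix))

def model_map(spectral_weights):
    # spectral-only update: per-bin rescale via broadcasting,
    # no convolution repeated
    return spectral_weights[:, None] * convolved

old = model_map(np.array([1.0, 1.0, 1.0]))
new = model_map(np.array([2.0, 0.5, 1.0]))
```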