Home
This page gives Kyle's thoughts on possible future improvements to CubeFit (or similar code) as of May 2017, and a bit of history.
CubeFit does a tolerable job estimating the underlying galaxy and fitting the relevant positions most of the time. However, there are a few areas with room for improvement. Some of these are things that could be done in the near term ("current-gen") and some are thinking towards WFIRST ("next-gen").
Currently, extracting a SN spectrum is a two-step process:
- CubeFit is given a 3-d PSF model and ADR (atmospheric differential refraction) for each epoch (based on the photometric channel). This model is fixed throughout the CubeFit process.
- ExtractStar is given the galaxy-subtracted cubes (now only containing the SN) and fits a 3-d PSF model (including ADR) when extracting the SN spectrum.
This means that the PSF model used by CubeFit is not necessarily consistent with the model used by ExtractStar. In fact, CubeFit estimates the SN spectrum in each epoch as part of the fitting process, but that estimate might disagree with the final estimate from ExtractStar due to differences in the PSF model. ExtractStar also fits the SN position and ADR, so these can differ as well. Inconsistency in the SN position is probably the most pressing issue: CubeFit has access to more information about the SN position than ExtractStar, because it fits all epochs simultaneously, but this information is essentially thrown away when the SN position is refit in ExtractStar.
What ExtractStar does could be integrated into CubeFit, perhaps by iterating: alternating between fitting the SN position and fitting the ADR/PSF model. A rough sketch of such a loop is below.
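This is only a sketch of the idea; `fit_positions`, `fit_adr_psf` and `fit_galaxy_and_sn` are hypothetical stand-ins, not functions that exist in CubeFit today, and the stopping rule is an arbitrary assumption.

```python
import numpy as np

# Hypothetical alternating fit: the three fit_* helpers are illustrative
# placeholders, not part of the current CubeFit code.

def fit_everything(cubes, model, max_iter=10, tol=1e-4):
    """Alternate between fitting SN/galaxy positions and the ADR/PSF model."""
    last_chi2 = np.inf
    for _ in range(max_iter):
        # Hold the ADR/PSF model fixed and refit positions using all epochs,
        # so the positional information from simultaneous fitting is kept.
        model.positions = fit_positions(cubes, model)

        # Hold positions fixed and refit the per-epoch ADR/PSF parameters
        # (roughly what ExtractStar does, but against the full model).
        model.adr, model.psf = fit_adr_psf(cubes, model)

        # Re-estimate the galaxy model and SN spectra given the updated model.
        chi2 = fit_galaxy_and_sn(cubes, model)

        # Stop once the fit no longer improves appreciably.
        if last_chi2 - chi2 < tol * abs(last_chi2):
            break
        last_chi2 = chi2
    return model
```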
One thing I would have liked to look into is autoencoders as a means of reducing the parameter space characterizing galaxy shapes. See for example: http://people.eecs.berkeley.edu/~jregier/publications/regier2015deep.pdf
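As a rough illustration of what "reducing the parameter space" could mean, here is a minimal dense autoencoder in PyTorch. The architecture, latent size, and image size are arbitrary assumptions for the sketch; nothing like this exists in CubeFit.

```python
import torch
import torch.nn as nn

class GalaxyAutoencoder(nn.Module):
    """Toy autoencoder compressing a 32x32 galaxy image to a 16-number code."""

    def __init__(self, n_pix=32 * 32, n_latent=16):
        super().__init__()
        # Encoder: flattened image -> low-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(n_pix, 256), nn.ReLU(),
            nn.Linear(256, n_latent),
        )
        # Decoder: latent vector -> reconstructed image. During a fit one
        # would optimize the latent vector rather than every pixel value.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(),
            nn.Linear(256, n_pix),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

The galaxy would then be described by a handful of latent numbers instead of one free parameter per spaxel, at the cost of first training the network on a representative set of galaxy images.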
CubeFit has several optimization steps. All use the L-BFGS-B optimization algorithm with analytic gradients supplied. This works pretty well, but it would be worth investigating other methods. It would also be worth checking whether an even lower convergence tolerance gives acceptable results.
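For reference, this is the standard scipy pattern for L-BFGS-B with an analytic gradient (`jac=True` means the objective returns both the value and the gradient). The quadratic objective and the tolerance values below are just stand-ins, not CubeFit's real likelihood or settings.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Stand-in objective: returns value and analytic gradient of a quadratic."""
    value = 0.5 * np.sum(x ** 2)
    grad = x
    return value, grad

x0 = np.ones(10)
result = minimize(objective, x0, method="L-BFGS-B", jac=True,
                  options={"ftol": 1e-10, "gtol": 1e-8, "maxiter": 1000})
print(result.x, result.fun)
```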
Instead of fitting the model to data in cubes, one can imagine propagating the model to the CCD frame and fitting directly to the CCD data. The advantage would be that the noise characteristics of the data are simpler at the CCD level. Another possible advantage might be the ability to optimize or sample parameters describing the transformation from the CCD to the cube. I'm not sure if there are such parameters or if they would benefit from knowledge of what the scene looks like.
This would require the reverse of the transformation done now: instead of transforming the data from the CCD to the cube, we would have to transform the 3-d model (as a cube) to the CCD.
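If that cube-to-CCD mapping could be written as a linear operator, fitting in the CCD frame would be conceptually simple. Below is a minimal sketch assuming a hypothetical sparse matrix `ccd_from_cube` describing the mapping; no such operator exists in the current code, and the real transformation may not be this simple.

```python
import numpy as np

def ccd_chi2_and_grad(model_cube, ccd_data, ccd_weights, ccd_from_cube):
    """Chi-square and its gradient, evaluated in the CCD frame.

    model_cube   : flattened 3-d model (galaxy + SN + sky).
    ccd_data     : CCD pixel values.
    ccd_weights  : inverse variances, which are simple at the CCD level.
    ccd_from_cube: hypothetical sparse matrix mapping cube elements onto
                   CCD pixels (the reverse of the extraction step used now).
    """
    model_ccd = ccd_from_cube @ model_cube        # propagate the model to the CCD
    residual = ccd_data - model_ccd
    chi2 = np.sum(ccd_weights * residual ** 2)
    # Gradient with respect to the cube-model values, pulled back through
    # the transpose of the mapping.
    grad = -2.0 * (ccd_from_cube.T @ (ccd_weights * residual))
    return chi2, grad
```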
CubeFit is a refactored version of code known as DDT, mainly written by Seb Bongard. DDT was written in the Yorick programming language. The original DDT code can still be found in the SNFactory CVS repository, but is no longer used in the pipeline. This wiki has a couple of pages of notes written while translating the DDT code. They are not really relevant any more but are kept around for posterity:
- DDT Yorick Layout: Notes taken while trying to understand DDT.
- Yorick Notes: A few differences between Yorick and Python.