diagnostics options for tinyVAST #327
Replies: 3 comments
-
I'm not sure. There are many options, none of which seem ideal (perhaps besides OSA residuals, but those are (1) slow and (2) yes, still in a holding pattern waiting on testing). I tried them a long time ago and should try adding them back in now that I have my head wrapped around them better.

I've been recommending MCMC-based randomized quantile residuals with the fixed effects held at their MLEs as good practice, but those can be slow. I.e., as suggested in the "Validation based on a single sample from the posterior" section of Thygesen et al. (2017). Rufener et al. (2021) also did that: https://github.com/mcruf/LGNB/blob/8caa3e3cc64c3bd52fb2446f89a686249e8586e1/R/Validation_and_Residuals.R#L248-L278. And if it was good enough for Kasper, I figured it was good enough for me. Actually running the MCMC to convergence (vs. stopping at something small like 100 iterations, as in that example) would be time consuming.

DHARMa residuals seem prone to looking off even when the model is 'correct' (say, simulated from a known model), and I'm not sure to what degree that is because of the spatial correlation vs. because of using the empirical Bayes random effects rather than a single sample from their distribution. The current default residuals in sdmTMB suffer from the same issue; they're the default mostly because they're fast.

The other day I was experimenting with taking a single sample of the random effects using their precision matrix, with the fixed effects held at their MLEs (lines 325 to 343 at commit 30bd67c). Perhaps that should become the default. That single-draw approach is in line with Waagepetersen, R. (2006), "A Simulation-based Goodness-of-fit Test for Random Effects in Generalized Linear Mixed Models," Scandinavian Journal of Statistics, 33(4), 721-731, which Kasper helpfully cites. I can't say I follow the standardizing part of that function. It has the benefit of still being fast to calculate and maybe being useful? I haven't tested it much yet. Presumably that draw could be combined with any of the other approaches, including PIT residuals from analytical calculation of the quantile function (what I'm currently doing) or simulation residuals with DHARMa.

A long time ago I started this: https://pbs-assess.github.io/sdmTMB/articles/web_only/residual-checking.html but it could probably use some updating. @Cole-Monnahan-NOAA probably has thoughts too.
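For concreteness, a minimal sketch of that single-draw idea against a generic TMB object, assuming `obj` comes from `MakeADFun(..., random = ...)` and has already been optimized. This is the commonly circulated `rmvnorm_prec` pattern, not necessarily the linked sdmTMB code:

```r
# Sketch (assumptions noted above): draw the random effects once from their
# Laplace-approximated conditional distribution, with the fixed effects held
# at their MLEs.
library(TMB)
library(Matrix)

rmvnorm_prec <- function(mu, prec) {
  # One draw from MVN(mu, prec^-1) via a sparse Cholesky of the precision.
  L <- Matrix::Cholesky(prec, super = TRUE)  # prec = P' L L' P
  z <- rnorm(length(mu))
  z <- Matrix::solve(L, z, system = "Lt")    # z <- solve(t(L), z)
  z <- Matrix::solve(L, z, system = "Pt")    # undo fill-reducing permutation
  mu + drop(as.matrix(z))
}

pars <- obj$env$last.par.best              # joint parameter vector at the optimum
re_idx <- obj$env$random                   # positions of the random effects
Q <- obj$env$spHess(pars, random = TRUE)   # precision of u | fixed effects at MLE

pars_draw <- pars
pars_draw[re_idx] <- rmvnorm_prec(pars[re_idx], Q)
# Residuals (PIT, DHARMa, etc.) can then be computed conditional on this
# single sample of the random effects.
```

Solving with `system = "Lt"` and then `system = "Pt"` accounts for the fill-reducing permutation in the sparse Cholesky, so the draw has covariance `Q^-1` exactly (up to the Laplace approximation).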
-
I'm going to punt this to @Andrea-Havron-NOAA
-
Thanks Sean for the detailed summary! Agreed that the MCMC single-sample approach seems like a good intermediate option, and that the MVN draw from the random-effect precision (rather than the joint precision of fixed and random effects) seems like a reasonable approximation, likely better than the current conditional residuals (conditioning on the MLE for the fixed effects and the empirical Bayes estimates for the random effects).

FWIW, for now I'm promoting in tinyVAST the simulation-based conditional PIT residuals (e.g., here)... they can be easily shown in the vignettes, which lets DHARMa sit in Suggests rather than being a dependency that users must install by default.

Closing for now, but definitely happy to hear from you, @Andrea-Havron-NOAA, if you are interested.
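For reference, a minimal sketch of such simulation-based conditional PIT residuals using DHARMa directly; `fit`, `y`, and the call structure are placeholder assumptions, not tinyVAST's actual wrapper:

```r
# Hypothetical sketch: conditional PIT residuals via DHARMa. Assumes `fit`
# has a simulate() method returning one replicate per column and that `y`
# is the observed response vector.
library(DHARMa)

sims <- simulate(fit, nsim = 250)              # conditional simulations
res <- createDHARMa(
  simulatedResponse = as.matrix(sims),
  observedResponse = y,
  fittedPredictedResponse = fitted(fit),
  integerResponse = FALSE                      # TRUE for count responses
)
plot(res)           # QQ plot of PIT residuals plus residual-vs-predicted panel
testResiduals(res)  # uniformity, dispersion, and outlier tests
```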
-
I'm trying to figure out what the suggested or most popular workflow is for accessing and visualizing residual diagnostics in the `sdmTMB` community. I see that `sdmTMBextra::dharma_residuals` uses simulation residuals similar to VAST, but that `sdmTMB::residuals` uses PIT residuals from analytical calculation of the quantile function. @seananderson and others, do you have any suggestion for which of those is worth porting over to `tinyVAST`, or are we still in a holding pattern waiting for more testing of OSA residuals?
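For context, a rough sketch of the two workflows being contrasted, assuming a fitted sdmTMB model `fit`; the signatures here are from memory and may differ across package versions:

```r
# Sketch under the assumptions above; not a definitive sdmTMB recipe.

# 1) Analytical randomized-quantile / PIT residuals:
r <- residuals(fit)
qqnorm(r); qqline(r)

# 2) Simulation-based DHARMa residuals:
s <- simulate(fit, nsim = 250)
sdmTMBextra::dharma_residuals(s, fit)
```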