Description
I recently used EOFBootstrapper from xeofs.validation to run a significance test, and the result looks wrong to me: the first and third modes passed the test, but the second did not. As far as I know, EOF modes generally become less robust as the mode number increases, so a significant mode following a non-significant one is confusing.
Here is my code:
# Imports assume the xeofs layout where EOFBootstrapper lives in xeofs.validation
import xarray as xr
from xeofs.models import EOF
from xeofs.validation import EOFBootstrapper

# Load the monthly mean wind field and drop the last six time steps
fg = xr.open_dataset("/mnt/e/wind_global/obs/masked/E-OBS_wind_monthly_mean_1×1_masked.nc").fg[:-6, :, :]

# Fit a 5-mode EOF model with cos(lat) area weighting
n_modes = 5
model = EOF(n_modes=n_modes, use_coslat=True)
model.fit(fg, dim="time")
components = model.components()
scores = model.scores(normalized=False)
expvar = model.explained_variance_ratio()

# Bootstrap the explained variance
n_boot = 100000
bs = EOFBootstrapper(n_bootstraps=n_boot)
bs.fit(model)
bs_expvar = bs.explained_variance()

# 99% confidence interval of the bootstrapped explained variance
ci_expvar = bs_expvar.quantile([0.005, 0.995], "n")
q005 = ci_expvar.sel(quantile=0.005)
q995 = ci_expvar.sel(quantile=0.995)

# Mode k counts as significant if its lower bound lies above the upper bound of mode k+1
is_significant = q005 - q995.shift({"mode": -1}) > 0
n_significant_modes = is_significant.where(is_significant).cumsum(skipna=False).max().fillna(0)
print("{:} modes are significant at alpha=0.01".format(n_significant_modes.values))
By the way, the example code in your documentation,

n_significant_modes = ( is_significant.where(is_significant is True).cumsum(skipna=False).max().fillna(0) )

seems to have a small problem: is_significant is True is a Python identity check against the True singleton rather than an elementwise comparison, so it evaluates to False when is_significant is a DataArray. I think is_significant.where(is_significant) would be better.
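To show what I mean, here is a toy example (the values are made up): with the identity check the condition is always False, so where() masks every mode and the final fillna(0) always reports zero significant modes.

import xarray as xr

is_significant = xr.DataArray([True, True, False], dims="mode")

# Identity check against the singleton True: always False for a DataArray
print(is_significant is True)  # False

# With the all-False condition everything is masked, so the count is always 0
broken = is_significant.where(is_significant is True).cumsum(skipna=False).max().fillna(0)
print(broken.values)  # 0.0

# Using the boolean array itself as the condition gives the intended count
fixed = is_significant.where(is_significant).cumsum(skipna=False).max().fillna(0)
print(fixed.values)  # 2.0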
Thank you in advance.