I've implemented a compact= argument in mcmc.diagnostics() that abbreviates the coefficient names and reduces the number of significant figures when printing correlation matrices and similar tables, making the output more concise. Below is the uncompacted output, followed by the same output compacted to a target of 4 characters:
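For intuition, here is a minimal sketch of the kind of post-processing this applies, using base R's abbreviate() and signif(); it is illustrative only, not the actual implementation, and the names compact_matrix, target, and digits are made up for this example (the abbreviation scheme used in the actual code may differ):

# Hypothetical helper, for illustration only (not the ergm implementation):
compact_matrix <- function(m, target = 4, digits = 2) {
  # Abbreviate row and column names to roughly `target` characters.
  dimnames(m) <- lapply(dimnames(m), abbreviate, minlength = target)
  # Round entries to `digits` significant figures.
  signif(m, digits)
}

# Example with a small correlation-like matrix:
x <- matrix(c(1, 0.7998571, 0.7998571, 1), 2, 2,
            dimnames = list(c("edges", "nodefactor.atomic type.2"),
                            c("edges", "nodefactor.atomic type.2")))
compact_matrix(x)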
suppressPackageStartupMessages(library(ergm))
dummy <- capture.output(suppressMessages(example(anova.ergm)))
mcmc.diagnostics(fit2, which="text", compact=FALSE)
#> Sample statistics summary:
#>
#> Iterations = 13824:262144
#> Thinning interval = 512
#> Number of chains = 1
#> Sample size per chain = 486
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> edges 2.179 6.781 0.3076 0.3076
#> nodefactor.atomic type.2 1.514 6.401 0.2903 0.2903
#> nodefactor.atomic type.3 1.426 5.248 0.2381 0.2381
#> gwesp.fixed.0.5 2.682 10.766 0.4883 0.4883
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> edges -10.00 -3.000 2.000 7.00 16.00
#> nodefactor.atomic type.2 -10.00 -3.000 1.000 6.00 14.00
#> nodefactor.atomic type.3 -7.00 -3.000 1.000 5.00 12.00
#> gwesp.fixed.0.5 -14.39 -5.393 1.612 9.67 25.65
#>
#>
#> Are sample statistics significantly different from observed?
#> edges nodefactor.atomic type.2 nodefactor.atomic type.3
#> diff. 2.179012e+00 1.514403e+00 1.425926e+00
#> test stat. 7.084062e+00 5.216007e+00 5.989654e+00
#> P-val. 1.399888e-12 1.828211e-07 2.102885e-09
#> gwesp.fixed.0.5 (Omni)
#> diff. 2.682109e+00 NA
#> test stat. 5.492195e+00 5.504993e+01
#> P-val. 3.969677e-08 1.440950e-10
#>
#> Sample statistics cross-correlations:
#> edges nodefactor.atomic type.2
#> edges 1.0000000 0.7998571
#> nodefactor.atomic type.2 0.7998571 1.0000000
#> nodefactor.atomic type.3 0.7244880 0.3761040
#> gwesp.fixed.0.5 0.9027025 0.7292210
#> nodefactor.atomic type.3 gwesp.fixed.0.5
#> edges 0.7244880 0.9027025
#> nodefactor.atomic type.2 0.3761040 0.7292210
#> nodefactor.atomic type.3 1.0000000 0.5947062
#> gwesp.fixed.0.5 0.5947062 1.0000000
#>
#> Sample statistics auto-correlation:
#> Chain 1
#> edges nodefactor.atomic type.2 nodefactor.atomic type.3
#> Lag 0 1.00000000 1.00000000 1.00000000
#> Lag 512 -0.02195735 0.03508149 0.03126410
#> Lag 1024 0.08764999 0.05550450 0.05974880
#> Lag 1536 0.02834908 0.01294194 0.03401496
#> Lag 2048 0.06153408 0.01725834 0.05078179
#> Lag 2560 0.03390777 0.01024789 -0.02653534
#> gwesp.fixed.0.5
#> Lag 0 1.00000000
#> Lag 512 -0.02110184
#> Lag 1024 0.04997392
#> Lag 1536 0.01666529
#> Lag 2048 0.02137263
#> Lag 2560 0.03347958
#>
#> Sample statistics burn-in diagnostic (Geweke):
#> Chain 1
#>
#> Fraction in 1st window = 0.1
#> Fraction in 2nd window = 0.5
#>
#> edges nodefactor.atomic type.2 nodefactor.atomic type.3
#> -0.3409008 -2.0079000 -0.5033567
#> gwesp.fixed.0.5
#> -0.2298850
#>
#> Individual P-values (lower = worse):
#> edges nodefactor.atomic type.2 nodefactor.atomic type.3
#> 0.73317825 0.04465392 0.61471354
#> gwesp.fixed.0.5
#> 0.81818117
#> Joint P-value (lower = worse): 0.2129953
#>
#> Note: MCMC diagnostics shown here are from the last round of
#> simulation, prior to computation of final parameter estimates.
#> Because the final estimates are refinements of those used for this
#> simulation run, these diagnostics may understate model performance.
#> To directly assess the performance of the final model on in-model
#> statistics, please use the GOF command: gof(ergmFitObject,
#> GOF=~model).
mcmc.diagnostics(fit2, which="text", compact=4)
#> Sample statistics summary:
#>
#> Iterations = 13824:262144
#> Thinning interval = 512
#> Number of chains = 1
#> Sample size per chain = 486
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> edges 2.179 6.781 0.3076 0.3076
#> nodefactor.atomic type.2 1.514 6.401 0.2903 0.2903
#> nodefactor.atomic type.3 1.426 5.248 0.2381 0.2381
#> gwesp.fixed.0.5 2.682 10.766 0.4883 0.4883
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> edges -10.00 -3.000 2.000 7.00 16.00
#> nodefactor.atomic type.2 -10.00 -3.000 1.000 6.00 14.00
#> nodefactor.atomic type.3 -7.00 -3.000 1.000 5.00 12.00
#> gwesp.fixed.0.5 -14.39 -5.393 1.612 9.67 25.65
#>
#>
#> Are sample statistics significantly different from observed?
#> edge nodt ce.3 gwes (Omni)
#> diff. 2.2e+00 1.5e+00 1.4e+00 2.7e+00 NA
#> test stat. 7.1e+00 5.2e+00 6.0e+00 5.5e+00 5.5e+01
#> P-val. 1.4e-12 1.8e-07 2.1e-09 4.0e-08 1.4e-10
#>
#> Sample statistics cross-correlations:
#> edge nodt ce.3 gwes
#> edge 1.00 0.80 0.72 0.90
#> nodt 0.80 1.00 0.38 0.73
#> ce.3 0.72 0.38 1.00 0.59
#> gwes 0.90 0.73 0.59 1.00
#>
#> Sample statistics auto-correlation:
#> Chain 1
#> edge nodt ce.3 gwes
#> Lag 0 1.000 1.000 1.000 1.000
#> Lag 512 -0.022 0.035 0.031 -0.021
#> Lag 1024 0.088 0.056 0.060 0.050
#> Lag 1536 0.028 0.013 0.034 0.017
#> Lag 2048 0.062 0.017 0.051 0.021
#> Lag 2560 0.034 0.010 -0.027 0.033
#>
#> Sample statistics burn-in diagnostic (Geweke):
#> Chain 1
#>
#> Fraction in 1st window = 0.1
#> Fraction in 2nd window = 0.5
#>
#> edge nodt ce.3 gwes
#> -0.34 -2.01 -0.50 -0.23
#>
#> Individual P-values (lower = worse):
#> edge nodt ce.3 gwes
#> 0.733 0.045 0.615 0.818
#> Joint P-value (lower = worse): 0.21
#>
#> Note: MCMC diagnostics shown here are from the last round of
#> simulation, prior to computation of final parameter estimates.
#> Because the final estimates are refinements of those used for this
#> simulation run, these diagnostics may understate model performance.
#> To directly assess the performance of the final model on in-model
#> statistics, please use the GOF command: gof(ergmFitObject,
#> GOF=~model).
Created on 2023-01-09 with reprex v2.0.2
@CarterButts, @drh20drh20, @martinamorris, @sgoodreau, @mbojan, and anyone else: any thoughts about what the default should be?