Power analysis
At AG.DPP, it is mandatory that you perform power calculations (or at least elaborate on power considerations) in any study that leaves the lab. Ideally, we use such analyses to determine sample size a priori. Sometimes, however, we have to rely on convenience samples and can calculate power only post hoc. Both approaches are appropriate, but must be named as such. In the former case, we would state in our manuscript:
Sample size was determined a priori based on a meta-analytically derived effect size for the effect of interest of r = .28. With a power of 1-β = .80 and an α = .05 (two-tailed), the required sample size was N = 97 as determined using the R package `pwr`.
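In R, the calculation above corresponds to `pwr::pwr.r.test(r = .28, sig.level = .05, power = .80)`. As a language-neutral illustration of what that call computes, here is a minimal Python sketch based on the Fisher z approximation (the function name is made up for this example; `pwr` uses a slightly different routine, so the result may differ by about one participant):

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r (two-tailed, Fisher z).

    Illustrative helper, not part of any package.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical z for two-tailed alpha
    z_beta = NormalDist().inv_cdf(power)           # z corresponding to desired power
    # Fisher z approximation: n = ((z_alpha + z_beta) / atanh(r))^2 + 3
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.28))  # roughly 98; pwr reports N = 97 with its approximation
```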
In the latter case, we would write something like:
For our online survey, we set out to collect as many participants as possible within a time window of one month. We eventually had complete datasets of N = 348 participants. With this sample size, we were able to detect correlations of r ≥ .15 at a power of 1-β = .80 and an α = .05 (two-tailed) as determined using the R package `pwr`.
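Such a post hoc (sensitivity) analysis asks the reverse question: given the sample we got, which effect sizes could we detect? Sketched in Python with the same Fisher z approximation (hypothetical helper, not `pwr` itself):

```python
import math
from statistics import NormalDist

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest correlation detectable with sample size n (two-tailed, Fisher z).

    Illustrative helper; in practice, pwr.r.test(n = ..., power = ...) in R.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    # invert n = ((z_alpha + z_beta) / atanh(r))^2 + 3 for r
    return math.tanh((z_alpha + z_beta) / math.sqrt(n - 3))

print(round(min_detectable_r(348), 2))  # about .15, matching the example above
```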
If the literature tells you what effect size to expect, that effect size is most likely inflated due to publication bias. Evidence from replication research (e.g., Open Science Collaboration, 2015) suggests that even replicable effects are about half the size of those originally reported. It may thus be wise to divide the effect size you found in the literature by two. This may be too conservative if you base your power calculation on a meta-analytically derived effect size; yet meta-analyses, too, suffer from publication bias and may overestimate effect sizes. It is therefore perhaps a good idea to use the lower bound of the confidence interval of the effect size in question as the estimate for your power analysis. Calculate the required sample size with a desired power of at least 80% and a significance level that accounts for possible multiple testing.
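Halving the expected effect size has drastic consequences for the required sample size, which is worth seeing in numbers. The sketch below (hypothetical helper using the Fisher z approximation, not `pwr` itself) compares r = .28 with its halved counterpart:

```python
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    # illustrative Fisher z approximation (not the exact pwr routine)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.28))  # roughly 98
print(n_for_correlation(0.14))  # roughly 399: halving r about quadruples N
```

Because N scales roughly with 1/r², halving the expected correlation quadruples the required sample size.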
If you have no idea what effect size to expect, Cohen's classification most likely will not reflect the typical effect sizes in your area of research. If there are no established guidelines for your field (such as those of Gignac & Szodorai, 2016, for individual differences research), assume a correlation of r = .20 (or a derivative thereof, such as an explained variance of .04; see Fraley & Vazire, 2014). A small to medium effect is more likely than a large one.
In principle, it makes no difference which software you use for power analysis; G*Power, however, covers more types of analyses than alternatives such as jamovi or the R package `pwr`. If there is no power analysis software for your specific effect size, run simulations.
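The logic of a power simulation is always the same: generate many datasets under the assumed effect, analyze each as planned, and count the proportion of significant results. A minimal sketch, here for a simple correlation where closed-form tools would of course suffice (all names are hypothetical; the analysis step is a Fisher z test):

```python
import math
import random
from statistics import NormalDist

def simulate_power(r, n, alpha=0.05, n_sims=2000, seed=1):
    """Monte Carlo power estimate for detecting correlation r with N = n.

    Each simulated dataset is analyzed with a two-tailed Fisher z test.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sims):
        # draw n bivariate-normal pairs with population correlation r
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1) for x in xs]
        # sample correlation coefficient
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        r_hat = sxy / math.sqrt(sxx * syy)
        # Fisher z test of H0: rho = 0
        if abs(math.atanh(r_hat)) * math.sqrt(n - 3) >= z_crit:
            hits += 1
    return hits / n_sims

print(round(simulate_power(0.28, 98), 2))  # should come out close to .80
```

To find the required N by simulation, rerun `simulate_power` with increasing n until the estimate reaches the desired power.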
As noted, if there are no established power analysis routines for your intended analysis, simulations may be the way to go. Yet, before we go to great lengths to program such simulations, we should always check the literature for published and well-established (i.e., frequently cited) routines for the analysis type we want to apply. Such analysis types include:
- mediation analysis
- structural equation modeling
- ...
Approaches for these types of analyses are outlined below.
There are three commonly accepted approaches to power analysis for structural equation modeling (SEM) and its derivatives, such as more complex path analyses, whose power cannot be determined in terms of regression analysis:
- Satorra and Saris (1985)
- MacCallum, Browne and Sugawara (1996)
- Muthén and Muthén (2002)
The third approach involves Monte Carlo simulations, which sounds too fancy for our mostly rather plain purposes, so we discard this method for now (but see Muthén & Muthén, 2002).
The first approach ...
That is why we should usually go for the second approach, suggested by MacCallum et al. (1996). The nice thing is that there is an R package implementing the MacCallum et al. approach, and even better: one of the authors of this package is Edgar Erdfelder, who also contributed to G*Power. So we can be quite confident that this package does its job properly.
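In rough outline (a sketch from the MacCallum et al., 1996, framework; check the original paper before relying on it), this approach frames power in terms of the RMSEA rather than individual parameters:

```latex
% For a model with df degrees of freedom and sample size N, an RMSEA of
% \varepsilon corresponds to the noncentrality parameter
\lambda = (N - 1)\, df\, \varepsilon^{2}
% Given a null RMSEA \varepsilon_0 and an alternative RMSEA \varepsilon_1,
% power is the probability that the noncentral chi-square statistic under
% the alternative exceeds the critical value c_\alpha obtained under the null:
1 - \beta = P\left( \chi^{2}_{df,\,\lambda_1} > c_{\alpha} \right),
\quad c_{\alpha} \text{ from } \chi^{2}_{df,\,\lambda_0}
```

In practice, you choose the two RMSEA values (e.g., close fit vs. not-close fit), fix α and df, and solve for the N that yields the desired power.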
For details on power analysis, see the following presentation:
ProTip of the day: On GitHub, you can use HTML entities to express mathematical symbols, e.g., `&alpha;` for α or `&ge;` for ≥. (pls replace this footer by your ProTip as it comes in)