Wednesday, November 28, 2007

Practical Issues in Power Analysis

Below, I've added a new chart, based on things we discussed in class. William Trochim's Research Methods Knowledge Base, in discussing statistical power, sample size, effect size, and significance level, notes that, "Given values for any three of these components, it is possible to compute the value of the fourth." The table I've created attempts to convey this fact in graphical form.


You'll notice the (*) notation by "S, M, L" in the chart. Those, of course, stand for small, medium, and large effect sizes. As we discussed in class, Jacob Cohen developed criteria for what magnitude of result constitutes small, medium, and large, both for correlational studies (conventionally, r = .10, .30, and .50) and for studies comparing the means of two groups (d = 0.2, 0.5, and 0.8 for t-test-type studies; note that t itself is not a measure of effect size).

When planning a new study, naturally you cannot know what your effect size will be ahead of time. However, based on your reading of the research literature in your area of study, you should be able to get an idea of whether findings have tended to be small, medium, or large, which you can convert to the relevant values for r or Cohen's d. These, in turn, can be submitted to power-analysis computer programs and online calculators.
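As a rough sketch of what such a calculator does under the hood, the snippet below computes power for a two-tailed, two-sample t-test from an assumed Cohen's d, a per-group sample size, and an alpha level. This is my own illustration, not from the post: it uses a normal approximation (so results run close to, but not exactly at, the values from dedicated programs), and the function name is hypothetical.

```python
# Sketch only: approximate power of a two-tailed, two-sample t-test,
# via a normal approximation (stdlib only). The function name and the
# approximation are illustrative assumptions, not the post's method.
from math import sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal: .cdf() and .inv_cdf()

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power given effect size d, per-group n, and alpha."""
    z_crit = Z.inv_cdf(1 - alpha / 2)          # critical z for two tails
    return Z.cdf(d * sqrt(n_per_group / 2) - z_crit)

# Cohen's medium effect (d = 0.5) with 64 per group: power comes out
# near the conventional .80 target.
print(round(power_two_sample(0.5, 64), 2))
```

Fixing any three of the four quantities (effect size, n, alpha, power) pins down the fourth, which is exactly Trochim's point above.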

I try to err on the side of expecting a small effect size. Planning for a small effect forces me to recruit a large sample, which seems like good practice anyway: the study will then be well powered even if the true effect turns out to be larger than expected.
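To make that trade-off concrete, here is a hedged sketch (stdlib only, same normal approximation as above, so the numbers fall slightly below Cohen's exact tables) of the per-group n needed to reach power .80 at alpha = .05 for small, medium, and large values of d:

```python
# Sketch only: per-group n for a two-tailed, two-sample t-test at
# power .80, alpha .05, under a normal approximation. Values are close
# to, but slightly below, Cohen's exact t-based tables (393, 64, 26).
from math import ceil
from statistics import NormalDist

Z = NormalDist()

def n_per_group(d, power=0.80, alpha=0.05):
    """Approximate per-group sample size to reach the target power."""
    z_crit = Z.inv_cdf(1 - alpha / 2)
    z_power = Z.inv_cdf(power)
    return ceil(2 * ((z_crit + z_power) / d) ** 2)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): n per group = {n_per_group(d)}")
```

Note how the required n roughly quadruples each time d is halved, which is why expecting a small effect is the conservative planning choice.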

UPDATE: Westfall and Henning (2013) argue that post hoc power analysis, which is what the pink column depicts in the above table, is "useless and counterproductive" (p. 508).