*(Updated October 26, 2014)*

I have just created a new graphic on how to interpret SPSS print-outs for the **Independent-Samples** *t*-test (where any given participant is in *only one* of two mutually exclusive groups).

This new chart supplements the t-test lecture notes I showed recently, reflecting a change in my thinking about what to take from the SPSS print-outs.

One of the traditional assumptions of an Independent-Samples *t*-test is that, before we can test whether the difference between the two groups' *means* on the dependent variable is significant (which is of primary interest), we must verify that the groups have similar variances (standard deviations squared) on the DV. This assumption, known as homoscedasticity, basically says that we want some comparability between the two groups' distributions, in terms of their being equally spread out, before we can compare their means (you might think of this as "outlier protection insurance," although I don't know if this is technically the correct characterization of the problem).

If the homoscedasticity (equal-spread) assumption is violated, all is not lost. SPSS provides a test for possible violation of this assumption (Levene's test) and, if it is violated, an alternative solution to use for the *t*-test. The alternative *t*-test (known as the Welch *t*-test or **t'**) corrects for violations of the equal-spread assumption by "penalizing" the researcher with a reduction of degrees of freedom. Fewer degrees of freedom, of course, make it harder to achieve a statistically significant result, because the threshold *t*-value to attain significance is higher.
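The size of this degrees-of-freedom penalty comes from the Welch-Satterthwaite formula, which SPSS computes for you. A minimal sketch in Python, with illustrative numbers, shows that the adjusted *df* equals the pooled *df* (n1 + n2 - 2) when the spreads match and drops as they diverge:

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite approximate degrees of freedom for t'."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Equal spreads: Welch df equals the pooled df (20 + 20 - 2 = 38)
print(welch_df(2.0, 20, 2.0, 20))

# Very unequal spreads: Welch df is noticeably smaller, so the
# threshold t-value for significance is higher
print(welch_df(2.0, 20, 6.0, 20))
```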

Years ago, I created a graphic for how to interpret Levene's test and implement the proper *t*-test solution (i.e., the one for equal variances or for unequal variances, as appropriate). Even with the graphic, however, students still found the output confusing. Based on these student difficulties and some literature of which I have become aware, I have changed my opinion.

I now subscribe to the opinion of Glass and Hopkins (1996) that **"We prefer the t' in all situations"** (p. 305, footnote 30). Always using the *t*-test solution for when the two groups are assumed to have *un*equal spread (as depicted in the top graphic) is advantageous for a few reasons.

It is simpler always to use one solution than to go through what many students find to be a cumbersome process for selecting which solution to use. Also, although the two solutions (assuming equal spread and not assuming equal spread) differ in parts of their formulas, the bottom-line conclusion one draws (e.g., that men drink significantly more frequently than do women) often is the same under both. If anything, the preferred (not assuming equal spread) solution is a little more conservative; in other words, it makes it a little harder to obtain a significant difference between means than does the equal-spread solution. As a result, our findings will have to be a little stronger for us to claim significance, which is not a bad thing.
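For readers working outside SPSS, the two solutions can be run side by side. A minimal sketch using SciPy, with made-up drinking-frequency scores for the two groups:

```python
from scipy import stats

# Hypothetical drinking-frequency scores (illustrative data only)
men   = [5, 7, 6, 9, 8, 7, 10, 6]
women = [4, 5, 3, 6, 5, 4, 5, 6]

# Levene's test for the equal-spread (homoscedasticity) assumption
lev_stat, lev_p = stats.levene(men, women)

# Classic solution: assumes equal variances
t_equal, p_equal = stats.ttest_ind(men, women, equal_var=True)

# Welch t': does not assume equal variances, usable in all situations
t_welch, p_welch = stats.ttest_ind(men, women, equal_var=False)

print(f"Levene p = {lev_p:.3f}")
print(f"Equal-variance t = {t_equal:.3f}, p = {p_equal:.4f}")
print(f"Welch          t' = {t_welch:.3f}, p = {p_welch:.4f}")
```

With data like these, the two solutions typically lead to the same bottom-line conclusion, which is the point of the paragraph above.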

I've also created a graphic to interpret the SPSS output of a paired *t*-test.

I have also taken a screenshot from this University of Georgia webpage, which gives the formula for a paired *t*-test. The main focus is, of course, comparing the means of two variables, but as shown below, I have highlighted where the correlation *r* between the two variables enters the formula.
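The role of *r* can be checked numerically. In the standard form of the paired *t* formula (which I assume matches the screenshot), the variance of the difference scores is s1^2 + s2^2 - 2*r*s1*s2, so a stronger positive correlation shrinks the standard error. A sketch with made-up pre/post scores, verified against SciPy's paired *t*-test:

```python
import math
from scipy import stats

# Hypothetical pre/post scores for the same participants (illustrative)
pre  = [10, 12, 9, 14, 11, 13, 10, 12]
post = [12, 14, 10, 16, 12, 15, 11, 14]
n = len(pre)

m1, m2 = sum(pre) / n, sum(post) / n
s1, s2 = stats.tstd(pre), stats.tstd(post)   # sample SDs
r = stats.pearsonr(pre, post)[0]             # correlation between the variables

# Paired t via the correlation form of the formula:
# t = (M1 - M2) / sqrt((s1^2 + s2^2 - 2*r*s1*s2) / n)
t_formula = (m1 - m2) / math.sqrt((s1**2 + s2**2 - 2 * r * s1 * s2) / n)

# Same result from SciPy's built-in paired t-test
t_scipy, p = stats.ttest_rel(pre, post)

print(f"t (formula) = {t_formula:.4f}, t (SciPy) = {t_scipy:.4f}")
```

The two t-values agree, confirming that the correlation term is doing the work of accounting for the pairing.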

**Reference**

Glass, G. V., & Hopkins, K. D. (1996). *Statistical methods in education and psychology* (3rd ed.). Needham Heights, MA: Allyn & Bacon.