Wednesday, October 22, 2008

t-Test Overview

(Updated May 25, 2021)

We will now be covering t-tests (for comparing the means of two groups) for the next week or so. As we'll discuss, there are two ways to design studies for a t-test:

INDEPENDENT SAMPLES, where a participant in one group (e.g., Obama voters in the 2012 election) cannot be in the other group (Romney voters). The technical term is that the groups are "mutually exclusive." The Obama and Romney voters could be compared, for example, on their average income.

PAIRED/CORRELATED GROUPS, where the same (or matched) person(s) can serve in both groups. For example, the same participant could be asked to complete math problems both during a period where loud hard-rock music is played and during a period where quiet, soothing music is played. Or, if you were comparing men and women on some attitude measure and your participants were heterosexual married couples, that would be considered a correlated design.
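Although our actual output in class will come from SPSS, here is a minimal sketch in Python (using scipy.stats) of how the two designs map onto two different tests. The data values are made up purely for illustration:

# Sketch: independent vs. paired designs, with hypothetical data
from scipy import stats

# Independent samples: two mutually exclusive groups (sizes can differ)
incomes_group1 = [52, 61, 48, 55, 70, 58]   # hypothetical incomes (in thousands)
incomes_group2 = [45, 50, 62, 47, 53]
t_ind, p_ind = stats.ttest_ind(incomes_group1, incomes_group2)

# Paired/correlated samples: the same five people measured twice
# (equal lengths, with row-wise pairing)
scores_loud  = [7, 5, 8, 6, 9]    # math problems solved during loud music
scores_quiet = [9, 6, 8, 8, 10]   # same participants during quiet music
t_rel, p_rel = stats.ttest_rel(scores_loud, scores_quiet)

print(t_ind, p_ind)
print(t_rel, p_rel)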

The Naked Statistics book briefly discusses the formula for an independent-samples t-test on pp. 164-165. Here's a simplified graphic I found on the web (original source):
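In case the graphic doesn't display for you, the formula it depicts is essentially the following (this is the common "unpooled" textbook form; the version in the graphic or the book may pool the two groups' variances, but the logic is the same):

t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}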

Notice from the "Xbar1 - Xbar2" portion that the t statistic is gauging the amount of difference between the two means, in the context of the respective groups' standard deviations (s) and sample sizes (n). Your obtained t value will be compared to the t distribution (which is similar to the normal z distribution) to see if it is extreme enough to be unlikely to stem from chance. You will also need to take into account the "degrees of freedom," which for the classic independent-samples t-test are closely tied to total sample size (df = n1 + n2 - 2).
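To connect the formula to actual numbers, here is a short sketch (same hypothetical income data as above) that computes t "by hand" from the means, standard deviations, and sample sizes, and then checks the result against scipy's built-in test:

# Sketch: computing the independent-samples t statistic by hand (hypothetical data)
import numpy as np
from scipy import stats

group1 = np.array([52, 61, 48, 55, 70, 58], dtype=float)
group2 = np.array([45, 50, 62, 47, 53], dtype=float)

mean1, mean2 = group1.mean(), group2.mean()
s1, s2 = group1.std(ddof=1), group2.std(ddof=1)   # sample standard deviations
n1, n2 = len(group1), len(group2)

t_by_hand = (mean1 - mean2) / np.sqrt(s1**2 / n1 + s2**2 / n2)

# scipy's version of the same unpooled ("unequal variances") test
t_scipy, p_scipy = stats.ttest_ind(group1, group2, equal_var=False)
print(round(t_by_hand, 4), round(float(t_scipy), 4))   # these two should match

# The classic pooled-variance test (scipy's default) uses df = n1 + n2 - 2
print("classic df:", n1 + n2 - 2)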

There's an online graphic that visually illustrates the difference between z (normal) and t distributions (click on this link and then, when the page comes up, on "Click to View"). As noted on this page from Columbia University, "tails of the t-distribution are thicker and extend out further than those of the Z distribution. This indicates that for a given confidence level, t-scores [needed for significance] are larger than Z scores."

More technically, as Westfall and Henning (2013) point out, "Compared to the standard normal distribution, the t-distribution has the same median (0.0) but with variance df/(df-2), which is larger than the standard normal's variance of 1.0" (p. 423). Remember that the variance is just the standard deviation squared.
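That variance formula is easy to verify for yourself. A quick sketch (again assuming scipy is available) shows that df/(df-2) shrinks toward 1.0 as df grows, which is why the t distribution looks more and more like the normal distribution with larger samples:

# Sketch: variance of the t distribution, df/(df - 2), vs. the standard normal's 1.0
from scipy import stats

for df in (5, 10, 30, 100):
    print(df, df / (df - 2), stats.t(df).var())   # the two variance values should agree
print("normal:", stats.norm().var())              # always 1.0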

This table shows the values your obtained t statistic needs to exceed (known as "critical values") for statistical significance, depending on your df and your target significance level (typically p < .05, two-tailed). Another website provides a nice overview of one- and two-tailed tests.
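If you don't have the printed table handy, the same critical values can be looked up in software. Here is a brief sketch using scipy (the df values are just examples); notice that every t cutoff is larger than the z cutoff of 1.96, consistent with the thicker tails described above:

# Sketch: two-tailed and one-tailed critical values of t, compared to z
from scipy import stats

alpha = 0.05
for df in (10, 20, 60, 120):
    two_tailed = stats.t.ppf(1 - alpha / 2, df)   # cutoff for p < .05, two-tailed
    one_tailed = stats.t.ppf(1 - alpha, df)       # cutoff for p < .05, one-tailed
    print(df, round(two_tailed, 3), round(one_tailed, 3))

print("z two-tailed:", round(stats.norm.ppf(1 - alpha / 2), 3))   # 1.96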

I have created a little tutorial on how to interpret SPSS output for independent-samples t-tests.

Finally, we take up the paired/correlated/dependent samples t-test at this link.