*(Updated September 25, 2014)*

We will now be covering *t*-tests (for comparing the means of two groups) for the next week or so. This PowerPoint slideshow provides a good overview of *t*-tests. As we'll discuss, there are two ways to design studies for a *t*-test:

INDEPENDENT SAMPLES (starting on page 11 of the PowerPoint), where a participant in one group (Obama voters, say) cannot be in the other group (Romney voters). The technical term is that the groups are "mutually exclusive." The Obama and Romney voters could be compared, for example, on their average income.

PAIRED/CORRELATED GROUPS (starting on page 16 of the PowerPoint), where the same (or matched) person(s) can serve in both groups. For example, the same participant could be asked to complete math problems both during a period where loud hard-rock music is played and during a period where quiet, soothing music is played. Or, if you were comparing men and women on some attitude measure and your participants were heterosexual married couples, that would be considered a correlated design.
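The two designs above map onto two different tests. As a rough illustration (the numbers below are made up purely for demonstration, and scipy is just one convenient tool for running these tests):

```python
from scipy import stats

# Independent samples: two mutually exclusive groups (e.g., hypothetical
# incomes, in thousands, for two separate sets of voters).
group_a = [42, 55, 38, 61, 47, 53]
group_b = [49, 58, 65, 44, 60, 71]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired/correlated: the same participants measured twice (e.g., math
# scores under loud vs. quiet music), so the lists line up row by row.
loud = [12, 15, 9, 14, 11, 13]
quiet = [14, 18, 11, 15, 14, 16]
t_pair, p_pair = stats.ttest_rel(loud, quiet)

print(t_ind, p_ind)
print(t_pair, p_pair)
```

Note that the paired test requires the two lists to be the same length and in matching order, since each row is one participant (or matched pair).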

The *Naked Statistics* book briefly discusses the formula for an independent-samples *t*-test on pp. 164-165. Here's a simplified graphic I found from the web (original source):

Notice from the "Xbar1 - Xbar2" portion that the *t* statistic is gauging the amount of difference between the two means, in the context of the respective groups' standard deviations (*s*) and sample sizes (*n*). Your obtained *t* value will be compared to the *t* distribution (which is similar to the normal *z* distribution) to see if it is extreme enough to be unlikely to stem from chance. You will also need to take account of "degrees of freedom," which for an independent-samples *t*-test are closely based on total sample size.
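To make the formula concrete, here is one way to compute an independent-samples *t* by hand and check it against a built-in routine. This sketch uses the pooled-variance version (the simplified web graphic may show the unpooled *s*²/*n* form instead); the data are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups.
g1 = np.array([5.1, 6.3, 4.8, 7.0, 5.9, 6.4])
g2 = np.array([4.2, 5.0, 3.9, 4.7, 5.3, 4.4])

n1, n2 = len(g1), len(g2)
m1, m2 = g1.mean(), g2.mean()
v1, v2 = g1.var(ddof=1), g2.var(ddof=1)  # sample variances (s squared)

# t = (Xbar1 - Xbar2) over a standard error built from the two groups'
# variances and sample sizes (pooled-variance form).
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_hand = (m1 - m2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2  # degrees of freedom: total sample size minus 2

t_scipy, p = stats.ttest_ind(g1, g2)  # pooled (equal_var=True) by default
print(round(t_hand, 4), round(t_scipy, 4), df)
```

The hand computation and the library agree, and the df comes out to 6 + 6 − 2 = 10, which is why df is described as closely based on total sample size.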

There's an online graphic that visually illustrates the difference between *z* (normal) and *t* distributions (click on this link and then, when the page comes up, on "Click to View"). As noted on this page from Columbia University, "tails of the *t*-distribution are thicker and extend out further than those of the *Z* distribution. This indicates that for a given confidence level, *t*-scores [needed for significance] are larger than *Z* scores."
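You can see the thicker tails numerically: the probability of landing beyond ±2 is larger under a *t* distribution (here with 5 degrees of freedom, chosen just for illustration) than under the normal curve.

```python
from scipy import stats

# Tail area beyond +/-2 under z versus t with 5 degrees of freedom:
# the t curve puts noticeably more probability in its tails.
z_tail = 2 * stats.norm.sf(2)      # P(|Z| > 2)
t_tail = 2 * stats.t.sf(2, df=5)   # P(|T| > 2) with df = 5
print(z_tail, t_tail)
```

Because more area sits in the *t* tails, you need a larger obtained *t* (compared to *z*) to reach the same significance level.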

More technically, as Westfall and Henning (2013) point out, "Compared to the standard normal distribution, the *t*-distribution has the same median (0.0) but with variance *df*/(*df* − 2), which is larger than the standard normal's variance of 1.0" (p. 423). Remember that the variance is just the standard deviation squared.

In this table are shown values your obtained *t* statistic needs to exceed (known as "critical values") for statistical significance, depending on your df and target significance level (typically *p* < .05, two-tailed). Another website provides a nice overview of one- and two-tailed tests.
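Both points can be verified directly: the *t* distribution's variance matches df/(df − 2), and the two-tailed .05 critical values shrink toward the *z* cutoff of 1.96 as df grows (this sketch assumes scipy is available; any statistics package would do):

```python
from scipy import stats

# Westfall & Henning's variance claim: Var(t with df) = df / (df - 2).
for df in (3, 5, 10, 30):
    assert abs(stats.t(df).var() - df / (df - 2)) < 1e-9

# Critical values for p < .05, two-tailed: the cutoff your obtained t
# must exceed in absolute value. It approaches z's 1.96 as df grows.
for df in (5, 10, 30, 100):
    print(df, round(stats.t.ppf(0.975, df), 3))
print("z:", round(stats.norm.ppf(0.975), 3))
```

This is the same information a printed critical-values table gives you, just computed on demand.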

Finally, this document goes into additional depth regarding the paired/correlated *t*-test, showing how the correlation "*r*" is included in the formulation (section 18.3).
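One way to see where "*r*" enters is to write out the paired *t* using the correlation between the two sets of scores and confirm it matches a built-in paired test (hypothetical data; the exact notation in section 18.3 may differ):

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores (same participants measured twice).
x = np.array([12.0, 15, 9, 14, 11, 13])
y = np.array([14.0, 18, 11, 15, 14, 16])
n = len(x)

# The standard error of the mean difference uses r between the pairs:
# var(difference) = s_x^2 + s_y^2 - 2 * r * s_x * s_y
sx, sy = x.std(ddof=1), y.std(ddof=1)
r = stats.pearsonr(x, y)[0]
se_diff = np.sqrt((sx**2 + sy**2 - 2 * r * sx * sy) / n)
t_formula = (x.mean() - y.mean()) / se_diff

t_scipy, p = stats.ttest_rel(x, y)  # built-in paired t-test
print(round(t_formula, 4), round(t_scipy, 4))
```

Notice that a high positive *r* shrinks the standard error, which is exactly why a correlated design can be more powerful than an independent-samples one.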