An ANOVA (Analysis of Variance), sometimes called an F test, is closely related to the t test. The major difference is that, whereas the t test measures the difference between the means of two groups, an ANOVA tests differences among the means of two or more groups.
A one-way ANOVA, or single-factor ANOVA, tests differences between groups that are classified on only one independent variable. You can also use multiple independent variables and test for interactions using a factorial ANOVA (see below). The advantage of using an ANOVA rather than multiple t tests is that it reduces the probability of a type-I error. Making multiple comparisons increases the likelihood of finding something significant by chance alone, that is, of making a type-I error. Let's use socioeconomic status (SES) as an example. Suppose you have 8 levels of SES and want to see whether any of the eight groups differ from each other in average happiness. To compare all of the means to each other, you would have to run 28 t tests. If alpha is set at .05 for each test, the probability of making at least one type-I error somewhere across the 28 tests is 1 − (1 − .05)^28 ≈ .76, so you would very likely find some "significant" differences between groups that are actually due to chance. An ANOVA controls the overall error rate by testing all 8 means against each other at once, so your alpha remains at .05.
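To make the multiple-comparisons arithmetic concrete, here is a short sketch (the group count and alpha come from the SES example above) that counts the pairwise tests and computes the probability of at least one type-I error, assuming independent tests:

```python
from math import comb

groups = 8    # levels of SES in the example
alpha = 0.05  # per-test significance level

# Number of pairwise comparisons among 8 groups: C(8, 2) = 28
n_tests = comb(groups, 2)

# Probability of at least one type-I error across all 28 tests,
# assuming the tests are independent
familywise_error = 1 - (1 - alpha) ** n_tests

print(n_tests)                     # 28
print(round(familywise_error, 2))  # 0.76
```

This is why running all 28 t tests at alpha = .05 is so risky: roughly three times out of four you would "find" at least one difference that is not really there.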
One potential drawback of an ANOVA is that you lose specificity: all a significant F tells you is that there is a difference somewhere among the groups, not which groups are significantly different from each other. To find that out, you use a post-hoc comparison, which identifies where the differences lie – which groups are significantly different from each other and which are not. Commonly used post-hoc comparisons include Scheffé's and Tukey's tests.
Computation of a one-way ANOVA by hand is tedious; computation of a two-way ANOVA by hand is nearly impossible. In practice, ANOVAs are run in statistical software; a general overview of how to read the resulting output appears below.
What is it?
A factorial ANOVA can examine data that are classified on multiple independent variables. For example, a two-way ANOVA (two-factor ANOVA) can measure differences among treatments and differences among age groups of participants simultaneously. You can use more than two independent variables in an ANOVA (e.g., a three-way or four-way ANOVA).
A factorial ANOVA can show whether there are significant main effects of the independent variables and whether there are significant interaction effects between independent variables in a set of data. Interaction effects occur when the impact of one independent variable depends on the level of another independent variable. Computation is usually done with statistical software.
We wanted to see how study method affected grades on a World Civilizations midterm for underclassmen and upperclassmen. Regardless of prior study preference, equal numbers of students were randomly assigned to one of the two study conditions. This is a two-way ANOVA with two independent variables: year in school (underclass versus upperclass) and study type (alone versus group). The dependent variable is the grade (measured on a scale of 0 to 100).
It seems from these data that there is a main effect for type of studying, because on average the "alone" studiers scored higher (87.5) than the "group" studiers (87), and the upperclassmen, on average, scored higher (88) than the underclassmen (86.5). There is also an interaction effect: while the upperclassmen who studied alone scored an average of five points higher than underclassmen who studied alone, upperclassmen who studied in a group scored an average of two points lower than underclassmen who studied in a group. The effect of year in school on grades depends on type of studying.
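The four cell means behind these averages are not shown here, but the figures quoted above pin them down uniquely. As a sketch, the following recovers the marginal means and the interaction pattern from those implied cell means:

```python
# Cell means implied by the averages quoted in the text
means = {
    ("underclass", "alone"): 85,
    ("upperclass", "alone"): 90,
    ("underclass", "group"): 88,
    ("upperclass", "group"): 86,
}

# Main effect of study type: average over year in school
alone_mean = (means[("underclass", "alone")] + means[("upperclass", "alone")]) / 2
group_mean = (means[("underclass", "group")] + means[("upperclass", "group")]) / 2
print(alone_mean, group_mean)  # 87.5 87.0

# Main effect of year in school: average over study type
upper_mean = (means[("upperclass", "alone")] + means[("upperclass", "group")]) / 2
under_mean = (means[("underclass", "alone")] + means[("underclass", "group")]) / 2
print(upper_mean, under_mean)  # 88.0 86.5

# Interaction: the upperclass advantage flips sign across study types
advantage_alone = means[("upperclass", "alone")] - means[("underclass", "alone")]
advantage_group = means[("upperclass", "group")] - means[("underclass", "group")]
print(advantage_alone, advantage_group)  # 5 -2
```

The sign flip in the last two numbers (+5 versus −2) is exactly what "the effect of year in school depends on type of studying" means.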
A two-way ANOVA would test whether these possible main effects and interaction effects are statistically significant, that is, whether the results are unlikely to be due to sampling error alone.
This will give you a table. On the far left side, under the column "Source," your variables will be listed; the corresponding F values and significance levels, listed on the right-hand side, are for the main effects and the interaction effect. If the significance value for an effect is less than .05, you are able to reject the null hypothesis for that effect.
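The same F values and significance levels that statistical software reports can be computed directly for a balanced two-way design. This sketch uses NumPy and SciPy with hypothetical midterm grades (three students per cell, chosen for illustration):

```python
import numpy as np
from scipy.stats import f as f_dist

# Hypothetical grades: axis 0 = year (under, upper),
# axis 1 = study type (alone, group), axis 2 = students per cell
data = np.array([
    [[83, 85, 87], [86, 88, 90]],  # underclassmen: alone, group
    [[88, 90, 92], [84, 86, 88]],  # upperclassmen: alone, group
], dtype=float)

a, b, n = data.shape
grand = data.mean()

# Sums of squares for a balanced two-way design
ss_a = b * n * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()  # year
ss_b = a * n * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()  # study type
ss_cells = n * ((data.mean(axis=2) - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b                                # interaction
ss_within = ((data - data.mean(axis=2, keepdims=True)) ** 2).sum()

df_within = a * b * (n - 1)
ms_within = ss_within / df_within

# One row per source, just like the software's output table
for name, ss, df in [("year", ss_a, a - 1), ("study", ss_b, b - 1),
                     ("year x study", ss_ab, (a - 1) * (b - 1))]:
    F = (ss / df) / ms_within
    p = f_dist.sf(F, df, df_within)
    print(f"{name}: F = {F:.2f}, p = {p:.4f}")
```

Each printed row corresponds to one line of the software's "Source" table: a main effect for each independent variable, plus the interaction; any effect whose p is below .05 lets you reject the corresponding null hypothesis.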