10.5: Exercise: Simplifying the Statistical Analysis
Most recent studies of the LRP (and other lateralized components, such as N2pc and CDA) obtain amplitude and latency scores from contralateral-minus-ipsilateral difference waves (like those shown in Figure 10.2.C). This has two advantages. First, it isolates the component of interest from all of the overlapping ERP components. Second, it reduces the number of factors in the statistical analysis.
Let’s see how it works. Clear out all the ERPsets from ERPLAB, and load the 40 ERPsets in the Chapter_10 > Data > ERPsets_CI_Diff folder. Then launch the Measurement Tool. Set it to measure from the 40 files in the ERPsets menu, using Bins 1 and 2 (Compatible and Incompatible) and Channel 5 (C3/C4). Specify mean_amplitude_CI_Diff.txt as the name of the output file. Click Viewer and make sure that the measurements look reasonable given the waveforms. Click Measurement Tool to go back to the Measurement Tool, and then click RUN to obtain the scores. You’ll see that we now have only two scores per participant, one for the Compatible condition and one for the Incompatible condition.
Now let’s do a statistical analysis on these scores. Start by performing a paired t test comparing the Compatible and Incompatible conditions. You should get a statistically significant t value of -5.885 (or +5.885, depending on which condition came first). And here’s something very important: this t test is exactly equivalent to the three-way interaction between Hemisphere, Hand, and Compatibility in the three-way ANOVA from the previous exercise. That is, it tests exactly the same null hypothesis, and it yields exactly the same p value (except for possible rounding error). The t value for this paired t test corresponds exactly to the F value for the three-way interaction, because F is the same as t² whenever the effect has a single degree of freedom in the numerator. If we square 5.885, we get 34.63, which is the same (except for rounding error) as the F value for the three-way interaction in Table 10.1. So, a t test on difference scores can be a simpler and more convenient way of testing for an interaction, and it doesn’t cause a proliferation of p values.
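If you'd like to see this equivalence for yourself, here's a minimal sketch using simulated data (the amplitude values are made up; only the algebraic relationship between t² and F is the point). It runs a paired t test with scipy and then computes the F for the equivalent two-level repeated-measures effect by hand:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40

# Made-up contralateral-minus-ipsilateral mean amplitudes (in microvolts)
compatible = rng.normal(-1.0, 1.0, n)
incompatible = rng.normal(0.6, 1.2, n)

# Paired t test comparing the two conditions
t, p = stats.ttest_rel(compatible, incompatible)

# Hand-computed F for the equivalent two-level repeated-measures effect:
# F = MS_condition / MS_(condition x subject)
data = np.stack([compatible, incompatible])   # shape (2, n)
grand = data.mean()
cond_means = data.mean(axis=1)                # one mean per condition
subj_means = data.mean(axis=0)                # one mean per participant
ss_cond = n * np.sum((cond_means - grand) ** 2)
resid = data - cond_means[:, None] - subj_means[None, :] + grand
ss_err = np.sum(resid ** 2)
F = (ss_cond / 1) / (ss_err / (n - 1))        # df = 1 and n - 1

print(f"t^2 = {t**2:.6f}, F = {F:.6f}")  # identical up to floating-point error
```

Because the numerator has one degree of freedom, t² and F come out identical no matter what numbers you feed in.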
Fixing a Problem
When I tried to load the data into JASP, all the scores ended up in a single column. After spending a few minutes trying to figure out what was causing this, I realized that the problem was that the column labels in the first row had commas in them, which confused JASP. When I replaced the commas with underscores, everything worked fine.
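If you'd rather fix the file programmatically than edit it by hand, a short script like the following works (the header line below is just a guess at the Measurement Tool's output format, and `fix_header` is a hypothetical helper, not part of ERPLAB):

```python
def fix_header(path):
    """Replace commas in the first (header) line with underscores
    so JASP doesn't get confused by the column labels."""
    with open(path) as f:
        lines = f.readlines()
    if lines:
        lines[0] = lines[0].replace(",", "_")
    with open(path, "w") as f:
        f.writelines(lines)

# Demonstrate on a small file mimicking the Measurement Tool output
with open("mean_amplitude_CI_Diff.txt", "w") as f:
    f.write("bin1,Compatible\tbin2,Incompatible\n-1.23\t0.98\n")

fix_header("mean_amplitude_CI_Diff.txt")
```

Only the header row is touched, so the scores themselves are left exactly as ERPLAB wrote them.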
Now let’s ask whether we have a statistically significant negativity for the Compatible condition and a statistically significant positivity for the Incompatible condition. This just involves performing separate one-sample t tests for each of these conditions (comparing the means to zero). When I did that, I obtained a significant negativity for the Compatible condition (t(39) = -6.107, p < .001) and a significant positivity for the Incompatible condition (t(39) = 3.605, p < .001).
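These one-sample tests are a single function call each in scipy. Here's a sketch on simulated scores (again, the amplitudes are made up, so the exact t and p values won't match the ones reported above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up difference-wave mean amplitudes for 40 participants
compatible = rng.normal(-1.0, 1.0, 40)     # expected negativity
incompatible = rng.normal(0.6, 1.2, 40)    # expected positivity

# One-sample t tests against zero, one per condition
t_c, p_c = stats.ttest_1samp(compatible, 0.0)
t_i, p_i = stats.ttest_1samp(incompatible, 0.0)

print(f"Compatible:   t(39) = {t_c:.3f}, p = {p_c:.3g}")
print(f"Incompatible: t(39) = {t_i:.3f}, p = {p_i:.3g}")
```

Each test asks whether the contralateral-minus-ipsilateral amplitude differs reliably from zero in that condition, which is exactly the question posed above.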
Isn’t this approach a lot simpler and more direct than a three-way ANOVA followed by a bunch of contrasts? We really have three primary hypotheses, and with this approach we have one simple test for each of them.