
11.18: Exercise- Scoring Amplitudes and Latencies and Performing Statistical Analyses


    This exercise demonstrates the final stages of processing, in which we obtain amplitude and/or latency scores from each participant and perform statistical analyses. There are many ways we could analyze the N170 data, but here we will look at three simple approaches.

    In the first approach, we’ll measure the mean amplitude in the N170 latency range (110-150 ms) for the faces and cars in the left-hemisphere and right-hemisphere electrode clusters. We’ll then perform a 2 × 2 ANOVA with factors of stimulus type (face vs. car) and electrode hemisphere (left vs. right). The N170 is typically larger for faces than for cars, and this effect is usually larger in the right hemisphere than in the left hemisphere. As a result, we would expect a main effect of stimulus type and a stimulus type × electrode hemisphere interaction. You’ll need to perform this statistical analysis using your own statistics package.

    Any differences between the faces and the cars in the first analysis could reflect differences in low-level features (e.g., luminance, spatial frequency) rather than differences between the face and car categories per se. To address this possibility, the experiment also presented phase-scrambled face and car images, which contain the same low-level features as the faces and cars but are unrecognizable as faces or cars. In our second analysis, we’ll make a face-minus-scrambled-face difference wave and a car-minus-scrambled-car difference wave. The ERP activity directly attributable to the low-level features should be subtracted away in these difference waves, and any differences between the two difference waves can be attributed to higher-level features.

    The third analysis will be just like the second analysis, except that it will be performed on the peak latency of the N170 rather than the mean amplitude. Because peaks are easily distorted by high-frequency noise, we’ll apply a stronger low-pass filter to the data prior to measuring the peak latency.

    Important: For the second and third analyses, the scripts will perform a paired t test comparing the two difference waves (only for the right-hemisphere electrode cluster). This uses a Matlab function called ttest, which is a part of the Statistics and Machine Learning Toolbox. You can see if you have that toolbox by typing ver on the Matlab command line. If you don’t have that toolbox, you can just delete (or comment out) those lines of the script.
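    If you’d prefer to check for the toolbox from within a script rather than by scanning the output of ver, here is a minimal sketch, assuming the standard license feature name for this toolbox:

        % Returns true if a Statistics and Machine Learning Toolbox
        % license is available on this installation
        if license('test', 'Statistics_Toolbox')
            fprintf('Statistics and Machine Learning Toolbox is available.\n');
        else
            fprintf('Toolbox not found; comment out the ttest calls.\n');
        end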

    Go ahead and open the script (Step7_scoring.m) and take a look at it. As usual, the script begins by defining a set of variables that will be used later in the script. It also opens a couple of files that we’ll use to store the names of the ERPset files. We’ll send these files to the Measurement Tool so that it knows which ERPsets to use for scoring the ERPs (just like we did with the grand averaging routine in the previous exercise).
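    This file-handling step is ordinary Matlab text I/O. A minimal sketch of the pattern (the variable names here are illustrative, not necessarily the ones used in the script; the filenames come from the steps described below):

        % Open text files that will accumulate one ERPset filename per line
        measurement_fid = fopen('Measurement_files.txt', 'w');
        measurement_15Hz_fid = fopen('Measurement_15Hz_files.txt', 'w');

        % Inside the subject loop, after saving each ERPset:
        % fprintf(measurement_fid, '%s\n', erpset_filename);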

    The main loop loads the ERPsets created by the script in the previous exercise. It then uses ERP Channel Operations (the pop_erpchanoperator routine) to create a left-hemisphere cluster channel and a right-hemisphere cluster channel. This step is simple enough that we pass the equations directly as strings in the script rather than storing them in a file.
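    In ERPLAB’s Channel Operations syntax, a cluster channel is just a new channel defined by an equation over existing channels. Here is a hedged sketch of what such a call looks like; the input channel numbers and labels are placeholders rather than the script’s actual values (only the output channels, 35 and 36, are known from the measurement step described below):

        % Create cluster channels by averaging existing channels
        % (input channel numbers are placeholders for illustration)
        ERP = pop_erpchanoperator(ERP, { ...
            'ch35 = (ch4 + ch6)/2 label Left_Cluster', ...
            'ch36 = (ch5 + ch7)/2 label Right_Cluster'});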

    The next step is to create the difference waves using ERP Bin Operations (the pop_binoperator routine). It sends a file named BinOps_Diff.txt that contains the equations for making the difference waves. The channel and bin operations create updated versions of the ERP variable, and we save this ERPset to the hard drive. We also save the name of the ERPset in a file named Measurement_files.txt.
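    The equation file uses ERPLAB’s Bin Operations syntax. Here is a sketch of what BinOps_Diff.txt might contain, assuming Bins 1-4 hold the faces, cars, scrambled faces, and scrambled cars (check the bin order in your own ERPsets):

        bin5 = bin1 - bin3 label Faces minus Scrambled Faces
        bin6 = bin2 - bin4 label Cars minus Scrambled Cars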

    Then we apply a low-pass filter with a half-amplitude cutoff at 15 Hz and a slope of 48 dB/octave, which helps us measure the peak latency more precisely. The resulting ERPset is saved to the hard drive, and the name of the ERPset is saved in a file named Measurement_15Hz_files.txt.
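    In script form, this filtering step corresponds to ERPLAB’s pop_filterp routine. A minimal sketch, assuming a Butterworth design (an 8th-order Butterworth filter gives the 48 dB/octave roll-off, since each pole contributes 6 dB/octave):

        % Low-pass filter all channels at 15 Hz, 48 dB/octave
        ERP = pop_filterp(ERP, 1:ERP.nchan, 'Cutoff', 15, ...
            'Design', 'butter', 'Filter', 'lowpass', 'Order', 8);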

    After we loop through all the subjects, we close the two files that are used to store the ERPset filenames. Then we start the first analysis stage, in which we obtain the mean amplitude scores from the parent waveforms. This is achieved by calling the pop_geterpvalues routine, which is the script equivalent of the Measurement Tool. We send it the name of the file that holds the names of all the unfiltered ERPsets that we created in the loop so that it knows which ERPsets should be measured. We also send it the start and end times of the measurement window (110 and 150 ms, which are the values recommended in the ERP CORE paper), along with two arrays: one containing a list of the bins that we want to measure (Bins 1-4, which contain the parent waveforms) and one containing a list of the channels that we want to measure (35 and 36, the left- and right-hemisphere cluster channels). There are also some parameters that you should recognize from using the GUI version of the Measurement Tool. Finally, we tell it the name of the text file that it should use for saving the amplitude scores (MeanAmp.txt).
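    Putting those pieces together, the call looks roughly like the following sketch. The window, bins, channels, and output filename come from the description above; the remaining parameter values are illustrative defaults rather than the script’s exact settings:

        % Measure mean amplitude from 110-150 ms in Bins 1-4, channels 35-36,
        % for every ERPset listed in Measurement_files.txt
        pop_geterpvalues('Measurement_files.txt', [110 150], 1:4, [35 36], ...
            'Baseline', 'pre', 'FileFormat', 'wide', 'Measure', 'meanbl', ...
            'Resolution', 3, 'Filename', 'MeanAmp.txt');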

    Go ahead and run the script, and you’ll see that it creates the MeanAmp.txt file, with one line for each subject and one column for each of our 8 measurements (4 bins × 2 channels). Load these data into a statistical package. We’re going to ignore the bins for the scrambled stimuli and perform a 2 × 2 ANOVA with factors of stimulus type (face vs. car) and electrode hemisphere (left vs. right). If you run the analysis, you should get something like the ANOVA table shown in Table 11.3.

    Table 11.3. ANOVA table for the first N170 analysis (from JASP).

    Cases                   Sum of Squares   df   Mean Square        F        p
    ---------------------   --------------   --   -----------   ------   ------
    Hemisphere                       3.228    1         3.228    1.552    0.248
    Residuals                       16.639    8         2.080
    StimType                        43.727    1        43.727   62.393   < .001
    Residuals                        5.607    8         0.701
    Hemisphere × StimType            2.180    1         2.180    3.485    0.099
    Residuals                        5.004    8         0.625

    As predicted, the greater N170 amplitude for the faces than for the cars led to a significant main effect of stimulus type. The effect was somewhat greater in the right-hemisphere cluster than in the left-hemisphere cluster, but the stimulus type × electrode hemisphere interaction did not reach significance. We have only 9 participants, so this null result probably just reflects low statistical power.

    The next part of the script obtains the mean amplitude scores from the difference waves. This time, however, it doesn’t save the scores in a file. Instead, we use the 'SendtoWorkspace', 'on' option to save the scores in a variable named ERP_MEASURES in the Matlab workspace. This variable is a 3-dimensional array with dimensions of bin, channel, and subject. For example, ERP_MEASURES(2, 1, 7) is the score for the second bin, the first channel, and the seventh subject. The script grabs these values and stores them in two one-dimensional arrays, one for the faces-minus-scrambled-faces bin and one for the cars-minus-scrambled-cars bin. The single dimension is subject, so each of these arrays has 9 values.
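    Extracting those per-subject scores is a one-line squeeze per condition. A sketch, assuming the faces-minus-scrambled difference is the first measured bin, the cars-minus-scrambled difference is the second, and a single (right-hemisphere) channel was measured; confirm the indices against your own measurement setup:

        % ERP_MEASURES has dimensions (bin, channel, subject)
        faces_minus_scrambled_meanamp = squeeze(ERP_MEASURES(1, 1, :));
        cars_minus_scrambled_meanamp  = squeeze(ERP_MEASURES(2, 1, :));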

    We then send these two arrays to the ttest function like this:

    [h,p,ci,stats] = ttest(faces_minus_scrambled_meanamp, cars_minus_scrambled_meanamp);

    The function returns four variables: h is a 1 if the effect was significant and 0 otherwise (assuming an alpha of .05); p is the p value from the test; ci is the 95% confidence interval for the difference between the two means; and stats holds the actual t value along with the degrees of freedom. This routine can be used to perform a paired t test or a one-sample t test. The ttest2 function can be used for an independent-samples t test, which you would use to compare two groups of subjects. After the script calls the ttest function, it uses fprintf to print the results in the Command Window. As you can see by looking at your Command Window, there was a significant difference in amplitude between the faces-minus-scrambled-faces and cars-minus-scrambled-cars difference waves.
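    For example, the reporting step might look something like this sketch (tstat and df are standard fields of the stats structure returned by ttest):

        % Report the paired t test in a conventional format
        fprintf('Mean amplitude: t(%d) = %.2f, p = %.4f\n', ...
            stats.df, stats.tstat, p);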

    The last part of the script measures the peak latency of the N170 instead of the mean amplitude. It uses a wider measurement window (which is often needed for latency measures), and it performs the measurements from the more aggressively filtered ERPsets. The script then calls the ttest function and prints the results in the Command Window. The peak latency was significantly earlier for the faces-minus-scrambled-faces waveform than for the cars-minus-scrambled-cars waveform (consistent with Figures 11.1D and 11.1E).
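    The measurement call for this analysis differs mainly in the measurement window and the Measure parameter. A hedged sketch, assuming ERPLAB’s local peak latency measure, a negative-going peak, and the difference-wave bins created earlier (the window, bin numbers, and neighborhood value here are placeholders, not the script’s exact settings):

        % Peak latency of the N170 (negative local peak), measured from the
        % 15 Hz low-pass filtered ERPsets
        pop_geterpvalues('Measurement_15Hz_files.txt', [110 200], [5 6], 36, ...
            'Measure', 'peaklatbl', 'Peakpolarity', 'negative', ...
            'Neighborhood', 3, 'SendtoWorkspace', 'on');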

    This is the last script for this chapter. You’ve now gone through every major step of EEG and ERP processing, all the way from reading in the raw EEG to conducting a statistical analysis. Congratulations!

    But remember, you should go back and forth between scripts and the GUI rather than relying solely on scripts. For example, you should use the Viewer option in the Measurement Tool to look at the scores alongside each averaged ERP waveform to verify that the measurement process is working properly. However, the script is also useful, because it makes it easier to repeat the processing if you need to make a change somewhere earlier in the pipeline. Also, if you make your scripts and data available when you publish a paper, other researchers can see exactly how you implemented each step of processing rather than relying on the relatively brief and vague description of the processing that is typically provided in a Method section.

    Scripting is a skill that takes a long time to master, and you may initially wonder if it’s worthwhile. It may seem like it’s faster to do everything in the GUI than to spend hours debugging scripts that do the same thing. But in the long run, scripting is incredibly useful, and you will get faster with experience. You’ll still make lots of mistakes—I certainly do!—but you’ll be able to find and fix them much more rapidly once you have more experience.


    This page titled 11.18: Exercise- Scoring Amplitudes and Latencies and Performing Statistical Analyses is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Steven J Luck directly on the LibreTexts platform.
