
11.17: Exercise- Averaging with a Custom aSME Time Window


    The series of scripts in the previous exercise produce a dataset for each participant that is ready for averaging. In the present exercise, we’ll compute the averaged ERPs, including calculating aSME for a custom time window of 110–150 ms (the measurement window for the N170). We’ll then create a grand average across participants and plot it. Finally, we’ll print a summary of the aSME values to the Command Window.

    Go ahead and open the script (Step6_averaging_and_SME.m) and take a look at it. As usual, the script begins by defining a set of variables that will be used later in the script. It also opens a file named ERPset_files.txt. This file will be used when we make the grand average. When we made grand averages in previous chapters, we loaded all the single-participant ERPsets into ALLERP and then told the grand averaging routine which of these ERPsets to include in the grand average. However, it’s sometimes more convenient to work with one ERPset at a time, save the ERPsets as files on the hard drive, and not keep all the ERPsets loaded in memory. In this case, we can send the grand averaging routine a list of the ERPset filenames, which is stored in a text file. This is what ERPset_files.txt is used for in the present script. I could have just loaded all the ERPsets into ALLERP, but I wanted to demonstrate this alternative approach and show you how to open a text file and write to it.
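    If you haven’t written to a text file from a script before, the basic pattern looks something like the sketch below. The folder and participant names here are hypothetical placeholders, not the ones used in the actual script: fopen returns a file identifier, fprintf writes each filename as a line of text, and fclose closes the file when you’re done.

    % Minimal sketch of writing a list of ERPset filenames to a text file.
    % The folder and participant labels below are hypothetical placeholders.
    DIR = pwd;                                    % folder where the ERPsets will be saved
    list_file = fullfile(DIR, 'ERPset_files.txt');
    fid = fopen(list_file, 'w');                  % open for writing; overwrites any existing file
    subject_IDs = {'sub01', 'sub02', 'sub03'};    % hypothetical participant labels
    for s = 1:length(subject_IDs)
        erpset_filename = fullfile(DIR, [subject_IDs{s} '_N170.erp']);
        fprintf(fid, '%s\n', erpset_filename);    % write one full path per line
    end
    fclose(fid);                                  % close the file when finished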

    The main loop in the script begins by loading the dataset that was created by the previous script, which is all ready for averaging. The next few lines define the custom aSME time window. This information is stored in a data structure called a DQ_spec (data quality specification). This structure both defines the time windows prior to obtaining the data quality measures and stores the data quality measures once they’ve been calculated. It’s a little complicated, so ERPLAB provides a routine called make_DQ_spec to create it.

    We use this routine to create a variable named custom_aSME_spec. There are potentially many different types of data quality metrics that can be stored in a DQ_spec variable. The first metric stored in a DQ_spec structure is a measure of the baseline noise. The second is the standard error of the mean at each individual time point. The third is the aSME. It’s possible for you to define additional types, the most common of which is the bSME (bootstrapped SME). Here, we’re going to use aSME, but we’re going to specify a custom time window for the aSME calculation so that we get an aSME value that corresponds to our N170 measurement window (110–150 ms). We do this by finding out how many time windows have already been defined by default and then adding a new one.
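    To make this concrete, the lines that define the custom window look roughly like the sketch below. This is a hedged reconstruction rather than a copy of the script: I’m assuming that make_DQ_spec(EEG) returns a struct array in which the third element corresponds to aSME and stores its time windows as rows of a field named times; check the actual Step6 script for the exact field names.

    % Hedged sketch: adding a custom 110-150 ms aSME window to the DQ spec.
    % Assumes the aSME entry is the third element of the struct array and
    % that its windows are stored in a .times field (verify in the script).
    custom_aSME_spec = make_DQ_spec(EEG);                  % default data quality specification
    aSME_index = 3;                                        % aSME is the third metric in the spec
    n_default_windows = size(custom_aSME_spec(aSME_index).times, 1);
    custom_entry_index = n_default_windows + 1;            % our window goes after the defaults
    custom_aSME_spec(aSME_index).times(custom_entry_index, :) = [110 150];  % N170 window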

    The next step is to call the averaging routine (pop_averager), sending it the custom_aSME_spec variable that we just created so that it will compute the aSME for our custom time window (along with the default time windows). This routine returns an ERPset that we store in the ERP variable. We then create a name for the ERPset and save the ERPset to a file on the hard drive. We also save the name of this file (including the path) in the ERPset_files.txt file so that we have it when we make the grand average later.
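    In sketch form, the averaging and saving steps look something like the code below. The pop_averager parameter names for passing the data quality specification ('DQ_flag', 'DQ_spec') are assumptions on my part and should be verified against the actual script and the ERPLAB documentation; DIR, subject_ID, and fid stand in for variables defined earlier in the script.

    % Hedged sketch of averaging with the custom DQ spec and saving the ERPset.
    % The 'DQ_flag' and 'DQ_spec' parameter names are assumptions; DIR,
    % subject_ID, and fid stand in for variables defined earlier in the script.
    ERP = pop_averager(EEG, 'Criterion', 'good', ...       % exclude epochs flagged for artifacts
        'ExcludeBoundary', 'on', 'SEM', 'on', ...
        'DQ_flag', 1, 'DQ_spec', custom_aSME_spec);        % data quality with our custom window
    erpset_name = [subject_ID '_N170'];                     % hypothetical ERPset name
    ERP = pop_savemyerp(ERP, 'erpname', erpset_name, ...
        'filename', [erpset_name '.erp'], 'filepath', DIR); % save the ERPset to disk
    fprintf(fid, '%s\n', fullfile(DIR, [erpset_name '.erp']));  % record filename for the grand average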

    The ERP variable includes a field named ERP.dataquality that stores the data quality metrics that were calculated during the averaging process. The script shows how you can grab the aSME values from ERP.dataquality for the channel we will ultimately use to score the N170 amplitude (PO8, Channel 27). For each subject, we get an array of four aSME values for this channel, one for each of the four bins. We store this in a two-dimensional array named aSME_custom_values, which has one dimension for subjects and another dimension for bins. The aSME data inside ERP.dataquality are stored in a 3D array with dimensions of channels, time ranges, and bins, and we use a Matlab function called squeeze to convert this 3D array into the 1D array of values for each bin for the current subject:

    aSME_custom_values(subject,:) = squeeze(ERP.dataquality(where_aSME).data(measurement_channel,custom_entry_index,:));
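    If squeeze is new to you, the small self-contained example below (using made-up numbers, not real data) shows what it does: indexing a single channel and a single time range from the channels × time ranges × bins array leaves singleton dimensions of size 1 in place, and squeeze removes them so that you’re left with a simple vector of one value per bin.

    % Self-contained illustration of squeeze on a 3D array (made-up numbers).
    fake_aSME = rand(33, 6, 4);      % 33 channels x 6 time ranges x 4 bins
    slice = fake_aSME(27, 5, :);     % one channel, one time range, all bins
    size(slice)                      % 1 x 1 x 4 -- singleton dimensions remain
    bin_values = squeeze(slice);     % 4 x 1 -- one value per bin
    size(bin_values)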

    When you’re first learning to write scripts in Matlab, you’ll probably find that you frequently get confused about how arrays work and when you need to use a function like squeeze to obtain the desired results. I still sometimes get confused, and I often make mistakes when writing code that operates on complicated arrays. But I’ve learned how to interpret Matlab’s error messages, and I often search the Internet for solutions. I also recommend getting a good Matlab book and spending some time learning the logic behind how Matlab operates on arrays. As I mentioned earlier, my lab uses a book called Matlab for Behavioral Scientists (Rosenbaum et al., 2014).

    After the main loop finishes, the script makes a grand average using the filenames stored in ERPset_files.txt. It then plots the grand average so that you can see how to script the plotting routine (pop_ploterps).
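    These two steps look roughly like the sketch below. I’m assuming here that pop_gaverager accepts the name of a text file containing the list of ERPset filenames as its first input; the exact parameter names should be checked against the actual script and the ERPLAB documentation.

    % Hedged sketch of grand averaging from a list of ERPset files and plotting.
    % Assumes pop_gaverager accepts the text-file name as its first argument.
    ERP = pop_gaverager('ERPset_files.txt', 'SEM', 'on', 'ExcludeNullBin', 'on');
    pop_ploterps(ERP, 1:ERP.nbin, 1:ERP.nchan);   % plot all bins at all channels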

    Finally, the script prints out the custom aSME values that we saved in the variable named aSME_custom_values for each participant. It also prints the mean across participants for each bin, along with the RMS (root mean square). The RMS is like the mean, but it does a better job of capturing how the noise level from the individual participants will impact the variability in N170 amplitude scores across participants, the effect size, and the statistical power (see Luck et al., 2021). I recommend taking a look at the aSME values for each participant and then looking at their EEG and ERPs to see if you can understand why some participants have worse (larger) aSME values than others.
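    The RMS is computed by squaring each participant’s aSME value, averaging those squared values across participants within each bin, and taking the square root. A minimal sketch of this summary, assuming the subjects × bins array described above:

    % Minimal sketch of the end-of-script summary, assuming aSME_custom_values
    % is a subjects x bins array of aSME values for the PO8 channel.
    mean_aSME = mean(aSME_custom_values, 1);           % mean across participants for each bin
    rms_aSME  = sqrt(mean(aSME_custom_values.^2, 1));  % root mean square across participants
    for bin = 1:size(aSME_custom_values, 2)
        fprintf('Bin %d: mean aSME = %6.3f, RMS aSME = %6.3f\n', ...
            bin, mean_aSME(bin), rms_aSME(bin));
    end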

    The aSME quantifies the data quality for the mean voltage within a given time window, so it’s most directly useful when you’re scoring amplitude using the mean voltage (which is how we scored N170 amplitude in the ERP CORE paper). If you’re using some other scoring method (e.g., peak amplitude, fractional peak latency), you need to use a more complicated method called bootstrapping to obtain the SME values. You’ll also need to use bootstrapping if you’ll be obtaining scores from difference waves or if you apply any other kind of processing to the ERP data after averaging but before scoring (e.g., filtering or channel operations). ERPLAB currently requires scripting to compute bootstrapped SME values, and one of the example scripts at the end of Chapter 10 demonstrates how to do this. Chapter 10 also contains a script that demonstrates how to obtain behavioral data.


    This page titled 11.17: Exercise- Averaging with a Custom aSME Time Window is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Steven J Luck directly on the LibreTexts platform.