
2.10: Exercise- Data Quality


    Now that you’ve seen how noise can distort averaged ERP waveforms, let’s look at how we can quantify the noise level (or, to put it in more positive terms, the data quality). When you averaged the data, recall that there was a Data Quality Quantification section in the averaging GUI, and you left it set to On - default parameters (see Screenshot 2.15). Here, we’ll look at the data quality metrics that were created by default.

    Recall that the averaging routine printed some text to the command window, ending with the following:

    Data Quality measure of aSME
    Median value of 1.0008 at elec FP2, and time-window 0:100ms, on bin 1, Prime word, related to subsequent target word
Min value of 0.16593 at elec Oz, and time-window -200:-100ms, on bin 1, Prime word, related to subsequent target word
    Max value of 3.5935 at elec F4, and time-window 600:700ms, on bin 3, Target word, related to previous prime, followed by correct response

This is a summary of a large set of data quality measures that are computed by default when you average, using a metric of data quality called the standardized measurement error (SME; the specific version used here is the analytic SME, or aSME). You can read a full description of the SME metric in Luck et al. (2021), and you can also watch a short video overview and view a corresponding infographic. Briefly, the SME values provided by default give you the standard error of measurement for the mean voltage within a set of time ranges (e.g., 0-100 ms, 100-200 ms, etc.). The larger the SME value, the less precisely the voltage in that part of the waveform reflects the true voltage for that participant (i.e., the voltage that would be obtained if there were no noise or if we could average over an infinite number of epochs). The averaging routine computes a separate aSME value for each combination of bin, channel, and time period.
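As described in Luck et al. (2021), the aSME for a mean-amplitude score is just the standard error of the mean of the single-trial scores: measure the mean voltage within the time window on each trial, then divide the standard deviation of those scores by the square root of the number of trials. ERPLAB itself is implemented in MATLAB; the following is only a minimal Python sketch, with a hypothetical function name, to make the computation concrete:

```python
import numpy as np

def asme(epochs, times, t_min, t_max):
    """Analytic SME for the mean voltage in [t_min, t_max).

    epochs : (n_trials, n_samples) array of single-trial voltages,
             for one channel and one bin
    times  : (n_samples,) array of sample times in ms
    """
    window = (times >= t_min) & (times < t_max)
    # Score each trial: mean voltage within the time window
    scores = epochs[:, window].mean(axis=1)
    # aSME = standard error of the mean of the single-trial scores
    return scores.std(ddof=1) / np.sqrt(len(scores))
```

Calling such a function once per 100-ms window, per channel, per bin would produce exactly the grid of values summarized in the command-window output.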

    The summary printed in the command window shows the best (minimum), worst (maximum), and median aSME values. You can see that the best aSME value was 0.16593 µV at the Oz electrode site in Bin 1 during the first portion of the baseline period (-200 to -100 ms). The worst aSME value was 3.5935 µV at the F4 electrode site in Bin 3 near the end of the epoch (from 600 to 700 ms). The median was 1.0008 µV, so the worst value was more than three times as large as the “typical” value.

    Do you remember that we saw big differences between primes that were followed by related versus unrelated words starting at time zero in the F4 channel, which logically must have been noise? It’s no accident that the aSME value was largest for the same channel.

    Let’s take a look at all the aSME values that were computed when the averaged ERPs were created. Make sure that the ERPset from the previous exercise is still loaded (by checking the ERPsets menu). Now select EEGLAB > ERPLAB > Data Quality options > Show Data Quality measures in a table. You’ll see a table of aSME values like that shown in Screenshot 2.19. Each row is a different channel and each column is a different 100-ms time range. You can select which bin is shown with a popup menu near the top of the window. We’re currently looking at Bin 1.

    Screenshot 2.19


    A huge amount of information is shown in this table. To help you find the cases with the worst data quality (the largest values), select the Color heatmap option. Now the cells of the table are colored according to the magnitude of the aSME values.

    Notice that the values tend to get larger during later time windows. This is because the baseline correction procedure brings all the single-trial EEG epochs toward 0 µV during the prestimulus period, and noise will cause the voltage to drift away from this baseline over time. The more random variation there is over trials, the harder it is to precisely measure the amplitude, so this drift causes larger aSME values.
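To see why baseline correction produces this pattern, here is a toy simulation (not from the book; it assumes purely drift-like, random-walk noise and made-up recording parameters). After each trial is baseline-corrected against its own prestimulus period, the trial-to-trial variability, and therefore the aSME, grows for windows farther from the baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500          # hypothetical: 1000-ms epochs at 500 Hz
times = np.arange(n_samples) * 2 - 200  # sample times from -200 to +798 ms

# Pure noise with slow drift, modeled as a random walk on each trial
noise = rng.normal(0.0, 0.5, (n_trials, n_samples)).cumsum(axis=1)

# Baseline correction: subtract each trial's mean prestimulus voltage
epochs = noise - noise[:, times < 0].mean(axis=1, keepdims=True)

def window_sem(t_min, t_max):
    """aSME-style score: SEM of the single-trial mean voltage in a window."""
    scores = epochs[:, (times >= t_min) & (times < t_max)].mean(axis=1)
    return scores.std(ddof=1) / np.sqrt(len(scores))

early, late = window_sem(0, 100), window_sem(600, 700)
# late > early: the drift has had more time to wander away from the baseline
```

In this simulation the value for the 600-700 ms window is larger than the value for the 0-100 ms window, mirroring the increase across columns that you see in the data quality table.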

You should also notice that the values tend to be largest in the F4 channel and second largest in the Pz channel. These are the same channels where we saw the greatest noise-related differences between Bins 1 and 2 in the averaged ERP waveforms (Screenshot 2.18). Look at the other three bins as well (using the Selected BIN popup menu). These channels are also noisy in those bins, indicating that they were just generally noisy. If you go back and look at the EEG epochs, it's not obvious that these channels are noisier than the others. That's the value of having a quantitative metric of data quality: it makes it possible to objectively determine which channels (or which participants) have unusually noisy data. In later chapters, we'll discuss what to do when a channel or participant is particularly noisy.

    The data quality metrics are computed by default when you create averaged ERPs. Often, however, you want to assess the data quality prior to averaging (e.g., to determine whether a specific artifact rejection procedure will increase or decrease your data quality). To accomplish this, you select the appropriate EEG dataset (which must be epoched) and then select EEGLAB > ERPLAB > Compute data quality metrics (without averaging).


    This page titled 2.10: Exercise- Data Quality is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Steven J Luck directly on the LibreTexts platform.
