9.3: Exercise- Evaluating the Impact of Artifact Correction

    One of our goals for artifact rejection and correction is to reduce noise that would otherwise decrease the data quality in our averaged ERPs. Another goal is to minimize the confounds that happen when the artifacts differ across groups or conditions. To evaluate these issues, we need to calculate the SME values and the averaged ERP waveforms for the corrected data, compare them with the original data, and compare them with the data after applying artifact rejection. I’ve done all this and provided the ERPsets in the Chapter_9 folder. Go ahead and load these three ERPsets (10_MMN_uncorrected, 10_MMN_rawblinkcorrection, and 10_MMN_blinkrejection).
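If you prefer to load the ERPsets from a script rather than the GUI, a minimal sketch follows. It assumes the ERPsets were saved with a .erp extension inside a folder named Chapter_9; adjust the 'filepath' value to wherever you placed that folder.

```matlab
% Load the three ERPsets (adjust 'filepath' to the location of Chapter_9)
ERP_uncorr = pop_loaderp('filename', '10_MMN_uncorrected.erp', ...
                         'filepath', 'Chapter_9');
ERP_corr   = pop_loaderp('filename', '10_MMN_rawblinkcorrection.erp', ...
                         'filepath', 'Chapter_9');
ERP_rej    = pop_loaderp('filename', '10_MMN_blinkrejection.erp', ...
                         'filepath', 'Chapter_9');
```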

    Creating the ERPsets

If you want to create these ERPsets yourself, you’ll need to add an EventList to the dataset, run BINLISTER (using BDF_MMN.txt), and epoch the data. You’ll need to do this once for the original data (prior to correction) and once for the new dataset in which IC 1 was removed. For the original uncorrected data, you’ll want two versions, one with artifact detection (step-like artifacts, threshold = 50) and one without artifact detection. This will allow you to make three ERPsets: one without correction or rejection, one with correction only, and one with rejection only. Make sure to customize the data quality parameters to add a window of 125-225 ms. A script sketch of this pipeline follows.
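Here is a hedged sketch of that pipeline for the rejection-only version, using standard ERPLAB calls. The epoch window, channel list, and moving-window settings are illustrative assumptions (they are not specified above), so check them against your own processing choices. Adding the custom 125-225 ms data-quality window is easiest in the averaging GUI’s data quality options, so it is not scripted here.

```matlab
% Assumes EEG holds the continuous dataset, with ICA weights already computed.
% For the corrected version, first remove IC 1 and skip the pop_artstep call:
%   EEG = pop_subcomp(EEG, 1, 0);   % EEGLAB: subtract IC 1 from the data

EEG = pop_creabasiceventlist(EEG, 'AlphanumericCleaning', 'on', ...
          'BoundaryNumeric', {-99}, 'BoundaryString', {'boundary'}); % add EventList
EEG = pop_binlister(EEG, 'BDF', 'BDF_MMN.txt', 'IndexEL', 1, ...
          'SendEL2', 'EEG', 'Voutput', 'EEG');                       % run BINLISTER
EEG = pop_epochbin(EEG, [-200 800], 'pre');  % epoch; window is an assumption

% Rejection-only version: flag epochs containing step-like artifacts (blinks)
EEG = pop_artstep(EEG, 'Channel', 1:33, 'Threshold', 50, ...
          'Twindow', [-200 798], 'Windowsize', 200, 'Windowstep', 100, ...
          'Flag', 1);

% Average (flagged epochs are excluded) and save the ERPset
ERP = pop_averager(EEG, 'Criterion', 'good', 'ExcludeBoundary', 'on', ...
          'SEM', 'on');
ERP = pop_savemyerp(ERP, 'erpname', '10_MMN_blinkrejection', ...
          'filename', '10_MMN_blinkrejection.erp');
```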

Now plot the ERP waveforms for each ERPset. To make them comparable, uncheck the auto y-scale box and enter -12 12 in the Y range text box before plotting; a scripted equivalent is sketched below. The results for several key channels are summarized in Figure 9.2.
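Scripted, the plotting step looks something like this (the channel count and tick spacings are assumptions; adjust them to your montage):

```matlab
% Plot bins 1 and 2 with a fixed y-axis so the three ERPsets are comparable
ERP = pop_ploterps(ERP, [1 2], 1:33, 'AutoYlim', 'off', ...
          'xscale', [-200 800 -200:200:800], ...
          'yscale', [-12 12 -12:6:12]);
```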

[Figure: Averaged_ERPs_10_MMN.png]
    Figure 9.2. Averaged ERP waveforms from Subject 10 in the ERP CORE MMN experiment without any correction or rejection (A), after correcting for blinks by removing IC 1 (B), and after rejecting trials containing blinks (C). Note that the VEOG-bipolar channel was excluded from artifact correction in (B) and was therefore identical to (A).

    Large differences between standards and deviants can be seen in the Fp2 and VEOG channels when nothing was done to deal with the blinks (Figure 9.2.A). You can see a polarity reversal of the deviance effect between the Fp2 and VEOG-lower channels, consistent with blink activity. That was also true for Subject 1 in the previous chapter (see Figure 7.2.A), although the specific pattern was different. The finding of different blink-related activity for standards and deviants indicates that blinks are a confound that we need to eliminate.

The blink-related voltage deflections were eliminated by both artifact correction and artifact rejection. Note, however, that the VEOG-bipolar channel was excluded from correction, so the large deflections remain in this channel. Following both correction and rejection, the voltage is more negative for deviants than for standards at FCz in the 125-225 ms time range, which is our primary measure of the MMN. This effect was larger when no rejection or correction was performed, so part of the apparent effect may have been coming from the blink activity that is visible during this time range in the Fp2 and VEOG electrodes. This provides even more evidence that we need to deal with the blinks to avoid confounding our measure of brain activity with blink activity (or vertical eye movements).

    When artifact correction was performed, the difference between standards and deviants at Fp2 and VEOG-lower was largely eliminated. However, some difference remained in these channels when artifact rejection was performed. Which of these reflects the true pattern of brain activity? It’s difficult to be certain on the basis of these waveforms. However, given that the MMN appears to be generated primarily in the supratemporal plane (Näätänen & Kreegipuu, 2012), it should be substantially larger at FCz than at Fp2, and it should not invert in polarity between Fp2 and VEOG-lower (when referenced to P9/P10). I therefore suspect that some ocular activity escaped rejection (most likely vertical eye movements, which may fail to reach the 50 µV threshold for rejection that I used for detecting blinks). As a result, it seems likely that correction brought us “closer to the truth” than rejection in this particular case. And that’s our ultimate goal!

It’s also important to assess the impact of correction and rejection on data quality. For each of the three ERPsets, you should display the data quality measures in a table. Let’s focus on the aSME values for FCz from 125-225 ms. For both the deviants (Bin 1) and the standards (Bin 2), the aSME was worst (highest) for the data without correction or rejection (Bin 1 = 0.9774, Bin 2 = 0.5688), was improved (reduced) by rejection (0.8338, 0.5360), and was improved even more by correction (0.7802, 0.4523). This pattern makes sense because both rejection and correction minimize the uncontrolled variation produced by the blinks, but rejection reduces the number of trials whereas correction does not.
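If you’d rather pull these numbers out programmatically than read them from the table, the aSME values live in the ERP.dataquality structure. The sketch below assumes the field names used by recent ERPLAB versions and that the 125-225 ms custom window was added during averaging; verify the channel and window indices against your own ERPset.

```matlab
% Find the aSME entry in the data quality structure
dq = ERP.dataquality(strcmp({ERP.dataquality.type}, 'aSME'));

% Locate the FCz channel and the 125-225 ms window (dq.times is assumed
% to hold one [start stop] row per time window)
fcz = find(strcmp({ERP.chanlocs.labels}, 'FCz'));
win = find(dq.times(:,1) == 125 & dq.times(:,2) == 225);

asme_deviants  = dq.data(fcz, win, 1)   % Bin 1 (deviants)
asme_standards = dq.data(fcz, win, 2)   % Bin 2 (standards)
```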

    From these results, correction seems to be the better method for this particular participant in terms of both minimizing ocular confounds and maximizing data quality. That fits with my experience: When implemented correctly, ICA-based artifact correction tends to be better than rejection for dealing with blinks. And sometimes the difference is quite large, especially when a large number of trials would need to be rejected.

    However, we didn’t really implement correction very well in this example. When ICA is working properly, the scalp maps of most of the ICs should look like nice gradual gradients with a unipolar pattern (a single positive or negative focus, like IC 1 in Screenshot 9.2) or a dipolar pattern (opposing positive and negative focus, like IC 7 in Screenshot 9.2). Scalp distributions that cover the entire head (like IC 2) or are complex and irregular (like ICs 19, 26, and 30) are a problem. They don’t resemble the topography we’d expect for brain activity or common artifacts, and they’re a sign that multiple sources are being mixed together rather than being separated. A few such maps are okay, especially in the latter ICs that don’t account for much variance. But you really don’t want to see a map like IC 2 in the top half of the ICs. In a later exercise, we’ll see how to improve the ICA decomposition (and make it faster as well). First, however, we need to discuss how ICA works.



    This page titled 9.3: Exercise- Evaluating the Impact of Artifact Correction is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Steven J Luck directly on the LibreTexts platform.