
7.5: Background- Interpolating Bad Channels

    In the video demonstration, the C5 channel seemed quite problematic, and I mentioned that we might want to interpolate that channel. The F8 channel also had a lot of high-frequency noise, so we should consider interpolating it. In this exercise, we’ll discuss how to decide whether a channel should be interpolated and then look at the actual interpolation process.

    Interpolation is performed separately at each time point, using the voltage values from the other channels at that time point. There are several algorithms that can be used to estimate a reasonable value for one channel on the basis of the other channels, and they all work reasonably well. I like to use the spherical algorithm provided by EEGLAB. It treats the electrodes as if they’re located on a spherical head, and it fits a polynomial function to the distribution of voltages, leaving out the to-be-interpolated channel. Then, the value of the polynomial function at the to-be-interpolated location is used as the estimated voltage at that location. This process is repeated independently at each time sample. You get a pretty reasonable waveform at the interpolated location, but keep in mind that it’s just an imperfect estimate of the true waveform.
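
    If you decide that a channel should be interpolated, the scripting version of this step in EEGLAB is short. The sketch below is just an illustration rather than the exact code used in the exercises: it assumes a dataset named EEG with channel location information already loaded and interpolates the two channels discussed above with the spherical method.

        % Minimal sketch: spherical interpolation of specific channels in EEGLAB.
        % Assumes EEG.chanlocs already contains channel location information.
        bad_labels = {'C5', 'F8'};                                    % channels we decided to interpolate
        bad_idx = find(ismember({EEG.chanlocs.labels}, bad_labels));  % convert labels to channel indices
        EEG = pop_interp(EEG, bad_idx, 'spherical');                  % spherical interpolation via pop_interp
        EEG = eeg_checkset(EEG);                                      % confirm that the dataset is still consistent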

    The decision about whether to interpolate a given channel ultimately comes down to our fundamental goal of accurately answering the scientific question that the experiment was designed to address. I’ll provide some general guidance, but you need to think about whether interpolation serves that goal. Does it get you closer to the truth or farther from the truth? Imagine that you’ve submitted a manuscript to a journal, and one of the reviewers visits your lab to see how you actually processed your data (which would never happen in reality, of course). Would you be happy to explain to this reviewer how you decided whether to interpolate? Or would you feel a little embarrassed?

    Let’s start with an extreme case. Imagine that an electrode was broken and the signal from that electrode was pure noise. And imagine that the broken electrode wasn’t being used in any of the key analyses. For example, we measured MMN amplitude at FCz for the main analyses in the ERP CORE paper (Kappenman et al., 2021), so the C5 and F8 electrodes didn’t play a major role in our analyses (although they did make a minor contribution when we plotted scalp maps). One way to deal with the broken electrode would be to completely discard this participant’s data. This would reduce the sample size in our analyses, which tends to decrease our ability to draw accurate conclusions about the population, so that’s not a great option. The other main option would be to interpolate the data from the broken electrode. Given that the broken electrode doesn’t contribute to the main analyses, interpolating it and thereby keeping the participant in the sample seems like it serves the truth much more than excluding the participant from all analyses.

    Now let’s consider an extreme case in the opposite direction. Imagine that you’re analyzing data collected in another lab with really poor recording methods (or from a really challenging participant), and the data from FCz and the 8 closest electrodes look terrible. Not only is the key channel for the analysis bad, but so are the surrounding channels, which will make it difficult to interpolate accurately. In this case, an interpolated FCz channel won’t add real information, so including the participant in the final analyses seems like it will do more harm than good.

    I posted a message on Twitter asking how researchers decide whether any channels should be interpolated. Some people indicated that they do it informally by visual inspection. Bad channels are relatively rare in my lab, so we also use visual inspection. Other researchers indicated that they used an automated method, such as the Clean Rawdata plugin in EEGLAB or the PREP pipeline (Bigdely-Shamlo et al., 2015). Many researchers who said that they use an automated method indicated that the algorithm fails often enough that they visually confirm the results. I’ve never tried the automated methods, but the algorithms seem reasonable (especially the PREP pipeline), and I would expect them to work well when verified by visual inspection. However, they mainly rely on statistical criteria, such as how well a given channel correlates with other channels (with a low correlation suggesting a problem given that true brain signals spread broadly across the scalp and therefore produce high correlations between nearby electrode sites).
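
    To see the basic idea behind such a correlation criterion, the sketch below flags channels whose best correlation with any other channel is low. This is only a simplified illustration, not the actual algorithm used by Clean Rawdata or the PREP pipeline (which operate on filtered, windowed, robustly referenced data), and the 0.8 cutoff is an arbitrary example value; any channels it flags should be confirmed by visual inspection.

        % Simplified illustration of a correlation-based criterion for spotting bad channels.
        % Assumes continuous data in EEG.data (channels x time points).
        R = corrcoef(double(EEG.data)');       % correlation between every pair of channels
        R(logical(eye(size(R)))) = NaN;        % ignore each channel's correlation with itself
        best_corr = max(R, [], 2);             % best correlation with any other channel (NaNs are ignored)
        suspect = find(best_corr < 0.8);       % arbitrary example threshold
        disp({EEG.chanlocs(suspect).labels});  % candidate bad channels to verify by eye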

    Criteria like these will certainly work for detecting things like loose electrodes. However, if an electrode is properly connected but the signal contains biological noise (e.g., muscle activity), the question is whether the noise decreases your ability to precisely quantify the amplitude or latency value that will be the dependent variable in your statistical analyses (because this is how we reach our fundamental goal of accurately answering the scientific question that the experiment is designed to address). The Standardized Measurement Error (SME) is ideally suited for this purpose because it quantifies the extent to which the noise in the data produces error in the specific amplitude or latency value you will be obtaining from the averaged ERPs. With this in mind, let’s consider the actual cases of the C5 and F8 electrodes from Subject #1 in the ERP CORE MMN experiment.
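
    As a concrete illustration of what the SME captures, the sketch below computes the analytic SME for a mean-amplitude score from a set of single-trial epochs (in practice, ERPLAB can compute SME values for you when you make averaged ERPs). The variable names, the channel index, and the measurement window are placeholders for this example, not values taken from the MMN exercise.

        % Rough sketch of the analytic SME for a mean-amplitude measure: the standard
        % error of the mean of the single-trial amplitude scores.
        % Assumes epochs is a channels x time-points x trials matrix for one condition,
        % times is the matching time vector in ms, and chan is the index of the channel
        % being evaluated (e.g., C5).
        win = times >= 125 & times <= 225;                     % example measurement window
        trial_means = squeeze(mean(epochs(chan, win, :), 2));  % one mean amplitude per trial
        sme = std(trial_means) / sqrt(numel(trial_means));     % smaller SME = less measurement error
        fprintf('SME for channel %d: %.3f microvolts\n', chan, sme);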


    This page titled 7.5: Background- Interpolating Bad Channels is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Steven J Luck directly on the LibreTexts platform.
