11.11: Exercise: Looping Through Data from Multiple Participants
In this exercise, we'll see how loops are used to load the datasets from a set of participants. We'll add some processing steps that make the script more useful in the next exercise, but I wanted to keep things simple for now. To get started, quit EEGLAB, close any open scripts, type clear all, and open Script4.m. But don't launch EEGLAB; we'll have the script do that!
Go ahead and run Script4.m to see what it does. It should launch EEGLAB, load the datasets for Subjects 1-10 (except Subject 5), and refresh EEGLAB to make the datasets available in the Datasets menu.
Now let’s look at the script and see how it works. The first line of code launches EEGLAB, which creates several variables that we will find useful (e.g., EEG and ALLEEG). The next line of code creates the DIR variable, as in the previous scripts, which holds the location of the script (and should be the Chapter_11 folder). Then the script creates a new variable named Data_DIR, which appends '/N170_Data/' onto the DIR variable. This gives us a path to the folder containing the single-participant data folders.
The next step is to define a variable named Dataset_filename, which has a value of '_N170.set'. We’ll eventually combine this variable with the subject ID to get the entire filename for a given participant (e.g., 1_N170.set).
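The setup steps described above might look something like the following sketch. The variable names EEG, ALLEEG, DIR, Data_DIR, and Dataset_filename come from the text, but the exact way DIR is obtained here (via mfilename and fileparts) is an assumption about how the actual script does it:

```matlab
% Launch EEGLAB from within the script; this creates the EEG, ALLEEG,
% CURRENTSET, and ALLCOM variables in the workspace
[ALLEEG, EEG, CURRENTSET, ALLCOM] = eeglab;

% DIR holds the location of this script (the Chapter_11 folder)
% (one common way to get it; the book's script may do this differently)
DIR = fileparts(mfilename('fullpath'));

% Data_DIR points to the folder containing the single-participant folders
Data_DIR = [DIR '/N170_Data/'];

% Base dataset name; combined with a subject ID to give, e.g., 1_N170.set
Dataset_filename = '_N170.set';
```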
Then we define variables for the list of subjects and the number of subjects, just as in the previous example. Note that these steps embody the principle that all values used by a script should be defined as variables at the top of the script. It’s a little extra up-front work to do this, but it dramatically reduces the likelihood of bugs later (especially when you take a previous script and modify it for a new purpose).
The next step is to loop through the subjects. The first part of this is just like what we did in the previous script, including setting ID to be a string with the current subject’s ID. Then the script creates a variable named Subject_DIR, which specifies the folder that holds the data for the subject currently being processed by the loop (e.g., …/Chapter_11/N170_Data/1/ for the first subject). We do this by concatenating the Data_DIR variable with the ID variable and then a / character. We also create a variable named Subject_filename by concatenating the ID variable with the Dataset_filename variable. This gives us a value of 1_N170.set for the first subject.
We then load the dataset, using Subject_filename as the filename and Subject_DIR as the path. The dataset is stored in the EEG variable, and our last step in the body of the loop is to add this dataset to the ALLEEG variable using the eeg_store routine. The zero we specify as the last parameter for this routine tells it to add the new dataset to the end of ALLEEG.
After the loop finishes, eeglab redraw is called to update the EEGLAB GUI.
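Putting the loop together, the structure described above can be sketched as follows. The routines pop_loadset, eeg_store, and eeglab redraw are standard EEGLAB calls, but the names SUB, num_subjects, and the loop counter subject are assumptions here (the text doesn't specify the exact variable names for the subject list and count):

```matlab
% List of subject IDs (Subjects 1-10, with Subject 5 excluded)
SUB = [1 2 3 4 6 7 8 9 10];
num_subjects = length(SUB);

for subject = 1:num_subjects
    % Convert the numeric ID into a string for building paths and filenames
    ID = num2str(SUB(subject));

    % Folder holding this subject's data, e.g., .../N170_Data/1/
    Subject_DIR = [Data_DIR ID '/'];

    % Full dataset filename, e.g., 1_N170.set
    Subject_filename = [ID Dataset_filename];

    % Load the dataset into the EEG variable
    EEG = pop_loadset('filename', Subject_filename, 'filepath', Subject_DIR);

    % Add the dataset to the end of ALLEEG (the 0 means "store as new")
    [ALLEEG, EEG, CURRENTSET] = eeg_store(ALLEEG, EEG, 0);
end

% Refresh the GUI so the datasets appear in the Datasets menu
eeglab redraw;
```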
There are actually 40 participants in this experiment, each with a dataset. This script is a much faster way of loading these 40 datasets than using the GUI to separately load each one. Because all the key values are specified as variables at the top of the script, you can easily find them and modify them so that you can use the same script with another experiment, assuming that the data are organized in the same way on your computer. You’d just need to modify the list of subject IDs (the SUB variable), the name of the folder holding the data (Data_DIR), and the base dataset name (Dataset_filename). This will be much faster and easier if you’re consistent in how you organize the data for each experiment (see the text box below).
Consistency
There is a famous line from Ralph Waldo Emerson that is frequently misquoted as “Consistency is the hobgoblin of little minds.” People sometimes use this incorrect version of the quote to belittle people for being consistent. However, the actual quote is “A foolish consistency is the hobgoblin of little minds” (Emerson, 1841 p. 14; my emphasis). It’s not the least bit foolish to be consistent about your data organization, your filenames, your variable names, etc. You will save yourself huge amounts of time and grief by developing a good organizational strategy early in your career and then sticking to it (but with thoughtful changes when necessary).