
5.4: Adding Time to the Design of Naturalistic Studies


    What do you mean by “adding time” to a design?

    Remember the data cube proposed by Cattell (pictured in Figure 17.3)? When we say we are “adding time” to a design, we mean that we are adding “occasions” or “times of measurement” or “repeated measures” to a design.

    Like in a longitudinal study?

    Yes, that is one excellent way to add time, but we are referring to all kinds of studies that add a second, third, or fourth measurement wave, whether or not those waves are far enough apart to satisfy the conventional understanding of “longitudinal” (which typically implies that enough time has passed for some kind of development to have occurred). Maybe the most general description is “time series” because the design includes a series of different times of measurement.

    What are the advantages of adding time? Let’s say that we add just one more time of measurement, so we have two waves in our study. The first advantage is that now we have a way to check the first condition of causality, namely, that causes precede their effects. So we can check out a time-ordered correlation. Continuing with our example from Figure 23.1, with time in the design, we can look at whether the potential cause at Time 1 predicts the potential outcome at Time 2. This is depicted in Figure 23.2. So we are excited to be able to use the word “predict” correctly to describe our correlation. However, this is still just a zero-order bivariate correlation, so it does not allow a causal inference—it still has all the problems with so-called “third variables,” the term we use to stand in for all the alternative causal explanations that could underlie the connections between our predictor and its possible consequence.

    Insert Figure 23.2
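
    To make the distinction concrete, here is a minimal sketch (not from the original text) of a concurrent versus a time-ordered correlation, written in Python with simulated data; the variable names (involve_t1, engage_t1, engage_t2) and effect sizes are hypothetical stand-ins for teacher involvement and student engagement.

```python
# Minimal sketch: concurrent vs. time-ordered correlation (simulated data).
# All variable names and effect sizes are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 200
involve_t1 = rng.normal(size=n)                        # teacher involvement, Time 1
engage_t1 = 0.5 * involve_t1 + rng.normal(size=n)      # student engagement, Time 1
engage_t2 = 0.6 * engage_t1 + 0.3 * involve_t1 + rng.normal(size=n)  # engagement, Time 2

# Concurrent (zero-order) correlation: both variables measured at Time 1.
r_concurrent = np.corrcoef(involve_t1, engage_t1)[0, 1]

# Time-ordered correlation: the potential cause at Time 1 with the potential
# outcome at Time 2 -- "predicts" in the temporal sense, but still bivariate,
# so it does not license a causal inference.
r_time_ordered = np.corrcoef(involve_t1, engage_t2)[0, 1]

print(f"concurrent r = {r_concurrent:.2f}, time-ordered r = {r_time_ordered:.2f}")
```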

    But we can use our two time points to start looking at how people change from Time 1 to Time 2 on our consequence and to see whether those changes can be predicted from where each person stood on the potential cause at Time 1. This is also pictured in Figure 23.2, and has been referred to as a “launch” model because it tries to examine whether an individual’s initial level on an antecedent can predict that individual’s (mini-)trajectory on the target consequence (see Figure 23.3, from Skinner, Zimmer-Gembeck, & Connell, 1998). The term “launch” is used because such a model assumes that the initial level of the potential causal variable may act like a slingshot or catapult or rocket launcher, setting the direction and angle of change in the object that is hurled, that is, the target outcome (in our analogies, the stone, or ball, or rocket).
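
    As an illustration only (not the authors’ actual analysis), one common way to operationalize such a “launch” model is to regress the Time-2 outcome on its own Time-1 level plus the Time-1 antecedent, so that the antecedent’s coefficient reflects prediction of change; the data and column names below are simulated and hypothetical.

```python
# Sketch of a two-wave "launch"-type regression: does the antecedent at
# Time 1 predict the outcome at Time 2, controlling for the outcome at
# Time 1?  Simulated data; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
involve_t1 = rng.normal(size=n)
engage_t1 = 0.5 * involve_t1 + rng.normal(size=n)
engage_t2 = 0.6 * engage_t1 + 0.3 * involve_t1 + rng.normal(size=n)
df = pd.DataFrame({"involve_t1": involve_t1,
                   "engage_t1": engage_t1,
                   "engage_t2": engage_t2})

# Because engage_t1 is in the model, the coefficient for involve_t1 tells us
# whether Time-1 involvement predicts *change* in engagement from T1 to T2.
launch = smf.ols("engage_t2 ~ engage_t1 + involve_t1", data=df).fit()
print(launch.params)
```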

    So what do we like about this kind of “launch” design?

    Well, this design is bringing us a little closer to our time machine. We get to look at developmental trajectories as our outcomes (and if we add more time points, they will look more like trajectories), we are looking directly at individual differences in trajectories, and we are looking at predictors of individual differences in those trajectories. So, in our example, we can ask, “Does teacher involvement at the beginning of the school year predict changes in students’ engagement from the beginning to the end of the school year?” And if the empirical answer is “yes” (i.e., the antecedent is a significant predictor of change from Time 1 to Time 2), we can say things like “Students whose teachers were warmer and more involved with them at the beginning of the school year also showed increases in their engagement over the school year, whereas students whose teachers were less involved with them at the beginning of the school year showed corresponding declines in their engagement as the year progressed.” This is a descriptive statement, but it is consistent with a causal hypothesis.

    Any other advantages?

    Yes. We can also, using the same design, look at the “reciprocal” predictions, in that we can take our antecedent variable and examine how it changes from Time 1 to Time 2, and see whether the variable we had been thinking of as a consequence (which we now consider as a possible antecedent) predicts these changes. This sounds more confusing than it is—see Figure 23.3. In our example, we would be asking “Does students’ engagement at the beginning of the year predict changes in how involved their teachers are with them over the year?” And, if the empirical answer is “yes,” we can say things like “Students who were more engaged in fall experienced increasing involvement from their teachers as the year progressed, whereas students who started the school year lower in engagement experienced declines in their teachers’ involvement from fall to spring.” Again, descriptive, but causally interesting.

    One of the most important things about a design with two points of measurement (remember, we just added one more point) is that it allows researchers to begin to pull apart the different directions of effects. A concurrent correlation contains information about both directions of effects, which cannot logically be untangled, but the two analyses we just ran can separate them: the first looks at the feedforward prediction of teacher involvement on changes in subsequent student engagement, whereas the second looks at the feedback prediction of student engagement on changes in subsequent teacher involvement. So the answers to the questions posed by these two sets of analyses could be different—we could get two “yeses,” two “noes,” or one of each. And if we get two “yeses,” we have the possibility of a feedback loop, which feels like we are getting some hints about potential dynamics in the system.
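
    Continuing the simulated example from the launch-model sketch above (and adding a hypothetical Time-2 measure of teacher involvement), the two directions of effects can be examined with a pair of parallel regressions, one feedforward and one feedback; this is only a sketch of the idea, not a full cross-lagged panel model.

```python
# Sketch: both directions of effects with the same two-wave design.
# Assumes df, rng, n, and smf from the launch-model sketch above;
# involve_t2 is a hypothetical, simulated Time-2 measure of involvement.
df["involve_t2"] = (0.5 * df["involve_t1"] + 0.2 * df["engage_t1"]
                    + rng.normal(size=n))

# Feedforward: Time-1 involvement predicting change in engagement.
forward = smf.ols("engage_t2 ~ engage_t1 + involve_t1", data=df).fit()

# Feedback: Time-1 engagement predicting change in involvement.
feedback = smf.ols("involve_t2 ~ involve_t1 + engage_t1", data=df).fit()

# Two "yeses" (both cross-lagged coefficients reliably nonzero) would hint
# at a feedback loop between involvement and engagement.
print(forward.params["involve_t1"], feedback.params["engage_t1"])
```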

    What about all those pesky third variables, those alternative explanations?

    Well, we have good news and bad news about them.

    What is the good news?

    The good news is that we have reduced them somewhat. If you start thinking about the third variables in the concurrent correlation in our illustration, that is, all the factors that are positively correlated with both teacher involvement and student engagement, an enormous number come to mind (e.g., achievement, SES, supportive parents, IQ, a sense of relatedness, and so on). And here is the kicker: these are only the ones we can imagine; there are also unknown confounders. However, when we include intra-individual change over time in our design and analyses, we are using people as their own controls. This means that, out of our potential consequence at Time 2, we are taking out the consequence at Time 1, which by definition contains everything, known and unknown, that led up to the consequence at Time 1 (e.g., achievement, SES, supportive parents, IQ, a sense of relatedness, and so on).

    So, for example, if we think that achievement is a possible alternative causal explanation for the zero-order correlation between teacher involvement at Time 1 and student engagement at Time 2 (meaning that high-performing students are more engaged and teachers pay more attention to them), then when we control for student engagement at Time 1, we take out all of the achievement that was responsible for engagement up to that point, so we have controlled for it as a potential confounder. By controlling for the same outcome at an earlier point in time, we have scraped off all the known and unknown predictors of engagement up until Time 1 that could be a potential confounder, a plausible pre-existing difference, or an alternative causal chain. (We still have all the same measurement problems as before—about practice, reactivity, and developmental equivalence, of course, but the list of possible third variables is considerably shorter.)

    So then what is the bad news?

    The bad news is that the notorious third variables are not completely eliminated. Since we are still essentially looking at a correlation (the correlation between teacher involvement at Time 1 and changes in student engagement from Time 1 to Time 2), we are still on the hunt for possible alternative causes of both. Remember, before we were looking for things that were correlated with both teacher involvement and student engagement; now, with this design, we can narrow down our candidates for third variables to those that are correlated with both teacher involvement and changes in student engagement (see Figure 23.4). An excellent example of the incorporation of such variables in research designs that examine change over time can be found in the study of the effects of children’s peer groups on their engagement (see box).

    Insert Figure 23.4
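
    As a rough sketch of what this narrowing looks like in practice (again continuing the simulated example above, with a hypothetical achievement variable), we can ask whether a candidate third variable is correlated both with the Time-1 predictor and with change in the outcome; only candidates that pass both checks remain plausible confounders for the change analysis.

```python
# Sketch: screening a candidate third variable.  A plausible confounder for
# the change analysis must correlate with BOTH the Time-1 predictor and the
# Time-1-to-Time-2 change in the outcome.  Continues the simulated df, rng,
# n, and np from the sketches above; "achievement" is a hypothetical covariate.
achievement = 0.4 * df["involve_t1"] + rng.normal(size=n)
change_in_engagement = df["engage_t2"] - df["engage_t1"]

r_with_predictor = np.corrcoef(achievement, df["involve_t1"])[0, 1]
r_with_change = np.corrcoef(achievement, change_in_engagement)[0, 1]
print(f"achievement with involvement: {r_with_predictor:.2f}; "
      f"with change in engagement: {r_with_change:.2f}")
```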

    Peer groups and individual development: Selection or group influences?

    The study of the effects of children’s peer groups on their development presents researchers with a series of knotty problems. This area starts with a very well-known state of affairs, called homophily, which means that “birds of a feather flock together.” In other words, kids hang out with other kids who are like them in lots of ways. So we see a high correlation between a child’s characteristics and those of the peers in his or her group. But what does this mean? Do kids select other kids who are like themselves to hang out with? Or do kids become more similar to each other over time?

    If kids are socializing each other, then we have an intervention lever. For example, if kids are smoking and hanging out with other kids who are smoking, we just have to pry them away from these “bad influences” and then they will quit smoking. However, if these are primarily selection effects, then we can pry them away from their smoking friends, but as soon as our backs are turned, they will go out and find new friends who smoke. Because parents and teachers often have the impression that “good” kids go “bad” when they hang out with “bad” peers, the research area was for a long time mired in the idea that homophily was evidence for peer influences.

    By now, however, you can see that homophily could be due to selection or socialization or both. One strategy for teasing these effects apart is the one we have been discussing (for other strategies that also use within-individual change, see Rutter, 2007). Once we have good measures of children on some important characteristic (let’s take student engagement again) and of their peer groups on the same characteristic, we can start with our concurrent bivariate correlation—and land on homophily again, in that engaged students tend to hang out with other engaged students and disaffected students with other disaffected students. If we stop here, we are stuck, but if we add measures of both individuals and peers at a second time, we can look at the extent to which peer group engagement at Time 1 predicts changes in individual engagement from Time 1 to Time 2. If we keep the correlation between individuals and their groups at Time 1 in our model, we are essentially controlling for initial similarity, that is, for selection effects.

    Now we need to think through the third variables that might be correlated with both peer group engagement at Time 1 and with individual change in engagement. What about achievement again? Smart kids may hang out with more engaged peers (who are also presumably smart), and they may also become increasingly engaged over time. What about teacher support? Teachers may provide more support to students who hang out with more engaged peers and so promote those students’ individual engagement; or teachers may be unsupportive of students who hang out with disaffected peers and so undermine those students’ engagement. We can check whether these variables are correlated with both peer group engagement at Time 1 and changes in individual engagement. If they are, we can add them to our models as control variables.
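
    A minimal sketch of the kind of model described here (not the actual peer-group study) might regress Time-2 individual engagement on Time-1 individual engagement (controlling for initial similarity, i.e., selection), Time-1 peer group engagement, and any candidate third variables that survive the screening step; all data and column names below are simulated and hypothetical.

```python
# Sketch: peer-group engagement at Time 1 predicting change in individual
# engagement, controlling for the individual's own Time-1 engagement
# (initial similarity / selection) and a measured covariate (achievement).
# All data are simulated; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
achievement = rng.normal(size=n)
peer_engage_t1 = 0.4 * achievement + rng.normal(size=n)
engage_t1 = 0.5 * peer_engage_t1 + 0.3 * achievement + rng.normal(size=n)
engage_t2 = (0.6 * engage_t1 + 0.2 * peer_engage_t1
             + 0.1 * achievement + rng.normal(size=n))
peers = pd.DataFrame({"engage_t1": engage_t1, "engage_t2": engage_t2,
                      "peer_engage_t1": peer_engage_t1,
                      "achievement": achievement})

# engage_t1 controls for selection; achievement controls for one measured
# third variable; the peer_engage_t1 coefficient is the "socialization" term.
model = smf.ols("engage_t2 ~ engage_t1 + peer_engage_t1 + achievement",
                data=peers).fit()
print(model.params)
```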

    What should we be thinking about when adding time to our study design?

    So many things. But let’s start with some basic questions that we almost never know the answer to: What are the right windows and the right time gaps between measurement points to capture the action of the causal process and its possible cyclicity (both feedforward and feedback)? If we are thinking about teacher involvement and student engagement, it seems like the beginning of a new school year would be a good moment for them to be calibrating to each other, but how long would that take—a week, a month, six weeks? Who knows? One rule of thumb is to use more measurement points than you think you will need, so that you can look across different time gaps for your possible process.