
4.2: What makes causality such a difficult issue?


    It continues ad infinitum, like everything else that we encounter in relational meta-theory world. Shadish, Cook, and Campbell (2002) give us words to understand the importance of this issue when they distinguish “causal description,” in which researchers identify the causal factors, from “causal explanation,” in which researchers specify the mechanisms or mediating processes by which causality operates (see box). Some researchers refer to these as “the active ingredients”; we think of them as “what is on the arrows” between antecedents and outcomes, the critical question of “How does that work?”

    Shadish, Cook, & Campbell (2002) on Causal Description vs. Causal Explanation.
    The unique strength of experimentation is in describing the consequences attributable to deliberately varying a treatment. We call this causal description. In contrast, experiments do less well in clarifying the mechanisms through which and the conditions under which that causal relationship holds, what we call causal explanation (p. 9).
    For full explanation, they would then have to show how the causally efficacious parts of the treatment influence the causally affected parts of the outcome through identified mediating processes (p. 10).
    This benefit of causal explanation helps elucidate its priority and prestige in all sciences and helps explain why, once a novel and important causal relationship is discovered, the bulk of basic scientific effort turns toward explaining why and how it happens. Usually this involves decomposing the cause into its causally effective parts, decomposing the effect into its causally affected parts, and identifying the processes through which the effective causal parts influence the causally affected outcome parts (p. 10).

    But the whole causality thing can’t really be so hard, right?

    Yes, it seems like researchers should be very good at identifying causes. In fact, when we think of humans as a species, it seems like we are all already good at this—at figuring out who is doing what to whom—or how would we have survived so long? Hey, and what about those 4-month-olds we read about in the last chapter, who fell into a deep depression when the dangling mobiles were no longer tied to their little kicking feet? They were detecting causality all right, and acting on it.

    So what’s the big deal?

    The kind of causality that humans are really good at detecting is called “generative transmission.” It is an “experience of control” in which we directly feel the power of our actions transmitted into the objects or people in our immediate context, and we can see the energy create effects in front of our eyes. Prototypical experiences of generative transmission include the baby leaning over the tray of her highchair and dropping the spoon, calling Mom and having her turn around, the hand reaching the apple and pulling it off the branch, the crack of the bat against the baseball and the baseball soaring off in a completely new direction, touching a match to a candle wick, hurling mean words into your little brother’s face and watching him crumple into tears. We are designed to detect the effects of our actions (so we can be effective in our interactions with the social and physical context). But our actions are rarely either necessary or sufficient causes for target outcomes. They are not necessary: my little brother will cry when he encounters other provocations (in fact, for just about any reason, if you ask me). And they are not sufficient: when I light the candle I see only the match as a cause, but the flame also needs fuel and oxygen, which I am unlikely to perceive as part of my causal experience (which is why no one ever answers the question, “What caused this forest fire?” with “Trees”). People are good at answering the questions “Can I make this happen?” and “How can I make this happen?” and “What was added to this scenario to create a change?” but we are not as equipped to figure out “What are the necessary and sufficient causes that produce this outcome?”
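    The candle example can be made concrete with a toy model. This is a purely illustrative sketch (the function and variable names are invented here, not drawn from the text): the salient action, striking the match, is neither necessary nor sufficient on its own for the outcome.

```python
def flame(match_struck, lightning, fuel_present, oxygen_present):
    """Toy model of the candle/forest-fire example.

    The match is NOT necessary: other ignition sources (here, lightning)
    can produce the same outcome. And it is NOT sufficient: the flame
    also requires fuel and oxygen, the background conditions we rarely
    perceive as part of our causal experience.
    """
    ignition = match_struck or lightning
    return ignition and fuel_present and oxygen_present

# The match alone, without fuel, produces no flame (insufficient).
print(flame(match_struck=True, lightning=False,
            fuel_present=False, oxygen_present=True))   # False

# A flame can occur with no match at all (the match is not necessary).
print(flame(match_struck=False, lightning=True,
            fuel_present=True, oxygen_present=True))    # True
```

    The point of the sketch is that only the full conjunction of conditions is sufficient, while the action we experience as “the cause” is merely the salient, recently added ingredient.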

    It turns out, perhaps surprisingly, that causality is quite the inferential feat. We need to integrate information from multiple contrasting combinations of experiences before we can come to a conclusion about causes. In the study of the development of causal reasoning, we can see the progressive complexity from “How often does X happen when I do Y?” to “Does X happen more often when I do Y compared to when I do not do Y?” and so on. The cause “ability” is so highly inferential that it has its own developmental course. Initially, high ability is inferred from high performance, then from high performance without help, then from high effort, then from high performance on hard tasks (in which task difficulty is in turn inferred from others’ performance: difficult tasks are ones that few people do well on), and finally from high performance on difficult tasks with low effort. No wonder determining causality is quite a challenge when we do not directly experience the generative transmission of our causes.
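    The contrast described above—comparing how often X happens when I do Y with how often X happens when I do not do Y—is essentially the contingency (ΔP) statistic used in causal-learning research. A minimal sketch, with made-up event data loosely modeled on the mobile experiment (the function name and numbers are illustrative assumptions, not from the text):

```python
def delta_p(events):
    """Contingency: P(outcome | action) - P(outcome | no action).

    `events` is a list of (action, outcome) boolean pairs. A value near 1
    suggests the action raises the probability of the outcome; a value
    near 0 suggests no contingency between them.
    """
    with_action = [outcome for action, outcome in events if action]
    without_action = [outcome for action, outcome in events if not action]
    p_with = sum(with_action) / len(with_action)
    p_without = sum(without_action) / len(without_action)
    return p_with - p_without

# Hypothetical tallies: kicking (action) vs. mobile moving (outcome).
events = ([(True, True)] * 8 + [(True, False)] * 2 +
          [(False, True)] * 1 + [(False, False)] * 9)
print(delta_p(events))  # close to 0.7 (i.e., 0.8 - 0.1): strong contingency
```

    Note that computing ΔP requires tracking what happens when we do *not* act, exactly the kind of contrasting information that simple generative-transmission experience never supplies.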

    Is it worse for relational meta-theorists?

    Maybe a little. In relational meta-world, causal inference is messy: things are always multiply determined; rarely is any one cause really necessary; and there are lots of packages of sufficient causes, which work only under certain conditions and for certain people at certain times. And, worst of all, many of these effective factors turn out to be imposters, but they have already snuck into the empirical party created by the design of our study, so we have to spend our time rooting them out or we have to call the whole party off. In fact, entire branches of psychological science are dedicated to the issue of how to design your studies so that you can rule out all these alternative explanations, so you can validly make causal inferences about whether your hypothesized antecedent is actually producing your target outcome (Campbell & Stanley, 1963; Shadish, Cook, & Campbell, 2002), or, in the case of developmental science, your target trajectory.

    So what do they have to tell us?

    Well, read the classics yourself by all means, but one of the things that you will learn in all design classes is that the best design (and some will say the only design) for proving causality is the experiment; in fact, a particular kind of experiment, the randomized controlled trial (RCT), the so-called “gold standard.” And, in the olden days, it was assumed that experiments happened in the laboratory, so experimental designs and laboratory settings often get merged in students’ minds. So let’s take a minute to consider experiments and labs and relational meta-theories. Let’s let all our descriptive trajectories for all of our cohorts and age groups sort of bump into each other and pile up behind us as we stop to stare through the one-way mirror into our lab.
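    To see why random assignment is what earns the RCT its “gold standard” status, here is a minimal sketch (the function name and participant labels are invented for illustration): randomization makes the groups equivalent on average on every variable, measured or unmeasured, so alternative explanations are ruled out by design rather than hunted down one at a time.

```python
import random

def randomly_assign(participants, conditions=("treatment", "control"), seed=None):
    """Randomly assign participants to conditions.

    Shuffling first, then alternating through the conditions, yields
    equal-sized groups whose composition is determined by chance alone.
    """
    rng = random.Random(seed)  # seed only for reproducibility of the example
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

groups = randomly_assign(["p1", "p2", "p3", "p4"], seed=42)
print(groups)  # two participants per condition, chosen by chance
```

    Because chance, not the participants’ characteristics, determines group membership, any later difference between groups can be attributed to the treatment itself.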