Compare and contrast four different approaches to research.
Introduction
In empirical research, there are four basic approaches: experimental methods, statistical methods, comparative methods, and case study methods. Each of these methods is an attempt to understand the relationship between two or more variables, which may be correlational or causal. Each method involves crafting good research questions, using existing theories to inform the understanding of the research problem, and generating and testing hypotheses. We will first discuss the difference between correlation and causation, and we will then provide an overview of each research approach.
Correlation and Causation
When assessing relationships between variables, such as between an independent variable (X) and a dependent variable (Y), the relationship may be correlational or causal. A correlation is a relationship between two variables. If two variables show "positive correlation," they increase or decrease together (i.e., as X increases, Y increases; as X decreases, Y decreases). If two variables show "negative correlation," when one variable increases, the other decreases (i.e., as X increases, Y decreases; as X decreases, Y increases). If the two variables show "no correlation," there is no apparent linear relationship between them. A causal relationship is more than a correlation; it is a cause-and-effect relationship between two variables (i.e., X causes Y).
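To make these patterns concrete, the short Python sketch below simulates three hypothetical data sets and computes the Pearson correlation coefficient (r) for each: r near +1 indicates a positive correlation, r near -1 a negative correlation, and r near 0 no linear relationship. The data and coefficients here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)               # independent variable X

y_pos = 2 * x + rng.normal(size=200)   # Y rises and falls with X -> positive correlation
y_neg = -2 * x + rng.normal(size=200)  # Y moves opposite to X -> negative correlation
y_none = rng.normal(size=200)          # Y unrelated to X -> no correlation

for label, y in [("positive", y_pos), ("negative", y_neg), ("none", y_none)]:
    r = np.corrcoef(x, y)[0, 1]        # Pearson correlation coefficient
    print(f"{label:>8}: r = {r:+.2f}")
```

Running the sketch prints an r close to +1, -1, and 0 for the three cases, respectively.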
Correlation does not imply causation. If a researcher observes a correlation (a relationship) between two variables, this does not mean that one variable caused the other. Correlations may be spurious, or false. For instance, suppose a researcher observes a positive correlation between ice cream sales and crime rates (i.e., as ice cream sales increase, crime rates increase). This does not mean that ice cream sales cause crime. Rather, something else (called a "confounding variable") likely explains the increase in both ice cream sales and the increase in crime (in this case, both variables tend to increase with warm weather). In research, spurious relationships often appear if a researcher leaves out particular variables that influence both the independent and dependent variables in their analysis (referred to as "omitted variable bias").
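A small simulation can show how a confounding variable manufactures a spurious correlation, and what omitted variable bias looks like in practice. In the hypothetical Python sketch below (variable names and effect sizes are invented for illustration), warm weather drives both ice cream sales and crime, while the two outcomes never influence each other.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

temperature = rng.normal(size=n)                    # confounding variable Z (warm weather)
ice_cream = 1.5 * temperature + rng.normal(size=n)  # X: driven only by Z
crime = 1.2 * temperature + rng.normal(size=n)      # Y: also driven only by Z

# Raw correlation between X and Y is large even though X does not cause Y.
print("raw r(X, Y):", round(np.corrcoef(ice_cream, crime)[0, 1], 2))

# "Controlling" for Z: correlate what remains of X and Y after removing Z's influence.
resid_x = ice_cream - np.polyval(np.polyfit(temperature, ice_cream, 1), temperature)
resid_y = crime - np.polyval(np.polyfit(temperature, crime, 1), temperature)
print("partial r(X, Y | Z):", round(np.corrcoef(resid_x, resid_y)[0, 1], 2))
```

The raw correlation is substantial even though neither variable causes the other; once temperature is controlled for, the partial correlation collapses toward zero. Leaving temperature out of the analysis is exactly the omitted variable bias described above.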
Figure \(\PageIndex{1}\): This image illustrates the difference between causation and correlation. On the left, we see causation: Variable X directly leads to Variable Y. On the right, we see correlation: Variable X and Variable Y show a relationship, but the arrows indicate that there is no way to assess the direction of causality (i.e., does X lead to Y, or does Y lead to X?).
As an example, in comparative politics research, there appears to be a positive correlation between economic development and democracy. That is, countries with higher levels of economic development (e.g., as measured by GDP per capita) tend to be more democratic (e.g., as measured by a democracy index). This correlation, however, does not establish a causal relationship. Perhaps economic development causes democracy; perhaps democracy causes economic development; perhaps there is an additional variable, Z, that causes both economic development and democracy (e.g., stability stemming from the absence of war (Z) may allow both economic development and democracy to flourish); or perhaps there is an intervening variable, A, that connects economic development and democracy (e.g., economic development leads to an accumulation of wealth (A), which then leads to democracy). There are too many possibilities here to assert a causal relationship; the researcher is left with only a correlation. And, in fact, there are numerous exceptions to this positive correlation. Oil-producing countries in the Persian Gulf, such as Saudi Arabia and the United Arab Emirates, have high levels of economic development without democracy; countries like Senegal and India are democratic but lack high levels of economic development.
These potential contradictions force researchers to be very careful when making causal inferences in political science. Causality is always difficult to establish, but this is especially true when the unit of analysis involves countries, which is often the case in comparative politics.
Experimental Methods
Given the discussion above, one manner in which a researcher can attempt to establish causality is through experimental methods. Experiments are "laboratory studies in which investigators retain control over the recruitment, assignment to random conditions, treatment, and measurement of subjects" (McDermott, 2002, p. 32). In an experiment, the researcher controls the "treatment," the independent variable of interest, meaning that the researcher will randomly assign the treatment to some cases in the experiment (the "treatment group") and not others (the "control group"). The researcher will then observe the effect that this treatment has on the outcome, the dependent variable of interest. Through random assignment, the researcher is able to "control" for any potential confounding variables--observable or unobservable--because the randomization process evenly distributes participants across the treatment and control groups. The experimental method therefore reduces bias (Druckman et al., 2011) by ensuring comparability across groups, which allows a researcher to isolate the causal impact of the treatment.
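As a rough illustration of why randomization "controls" for confounders, the hypothetical Python sketch below builds an unobserved confounder into each participant's outcome, assigns treatment at random, and then recovers the true treatment effect with a simple difference in group means. All names and effect sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Unobserved confounder (e.g., participants' prior political interest).
confounder = rng.normal(size=n)

# Random assignment: treatment status is independent of the confounder by construction.
treated = rng.integers(0, 2, size=n).astype(bool)

true_effect = 0.5
outcome = true_effect * treated + 0.8 * confounder + rng.normal(size=n)

# Randomization balances the confounder across groups, so the simple
# difference in group means is an unbiased estimate of the treatment effect.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated treatment effect: {estimate:.2f} (true effect: {true_effect})")
```

Because assignment is independent of the confounder, the confounder averages out across the two groups, and the difference in means lands near the true effect of 0.5 even though the confounder is never measured.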
While a laboratory experiment provides the best chance for researcher control over experimental conditions, conducting a laboratory experiment at the country-level is impossible. There are, however, field experiments and natural experiments, both of which are frequently used by comparativists in an attempt to establish causality. A field experiment is an experiment that takes place in a real-world setting instead of in a laboratory. For example, a researcher may want to learn about how a political candidate's ethnic background (X) influences voter behavior (Y). Prior to an election, the researcher may send out information to a group of voters; one randomly-selected group will receive a flyer that includes a candidate's ethnic background (the "treatment" group) while another randomly-selected group will receive a flyer that does not include the candidate's ethnic background (the "control" group). The researcher will then measure the effect of the treatment on voting. In comparison to a laboratory experiment, a field experiment raises greater concerns about bias because the researcher cannot retain as much control over the environment.
A "natural experiment" is technically not an experiment, but instead an "observational study." An observational study is a type of research in which researchers do not control a treatment (as in a laboratory or a field experiment), but simply observe outcomes. What makes a natural experiment interesting, however, is that it occurs when random assignment of a "treatment" occurs naturally or due to some factor other than a researcher. For example, suppose that a government randomly selects subnational units to receive an influx of money to put toward education. Setting aside ethical concerns in this hypothetical example, a researcher could exploit the government's randomization to measure how the money provided to the treatment group influenced education outcomes in comparison to the control group. If true randomization exists in a natural experiment (something that a researcher will need to demonstrate), the researcher will be able to make causal inferences.
Statistical Methods
Beyond experimental methods, statistical methods are another approach to research analysis. As a reminder from Chapter 1, statistical methods reflect quantitative research. Quantitative research centers on testing a theory or hypothesis, usually through mathematical and statistical means, using data from a large sample. Statistical methods work well to discern correlations among variables in large-N research, meaning research with a large number of observations (a large sample). Statistical methods are an analysis tool, and their use, on its own, does not allow researchers to claim causation.
Many researchers default to the use of statistical analyses to uncover correlations in "applied research." Applied research is "research that attempts to explain social phenomena with immediate public policy implications" (Knoke et al., 2002, p. 7). Researchers will conduct statistical analysis on existing data from various sources--such as government agencies, think tanks, and other researchers--and probe for correlations among the variables to test their hypotheses (Omae and Bozonelos, 2020). Researchers also use statistical methods to analyze survey data (for a discussion of survey data and comparative political opinion, see Chapter 10).
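As a sketch of this workflow, the hypothetical Python example below fits an ordinary least squares regression to invented country-level data (the variable names, sample size, and coefficients are assumptions for illustration). A positive estimated slope would be evidence of the hypothesized correlation; as emphasized earlier in this section, it would not, on its own, establish causation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # hypothetical large-N sample of country-level observations

gdp_per_capita = rng.normal(size=n)                          # X: economic development (standardized)
democracy_index = 0.4 * gdp_per_capita + rng.normal(size=n)  # Y: hypothetical democracy score

# Fit Y = a + b*X by ordinary least squares and report the slope b.
X = np.column_stack([np.ones(n), gdp_per_capita])
coef, *_ = np.linalg.lstsq(X, democracy_index, rcond=None)
print(f"estimated slope: {coef[1]:.2f}")  # a positive slope shows correlation, not causation
```

In applied work, the invented arrays here would be replaced by real measurements drawn from the kinds of sources listed above, and the analysis would typically include control variables and significance tests.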
Comparative Methods
Unlike experimental and statistical methods, comparative methods can reflect either quantitative or qualitative research (and many researchers use "mixed-methods" approaches that combine the two). As defined in Chapter 1, qualitative research is a type of research that centers on exploring ideas and phenomena, potentially with the goal of consolidating information or developing evidence to form a theory or hypothesis to test. Qualitative research involves categorizing, summarizing, and analyzing cases thoroughly to gain greater understanding, often using a small sample.
All comparative methods focus on analyzing the differences and/or similarities between cases, which necessitates clear processes of conceptualization and operationalization (making concepts measurable). Merriam-Webster defines a concept as "an abstract or generic idea generalized from particular instances." For political scientists, concepts are "generally seen as nonmathematical and deal with substantive issues" (Goertz, 2006). For example, if we want to compare democracies, we must first define what exactly constitutes a democracy, and we must then formulate measurements that appropriately reflect our conceptualization.
Comparative methods involve "the analysis of a small number of cases, entailing at least two observations" but "too few [cases] to permit the application of conventional statistical analysis" (Lijphart, 1971; Collier, 1993, p. 106). This means that comparative methods involve more than a single case study, but fewer cases than a large-N statistical analysis (sometimes called "intermediate-N," reflecting the analysis of anywhere from a few to a few dozen cases).
Case Study Methods
Moving from intermediate-N to "single-N" research, a case study is an intensive look into a single case, often with the intent that the single case will help researchers better understand a particular variable of interest. A case is a "spatially delimited phenomenon (a unit) observed at a single point in time, or over some period of time" (Gerring, 2007). Others define a case as a "factual description of events that happened at some point in the past" (Naumes and Naumes, 2015). A case could be an individual, a family household, a group or community, an institution, or a subnational unit within a country. Often, comparativists observe the country as the case of interest.
While a case study traditionally conducts an in-depth examination of one case, a comparative case study is a study that compares two or more cases. There are also subnational case studies, which focus on subnational governments, rather than countries, as the cases of interest. Subnational case studies may occur entirely within a country (e.g., examining a policy response within all Mexican states) or across countries (e.g., examining a policy response among autonomous subnational governments in countries within the European Union).
Figure \(\PageIndex{2}\): Diagram summarizing empirical research approaches: experimental methods, and non-experimental methods, which include large-N (statistical), intermediate-N (comparative), and single-N (case study) methods.
Use of Case Studies in Comparative Politics
Case studies are vital to theory development in political science. They are the cornerstones of different discourses in the discipline. Blatter and Haverland (2012) note that a number of case studies have reached "classic" status in political science. These include Robert Dahl's Who Governs? (1961), Graham T. Allison's Essence of Decision (1971), Theda Skocpol's States and Social Revolutions (1979), and Arend Lijphart's The Politics of Accommodation (1968). Each of these classics is a seminal study into an important aspect of political science. Dahl's work popularized the concept of pluralism, where different actors hold power. Allison studied the decision-making processes during the 1962 Cuban Missile Crisis, and his work was influential for public policy analysis. Skocpol's book laid out the conditions from which a revolution may take place. Lijphart defined concepts that remain central to comparative studies of democracy, such as "politics of accommodation" and "consensus democracy."
As mentioned earlier, cases in comparative politics have historically focused on countries (typically referred to as "states," as we will describe in Chapter 3). Comparisons often involve regime types (democratic and non-democratic), political economies, political identities, social movements, and political violence. All of these comparisons require scholars to look within countries and then compare. As stated in Chapter 1, "looking within" is what separates comparative politics from other subfields of political science. Thus, because the country is the most relevant political actor (though others exist, such as subnational, non-state, and transnational actors), this is where the emphasis tends to be.