
2.06: Correlational Research

    Learning Objectives
    • Explain correlational research, including what a correlation coefficient tells us about the relationship between variables

    Correlational Research

    One of the primary methods used to study abnormal behavior is the correlational method. Correlation means that there is a relationship between two or more variables (such as between the variables of negative thinking and depressive symptoms), but this relationship does not necessarily imply cause and effect. When two variables are correlated, it simply means that as one variable changes, so does the other. We can measure correlation by calculating a statistic known as a correlation coefficient (abbreviated as r): a number from negative one to positive one that indicates the strength and direction of the relationship between variables.

    The number portion of the correlation coefficient indicates the strength of the relationship. The closer the number is to one (be it negative or positive), the more strongly related the variables are, and the more predictable changes in one variable will be as the other variable changes. The closer the number is to zero, the weaker the relationship, and the less predictable the relationship between the variables becomes. For instance, a correlation coefficient of 0.9 indicates a far stronger relationship than a correlation coefficient of 0.3. If the variables are not related to one another at all, the correlation coefficient is zero. The example above about negative thinking and depressive symptoms is an example of two variables that we might expect to have a relationship to each other. When higher values in one variable (negative thinking) are associated with higher values in the other variable (depressive symptoms), there is a positive correlation between the variables.
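    To make the coefficient concrete, here is a minimal sketch in Python that computes Pearson's r by hand. The function name and the two score lists are hypothetical, invented purely for illustration; they are not real data from any study.

```python
# Minimal sketch: computing Pearson's correlation coefficient (r) by hand.
# The scores below are hypothetical, invented purely for illustration.

def pearson_r(x, y):
    """Return Pearson's r for two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance term: how the two variables move together
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Spread of each variable around its own mean
    sx = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sy = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

negative_thinking = [2, 4, 5, 7, 9]      # hypothetical questionnaire scores
depressive_symptoms = [1, 3, 6, 8, 10]   # hypothetical symptom scores

r = pearson_r(negative_thinking, depressive_symptoms)
# r comes out close to +1, a strong positive correlation
```

    Because higher negative-thinking scores pair with higher symptom scores throughout, r lands near positive one; reversing one list would flip the sign without changing the strength.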

    The sign—positive or negative—of the correlation coefficient indicates the direction of the relationship. Positive correlations carry positive signs; negative correlations carry negative signs. A positive correlation means that the variables move in the same direction. Put another way, it means that as one variable increases so does the other, and conversely, when one variable decreases so does the other. A negative correlation means that the variables move in opposite directions. If two variables are negatively correlated, a decrease in one variable is associated with an increase in the other and vice versa.

    Another example of a positive correlation is the relationship between depression and disturbance in normal sleep patterns. One might expect that scores on a measure of depression would be positively correlated with scores on a measure of sleep disturbances.

    One might expect a negative correlation to exist between depression and self-esteem. The more depressed people are, the lower their scores are on the Rosenberg self-esteem scale (RSES), a self-esteem measure widely used in social-science research. Keep in mind that a negative correlation is not the same as no correlation. For example, we would probably find no correlation between depression and someone’s height.

    In correlational research, scientists passively observe and measure phenomena. Here, we do not intervene and change behavior, as we do in experiments. In correlational research, we identify patterns of relationships, but we usually cannot infer what causes what. Importantly, a correlation coefficient describes the relationship between exactly two variables at a time.

    As mentioned earlier, correlations have predictive value. So, what if you wanted to test whether spending on others is related to happiness, but you don’t have $20 to give to each participant? You could use a correlational design—which is exactly what Professor Dunn did, too. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.

    More Details about the Correlation

    To find out how well two variables correspond, we can plot the relationship between the two scores on what is known as a scatterplot (Figure 1). In the scatterplot, each dot represents a data point. (In this case it’s individuals, but it could be some other unit.) Importantly, each dot provides us with two pieces of information—in this case, information about how good the person rated the past month (x-axis) and how happy the person felt in the past month (y-axis). Which variable is plotted on which axis does not matter.

    Figure \(\PageIndex{1}\): Scatterplot of the association between happiness and ratings of the past month, a positive correlation (r = .81). Each dot represents an individual.

    For the example above, the direction of the association is positive. This means that people who perceived the past month as being good reported feeling happier, whereas people who perceived the month as being bad reported feeling less happy.

    In a scatterplot of a positive correlation, the dots form a pattern that extends from the bottom left to the upper right (just as they do in Figure 1). The r value for a positive correlation is indicated by a positive number (although the positive sign is usually omitted). Here, the r value is 0.81.

    Figure 2 shows a negative correlation, the association between the average height of males in a country (y-axis) and the pathogen prevalence, or commonness of disease, of that country (x-axis). In this scatterplot, each dot represents a country. Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that people are shorter in parts of the world where there is more disease. The r value for a negative correlation is indicated by a negative number—that is, it has a minus (−) sign in front of it. Here, it is −0.83.

    Figure \(\PageIndex{2}\): Scatterplot showing the association between average male height and pathogen prevalence, a negative correlation (r = –0.83). Each dot represents a country (Chiao, 2009).

    The strength of a correlation has to do with how well the two variables align. Recall that in Professor Dunn’s correlational study, spending on others positively correlated with happiness: the more money people reported spending on others, the happier they reported being. At this point, you may be thinking to yourself, “I know a very generous person who gave away lots of money to other people but is miserable!” Or maybe you know of a very stingy person who is happy as can be. Yes, there might be exceptions. If an association has many exceptions, it is considered a weak correlation. If an association has few or no exceptions, it is considered a strong correlation. A strong correlation is one in which the two variables always, or almost always, go together. In the example of happiness and how good the month has been, the association is strong. The stronger a correlation is, the tighter the dots in the scatterplot will be arranged along a sloped line.[1]


    Problems with correlation

    If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can’t be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. This is why correlation does not mean causation—an often repeated phrase among psychologists.[2]

    Correlation Does Not Indicate Causation

    Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the depression and negative thinking example mentioned earlier, stress is a confounding variable that could account for the relationship between the two variables.

    Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our example about the relationship between depression and disturbance in normal sleep patterns. It seems reasonable to assume that sleep disturbance might cause a higher score on a measure of depression, just as a high degree of depression might cause more disturbed sleep patterns, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption. Both depression and sleep disturbance could be due to an underlying physiological disorder or any other third variable that you have not measured.

    Unfortunately, people mistakenly make causal claims based on correlations all the time. While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. The correlational method does not involve manipulation of the variables of interest. In the previous example, the experimenter does not manipulate people’s depressive symptoms or sleep patterns. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment. The next section describes how investigators use experimental methods in which the experimenter manipulates one or more variables of interest and observes their effects on other variables or outcomes under controlled conditions.

    Watch It

    In this video, we discuss one of the best methods psychologists have for predicting behaviors: correlation. But does that mean that a behavior is absolutely going to happen? Let’s find out!

    You can view the transcript for “#5 Correlation vs. Causation – Psy 101” here (opens in new window).


    Think It Over

    Consider why correlational research is often used in the study of abnormal behavior. If correlational designs do not demonstrate causation, why do researchers make causal claims regarding their results? Are there instances when correlational results could demonstrate causation?

    Glossary

    cause-and-effect relationship: changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

    confirmation bias: tendency to ignore evidence that disproves ideas or beliefs

    confounding variable: unanticipated outside factor that affects both variables of interest; often gives the false impression that changes in one variable cause changes in the other variable, when, in actuality, the outside factor causes changes in both variables

    correlation: the relationship between two or more variables; when two variables are correlated, one variable changes as the other does

    correlation coefficient: number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

    negative correlation: two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation

    positive correlation: two variables change in the same direction, both becoming either larger or smaller


    1. Scollon, C. N. (2020). Research designs. In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/acxb2thy
    2. Scollon, C. N. (2020). Research designs. In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/acxb2thy
    CC licensed content, Original
    • Modification, adaptation, and original content. Authored by: Sonja Ann Miller for Lumen Learning. Provided by: Lumen Learning. License: CC BY: Attribution

    This page titled 2.06: Correlational Research is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Lumen Learning.
