
1.7: Data analysis


    “Let the data speak for itself” is a frequently invoked dictum that is both grammatically incorrect and impossible. Data, having been recorded, do not then speak for themselves. Data have no meaning apart from how we interpret them. Data analysis is the task of finding meaningful patterns in our data. It’s how we make sense of our data, how we derive meaning from them.

    It is accurate enough to say that quantitative data analysis helps us make sense of numeric data and qualitative data analysis helps us make sense of textual data, but that does oversimplify the distinction a bit. Imagine conducting direct observations of presidential primary campaign stump speeches. Each time we observe a speech, we would probably want to record the approximate number of people in attendance. Clearly, that will yield numeric data, and we would use quantitative data analysis techniques to find patterns in them, such as calculating the mean, median, and standard deviation to summarize the central tendency and variation of crowd sizes at the speeches. We would probably also record the speeches themselves and later transcribe them so that we have a verbatim written record of each speech. This time, we will, clearly, have textual data and use qualitative data analysis tools to identify underlying themes that emerge from the data.

    However, we would also record whether each speech was delivered by a Republican primary candidate or a Democratic primary candidate, probably by checking a box on our direct observation tool. In this case, the data we record are, in a sense, qualitative; they are text, “Republican” or “Democratic.” When we analyze these data, though, we will most likely use quantitative data analysis tools, in this case probably just to count the frequency of each value of the variable political party. The choice between qualitative and quantitative data analysis tools, then, isn’t entirely about the type of data; it’s also determined by what we’re going to do with those data. If we’re performing numeric calculations, we use quantitative data analysis tools, and if we’re deriving and attributing meaning from and to words, we use qualitative data analysis tools. (Even that oversimplifies a little because of gray areas like content analysis, which is a very quantitative approach to qualitative data analysis, but we’ll leave it there.)
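
    As a minimal sketch of the quantitative side of this example, the following Python snippet (using made-up crowd sizes and party labels, purely for illustration) computes the summary statistics and the frequency count described above.

        import statistics
        from collections import Counter

        # Hypothetical crowd sizes recorded at six observed stump speeches
        crowd_sizes = [250, 1200, 430, 875, 310, 990]

        # Quantitative analysis of numeric data: central tendency and variation
        print("mean:", statistics.mean(crowd_sizes))
        print("median:", statistics.median(crowd_sizes))
        print("sample standard deviation:", statistics.stdev(crowd_sizes))

        # The party variable is recorded as text, but we analyze it quantitatively
        # by counting the frequency of each value
        parties = ["Republican", "Democratic", "Democratic",
                   "Republican", "Republican", "Democratic"]
        print("party frequencies:", Counter(parties))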

    The processes of qualitative data analysis and quantitative data analysis differ as well. When we undertake quantitative data analysis, the concepts we’re measuring are almost always predetermined. We first decide to measure a concept like political literacy, then operationalize the concept by writing a list of quiz items, then collect our data, and, finally, tally our respondents’ scores—that is, conduct our quantitative data analysis—as an indicator of their political literacy. Conceptualization came first, analysis second. When we’re doing qualitative data analysis, though, this isn’t necessarily the case. If we want to conduct interviews to understand (in the verstehen sense, recall) what respondents believe it means to be politically literate, we may not know what concepts we’ll end up identifying—that’s why we’re doing the research. Certainly, we have some starting point—a formal theory, a model, a hunch, whatever we’ve learned from previous research—or we wouldn’t know what to ask questions about. It is during the course of data analysis, though, that important concepts emerge as we find patterns in our interview data. Thus, conceptualization and analysis are pursued iteratively; concepts are a starting point for data collection, consistent with our model of the research process, but concepts are also the product of qualitative data analysis.

    Much more of the quantitative data analysis process is settled than of the qualitative data analysis process. There is only one way to calculate the sample standard deviation, and if you want to compare the means of two groups, there are nearly universally agreed-upon rules to help you choose the appropriate statistical test. If you want to identify underlying themes in a political speech, though, there is not one right way to go about your analysis. There are many different qualitative data analysis camps, some complementary and some competing, and even within one camp, there is no expectation that qualitative data analysis would lead you and another researcher to precisely the same findings.
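
    For reference, that single settled formula is the sample standard deviation of observations \(x_1, \dots, x_n\) with mean \(\bar{x}\):

    \[ s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}} \]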

    We’re not going to cover the “how to” of data analysis here. For that, I refer you to your introductory statistics and qualitative data analysis courses and textbooks. Most students reading this will have taken, or will take, an introductory statistics course; I think we do aspiring social science researchers a disservice by not also requiring a course in qualitative data analysis. One final distinction is worth noting, and students often find it encouraging. The frank truth is that students can accomplish little high-caliber research, by professional standards, using the quantitative data analysis tools learned in an introductory statistics course. There are exceptions, but the type of quantitative research that could be published in a social science journal generally requires more statistics training. In contrast, students can conduct excellent research using basic qualitative data analysis techniques; a lot of good work is done with the basic tools. You shouldn’t choose your data analysis methods based on this, of course, but you should be encouraged to know that qualitative data analysis skills are very accessible and can enable you to conduct strong research. A great starting point is David Thomas’s (2006) “A General Inductive Approach for Analyzing Qualitative Evaluation Data,” American Journal of Evaluation, 27(2), 237-246.

    I find that students often show up in my research methods courses still just a little uncertain about inferential statistics, even if they’re fresh out of a statistics course. That’s not a criticism of the students or their statistics courses (sometimes it’s my own course!)—it’s a hard idea to grasp at first. If you’re one of those uncertain students, I offer a quick review of this data analysis approach in Appendix C.

    One final note about data analysis: Incorporating control variables into data analysis often trips students up. Appendix D presents one way of approaching this, called elaboration modeling. I like to introduce students to this strategy because its logic can be applied across a wide range of quantitative and qualitative data analysis scenarios, and because it helps students better grasp the concept of control.

