
2.2: Background on Curriculum-Based Measures


    Students At-Risk

    When students are shown to be at-risk on a universal screening measure (USM), interventions are suggested and applied and the students’ progress is monitored more frequently (Fuchs & Fuchs, 2005; Fuchs & Fuchs, 2006; Lane, Ennis, Kalberg, & Carter, 2007). These more frequent progress monitoring measures are termed curriculum-based measures (CBM; Deno, 1985; Fuchs, 2004; Fuchs, Fuchs, & Hamlett, 2015) because they are intended to sample the curriculum for the year and show students’ progress on that curriculum. The use of universal screening measures and curriculum-based measures is more in line with what the US Department of Education (2015, 2016), the CGCS (2015), and the ESSA (2015) described as necessary. Specifically, USMs, and especially CBMs, require little time to implement compared to state and district assessments; they are innovative and fair; and, most importantly, they are designed to inform instruction and to monitor the implementation of interventions (Fuchs & Fuchs, 2005; Fuchs & Fuchs, 2006).

    As previously discussed, CBM differs from more traditional, criterion-referenced testing, in which teachers measure students’ command of specified objectives or skills (Fuchs et al., 2015). Fuchs and colleagues expand upon the disadvantages of this type of testing by noting the laborious nature of creating additional test forms each time a student passes an objective. Next, the reliability and validity of teacher-made tests are not known, so such tests may not provide an accurate measure of students’ performance. Finally, using mastery of a criterion as a benchmark may result in a situation where a student masters the skill in anticipation of the test but does not achieve automaticity with the skill.

    CBM contrasts with criterion-referenced testing in that it samples the entire curriculum, or expected learning, for the entire year (Fuchs, 2004). By doing this, teachers are given valuable information on the rate of growth of the child compared to others in the group, classroom, school, and, in some cases, national norms. There are two primary types of CBM assessments. The first involves conceptualizing a task that is associated with the skills necessary to succeed in a specific area (Fuchs, 2004). An example of this type of measure is oral reading fluency, which has been shown to be associated with translating letters into sound, making connections between words and sentences, and reading comprehension (Fuchs, Fuchs, Hosp, & Jenkins, 2001). The second type of CBM uses a sampling of the curriculum or of the skills necessary to complete the curriculum. An example of this is a math CBM in which each test contains a different sampling of problem types from the year’s curriculum (e.g., subtraction, addition, multiplication, fractions). This differs from the criterion-referenced tests described earlier in that each CBM form assesses all of the skills, rather than just the one learned in the most recent unit.
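    To make the sampling idea concrete, here is a minimal sketch, not taken from the source, of how a math CBM probe might be assembled by drawing items from every skill in the year’s curriculum rather than only from the most recent unit. The skill names, item counts, and operand ranges are illustrative assumptions, not part of any published CBM procedure.

```python
import random

# Hypothetical skill list spanning an entire year's math curriculum (assumed).
YEAR_CURRICULUM = ["addition", "subtraction", "multiplication", "division", "fractions"]

def build_cbm_probe(items_per_skill=5, seed=None):
    """Assemble one probe that samples every skill in the curriculum,
    so each alternate form covers the same content domain."""
    rng = random.Random(seed)
    probe = []
    for skill in YEAR_CURRICULUM:
        for _ in range(items_per_skill):
            a, b = rng.randint(1, 12), rng.randint(1, 12)
            probe.append({"skill": skill, "operands": (a, b)})
    rng.shuffle(probe)  # mix skills so the probe is not blocked by unit
    return probe

if __name__ == "__main__":
    probe = build_cbm_probe(seed=1)
    print(f"{len(probe)} items sampled across {len(YEAR_CURRICULUM)} skills")
```

    Because every alternate form draws from the same full-year domain, scores from probes given in September and in May can be compared on the same scale, which is what allows growth to be tracked.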

    You may be saying to yourself, “Hold on, they’re all going to fail at the beginning!” I would then reply, “Sort of.” Yes, we would expect that students will not do as well on the assessment at the beginning of the school year as they will at the end. That’s the whole point! We should see growth throughout the year, so they should start at the bottom of the scale and, by the end, be near the top of the graph, with a nice pretty line showing improvement that you can show to the parents of the child, your principal, your significant other, your dog, whomever, to prove that you’re an awesome teacher! The second, and most important, thing is this: there is no failing in CBM. These are not for grades; they are formative assessments that show us whether what we are doing is improving student performance.
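    As an illustration of the growth line described above, the following sketch, which is my own example rather than anything from the source, fits an ordinary least-squares slope to a set of weekly CBM scores to estimate a student’s rate of improvement. The sample scores are made up for demonstration.

```python
from statistics import mean

def weekly_growth_rate(scores):
    """Estimate rate of improvement (score gain per week) from weekly
    CBM scores using an ordinary least-squares slope."""
    weeks = list(range(1, len(scores) + 1))
    w_bar, s_bar = mean(weeks), mean(scores)
    num = sum((w - w_bar) * (s - s_bar) for w, s in zip(weeks, scores))
    den = sum((w - w_bar) ** 2 for w in weeks)
    return num / den

# Hypothetical example: words read correctly per minute on successive weekly probes.
scores = [42, 45, 44, 49, 52, 55, 58]
print(f"Estimated growth: {weekly_growth_rate(scores):.2f} words/min per week")
```

    A positive slope is the numeric version of that “nice pretty line”: it summarizes how much the student gains per week, which can then be compared against the growth expected for the class or a normative sample.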


    This page titled 2.2: Background on Curriculum-Based Measures is shared under a CC BY-ND license and was authored, remixed, and/or curated by Mickey Losinski (New Prairie Press/Kansas State University Libraries).
