
13.8: Summary, Key Words and References


    Chapter Summary

    Standardized tests are developed by teams of experts and administered under standard conditions. They serve a variety of educational purposes, including accountability. Most elementary and middle school teachers are responsible for helping their students attain state content standards and achieve proficiency on criterion-referenced achievement tests. To interpret test scores and communicate that information to students and parents, teachers must understand basic concepts of central tendency and variability, the normal distribution, and several kinds of test scores. Current evidence suggests that standardized tests can be biased against certain groups and that many teachers tailor their curriculum and classroom tests to match the standardized tests. In addition, some educators have been caught cheating.

    Key terms

    Achievement tests
    Aptitude tests
    AYP (Adequate Yearly Progress)
    Criterion-referenced tests
    Diagnostic tests
    Frequency distribution
    Grade equivalent scores
    High-stakes tests
    Mean
    Median
    Mode
    Norm-referenced tests
    Range
    Standard deviation
    Stanine
    Z-score

    On the Internet

    <https://cresst.org/> The National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA focuses on research and development that improve assessment and accountability systems. It offers resources for researchers, K-12 teachers, and policy makers on the implications of NCLB as well as on classroom assessment.

    <https://www.ets.org> This is the home page of the Educational Testing Service (ETS), which administers the PRAXIS II series of tests and has links to the testing requirements for teachers seeking licensure in each state, the District of Columbia, and the US Virgin Islands.

    <https://www2.ed.gov/nclb/landing.jhtml> This is the US Department of Education website devoted to providing information about and support for NCLB. Links for teachers and summaries of the impact of NCLB in each state are provided.

    References

    American Federation of Teachers (2006, July). Smart testing: Let's get it right. AFT Policy Brief. Retrieved August 8, 2006, from www.aft.org/presscenter/relea...stingbrief.pdf

    Aronson, J., & Steele, C. M. (2005). Stereotypes and the fragility of academic competence, motivation, and self-concept. In A. J. Elliott & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 436-456). New York: Guilford Publications.

    Bracey, G. W. (2004). Value added assessment findings: Poor kids get poor teachers. Phi Delta Kappan, 86, 331-333.

    Cizek, G. J. (2003). Detecting and preventing classroom cheating: Promoting integrity in assessment. Thousand Oaks, CA: Corwin Press.

    Combined Curriculum Document Reading 4.1 (2006). Accessed November 19, 2006, from www.education.ky.gov/KDE/Inst...and+Resources/Teaching+Tools/Combined+Curriculum+Documents/default.htm

    Freedle, R. O. (2003). Correcting the SAT's ethnic and social-class bias: A method for reestimating SAT scores. Harvard Educational Review, 73 (1), 1-42.

    Fuhrman, S. H. (2004). Introduction. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 3-14). New York: Teachers College Press.

    Haertel, E., & Herman, J. (2005). A historical perspective on validity arguments for accountability testing. In J. L. Herman & E. H. Haertel (Eds.), Uses and misuses of data for educational accountability and improvement. 104th Yearbook of the National Society for the Study of Education. Malden, MA: Blackwell.

    Hershberg, T. (2004). Value added assessment: Powerful diagnostics to improve instruction and promote student achievement. American Association of School Administrators, Conference Proceedings. Retrieved August 21, 2006, from www.cgp.upenn.edu/ope news.html

    Hess, F. M., & Petrilli, M. J. (2006). No Child Left Behind primer. New York: Peter Lang.

    Hoff, D. J. (2002). States revise meaning of proficient. Education Week, 22(6), 1, 24-25.

    Hoover, E. (2006, October 21). SAT scores see largest dip in 31 years. Chronicle of Higher Education, 53(10), A1.

    Human Resources Division (n.d.). Firefighter Commonwealth of Massachusetts Physical Abilities Test (PAT). Accessed November 19, 2006, from http://www.mass.gov/?pageID=hrdtopic&L=2&Lo=Home&Li=Civil+Service&sid=Ehrd

    Jacobson, L. (2006). Probing test irregularities: Texas launches inquiry into cheating on exams. Education Week, 28(1), 28.

    Jerald, C. D. (2006, August). The hidden costs of curriculum narrowing. Issue Brief. Washington, DC: The Center for Comprehensive School Reform and Improvement. Accessed November 21, 2006, from www.centerforcsri.org/

    Joshi, R. M. (2003). Misconceptions about the assessment and diagnosis of reading disability. Reading Psychology, 24, 247-266.

    Linn, R. L., & Miller, M. D. (2005). Measurement and assessment in teaching (9th ed.). Upper Saddle River, NJ: Pearson.

    Linn, R. L. (2005). Fixing the NCLB accountability system. CRESST Policy Brief 8. Accessed September 21, 2006, from www.cse.ucla.edu/products/policybriefs set.htm

    Martin, M. O., Mullis, I. V. S., Gonzalez, E. J., & Chrostowski, S. J. (2004). Findings from IEA's Trends in International Mathematics and Science Study at the fourth and eighth grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College. Accessed September 23, 2006, from timss.bc.edu/timss2003i/scienceD.html

    New York State Education Department (2005). Home instruction in New York State. Accessed November 19, 2006, from www.emsc.nysed.gov/nonpub/partiooio.htm

    Novak, J. R., & Fuller, B. (2003, December). Penalizing diverse schools? Similar test scores, but different students bring federal sanctions. Policy Analysis for California Education. Berkeley, CA: University of California, Berkeley School of Education. Accessed September 21, 2006, from pace.berkeley.edu/pace index.html

    Olson, L. (2005, November 30). State test programs mushroom as NCLB kicks in. Education Week, 25(13), 10-12.

    Pedulla, J., Abrams, L. M., Madaus, G. F., Russell, M. K., Ramos, M. A., & Miao, J. (2003). Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. Boston, MA: National Board on Educational Testing and Public Policy, Boston College. Accessed September 21, 2006, from escholarship.bc.edu/lynch_facp/51/

    Popham, W. J. (2004). America's "failing" schools: How parents and teachers can cope with No Child Left Behind. New York: Routledge Falmer.

    Popham, W. J. (2005). Classroom assessment: What teachers need to know. Boston, MA: Pearson.

    Popham, W. J. (2006). Educator cheating on No Child Left Behind tests. Education Week, 25(32), 32-33.

    Recht, D. R., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers' memory of text. Journal of Educational Psychology, 80, 16-20.

    Shaul, M. S. (2006). No Child Left Behind Act: States face challenges measuring academic growth. Testimony before the House Committee on Education and the Workforce, Government Accountability Office. Accessed September 25, 2006, from www.gao.gov/cgi-bin/getrptPGAO-o6-Q48T

    Stiggins, R. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22-27.

    Wise, S. L., & DeMars, C. W. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1), 1-17.

    Young, J. W. (2004). Differential validity and prediction: Race and sex differences in college admissions testing. In R. Zwick (Ed.), Rethinking the SAT: The future of standardized testing in university admissions (pp. 289-301). New York: Routledge Falmer.



    13.8: Summary, Key Words and References is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
