Addressing the Variability of Learners in Common Core-Aligned Assessments: Policies, Practices, and Universal Design for Learning

Author(s)

CAST

Publisher

CAST

Date

2013

Abstract

CAST responds to a request from the writers of the assessments aligned to the Common Core standards for comments on drafts of those assessments. In this statement, CAST identifies five areas in which the assessments could be improved to make them more accessible and effective for learners, especially those with disabilities.

Cite As

CAST (2013). Addressing the variability of learners in Common Core-aligned assessments: Policies, practices, and universal design for learning. Policy Statement. Wakefield, MA: Author.

Full Text

Monitoring and assessing the achievement of students is a key component of the curriculum at any instructional level. Because these measures have implications for important educational decision-making, it is essential that they are (1) accurate, (2) useful for subsequent educational planning, and (3) sufficiently timely to benefit each student. The creation of the Common Core-aligned PARCC and SBAC assessments has captured national attention in two primary areas. First, even though formative assessment procedures are stated components of both consortia's assessment designs, their primary emphasis has been on the establishment of large-scale summative instruments, which are likely to be implemented in many states as high-stakes tests used to determine grade-level promotion and high school graduation. Second, these summative measures are designed, from the outset, to be delivered in digital formats, a different medium from the paper-and-pencil versions with which students and educators are more familiar.

When these challenges are viewed within the framework of Universal Design for Learning (UDL), with its emphasis on providing students with multiple means of engagement, representation, and action and expression, some key issues emerge. First, instruction and assessment share dependencies within the curricular cycle, and expansions or constrictions in one area affect the other. Second, the real-time, more formative achievement monitoring that is increasingly a component of digital curriculum resources and the systems that deliver them, when combined with learning analytics and large-data-set trend analysis, provides previously unavailable opportunities for applying pedagogical interventions at the point of instruction. These data-driven monitoring capabilities, the subject of considerable attention and investment by the United States Department of Education's Office of Educational Technology and highlighted in the 2010 National Education Technology Plan, demonstrate the benefits of embedded, real-time approaches to assessment over extrinsic ones.

With the above issues and the framework of Universal Design for Learning in mind, CAST has identified five critical factors that should be addressed from the outset by PARCC and SBAC when creating assessments, both formative and summative:

  1. Move away from the apparent exclusive focus on summative measures and prioritize formative assessments as part of the assessment-instruction cycle.
  2. Capitalize on the use of technology-based assessments to ensure that the benefits—flexibility, real-time monitoring of student progress, and the promotion of access for all students—are realized.
  3. Consider the impact of assessment on classroom instruction in order to facilitate rather than constrain the modification of instruction based on student performance.
  4. Be mindful of the potential negative effects of computer-adaptive testing (CAT) on all subgroups, including students with disabilities and English Language Learners.
  5. Ensure—for all students—accuracy, reliability, and precision with respect to intended constructs.

Sincerely,
Peggy Coyne, EdD, Research Scientist
Tracey E. Hall, PhD, Senior Research Scientist
Chuck Hitchcock, MEd, Chief of Policy and Technology
Richard Jackson, EdD, Senior Research Scientist/Associate Professor, Boston College
Joanne Karger, JD, EdD, Research Scientist/Policy Analyst
Elizabeth Murray, ScD, Senior Research Scientist/Instructional Designer
Kristin Robinson, MPhil, MA, Instructional Designer and Research Associate
David H. Rose, EdD, Chief Education Officer and Founder
Skip Stahl, MS, Senior Policy Analyst
Sherri Wilcauskas, MA, Senior Development Officer
Joy Zabala, EdD, Director of Technical Assistance

References

Almond, P., Winter, P., Cameto, R., Russell, M., Sato, E., Clarke-Midura, J., … Lazarus, S. (2010). Technology-enabled and universally designed assessment: Considering access in measuring the achievement of students with disabilities—A foundation for research. Journal of Technology, Learning, and Assessment, 10(5). Retrieved from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1605

Folk, V. G., & Smith, R. L. (2002). Models for delivery of CBTs. In C. Mills, M. Potenza, J. Fremer, & W. Ward (Eds.), Computer-based testing: Building the foundation for future assessments (pp. 41-66). Mahwah, NJ: Erlbaum.

Kingsbury, G. G., & Houser, R. L. (2007). ICAT: An adaptive testing procedure to allow the identification of idiosyncratic knowledge patterns. In D. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.

Laitusis, C. C., Buzick, H. M., Cook, L., & Stone, E. (2011). Adaptive testing options for accountability assessments. In M. Russell & M. Kavanaugh (Eds.), Assessing students in the margins: Challenges, strategies, and techniques. Charlotte, NC: Information Age Publishing.

Papanastasiou, E. C., & Reckase, M. D. (2007). A "rearrangement procedure" for scoring adaptive tests with review options. International Journal of Testing, 7(4), 387-407.

Thurlow, M., Lazarus, S. S., Albus, D., & Hodgson, J. (2010). Computer-based testing: Practices and considerations (Synthesis Report 78). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

U.S. Department of Education. (2007). Standards and assessments peer review guidance: Information and examples for meeting requirements of the No Child Left Behind Act of 2001. Retrieved from www.ed.gov/policy/elsec/guid/saaprguidance.doc

U.S. Department of Education, Office of Educational Technology. (2010). Transforming American education: Learning powered by technology. National Education Technology Plan 2010. Washington, DC: Author. Retrieved from http://www.ed.gov/sites/default/files/netp2010.pdf

Way, W. D. (2006). Practical questions in introducing computerized adaptive testing for K–12 assessment. PEM Research Reports. Iowa City, IA: Pearson Educational Measurement.

Yen, Y. C., Ho, R. G., Liao, W. W., & Chen, L. J. (2012). Reducing the impact of inappropriate items on reviewable computerized adaptive testing. Educational Technology & Society, 15(2), 231-243.
