
CAST’s Response to the Smarter Balanced Assessment Consortium’s Draft of Accessibility and Accommodations Guidelines

Statement
Author(s)

CAST

Publisher

CAST

Date

2013

Abstract

CAST responds to the Smarter Balanced Assessment Consortium’s draft of Accessibility and Accommodations Guidelines.


Cite As

CAST (2013). CAST’s response to the Smarter Balanced Assessment Consortium’s draft of accessibility and accommodations guidelines. Policy Statement. Wakefield, MA: Author.

Full Text

June 12, 2013

Smarter Balanced Assessment Consortium
600 Washington St. S.E.,
Olympia, WA 98504-7200

To Whom It May Concern:

Thank you for the opportunity to comment on the Smarter Balanced Assessment Consortium: DRAFT Accessibility and Accommodations Framework. We recognize the tremendous amount of coordination and effort that the Smarter Balanced Assessment Consortium has accomplished to date in developing this framework, and we appreciate your willingness to engage in a dialogue on these issues. We look forward to continuing to work with you toward the goal of ensuring that the Smarter Balanced assessments are fair and equitable and that all students have meaningful opportunities to demonstrate what they have learned with respect to the Common Core State Standards (CCSS).

We would like to share our expertise as an organization that works to expand learning opportunities and outcomes for all individuals through Universal Design for Learning (UDL). CAST defined the principles and practices of UDL, which guide the design of flexible instructional goals, assessment methods, and materials that consider from the outset the diversity and natural variability of learners in any educational setting. These principles and practices were incorporated into the Higher Education Opportunity Act (HEOA) of 2008. When applying the principles of UDL, we view instruction as the entire episode of learning—i.e., the entire assessment-instructional cycle.

CAST is known for its development of innovative, technology-based educational resources and strategies based on universal design and the principles of UDL. For example, CAST created Bobby, the first software to check website accessibility and guide Web designers to make improvements; WiggleWorks (with Scholastic), the first universally designed literacy program for beginning readers; and CAST eReader, one of the first computer-based literacy tools to give learners full access to e-text while supporting and enhancing their literacy development. Additionally, CAST played an instrumental role in the development of the National Instructional Materials Accessibility Standard (NIMAS) and currently leads the National Accessible Instructional Materials (AIM) Center. CAST has also partnered with the University of Kansas and NASDSE in the federally supported Center on Online Learning and Students with Disabilities and serves as the lead partner (with Vanderbilt University) in the federally funded National Center on the Use of Emerging Technologies to Improve Literacy Achievement for Students with Disabilities in Middle School.

Through strategic collaborations, CAST continues to work on behalf of all learners, especially those with disabilities, by seeding the fields of education research, policy, professional development, and product development with UDL-based solutions. Based on CAST’s extensive experience in universal design and the principles of UDL, we offer the following comments on the Smarter Balanced Assessment Consortium: DRAFT Accessibility and Accommodations Framework.

We believe that the overall framework represents a significant amount of work and progress with respect to the needs of students who may require reasonable accommodations and assistive technologies to demonstrate what they know and can do relative to the Common Core State Standards. We were pleased to see that all SBAC states will be adopting the conceptual model and related accommodations. Having reviewed and discussed the content of the document, we would like to highlight concerns with respect to the following areas:

  • Graphical representation of the conceptual model

  • Attention to item design to ensure construct validity

  • Clarification of the accommodations process

    • Development of the ISAAP

    • Relationship between the IEP and ISAAP

    • Meaning of detrimental effect

    • A few specific questions about Appendix A

  • Appropriate uses of assistive technologies

  • Impact of assessment decisions on instruction

  • Computer adaptive testing for students with disabilities

  • Formative assessment

CONCERNS

Graphical Representation of the Conceptual Model

We fully support the idea that a conceptual model (described on pages 3 and 4, shown graphically on page 14) should guide states with the adoption of a common set of accessibility tools and accommodations. We also support the provision of a framework for enhancing a common understanding as to how the accommodations might be implemented with equity for all learners who might benefit from the allowable supports. We have concerns, however, regarding how decisions will be made with respect to particular accessibility features that are determined by the ISAAP (Individual Student Assessment Accessibility Profile). Page 17 states that SBAC is not defining the composition of the teams that will make decisions about which accessibility features should be incorporated into a student’s ISAAP. Rather, “it is under the control of each school, and is subject to state and federal requirements.” Moreover, for “most students who do not require accessibility tools or accommodations, an initial decision by a teacher may be confirmed by a second person (potentially the student).” This process is vague and invites the possibility that bias and subjectivity will influence decisions about student use of particular features. The process used in one school may differ significantly from that used in another school. It is also important to point out that the test proctors who administer the assessments are not always familiar with the students whose test administrations they are overseeing. These educators may lack the knowledge and skills to determine which accessibility features are appropriate for which individual student.

We further feel that the visual representation of the three levels and the embedded/locally provided accommodations might be improved to display more clearly that the features listed at the top apply to all three layers and that the features listed in the middle band apply both to that layer and the one below. While the matrix of accessibility features provided in Appendix A (page 26) clarifies the conceptual model, it might be better to show that the “Accessibility Tools Available to All Students” category applies across all three categories. The visual generated by PARCC, which depicts three concentric circles, illustrates the overlap of categories. With three distinct divisions in a triangle, it is not easy to see that some accessibility features are available to all students.

Attention to Item Design to Ensure Construct Validity

We appreciate the attention given in the Accessibility and Accommodations Framework to the importance of construct validity as well as the role that digitally based assessments can play in reducing construct irrelevance. We were particularly pleased to see the statements made on page 7 regarding critical factors impacting the validity and fairness of measures of student achievement:

  • A clear definition of the construct—the knowledge, skills, and abilities—that is intended to be measured;

  • The development of items and tasks that are explicitly designed to assess the construct that is the target of measurement; and

  • The delivery of and capturing of responses from those items and tasks in ways that enable students to maximally demonstrate their current state of achievement of the construct.

We urge SBAC to provide greater detail regarding the item and task development process in order to ensure that there is precision with respect to the identification of intended constructs associated with the CCSS that will correspond to individual assessment items. Without this precision, there is the danger that items or tasks will measure construct-irrelevant information for certain students and that, as a result, the inferences drawn from the assessment scores for these students will be invalid.

In particular, we encourage developers to be exact in identifying the particular constructs associated with reading in order to allow a skill such as decoding to be measured separately from higher-level reading comprehension. With today’s widely available technologies, students can independently demonstrate achievement of high levels of reading comprehension without having to decode specific elements of text. For students with visual impairments as well as those with specific learning disabilities, technology can support the high levels of language processing necessary for deep understanding and interpretation of text. In many such cases, college and career readiness does not depend on decoding as a presumptive prerequisite skill. In our view, reading represents a cluster of related constructs that can be independently measured and attained.

Clarification of the Accommodations Process

We further urge SBAC to provide greater clarification regarding the process for developing the ISAAP for students with disabilities as well as the relationship between the ISAAP and the IEP. Page 18 states that the ISAAP for students with disabilities “should be based on information in a student’s IEP.” It is unclear, however, what specific steps schools and districts should take to develop the ISAAP. For example, is the student to receive some type of assessment in conjunction with the information in the student’s IEP to help determine which features are appropriate and should be activated? When would such an assessment be administered? If no assessment is involved, how is the team expected to make decisions about appropriate features?

More details are also necessary to clarify the relationship between the ISAAP and the IEP. Pages 18–19 state that although the ISAAP is not the same as an IEP, the two documents should be consistent. Because the IEP is a legal document that carries with it procedural safeguards, it is important for the accessibility tools and accommodations that are included in the ISAAP to be documented in the student’s IEP as well. We are particularly concerned with the fact that page 19 states that, while the ISAAP can be developed by an IEP or 504 Team, it need not be. Rather, “the ISAAP can also be created by an instructional support team or other ex officio entity formed solely for the purpose of preparing the profile.” IDEA requires that decisions regarding the participation of students with disabilities in statewide assessments, including the accommodations that the student will receive, be made by the IEP team and documented in the student’s IEP. To allow a separate process or “ex officio entity,” which may or may not include the child’s parents, to make such decisions would violate the student’s right to receive FAPE under IDEA and to receive comparable aids, benefits, and services under Section 504. Page 20 discusses the importance of federal accommodations legislation. It is critical that SBAC assessments be developed and administered in such a way as to protect the rights of students with disabilities. As described earlier, we are also concerned with how the ISAAP process is supposed to function for students without disabilities.

We further believe that the document should emphasize to a greater extent the need for states to provide comprehensive training and technical assistance with respect to the process for ISAAP development. Page 21 cites research that has highlighted that “when proper training supports are provided, educators and educational teams are able to develop quality ISAAPs that support more valid assessment of students (Higgins, Fedorchak, & Katz, 2012).” Page 22 also states: “To assist educators in using ISAAPs, Smarter Balanced will develop tools and training materials that improve on those previously employed by NECAP.” We encourage SBAC to make its training and resources available to parents as well as educators.

Pages 21–23 discuss the use of a new standard—detrimental effect—to determine whether an accessibility feature or accommodation is necessary. It is unclear, however, how educators are to interpret this standard and how the standard will intersect with current legal requirements under IDEA. The creation of a new standard has the potential to introduce confusion into the decision-making process. Again, professional development will be critical here. It will also be necessary to review the standard to ensure that it is not creating an added burden or in any way denying the rights of students with disabilities to demonstrate what they know on these assessments.

Finally, we also have several questions with respect to the specific accessibility features and accommodations listed in Appendix A. For example, for which items will the text-to-speech accessibility feature be allowed? We are concerned that students not be denied the opportunity to demonstrate their understanding of text. As Jackson (2012) notes, for students with visual impairments who use audio-supported reading, “the task of reading and comprehending text can occur with greater efficiency, thus opening up learning opportunities that will support students in maximizing their educational potential” (p. 1). In addition, which accommodations in Appendix A will be available to students with physical disabilities who require a switch-activated device in order to participate? Moreover, what will the accommodations process look like for a paper-and-pencil administration of the assessment?

Appropriate Uses of Assistive Technologies

We wish to make a brief comment about this part of the conceptual model described on page 13 and shown on page 14: “The right side of the framework captures supports that are provided locally. Local supports may require the use of a physical device/tool or interaction with a human.”

CAST favors offering a balance of embedded and external accommodations and assistive technologies so that students may benefit from essential AT that is not embedded, is familiar to the student from daily use during instruction, and does not violate the construct being measured for selected assessment items. We will be very interested in learning more about the guidelines that will be provided with respect to the selection and use of locally provided accommodations and assistive and communication technologies.

We did notice that the table in Appendix A does not include all of the assistive technologies that are listed on page 9 of the document. Is that because page 9 was a generic introduction to the topic while Appendix A indicates the practice that will actually be implemented?

Impact of Assessment on Instruction

We have an overarching concern about the potential effects of assessment on classroom instruction. We see a danger of assessment policy and procedures driving instructional practices, including the materials and tools, such as accessible instructional materials, used with students in the classroom. While we agree that accommodations may interfere with a construct being measured at the item level, we are concerned that schools and/or teachers may not allow accommodations during instruction simply because they are not allowed on the assessment. We have actually observed this phenomenon—a state was not able to provide computer-based writing tests and therefore determined that all writing instruction in classrooms should use paper and pencil in order to parallel the annual high-stakes assessment. We believe this is overcompensation for a testing situation and that such a policy neither prepares students to meet K–12 standards nor realistically prepares them for success in career or college.

Computer Adaptive Testing for Students with Disabilities

While it is clear that the Smarter Balanced approach to large-scale assessment is heavily invested in, if not predicated on, computer-adaptive testing (CAT), the descriptive paragraph on page 10 details the potential efficiency and engagement benefits of CAT but does not address any of its potential liabilities. From prior research and analysis, key potential challenges associated with CAT and students with disabilities include:

  • There is a lack of research on the accuracy and viability of CAT for the various subgroups of students with disabilities (Laitusis et al., 2011; Stone & Davy, 2011); the majority of benefits for students with disabilities ascribed to CAT appear to be based on assumptions unsupported by existing research data.

  • A down-leveling of test items following an item failure could result in the presentation of out-of-level items based on standards from a lower grade (see the sketch following this list). This could render the assessment out of compliance with the ESEA requirement to measure student performance against the expectations for a student’s grade level (Way, 2006; U.S. Department of Education, 2007; ACRE, 2010). Such a result could also have the effect of violating the student’s rights under IDEA and Section 504. Research suggests that maintaining alignment with content standards may be more successful if the adaptation occurs at the testlet/subtest level, rather than at the item level (Folks & Smith, 2002). We presume that, since SBAC is a state-supported initiative, CAT-related ESEA compliance has been addressed, but we were unable to locate any supporting documentation.

  • Students with uneven skill sets may fail basic items and never have the opportunity to exhibit skills on higher-level tasks; this is particularly relevant to various students with disabilities who may exhibit idiosyncratic and uneven academic skills (Thurlow et al., 2010; Almond et al., 2010; Kingsbury & Houser, 2007).

  • CAT approaches are reported to be efficient and accurate when item responses are limited to multiple choice and short answers (Way et al., 2010), while more varied response types may pose significant challenges to adaptive algorithms, and hence to validity.

  • The majority of CAT systems deployed to date may not allow or may significantly restrict a student’s ability to return to a previous item to review or change a response (Way, 2006), further narrowing the range of test-taking strategies a student may employ. Some solutions to the application of a review-and-change strategy for CAT have emerged (Yen et al., 2012; Papanastasiou & Reckase, 2007). Will students taking the SBAC assessments be allowed to review or change a response?
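
To make the down-leveling concern concrete, the following is a minimal, illustrative sketch of item-level adaptation in the spirit of a one-parameter (Rasch-like) model. It is not the Smarter Balanced algorithm; the item pool, the step size, the nearest-difficulty selection rule, and the 30% success rate are all assumptions made solely for illustration.

    import random

    def next_item(pool, used, theta):
        """Pick the unused item whose difficulty is closest to the
        current ability estimate theta (a simple selection heuristic)."""
        candidates = [i for i in range(len(pool)) if i not in used]
        return min(candidates, key=lambda i: abs(pool[i][0] - theta))

    def run_cat(pool, answer, n_items=10, step=0.5):
        """Administer n_items adaptively: raise theta after a correct
        response, lower it (down-level) after an incorrect one."""
        theta, used, log = 0.0, set(), []
        for _ in range(n_items):
            i = next_item(pool, used, theta)
            used.add(i)
            correct = answer(pool[i])
            theta += step if correct else -step
            log.append((pool[i], correct, theta))
        return log

    if __name__ == "__main__":
        random.seed(1)
        # Hypothetical (difficulty, grade) pool: the grade 5 items sit
        # below the grade 6 items on the difficulty scale.
        pool = [(-2.0 + d * 0.5, 5) for d in range(8)] + \
               [(0.0 + d * 0.5, 6) for d in range(8)]
        # Simulate a struggling test-taker who answers 30% of items correctly.
        for (b, grade), correct, theta in run_cat(pool, lambda item: random.random() < 0.3):
            print(f"grade {grade} item (b={b:+.1f}): "
                  f"{'correct' if correct else 'incorrect'} -> theta {theta:+.1f}")

Under these assumptions, a run of incorrect responses drags the ability estimate downward until the selection rule begins drawing grade 5 items for a grade 6 student, which is exactly the ESEA alignment risk described above; adapting at the testlet/subtest level, with each testlet built entirely from on-grade items, avoids this drift.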

Formative Assessment

While not part of this particular policy statement, we are concerned about the following issue: In addition to the summative assessments, it is our understanding that the original charge for SBAC was to create formative assessments. We hope that such assessments will be implemented with care so that they might support teaching and learning on a daily basis. We have been concerned about the possibility that SBAC formative assessments, if implemented, will be mini-summative in nature and simply compound the understandable obsession with preparing students for the summative assessment event.

The importance of the formative assessment process cannot be overstated. Teachers, students, administrators, and parents benefit from the data collected in well-designed formative assessment. The formative assessment process provides information about performance during the instructional episode so that modifications, changes, and alterations in instruction may be made to support achievement toward the instructional goals. Without well-established and well-implemented formative assessment procedures, educators, students, and parents may not be well informed about progress toward a goal, obtaining only summative data about performance after instruction has occurred—in other words, after it is too late to support or change instruction. Without the benefits of formative assessment, policies related to summative assessment become more critical for students with disabilities. Ultimately, it is our hope that well-developed and implemented formative assessments will lead to improvements in each learner’s attention to and analysis of their own learning process and products.

We thank you again for the opportunity to comment on these issues. We look forward to working with you further in this effort toward creating fair and equitable assessments for all learners. We particularly look forward to reviewing the Access and Accommodations Manual later this summer.

Sincerely,

Tracey E. Hall, PhD, Senior Research Scientist
Chuck Hitchcock, MEd, Chief Officer, Policy and Technology
Richard Jackson, EdD, Research Scientist/Professor, Boston College
Joanne Karger, JD, EdD, Research Scientist/Policy Analyst
Patricia K. Ralabate, EdD, Director of Implementation
David H. Rose, EdD, Chief Education Officer and Founder
Skip Stahl, MS, Senior Policy Analyst
Joy Zabala, EdD, Director of Technical Assistance, CAST and AIM Center

References

Almond, P., Winter, P., Cameto, R., Russell, M., Sato, E., Clarke-Midura, J., & Lazarus, S. (2010). Technology-enabled and universally designed assessment: Considering access in measuring the achievement of students with disabilities—A foundation for research. Journal of Technology, Learning, and Assessment, 10(5). Retrieved from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1605

Hansen, E. G., & Mislevy, R. J. (2005). Accessibility of computer-based testing for individuals with disabilities and English language learners within a validity framework. In M. Hricko & S. Howell (Eds.), Online assessment and measurement: Foundation, challenges, and issues. Hershey, PA: IGI Science.

Higgins, J., Fedorchak, G., & Katz, M. (2012). Assignment of accessibility tools for digitally delivered assessments: Key findings. Dover, NH: Measured Progress.

Jackson, R. M. (2012). Audio-supported reading for students who are blind or visually impaired. Wakefield, MA: National Center on Accessible Instructional Materials. Retrieved from http://aim.cast.org/learn/practice/future/audio_supported_reading

Kingsbury, G. G., & Houser, R. L. (2007). ICAT: An adaptive testing procedure to allow the identification of idiosyncratic knowledge patterns. In D. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing. Retrieved from www.psych.umn.edu/psylabs/CATCentral/

Laitusis, C. C., Buzick, H. M., Cook, L., & Stone, E. (2011). Adaptive testing options for accountability assessments. In M. Russell & M. Kavanaugh (Eds.), Assessing students in the margins: Challenges, strategies, and techniques. Charlotte, NC: Information Age Publishing.

Papanastasiou, E. C., & Reckase, M. D. (2007). A "rearrangement procedure" for scoring adaptive tests with review options. International Journal of Testing, 7(4), 387–407.

Public Schools of North Carolina (2010, July 16). Computerized adaptive testing: How CAT may be utilized in the next generation of assessments. A Report for the North Carolina State Board of Education. Raleigh, NC: Author. Retrieved from http://www.ncpublicschools.org/docs/acre/publications/2010/publications/20100716-01.pdf

Thurlow, M., Lazarus, S. S., Albus, D., & Hodgson, J. (2010). Computer-based testing: Practices and considerations (Synthesis Report 78). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

U.S. Department of Education. (2007). Standards and assessments peer review guidance: Information and examples for meeting requirements of the No Child Left Behind Act of 2001. Retrieved from www.ed.gov/policy/elsec/guid/saaprguidance.doc

Way, W. D. (2006). Practical questions in introducing computerized adaptive testing for K–12 assessment. PEM Research Reports. Iowa City, IA: Pearson Educational Measurement. Retrieved from http://www.pearsonassessments.com/NR/rdonlyres/EC965AB8-EE70-46E5-B1A5-036BE41AB899/0/RR_05_03.pdf?WT.mc_id=TMRS_Practical_Questions_in_Introducing_Computerized

Yen, Y. C., Ho, R. G., Liao, W. W., & Chen, L. J. (2012). Reducing the impact of inappropriate items on reviewable computerized adaptive testing. Educational Technology & Society, 15(2), 231–243.
