The School Administrator Web Edition
November 1999
Accommodating Practices
What one school district considers the effective use of accommodations and modifications in student assessments

It was the height of the testing season last April and my phone began to ring. And ring.

As the administrator responsible for overseeing testing programs in my district, I faced a battery of questions, most of them concerning the assessment of students under the Individuals with Disabilities Education Act.

Here is a sampling of what I faced:

* A middle school counselor: "I have some learning disabled kids here that the special education teacher thinks should have extra time on the Metropolitan Achievement Test. Won’t that make the test results invalid? We shouldn’t do it, should we?"

I kick the issue around with him for a few minutes, and he agrees to report to the teacher our feeling that extra time should not be provided.

* A high school counselor: "I have about 30 LD kids here that the IEP says should be provided extra time on the Metropolitan Achievement Test. Can we do that? Won’t it invalidate the results?"

This time I pick up on a key phrase, "the IEP says," and I know it is time to think hard. We discuss the specifics for a while and conclude that the accommodation should be provided. We decide the reporting of the results will require special handling including a notation in the student’s cumulative folder concerning the nature of the accommodation provided.

A few minutes later I call back the middle school counselor to revisit our prior discussion. This time I make what "the IEP says" the focus of our discussion and course of action. I begin to see a great need for staff development on what constitutes appropriate accommodations, and I begin to wonder about the quality of decisions our staff are making in meetings about students’ individual education plans.

* A staff member at a school newly designated to teach elementary-level English as a second language students: "How are you going to handle reporting test results for the new ESL students in our building? If they take the tests and you lump them in with our regular students, our scores are going to go down."

These represent just a few of the issues confronting school districts as they try to comply with the recently revised IDEA and Title I assessment requirements.


Accommodation Guidelines
The IDEA legislation and regulations provide little guidance on the particulars of assessment accommodations for students with disabilities. There is specific reference to "accommodations and modification in administration," but neither term is defined nor is a distinction made between them. For that we need to look to emerging practice and writing in the field.

The consensus view is that accommodations are changes in test administration that do not change the underlying construct being measured. Accommodations ensure that an assessment measures the intended construct rather than a child’s disability. They involve changes to test presentation format, response format, test setting or test timing.

Considerable variation exists among the various authorities on the accommodations recognized within each of these categories. Reading test directions orally or possibly even the test items on a mathematics test to a student with a learning disability in reading is an example of an accommodation in presentation format. The rational basis for this change in administration is to improve test validity for such a student by ensuring the measure is assessing his math skills rather than the identified reading disability. Some have argued that accommodations should produce a differential boost in performance for disabled compared to nondisabled students.

Modifications to test administration are typically viewed as substantial changes in administration that alter what is being measured and therefore result in some loss of information. Reading entire passages out loud to a student on a reading assessment is an example of a testing modification: it would change the construct being measured from reading comprehension to listening comprehension.

Tests are systematically collected samples of behavior from which we make inferences about student skills and abilities. When testing accommodations are provided to students with disabilities, the presumption is that we can proceed as usual in making reasonable inferences from the test data. But when testing modifications have been provided, the foundation for making such inferences has been substantially undermined.

At times, however, it is reasonable to make test modifications. Consider the example above of the student with a reading disability. It may be diagnostically useful to read out loud passages to the student to determine if the student can successfully complete listening comprehension tasks, which arguably are a prerequisite to reading skill development. This testing modification could be undertaken following a standard test administration as an extension of it. The data so derived would be used informally for instructional planning but never for accountability purposes.

In most cases test modifications are viewed as a last-resort strategy and are undertaken only when testing with accommodations alone is not a reasonable activity and the student would otherwise not be assessed.

One testing modification provided under the IDEA legislation is the alternate assessment. State and local education agencies are required to develop guidelines for participation of disabled children in alternate assessments when they cannot participate in the standard assessments with accommodations.

The most common strategy for alternate assessments is some type of portfolio system aligned with the same content standards as the regular assessment. The consensus is that alternate assessments should be reserved for a limited segment of the population, ranging perhaps from 0.5 to 2 percent.

The IDEA legislation and regulations contain a key word when referencing accommodation and modification: appropriate. But language defining just what is or is not appropriate is absent and we are left to use professional judgment as to what constitutes good practice. Until recently little research had been done on assessment accommodations and their effects on test results and validity. As Lynn Fuchs of Vanderbilt University has noted (see article), widespread presumptions that accommodations will result in improved performance have been contradicted by some research.

Checklist Use
So what are some strategies for reaching sound decisions about assessment accommodations?

In their book Testing Students with Disabilities, Martha Thurlow and colleagues at the National Center on Educational Outcomes included checklists for rating student behavior in the instructional setting that connect logically to assessment accommodation decisions. Using this approach, a student whose teacher agrees he can "listen to and follow oral directions given by an adult or on audio tape" in an instructional context would participate in assessments with no changes in presentation format. A disabled student rated by his teacher as "needing directions repeated, clarified or simplified" in an instructional context would be a candidate for receiving those accommodations on an assessment.

Stephen Elliott, professor of educational psychology at the Wisconsin Center for Education Research, and others used a similar approach in developing the "Assessment Accommodations Checklist," published by CTB McGraw-Hill.

While the rating-scale approach is systematic and provides a rational basis for making accommodations, the effect that the accommodations will have for a specific student is still unknown. To address this limitation, Fuchs describes using small action research projects to try out accommodations with individual students to validate their appropriate use. She calls this system Dynamic Assessment of Testing Accommodations. Similarly, Gerald Tindal, an associate professor of behavioral research at University of Oregon, has described the use of single-subject research designs to validate the use of accommodations with specific students.

These research-based approaches offer the soundest basis for making appropriate accommodations. Although somewhat demanding, they should be within the capabilities of special education teachers who have been properly trained to use them.

The checklist approach developed by Thurlow and others for determining accommodations established a link between the need to make accommodations for instruction and the need to do so in assessments. These two practices should go hand in hand to the maximum extent feasible. There is a growing recognition that students with disabilities are not the only ones with special needs and that instructional settings need to be more responsive to variations in learning styles. Arguments can be made for providing assessment accommodations for all students when they are made routinely in classroom instruction. Several states, including Colorado and Kansas, recently began to allow assessment accommodations for all students in their state assessment programs when the accommodations have been provided to the student instructionally.

When one considers these points, it is apparent that accommodations should not be made on a wholesale cookie-cutter basis for a given category of disability. To ensure appropriateness, accommodation determinations should be made individually, systematically, assessment by assessment, and skill area by skill area.

To minimize confusion, local school districts should give serious consideration to mirroring the accommodation guidelines used in state assessments for their own locally administered assessments. The states individually and collectively through work done by the Council of Chief State School Officers and National Center on Educational Outcomes have undertaken extensive research and development efforts to support more inclusive assessments. These practices are generally sound and state of the art. It is a good place to start.


The Driving Agent
The IDEA legislation and regulations clearly place responsibility for determining assessment accommodations or excluding students from standard assessments with the IEP team as part of the IEP process. The team determines what is appropriate for each child.

The quality of the professional judgments rendered by IEP teams will be a direct function of their professional skills and knowledge in making assessment accommodations.

Staff development is probably the most critical component for ensuring appropriate assessment accommodations are provided. Steps also should be taken to guarantee that IEP documents and processes address all IDEA-related assessment needs. These include identifying accommodations to be made on specific assessments, the basis for excluding a student from an assessment and the alternate assessment strategy to be used when a student is excluded from participating in an assessment.


Reporting Issues
The IDEA regulations include some specific reporting requirements, but they focus on state rather than local education agency assessments. The Improving America’s Schools Act of 1994 requires administration of at least reading and math assessments annually with results disaggregated at the school, district and state levels by ethnicity, gender, English proficiency status, migrant status, economic status and disabled versus nondisabled status.

Locally administered assessments include nationally normed achievement tests and various curriculum-based assessments. The most basic reporting for a given assessment is a simple tabulation of participation versus non-participation for regular students and those with disabilities, perhaps broken out further by disability subcategory. Test results for a building or district will reflect the actual achievement of students only when they are based on a very high percentage of the student enrollment.

In an era of high-stakes testing where merit pay or school accreditation may depend on test results, the incentives are there to systematically exclude low-achieving students from testing. In rare instances, this is a deliberate act of commission, but more often it is an act of omission where school staff do not aggressively pursue makeup testing with absent students. Reporting the percent of participation and requiring documentation of the reasons for non-participation make it possible to identify and correct inappropriate practices.
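The participation reporting described above is straightforward to tabulate. Here is a minimal sketch in Python; the record format, group labels and reason labels are hypothetical illustrations, not part of any mandated reporting format:

```python
from collections import Counter

# Hypothetical test-administration records: one per enrolled student.
# "status" is either "tested" or a documented reason for non-participation.
records = [
    {"group": "regular", "status": "tested"},
    {"group": "regular", "status": "tested"},
    {"group": "regular", "status": "parent refusal"},
    {"group": "disabled", "status": "tested"},
    {"group": "disabled", "status": "absent, no makeup"},
]

def participation_report(records):
    """Percent participation per group, plus a tally of documented
    reasons for non-participation."""
    report = {}
    for group in {r["group"] for r in records}:
        rows = [r for r in records if r["group"] == group]
        tested = sum(1 for r in rows if r["status"] == "tested")
        reasons = Counter(r["status"] for r in rows if r["status"] != "tested")
        report[group] = {
            "enrolled": len(rows),
            "tested": tested,
            "percent": round(100 * tested / len(rows), 1),
            "reasons": dict(reasons),
        }
    return report
```

Tallying the reasons alongside the rates is what makes it possible to spot patterns of omission, such as a building that never pursues makeup testing.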

In reporting the results of those who participate in assessments, it is common practice to report separate averages for all students; for regular education students, including the gifted; for disabled students; and for students with limited English proficiency. If the numbers are large enough, these latter two groups can be subdivided further.
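A sketch of such subgroup reporting, with groups too small to break out suppressed rather than reported; the minimum group size used here is an illustrative assumption, not a figure drawn from the regulations:

```python
def subgroup_averages(scores, min_n=10):
    """Mean score per subgroup; suppress any group smaller than min_n."""
    groups = {}
    for subgroup, score in scores:
        groups.setdefault(subgroup, []).append(score)
    return {
        g: (round(sum(v) / len(v), 1) if len(v) >= min_n
            else "n too small to report")
        for g, v in groups.items()
    }

# Hypothetical scores: twelve regular education students, two disabled students.
scores = [("regular", 70 + i) for i in range(12)] + \
         [("disabled", 65), ("disabled", 75)]
```

Suppressing small groups protects individual students from being identifiable in published results while still permitting disaggregation wherever the numbers support it.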

During the past few years, as more state assessment programs began to report results by subgroup, many were surprised to discover that the average performance of mildly disabled students sometimes exceeded that of their nondisabled peers. Inclusion of disabled and ESL students in assessments and reporting their results is a way of promoting high expectations for all students. Disabled students also may benefit politically from being included as stakeholders press for accountability.

Assessment accommodations or modifications have implications for handling the reporting of assessment results. Testing modifications, which fundamentally change what is being measured, clearly warrant special handling. The results for such students should not be lumped in with those for students tested using standard procedures, and probably they ought not to be reported formally at all. Information from such testing may be useful for instructional planning but not for accountability purposes.

For students who participate in assessments with accommodations the picture is more complex. A fair consensus exists that some accommodations, such as testing students individually or in small groups, are minor departures from the standard approach and that the results may be interpreted on the same basis as for students not receiving the accommodation. Other accommodations such as simplifying or repeating directions or giving extended time often are thought to invalidate use of norms or comparisons with students participating with standard administration procedures.

The major norm-referenced test publishers recently have begun to distribute lists of accommodations for specific tests that they believe constitute standard versus nonstandard administration of the test, but the lists show there is not perfect agreement. One can imagine a situation in which an IEP team recommends the use of an accommodation on a norm-referenced test that the test publisher indicates is a nonstandard administration, compromising the use of the norms. Who will identify that problem and who will ensure the reporting is handled appropriately?
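The cross-check this scenario calls for, comparing the accommodations an IEP team recommends against the publisher's list of nonstandard administrations, is simple to automate once both lists are in hand. Both sets below are hypothetical examples, not any publisher's actual list:

```python
# Hypothetical publisher list of accommodations deemed nonstandard
# for a particular norm-referenced test.
publisher_nonstandard = {"extended time", "directions read aloud"}

# Hypothetical accommodations recommended by one student's IEP team.
iep_accommodations = {"small group administration", "extended time"}

# Any overlap compromises use of the norms and flags the student's
# results for special handling in reporting.
flagged = iep_accommodations & publisher_nonstandard
```

Running such a check when IEPs are written, rather than at reporting time, lets the team weigh the reporting consequences before settling on the accommodation.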

Much ambiguity surrounds the handling of these issues, and good practices are just evolving. As more research is conducted on the effects of assessment accommodations and as publishers develop a new generation of IDEA-friendly, norm-referenced tests, this situation should improve.


Shopping Considerations
Nearly all school districts give national norm-referenced tests. The use of norms intensifies issues regarding interpretation of test results with special populations and when accommodations should be provided. In most cases the use of norm-referenced tests is a local decision rather than a state mandate so school districts have the ability to scan the marketplace for the test that best meets their needs. The current generation of tests is understandably not up to meeting the needs generated by IDEA.

Here, though, are some IDEA-related issues that a school district’s test adoption committee should consider when shopping for a norm-referenced test.

* Building the norms:

Are data included in the technical manual delineating the inclusion of disabled and ESL students in the norms? Are separate norms provided for these groups?

What specifications or provisions were provided for accommodations/modifications for testing with the norm sample? Are separate norms provided for students using specific accommodations?

* Administering the assessment:

What specifications/provisions for use of accommodations/modifications in administration are included in the test manuals?

What guidelines are provided for making testing accommodations/modifications? What are these guidelines based upon? Was research conducted with the test and samples of regular and disabled students?

* Reporting results:

What reporting provisions are made for reporting all students tested in the aggregate?

What reporting provisions are made for disaggregation of regular versus disabled?

What reporting provisions are made for disaggregation by disability subgroup?

What reporting provisions are made for disaggregation by English proficiency status?

What reporting provisions are made for disaggregation of results by accommodations/modifications provided?


Validity Tradeoffs
Some purists may cringe at the thought of making widespread accommodations in student assessments. I would remind them that the goal is always to obtain the most valid assessment information possible.

We also need to keep in mind that test validity is not an "all or none" binary concept. There are degrees of validity. Use of testing accommodations may decrease validity slightly in some ways while increasing it in others. The challenge for educators is to make assessment accommodations where this tradeoff yields a net improvement in validity.

In recent years it has become common practice to apply to tests the concept of consequential validity, developed by the late Samuel Messick, vice president for research with the Educational Testing Service.

In several articles during the late ‘80s and early ‘90s, Messick argued that consideration of validity issues for tests should include an evaluation of their social consequences. He believed when tests produce social consequences logically consistent with the test's intended purposes, they have consequential validity. Clearly, one intended purpose for assessments today is that they drive instructional reform. Implementing a more inclusive assessment system by using accommodations and alternate assessments should improve the consequential validity of the assessments used in our schools by ensuring the interests of all stakeholders are appropriately represented.


Creative Tension
The recent assessment requirements for disabled and English as a second language students embodied in IDEA and the Improving America’s Schools Act present school districts with significant challenges. Clearly, the legislative requirements are a bit ahead of the research and development needed to guide good practice in making assessment accommodations and decisions about alternate assessments.

Yet this tension is spurring a great deal of new research, so the situation is steadily improving. Good resources and strategies are emerging that can be the focus for staff development and developing good practices at the district level.

Steve Henry is director of planning, evaluation and grants procurement for the Topeka Public Schools, 624 S.W. 24th St., Topeka, Kan. 66611. E-mail: shenry@topeka.k12.ks.us. He also is president of the National Association of Test Directors.


