By Susan L. Hall, Ed.D.
When was the last time you heard the term “Response to Intervention (RtI)” or “data-differentiated instruction”? Most K-12 educators hear one or both of these terms weekly, if not daily. Everyone knows that data are essential to accomplishing differentiation and RtI, but there are widespread misconceptions about which assessment instrument to use for which purpose. Using the wrong assessment is like trying to flip a pancake with a spoon; you might get some chunks of edible pancake, but it’s messy and inefficient. In my work with schools, I find the two greatest areas of confusion are the appropriate uses of universal screener data and the distinction between a universal screener and a diagnostic instrument.
The most commonly used universal screeners for academic skills are curriculum-based measures (CBM). CBM is the generic name for a category of assessments whose characteristics make them well suited to universal screening; DIBELS and AIMSweb are brand names within that category. Have you ever noticed how many more people ask for a “Kleenex” than a “tissue”? It’s the same with CBMs: educators tend to name the brand rather than the category.
Teachers tell me their school uses DIBELS, AIMSweb, or another CBM, yet they haven’t been given training on what a CBM is and what it can and cannot do. CBMs are terrific universal screeners, especially in reading, because they are quick and efficient ways to assess a skill. When administered repeatedly with alternate forms, they also make it possible to see growth. Since universal screeners are supposed to be given to every student (or nearly every student) three times a year, it’s important that they take no longer than 10 or 15 minutes to administer. Yet many schools are using assessments that take 30 minutes per student; that’s a travesty because it shifts valuable time from instruction to assessment. Why would anyone spend more than 10 to 15 minutes three times a year giving a universal screener to the school’s top readers? They’re better off reading during the time you’re spending assessing them.
CBMs are great for sorting students into two piles: those who appear to be performing at benchmark and those who aren’t. Yet there are limits to what you can learn from CBM data. In all but a few areas, a CBM cannot tell you enough to appropriately place a student in an intervention group. Too many schools are trying to use the data to do just that, and it doesn’t work very well. Once you find the students whose skills are below benchmark on the CBM, the next step is the most important: giving a diagnostic screener. You want your universal screener to be the most efficient and effective sorter possible and to point to which type of diagnostic screener to give each student who is below benchmark. Stopping with just a CBM is like admiring the problem without knowing what to do about it.
Teachers need access to diagnostic instruments to figure out how to address below-benchmark skills. If your school has intervention groups and you aren’t using diagnostic assessments, here’s an opportunity to improve the process. Excellent diagnostic instruments in reading exist for phonological awareness and phonics; comparable tools for comprehension are harder to find, and at present there is no vocabulary diagnostic measure available. Teachers should use diagnostic assessments to pinpoint the areas a student has mastered as well as those still lacking. The data from a diagnostic assessment allow grouping in a very specific skill area, such as phoneme segmentation or long vowel silent-e.
Diagnostic assessments are also well suited for progress monitoring. After a student participates in an intervention group for one to three weeks, it’s far more effective to give an alternate form of the diagnostic screener than to give a CBM indicator. If the student has been in a group to work on the long vowel silent-e pattern, how can you tell whether he has mastered it by having him read an oral reading fluency passage? Perhaps only two out of one hundred words contain the focus pattern. Phonics diagnostic screeners let teachers deliver a short segment on just that pattern, and it takes less than one minute to progress monitor a single skill. If the student has mastered one skill, you move up the skill sequence until he misses a skill, and that becomes his next group placement. A common issue with RtI assessment practices is progress monitoring with the wrong instrument. The CBM should be given from time to time, but it is rarely the best instrument for ongoing progress monitoring of students in intervention skills groups.
Many schools are now well into RtI implementation. According to the 2011 Spectrum K12 adoption survey, 68 percent of respondents indicated they are either in full RtI implementation or in the process of district-wide implementation. If student achievement is not as strong as hoped, the first place to check is whether the assessments fit their purposes. Check usage of the CBM first; while it’s a universal screener to give to all students, it should not be used universally for all assessment purposes. Then invest some time in learning about the benefits of phonological awareness and phonics diagnostic screeners.
Susan L. Hall, Ed.D., is founder and president of 95 Percent Group, a company that provides professional development and materials to assist schools in implementing RtI. She is author of two books about RtI: Implementing Response to Intervention: A Principal’s Guide, and Jumpstart RTI. She is also author of I’ve DIBEL’d, Now What? Susan is a National LETRS trainer and is coauthor of several books with Louisa Moats, including the Second Edition of Module 7, which is about phonics and word study. She can be reached at shall@95percentgroup.com.