Monitoring Frequency: even when RtI is not required for SLD

This post is by guest blogger, Chris Birr, Ed. Birr is a member of the ion Board of Directors, a School Psychologist, MTSS Coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio with his wife, two daughters, and a dog.

TL;DR version:

  • Collect data for at least 6 weeks (low stakes decisions)
  • If weekly, conduct 12 weeks (or more) of CBM
  • If administering 3 probes, record median (with associated errors)
  • Consider administering 3 probes every 2 weeks for 6 weeks (check trend)
  • If time allows, use 3 probes every month; check at 6 and 12 weeks

A while ago, I was asked whether weekly, every-other-week, or monthly progress monitoring is most effective. At the time, my gut reaction was that “weekly” is best. I started searching and reviewing research on the reliability of curriculum-based measures (CBMs) and the frequency of their administration. Like any educational topic, the answer was not clear.

In my previous role, I was in Wisconsin, where Specific Learning Disabilities are identified through a Response to Intervention (RtI) process. The purpose here is not to debate whether that is the most accurate method, but I would argue that providing intervention while assessing benefits the student. Regardless, the SLD rule requires weekly monitoring for the identification of learning disabilities. The “rule” in Wisconsin was established in 2013 and could be ready for revision based on research conducted since that time.

Meanwhile, we are now 6 months into a pandemic, and students have experienced a lack of instruction, uneven instruction, poor instruction, and good instruction. Life is a mixed bag at present, and generalizing the effect of instruction to make any decisions could be difficult.

My questions are as follows:

  • Is weekly monitoring of progress still the most reliable?
  • Can the same reliability be obtained if monitoring every other week?
  • Can you really stretch things out and monitor once a month?
  • How long do you need to collect data?

Findings from Van Norman, Christ, and Newell (2017) indicate:

  • Growth estimates are more discrepant when the monitoring window is short, whereas longer durations result in lower error
  • A minimum of 2 months is needed for reliable progress monitoring data

This indicates that monitoring needs to occur for a couple of months to have a moderate degree of confidence in the data obtained.

Additional findings from Christ, Zopluoglu, Monaghen, and Van Norman (2013):

  • Data collection for less than 4 weeks is not recommended
  • Reliability is best after 12 weeks of data collection, 8 weeks for low stakes decisions
  • The SEM of the slope was smaller when data were collected more often; 10 weeks are recommended for less dense data collection schedules
  • Generalized growth is typically observed after a minimum of 6 weeks. Duration is a better predictor of variance than schedule

Based on this, schools should monitor for at least 8 weeks, but 10 weeks is best to feel fairly confident in the data. That applies when monitoring is conducted weekly.
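The duration effect behind these findings can be illustrated numerically. Below is a minimal ordinary-least-squares sketch (hypothetical weekly words-correct-per-minute scores; `slope_with_se` is my own illustrative helper, not from any of the cited studies) showing that, for the same week-to-week noise, a longer monitoring window shrinks the standard error of the estimated slope:

```python
import statistics

def slope_with_se(scores):
    """OLS slope of equally spaced weekly scores, plus its standard error."""
    n = len(scores)
    weeks = list(range(n))
    mean_x = statistics.mean(weeks)
    mean_y = statistics.mean(scores)
    sxx = sum((x - mean_x) ** 2 for x in weeks)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(weeks, scores)) / sxx
    intercept = mean_y - slope * mean_x
    # Residual variance drives the SEM of the slope: se = sqrt(MSE / Sxx)
    residuals = [y - (intercept + slope * x) for x, y in zip(weeks, scores)]
    mse = sum(r ** 2 for r in residuals) / (n - 2)
    return slope, (mse / sxx) ** 0.5

# Same alternating +/-3 WCPM noise around a 2-words-per-week trend:
noisy8 = [100 + 2 * i + (3 if i % 2 == 0 else -3) for i in range(8)]
noisy12 = [100 + 2 * i + (3 if i % 2 == 0 else -3) for i in range(12)]
# The 12-week window yields a smaller standard error than the 8-week window.
```

The standard error falls with duration because the spread of the time points (Sxx) grows much faster than the residual noise, which is consistent with duration mattering more than schedule.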

Findings from Jenkins, Graff, and Miglioretti (2009), who analyzed twice-monthly monitoring:

  • Frequency of measurement can be reduced to 1-2 times per month
  • Rather than 1 probe, collect 3-4 per session to increase validity

Here’s where it gets more interesting. Findings from Jenkins, Schulze, Marti, and Harbaugh (2017):

  • Compared results of weekly to bi-weekly monitoring
  • Minimum of 6 weeks for fairly reliable decision making when using every other week monitoring
  • Every-other-week monitoring consists of multiple probes (2-3) per session; this is a plausible alternative to weekly monitoring
  • Intermittent monitoring provided more reliable data after 6 weeks compared to 8 weeks with weekly monitoring

Rather than 10-12 weeks of weekly monitoring, schools could administer 3 probes every other week (recording the median) and have reliable data in 6 weeks. The same number of probes, or even more, are administered, but this could allow for greater continuity of instruction while providing the same or better reliability.

The focus here is more about reducing the error in each measurement. Those who have seen Dr. Christ present have heard him report that all ORF scores carry an SEM that few discuss when sharing results. If I understand correctly, the shorter the duration of measurement, the greater the chance of error. Two months or more of monitoring seems to be the sweet spot for obtaining adequate reliability.

Basically, schools could monitor twice a month, administering 3 probes and recording the median. Following Jenkins et al. (2017), administering three probes every other week and recording the median score could yield reliable data in a shorter period of time.
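As a concrete sketch of that schedule (hypothetical scores; `biweekly_medians` is an illustrative helper, not a function from Jenkins et al.), each every-other-week session's three probes collapse into a single median data point for the trend line:

```python
import statistics

def biweekly_medians(probe_sessions):
    """Collapse each monitoring session's probes (e.g. 3 ORF passages)
    into one median score, yielding one data point per session."""
    return [statistics.median(probes) for probes in probe_sessions]

# Hypothetical 6 weeks of every-other-week monitoring:
# 3 sessions, 3 probes (words correct per minute) per session.
sessions = [[52, 48, 55], [58, 61, 54], [63, 60, 66]]
print(biweekly_medians(sessions))  # [52, 58, 63]
```

Taking the median rather than the mean keeps a single unusually hard or easy passage from dragging the session's data point around, which is the error-reduction idea discussed above.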

So what? If you are in a state that does not rely on monitoring data or RTI to identify disabilities, this could be viewed as nice to have but not necessary. However, considering the lack of instruction, stress placed on students and families, having some ongoing monitoring data could be very beneficial for informing instruction AND decisions.

I am predicting a higher-than-average number of evaluations and increased emotionality when making decisions this year. Standardized assessment results provide one source of information for making decisions, but having a view of the learning trajectory is also critical. If a student’s level of achievement is low but he or she demonstrated adequate growth, as evidenced by 6 weeks of median scores, that could be a different conversation than one prompted by a flat trajectory across 6 weeks of weekly monitoring data.

Time is going to be in short supply, and if conducting remote instruction, figuring out how to conduct monitoring could be one more thing to worry about. However, having that data could lead to improved decisions. We are in the midst of a crisis that will hopefully pass in a year, but eligibility decisions will follow students long after the pandemic subsides. If you are in a position to influence practice, think critically about how to best collect data to increase reliability and decrease error.

References:

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51(1), 19-57.

Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75(2), 151-163.

Jenkins, J., Schulze, M., Marti, A., & Harbaugh, A. G. (2017). Curriculum-based measurement of reading growth: Weekly versus intermittent progress monitoring. Exceptional Children, 84(1), 42-54.

Van Norman, E. R., Christ, T. J., & Newell, K. W. (2017). Curriculum-based measurement of reading progress monitoring: The importance of growth magnitude and goal setting in decision making. School Psychology Review, 46(3), 320-328.