MTSS By Yourself … Because of COVID

Starting, Starting Over, or Revising MTSS

This post is by guest blogger, Dr. Chris Birr. Birr is a member of the ion Board of Directors, a School Psychologist, MTSS Coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio with his wife, two daughters, and a dog.

School closures, and now a fluid environment, have left many educators feeling challenged to meet the learning needs of students while keeping schools open (Fernando & Schleicher, 2020). However, as students attend part-time, full-time, or continue with remote instruction, it is logical to predict that learning losses will occur due to reduced instructional time and the lack of in-person instruction. MTSS is a framework that many schools were developing and deploying with varying levels of success, fidelity, and completeness even prior to the pandemic. Now, I have heard that MTSS has been placed on the back burner until the pandemic subsides a bit. This may not be true everywhere, but I fear we may revert to less than evidence-informed instruction and data-based decision-making.

Most districts will not have the bandwidth to dedicate teams of people to MTSS development and deployment at this time. However, there are likely individuals who are either charged with the development of MTSS frameworks or have a passion to nudge their school or district toward an evolved MTSS framework.

As a past MTSS coordinator, I offer the following recommendations to anyone entering a developing MTSS framework, or one that was once robust but has devolved over time. They are intended as suggestions for an individual who has influence or impact on the development or refinement of an MTSS framework. But when uncertainty is the only consistent factor, it can be difficult to find entry points to improve a system.

1. Implementation Rubrics. Review or become familiar with the elements of “good” MTSS. Rubrics such as the SAM from the Florida MTSS Project or the NASDSE Blueprints of RtI are excellent resources, among others. Take inventory of your current system and look for strengths and weaknesses. Ideally, this work is done in teams at the school level and action plans are created. However, these are not ideal times. At a minimum, use a rubric to create a roadmap for yourself regarding how you could influence schools to improve in critical areas. Develop a plan for improvement that you can reference without overwhelming others at this time.

2. Assessment Basics. There are different kinds of tests, and tests serve different purposes. Take inventory of your school or district assessment schedule. Then examine which tests are universal screeners, which are diagnostic, and which are for progress monitoring. Also, which tests can be used for program evaluation to examine how students grew by the end of each year?

Efficiency and effectiveness suffer if a universal screening assessment doubles as progress monitoring and a student’s progress is only checked three times a year. Likewise, if a diagnostic assessment is used for screening, the results provide less precision about the student’s needs.

Consider reviewing basic psychometrics. Are the assessments used in your school or district adequate in terms of reliability and validity? You may not be able to make sweeping changes if things are less than ideal. However, looking for weaknesses and opportunities will help you develop and order priorities. Use conversations at the school level to nudge decision-making toward the data with the highest reliability and validity.

3. Data Transparency. Can data be sorted by grade level, school, or classroom? Is it possible to drill down to a single student or group of students receiving the same intervention? Are administrators able to view the entire district relative to their school or does each school have a patchwork of online spreadsheets that move location or are deleted accidentally at times?

Transparency of data is critical for a high-functioning, efficient MTSS framework. However, robust and efficient student information systems are not cheap and require professional development and support to operate effectively. Look for ways to scale efficient data retrieval to teachers and student services professionals. Access to data is one of the first steps to increasing data-based decision-making within systems. In my experience, it is ideal when teachers and administrators can access the data they need to make a decision in three clicks or fewer.

Formatting. One aspect that I found particularly useful was the use of consistent, conditional formatting of student assessment scores. The following was the system of formatting used to mirror the performance categories on our state tests.

Red = Below Basic; Yellow = Basic; Green = Proficient; Blue = Advanced

Each assessment had a district or publisher developed table with ranges of scores for each performance band. In this way, teachers or administrators could sort by score and get a quick visual of the number of students in each band. A bonus was that when norms changed, the formatted colors remained, and ranges could be changed behind the scenes within our student information system.
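If you want to prototype this outside a student information system, the same lookup-table approach can be scripted in a few lines. This is a minimal sketch, and the score ranges below are hypothetical placeholders rather than real state or publisher cut scores:

```python
# Minimal sketch of score-to-band conditional formatting.
# The cut scores below are HYPOTHETICAL placeholders; substitute your
# district's or publisher's ranges for each assessment and grade.
BANDS = [
    (0,   439, "Red",    "Below Basic"),
    (440, 499, "Yellow", "Basic"),
    (500, 559, "Green",  "Proficient"),
    (560, 999, "Blue",   "Advanced"),
]

def band_for(score: int) -> tuple[str, str]:
    """Return (color, label) for a scale score."""
    for low, high, color, label in BANDS:
        if low <= score <= high:
            return color, label
    raise ValueError(f"Score {score} is outside all bands")

if __name__ == "__main__":
    for s in (415, 470, 530, 585):
        color, label = band_for(s)
        print(f"{s}: {color} ({label})")
```

Because the colors come from a lookup table rather than being hard-coded, a norms update only requires changing the ranges – the same “behind the scenes” advantage described above.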

4. Observe Interventions. Schedule time a few times a week to observe small-group or individual interventions. Look for the key elements: explicit and systematic instruction, ample practice, frequent feedback, and monitoring. If you are fortunate enough to have the opportunity, watch the same intervention in multiple schools. Do practices look similar?

An eventual goal with interventions would be the creation of district or school fidelity checklists. The purpose is to communicate the critical elements of the intervention to the instructor. In my experience, checklists are not effective when used as “gotchas” or for evaluation. In a best-case scenario, the checklists are seen and known by interventionists and used to provide positive feedback that all elements were observed. In a perfect world, fidelity checklists are scored at 90-100% and student progress monitoring data is trending up.

Summary. Schools are faced with unprecedented times, and many are simply trying to keep up or do their jobs the best they can. Safety and health are the primary concerns, along with making sure students and staff stay as mentally healthy as possible. If you are placed in a position where MTSS can be a focus, there is still a need to accelerate student growth and academic progress. Knowing where to start, or even how to start, can be a challenge when priorities appear to change daily. The recommendations above are not based on an exhaustive search of the literature on the deployment of MTSS, but rather on the experience of being one person in a system and looking for ways to nudge MTSS forward little by little.

Are you in a position where you are trying to further MTSS? What has worked in your school or district? I’d love to hear suggestions and experiences.


Targets and Triggers – Refined

There are at least 5 fundamental components to a well-designed MTSS (Multi-Tiered System of Support). One of those components is data-based decision making.

Our friends at OregonRTI have suggested “Teams should use data and decision rules to determine effectiveness of the core program, identify students in need of interventions, and evaluate student progress to determine next steps.”

But, what exactly should that look like? If you are new to the idea of setting targets and triggers for your environment, where should you start?

We’re going to offer a few suggestions that will help you get started. In this post, we will assume that your state has a state assessment of some sort, and that your district has a Universal Screening assessment in place (such as MAP, STAR, or iREADY) – by the way, we are not endorsing any of these, just mentioning them for reference.

Method 1: Percentiles

An example of a data wall in ion.

Perhaps the most straightforward way to determine who among your students needs intervention is to set a cut point based on national percentiles.

A common starting point is somewhere between the 25th and 30th percentiles. Mileage on this will vary – depending on your actual population and their performance.

If students score below the trigger point, they should be considered for an intervention of some sort – or at least some follow-up screening to determine what skills need additional support.
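If your screener export includes a national percentile column, the trigger is a one-line filter. A minimal sketch in Python, with hypothetical student names and percentiles:

```python
# Flag students at or below a percentile trigger for intervention review.
# The roster and the 25th-percentile trigger are illustrative; adjust the
# trigger (25th-30th percentile is a common starting range) to your population.
TRIGGER_PERCENTILE = 25

roster = [
    ("Student A", 18),
    ("Student B", 42),
    ("Student C", 25),
    ("Student D", 67),
]

flagged = [name for name, pct in roster if pct <= TRIGGER_PERCENTILE]
print("Consider for intervention or follow-up screening:", flagged)
```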

This methodology is fine – as long as your students are like most students in the country. These norms are national norms – meaning that the scores reflect the population of students across the country.

Most school districts aren’t like most of the country (whatever that means anyway). So, how helpful is that really? It may not be. It may be more helpful to look at norms and scores that more accurately reflect the learners within your own state.

Method 2: Linking Study

When state assessments are administered, there is a score on that assessment that the state deems as “proficient.” That’s the target – that’s the goal. The state is asking schools to get students in their systems to “proficient.”

Many of the main universal screening assessments (MAP, STAR, iREADY, etc.) have done linking studies – studies in which screener scores are mapped to proficiency levels on a given state’s assessment.

We are located in Wisconsin – and our state assessment is the Forward exam.

NWEA MAP has performed a linking study – and as their linking table shows, proficiency in ELA at grade 5 is a Forward score of 610. MAP has determined that if a student scores a 211 on the MAP assessment in the fall of grade 5, they are likely to be proficient on the Forward exam.

Pretty straightforward.

Now, when determining what cut scores to use to indicate need for intervention – the situation is slightly more murky. As a starting point, we can use the “Below Basic” proficiency level.

Looking at the table, students who score below a 194 on the Fall MAP assessment in grade 5 should be considered for some type of intervention – or at least some follow-up screening to determine what skills need additional support.
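Expressed as a lookup, the decision rule might look like the sketch below. Only the grade 5 fall ELA cuts (194 and 211) come from the example above; every other subject, grade, and season would need its own entry from the published linking study.

```python
# Decision rule from a linking study: compare a fall MAP score to the
# linked "Below Basic" and "Proficient" cut scores. Only the grade 5
# fall ELA entry comes from the example above; fill in other entries
# from the published linking tables.
CUTS = {("ELA", 5, "fall"): {"below_basic": 194, "proficient": 211}}

def status(subject: str, grade: int, season: str, score: int) -> str:
    cut = CUTS[(subject, grade, season)]
    if score < cut["below_basic"]:
        return "consider intervention / follow-up screening"
    if score >= cut["proficient"]:
        return "on track for proficiency"
    return "monitor"

print(status("ELA", 5, "fall", 188))  # consider intervention / follow-up screening
print(status("ELA", 5, "fall", 215))  # on track for proficiency
```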

Next Steps

So – there are two ways you can get to targets and triggers using your Universal Screening data.

We have one more method – but it’s going to have to be an entire post. It involves some mid-level statistics – and it’s just too much to put in one place.

So…


Monitoring Frequency – even when RtI is not required for SLD

This post is by guest blogger, Chris Birr, EdS. Birr is a member of the ion Board of Directors, a School Psychologist, MTSS Coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio with his wife, two daughters, and a dog.

TL;DR version:

  • Collect data for at least 6 weeks (low stakes decisions)
  • If weekly, conduct 12 weeks (or more) of CBM
  • If administering 3 probes, record median (with associated errors)
  • Consider administering 3 probes every 2 weeks for 6 weeks (check trend)
  • If time allows, use 3 probes every month, check at 6 and 12 weeks.

A while ago, a question was posed to me: is weekly, every-other-week, or monthly progress monitoring most effective? At the time, my gut reaction was that “weekly” is best. I started searching and reviewing research regarding the reliability of curriculum-based measures (CBMs) and the frequency of administration. Like any educational topic, the answer was not clear.

In my previous role, I was in Wisconsin, where Specific Learning Disabilities are identified through a Response to Intervention (RTI) process. The purpose here is not to debate whether that is the most accurate method, but I would argue that providing intervention while assessing benefits the student. Regardless, the SLD rule requires that weekly monitoring occur for the identification of learning disabilities. The “rule” in Wisconsin was established in 2013 and could be ready for some revision based on research conducted since that time.

Regardless, we are now 6 months into a pandemic and students have experienced a lack of instruction, uneven instruction, poor instruction, and good instruction. Life is a mixed bag presently and generalizing the effect of instruction to make any decisions could be difficult.

My questions are as follows.

  • Is weekly monitoring of progress still the most reliable?
  • Can the same reliability be obtained if monitoring every other week?
  • Can you really stretch things out and monitor once a month?
  • How long do you need to collect data?

Findings from Van Norman, Christ, and Newell (2017) indicate:

  • Growth estimates are more discrepant when the monitoring window is short, whereas longer durations result in lower error
  • 2 months minimum for reliable progress monitoring data

This indicates that monitoring needs to occur for a couple of months to have a moderate degree of confidence in the data obtained.

Additional findings from Christ, Zopluoglu, Monaghen, and Van Norman (2013):

  • Data collection for less than 4 weeks is not recommended
  • Reliability is best after 12 weeks of data collection, 8 weeks for low stakes decisions
  • SEM of the slope was lower when data were collected more often; 10 weeks is recommended for less dense data collection schedules
  • Generalized growth is typically observed after a minimum of 6 weeks; duration is a better predictor of variance than schedule

Based on this, schools should monitor for at least 8 weeks, but 10 weeks is best to feel fairly confident about the data. That is when weekly monitoring is conducted.

Findings from Jenkins, Graff, and Miglioretti (2009), who analyzed twice-monthly monitoring:

  • Frequency of measurement can be reduced to 1-2 times per month
  • Rather than 1 probe, collect 3-4 to increase validity

Here’s where it gets more interesting. Findings from Jenkins, Schulze, Marti, and Harbaugh (2017):

  • Compared results of weekly to bi-weekly monitoring
  • Minimum of 6 weeks for fairly reliable decision making when using every other week monitoring
  • Every-other-week monitoring with multiple probes (2-3) is a plausible alternative to weekly monitoring
  • Intermittent monitoring provided more reliable data after 6 weeks compared to 8 weeks with weekly monitoring

Rather than 10-12 weeks of weekly monitoring, schools could administer 3 probes every other week (recording the median) and have reliable data in 6 weeks. The same number of probes, or even more, are administered – but would this allow for continuity of instruction and provide the same or better reliability?

The focus here is more about reducing the error in each measurement. Those who have seen Dr. Christ present have heard him report that all ORF scores carry an SEM that few discuss when sharing results. If I understand correctly, the shorter the duration of measurement, the greater the chance of error. Two months or more of monitoring seems to be the sweet spot for obtaining adequate reliability.

Basically, schools could monitor twice a month, administering 3 probes each session and recording the median. Consistent with Jenkins et al. (2017), administering three probes every other week could result in reliable data in a shorter period of time.
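To make that schedule concrete, here is a small sketch, assuming three ORF probes per session administered every other week for 6 weeks. It records each session median and fits an ordinary least squares slope (words correct per week), along with the standard error of that slope as a rough stand-in for the SEM concerns above. The scores are invented, and this is not the exact analysis from the studies cited:

```python
import statistics

# Hypothetical data: three ORF probes per session, one session every
# other week across 6 weeks (weeks 0, 2, 4, 6). Scores are invented.
sessions = {0: [41, 45, 43], 2: [44, 48, 46], 4: [49, 47, 51], 6: [52, 56, 54]}

weeks = sorted(sessions)
medians = [statistics.median(sessions[w]) for w in weeks]

# Ordinary least squares slope: growth in words correct per week.
n = len(weeks)
mx, my = sum(weeks) / n, sum(medians) / n
sxx = sum((w - mx) ** 2 for w in weeks)
slope = sum((w - mx) * (y - my) for w, y in zip(weeks, medians)) / sxx
intercept = my - slope * mx

# Residual-based standard error of the slope: a rough error estimate.
resid = [y - (intercept + slope * w) for w, y in zip(weeks, medians)]
se_slope = (sum(r ** 2 for r in resid) / (n - 2) / sxx) ** 0.5

print(f"Session medians: {medians}")
print(f"Growth: {slope:.2f} words/week (SE ~ {se_slope:.2f})")
```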

So what? If you are in a state that does not rely on monitoring data or RTI to identify disabilities, this could be viewed as nice to have but not necessary. However, considering the lack of instruction and the stress placed on students and families, having some ongoing monitoring data could be very beneficial for informing instruction AND decisions.

I am predicting a higher-than-average number of evaluations and increased emotionality when making decisions this year. Standardized assessment results provide one source of information for making decisions, but having a view of the learning trajectory is also critical. If a student’s level of achievement is low but he or she demonstrated adequate growth, as evidenced by 6 weeks of median scores, that could be a different conversation than the one prompted by a flat trajectory of weekly monitoring data over 6 weeks.

Time is going to be in short supply, and if conducting remote instruction, figuring out how to conduct monitoring could be one more thing to worry about. However, having that data could lead to improved decisions. We are in the midst of a crisis that will hopefully pass in a year; eligibility decisions will follow students long after the pandemic subsides. If you are in a position to influence practice, think critically about how to best collect data to increase reliability and decrease error.

References:

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51(1), 19-57.

Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75(2), 151-163.

Jenkins, J., Schulze, M., Marti, A., & Harbaugh, A. G. (2017). Curriculum-based measurement of reading growth: Weekly versus intermittent progress monitoring. Exceptional Children, 84(1), 42-54.

Van Norman, E. R., Christ, T. J., & Newell, K. W. (2017). Curriculum-based measurement of reading progress monitoring: The importance of growth magnitude and goal setting in decision making. School Psychology Review, 46(3), 320-328.


Making Sense of Some Requests

This post is by guest blogger, Chris Birr, EdS. Birr is a member of the ion Board of Directors, a School Psychologist, MTSS Coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio with his wife, two daughters, and a dog.

To me, one of the best improvements in education has been the rapid delivery of information through Twitter and social media. Long gone are the days of being cut off from new innovations unless you were one of the lucky ones who attended a conference. That can cut both ways, though, and with the lockdowns and shutdown of schools in March, advertising appeared to kick into high gear. My inboxes, at least, were receiving more marketing emails than ever before.

Pre-COVID-19, much of my previous role involved screening requests for new products. My objective was not simply to be a gatekeeper who says “NO!” to each request. There was a fine line between becoming overwhelmed with requests, being taken in by slick marketing, and overlooking products that might be very beneficial to students or teachers in the district. For the duration, I will refer to products or practices as “interventions.” I was generally involved in intervention selection or adoption, but the process could apply to anything from curricula to practices to interventions.

One of the best tools we adopted to process requests was the Hexagon Tool from the National Implementation Research Network (NIRN). If you have not heard of NIRN, click the link and explore their site. For anyone adopting new practices or making changes, they have resources that will be incredibly helpful.

The Hexagon Tool was designed to provide a framework for organizations to use when making decisions about whether to select tools or practices. According to NIRN, the Hexagon is most commonly used when exploring options for tools to use.

The main areas of the Hexagon Tool are as follows.

1. Evidence – what does the research indicate about the effectiveness of the tool or intervention?

2. Usability – who around you uses the tool or can help get it in place?

3. Supports – who in your system can help establish use?

4. Need – what is the target population of the intervention?

5. Fit with Current Initiatives – how well does the intervention align with other practices in the system?

6. Capacity to Implement – do you have the staff, money, and infrastructure?

From my perspective, I leaned most heavily on the Evidence aspect of the Hexagon. Whenever I was approached with a request, I immediately went to the WWC, NCII Tools Chart, Evidence for ESSA, or Google Scholar to examine the evidence base for the intervention. If evidence was lacking, I was likely to pan the intervention and do what I could to shut down the process.

On the other hand, there were interventions or practices that adhered closely to expert recommendations in the IES Practice Guides. For instance, a reading intervention could be closely aligned with a practice guide but lack inclusion in peer-reviewed journals because that was not the focus of the publisher. Although peer-reviewed evidence was lacking, a need was present for a decoding intervention, the capacity of the system was present to deliver the intervention, and neighboring districts were experiencing success with implementation.

There were times when my biases could have led to the elimination of options that may have been beneficial to students and teachers in our system. On the other hand, there were times when interventions were subject to large marketing budgets and were low on design and impact. In those situations, having the Hexagon Tool in place was helpful for providing balance in decision making.

How to use the Hexagon?

For individual requests: The short version: make a template of a Hexagon report and share it with others. Decide by committee so decisions are as fair and equitable as possible.

The versatility of the Hexagon can be both a strength and a hindrance when first using it. When asked to review a single intervention, I would work through the areas of the Hexagon and collect as much information about each section as possible. From there, I would share the document with the person who made the request and ask for their input and for suggestions of others to include in the process. If your system has instructional coaches, including several may lead to a more robust discussion and decision-making process. In this way, I was able to stay in my lane and focus on the evidence and research base of the requested intervention.

For reviewing multiple products for the same purpose: When gathering options to select a new intervention for a specific skill (e.g., decoding, computation), we would use the Hexagon headings as column headers and list the intervention choices in the rows of a spreadsheet, as in the sketch below. Again, involve several professionals with varying backgrounds to build a fair and equitable process. I would encourage sharing the spreadsheet with team members, providing a timeline, and setting a final meeting for a feedback loop about which interventions can be reviewed further and which can be eliminated from the process.
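As a sketch, the comparison matrix can be as simple as a CSV with the Hexagon areas as columns and one row per candidate; the intervention names and notes below are hypothetical placeholders:

```python
import csv

# Hexagon areas as column headers, one row per candidate intervention.
# Intervention names and notes are hypothetical placeholders.
HEADERS = ["Intervention", "Evidence", "Usability", "Supports",
           "Need", "Fit", "Capacity"]

rows = [
    ["Decoding Program A", "2 peer-reviewed studies", "Used in 3 schools",
     "Coach available", "Gr 1-2 decoding", "Aligns with core", "Staff trained"],
    ["Decoding Program B", "Publisher data only", "New to district",
     "None identified", "Gr 1-2 decoding", "Partial overlap", "Needs funding"],
]

with open("hexagon_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADERS)
    writer.writerows(rows)
```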

Summary

The upcoming school year will likely provide challenges that require flexibility and new ways of thinking and doing things. Establishing a process now to work through requests may lead to some increased efficiency as needs or situations change. Furthermore, having clear processes could decrease frustration as we all navigate stressful situations in and out of school. The Hexagon is one tool that may provide a clearer way of making decisions and increasing collaboration in an objective manner.


MTSS in a virtual setting?

This post is by guest blogger, Chris Birr, EdS. Birr is a member of the ion Board of Directors, a School Psychologist, MTSS Coordinator, and deep thinker. Chris lives in suburban Columbus, Ohio with his wife, two daughters, and a dog.

Like many, I have seen local school districts release plans and then modify them within a week based on the presence of COVID-19. Sadly, in-person instruction was the objective, but as the presence of the virus climbed and safety became an issue, hybrid and virtual education became more likely each day. MTSS will not be a panacea, but it will provide an efficient and effective framework during times of uncertainty.

The following is an attempt to adhere to a tight framework while providing options to meet the guidelines. When faced with uncertainty, things become more confusing and disordered; staying with a framework may help focus priorities. However, nothing will be perfect, and providing benefits to students would be considered a major accomplishment from my perspective.

Although the Blueprints from the National Association of State Directors of Special Education (NASDSE) are not new, I would argue the document and its contents hold up. Most revised versions of the pillars of MTSS (e.g., SAM, National Center for RtI) build on similar major ideas from the Blueprints. The workgroup defined RtI (MTSS) as having three main pillars (Batsche et al., 2005). These pillars hold up and provide a narrow but efficient view of MTSS that most can recall after a few discussions. More elements would allow more room for interpretation, confusion, and error.

1. High-quality instruction AND intervention is provided to all students

2. Learning rate and level of performance are the primary sources of information for decision making

3. Important decisions are made using student responses to intervention

High-Quality Instruction and Intervention: Explicit and systematic instruction is critical. Take inventory of the curricula and interventions that either have strong local evidence or a strong research base. Develop a toolbox of interventions/methods that can address all major skill areas (e.g., decoding, fluency, comprehension, computation, math reasoning, writing, social-emotional skills). Find teachers/educators who are comfortable hitting the ground running to deliver interventions for a grade level. Re-arranging virtual groups may be easier than having a teacher learn a new method during this time.

Level of Performance. Many schools conduct universal screening assessments seasonally. Now may not be the time to worry about completing a fall assessment unless you are fortunate enough to be open 100% for all students. A recommendation would be for school teams to examine the last two seasons of data and, if possible, combine the last 2-3 seasons of test scores for each student into a spreadsheet or student information system for review by grade level. Identify the students who were off-track before the shutdowns, and identify groups with similar skill deficits (e.g., decoding, fluency, computation).

Learning Rate. This is usually where progress monitoring (monitoring of progress, really) comes in. Generally, I support adaptive screening assessments (MAP, STAR), but completion may be difficult due to time or virtual settings. For elementary students, I would revert to the use of Oral Reading Fluency (ORF) measures as General Outcome Measures. Best case, use a virtual setup to administer a CBM over videoconferencing. Systems such as AimsWeb or EasyCBM likely have this figured out, if they do not already have directions posted. Administer every couple of weeks or once a month to track progress.

Alternate Plan: For systems that cannot move to a new assessment, words per minute could be used with a varying degree of fidelity. Some information is better than no information at all. For elementary schools (Grades 1-6), Hasbrouck and Tindal (2017) released an update of the Oral Reading Fluency norms by grade. Fall, winter, and spring benchmarks are listed, and expected growth between seasons can be calculated with simple subtraction; ask your school psychologist for more information.
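The subtraction itself is simple enough to script. The benchmark values below are invented placeholders, not the actual Hasbrouck and Tindal (2017) norms; substitute the published values for the grade and percentile of interest:

```python
# Expected between-season ORF growth by simple subtraction.
# Benchmark values are HYPOTHETICAL placeholders; use the actual
# Hasbrouck & Tindal (2017) norms for your grade and percentile.
benchmarks = {"fall": 52, "winter": 72, "spring": 87}  # words correct per minute

fall_to_winter = benchmarks["winter"] - benchmarks["fall"]
winter_to_spring = benchmarks["spring"] - benchmarks["winter"]
weekly = fall_to_winter / 18  # assumes roughly 18 weeks between screenings

print(f"Expected fall-to-winter growth: {fall_to_winter} wcpm")
print(f"Expected winter-to-spring growth: {winter_to_spring} wcpm")
print(f"Rough weekly growth target: {weekly:.1f} wcpm/week")
```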

Even if parents are only shown this during a webinar, targets for rate, accuracy, and even prosody could be given so parents could modestly gauge performance. Although the texts are not validated like those in a CBM system, using a few sources and collecting timed samples provides some idea of how a student is reading aloud. Although not perfect, this could inform a data-based discussion between parents and teachers.

For math, schools may be wise to investigate Spring Math to target specific areas of math and track over time. No royalties are gained from this suggestion but this is one of the only math programs that provide targeted instruction, reliable and valid monitoring, and a skill hierarchy.

In writing, provide students with a prompt and give them a minute to think and three to five minutes to write. Even counting the number of words written will provide a metric to track progress over time.
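A sketch of that metric, commonly called Total Words Written (TWW), scored from a timed sample:

```python
# Total Words Written (TWW): count every word in the timed sample.
# Spelling and grammar are not penalized in a simple TWW count.
def total_words_written(sample: str) -> int:
    return len(sample.split())

sample = "The dog ran to the park and playd with a ball"
print(total_words_written(sample))  # 11
```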

Important Decisions: Keep students safe. Without needing to say much, times are difficult and people are suffering. Use data for good and show students evidence of growth whenever possible. Use technology to build connections and reduce loneliness and isolation.

At this point, there have been online discussions and tweets about whether high-stakes decisions such as eligibility for special education are valid now. Professionals are struggling with these topics and doing the very best they can. I am confident that almost all educators will make decisions in the best interest of the student while following procedures and laws the best they can.

Bottom line. Schools will look and operate differently this year. Holding tight to a short list of practices may help increase data-based decision-making and improve outcomes for the students who need it most. Above all, stay safe and look out for one another.


A Personal Reflection on Anxiety

As we walk through unprecedented times, I have found myself being anxious – often inexplicably – about what is going on in the world – and what is going on in my life. If we are to be effective – not only in our jobs, but in our families and in our schools – we need to understand anxiety – learn how to handle it – and learn how to help our kids (both those in our homes and in our classrooms) understand it.

Disclaimer: There is general anxiety – and there is an anxiety disorder. If you (or the people in your life) think you may have an anxiety (or any other mental health) disorder – PLEASE GET HELP. I’m not a trained psychologist or counselor. These are simply my observations – and tools that I use to help control my own anxiety.

Anxiety and fear do not live in the present. They live in the past or the future.

Think about it – when you are anxious, it is about something that has happened or something that may happen (even one second from now – it’s still the future).

You were designed to handle “now” exceptionally well. No matter how stressful the future is, you can handle “now” because it is always “now”, and “now” isn’t all that bad.

So, if you are anxious about the past, or fearful of the future, look for ways to ground yourself in “now”.

Find someplace quiet.

What are five things you can see? Name them. Describe them. Look at them.

What are four things that you can touch? Actually touch them. How do they feel? How do they make you feel as you are touching them?

What are three things you can hear? Are they pleasant sounds? Are they sounds you may have not expected?

What are two things you can smell? Name them. Are they pleasing smells?

Name one thing you can taste. Right now.

This isn’t mysticism, it isn’t “new-age” mumbo-jumbo. It is taking a minute to focus on what is, to take your mind captive, to set your anxiety aside long enough to “cast your fears upon God” (should that be what you decide to do with them).

If you are fearful or anxious – take a minute to focus on what is – it will clarify what has been and put what might be back into perspective.