
Using the evidence to make judgments about effectiveness

The next step in the process is to turn the discussions of the data into evaluative judgments about how effective the school has been in supporting progress for its students achieving below curriculum expectations in literacy. To do this, we use a tool called a rubric.

Rubrics have been used for years in student assessment to clarify expectations and standards, and to increase the validity and reliability (consistency) of grading essays and assignments. In evaluation, we can also use these tools to help define “how good is good” when it comes to student progress (or literacy programming, or school literacy learning culture, etc.) and to judge the mix of evidence we have before us.

Using the first rubric

This step should be used after the initial reflective discussion and gathering and analysis of evidence. This includes plotting student progress relative to NZC and the National Standards using a Progress Grid for each year level (see p. 7).

Our task now, as a group of literacy leaders (involving other staff as appropriate), is to take the analysed evidence of student progress in literacy and answer the question of “how good” those results are. We do this using an evaluative rubric, which describes what the evidence will look like if our efforts are highly effective, minimally effective, and so on, for students achieving below curriculum expectations in literacy (see p. 21).

Where to start with the rubric
The development schools have experimented with two alternative “quick start” approaches to using the rubrics, once they were familiar with the content. Each approach was found to be useful for understanding their data and determining next steps.

Option #1: Start at “the bar”

  1. Jump straight down to the Minimally Effective description and check whether the evidence at hand meets the requirements there, more or less.
  2. Skip down to Ineffective and then to Detrimental to make sure that none of the items in those levels is evident within the school. If any are found, these are your most urgent points for swift action.
  3. If nothing Ineffective or Detrimental is found, and if the requirements under Minimally Effective are met, move up the levels (Developing Effectiveness → Consolidating Effectiveness → Highly Effective) one by one to see how high a rating seems to be justified.
  4. Remember, you are not aiming for an absolutely exact match here. The key question is, which “picture” does our evidence match most closely? (This checking sequence is sketched as code below.)
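
For groups who find it helpful to see the checking order written out precisely, here is a minimal Python sketch of Option #1. The level names come from the rubric itself; the `matches` input and the function are hypothetical placeholders, because deciding whether the evidence broadly fits a description remains a judgment the group makes together, not something software can do.

```python
# Illustrative sketch only: formalises the Option #1 ("start at the bar")
# checking order. Level names are from the rubric; everything else is a
# hypothetical placeholder.

LEVELS = [  # ordered from worst to best
    "Detrimental",
    "Ineffective",
    "Minimally Effective",
    "Developing Effectiveness",
    "Consolidating Effectiveness",
    "Highly Effective",
]

def rate_start_at_the_bar(matches: dict[str, bool]) -> str:
    """Return a rating for the evidence, where `matches` records the
    group's judgment of whether the evidence broadly fits each level's
    description."""
    # Step 2: any evidence at the bottom two levels demands swift action.
    for level in ("Detrimental", "Ineffective"):
        if matches.get(level):
            return f"{level} items found - most urgent points for swift action"

    # Steps 1 and 3: confirm the bar is met, then climb one level at a
    # time, stopping at the first level the evidence no longer fits.
    rating = "Below Minimally Effective"
    for level in LEVELS[2:]:  # from Minimally Effective upwards
        if matches.get(level):
            rating = level
        else:
            break
    return rating

# Example: the evidence fits the bar and the next level up, but not higher.
print(rate_start_at_the_bar({
    "Minimally Effective": True,
    "Developing Effectiveness": True,
    "Consolidating Effectiveness": False,
}))  # -> Developing Effectiveness
```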

Option #2: Trawl for the “centre of gravity”

  1. Have the group work through the rubric – individually, in small groups, or as a whole group – and highlight the statements that match the evidence in any and all of the levels.
  2. Next, identify the “centre of gravity” (the level where most of the descriptions fit; your median and/or mode) and note this as your initial approximate rating (a sketch of this calculation follows the list).
  3. Finally, carefully consider exceptions in the evidence (higher and lower instances of effectiveness in particular areas). Discuss whether these are important enough to justify upgrading or downgrading the overall rating.
  4. Again, the intent here is not to look for an exact match, but to generate an overall conclusion or best fit based on where the greatest weight of evidence lies, while at the same time highlighting any particular points of strength or weakness that should be celebrated or addressed.
  5. Some schools found that their evidence was so mixed (very strong results for some students; much weaker ones for others) that it made little sense to draw an overall conclusion. Instead, they highlighted the strengths and weaknesses relative to the rubric in the outcomes for students achieving below curriculum expectations in literacy.
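
The “centre of gravity” step can also be expressed as a small calculation over the group's highlights. This Python sketch is illustrative only: the tallies are invented, and it uses the lower median as one concrete choice among the median/mode options mentioned in step 2.

```python
# Illustrative sketch only: finds the "centre of gravity" (Option #2)
# from hypothetical tallies of highlighted rubric statements.
from statistics import median_low

LEVELS = [  # ordered from worst to best
    "Detrimental",
    "Ineffective",
    "Minimally Effective",
    "Developing Effectiveness",
    "Consolidating Effectiveness",
    "Highly Effective",
]

def centre_of_gravity(highlights: dict[str, int]) -> str:
    """Return an initial approximate rating: the (lower) median level
    across all highlighted statements, each counted at its level."""
    scores = [
        LEVELS.index(level)     # position on the ordered scale
        for level, count in highlights.items()
        for _ in range(count)   # one entry per highlighted statement
    ]
    return LEVELS[median_low(scores)]

# Example: most highlights cluster at Developing Effectiveness, with a few
# stronger and weaker exceptions the group should still discuss (step 3).
print(centre_of_gravity({
    "Minimally Effective": 3,
    "Developing Effectiveness": 7,
    "Consolidating Effectiveness": 2,
}))  # -> Developing Effectiveness
```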

Published on: 01 Apr 2016



