Marking, marking, marking. A Triptych of Assessment Strategies.

Imagine that you’ve gone to buy a new car. You have certain requirements. Some of you will be limited by the need to find a “sensible” vehicle in which you can fit 2.4 children with associated holiday luggage or family shopping when you’re rushing home to watch Strictly. Others of you will have an idealised (or even realistic) vision of yourself as a 007 Aston Martin type. Some of you might just want something to get you from A to B.

Whichever it is, now imagine that, when you went to buy the car, you were only able to make the purchasing decision based on either seeing the car’s MOT certificate or taking it on a test drive before you made your purchase. One or the other.

Would you choose just the MOT certificate or would you choose just the test drive?

The MOT will, of course, tell you whether the car can function in certain basic ways. Do the indicators work? Is the steering wheel strong enough? Are the oil levels right? Is the tyre tread depth legal? Many of these pass/fail judgements are taken in isolation. With the MOT certificate, you don’t find out if, once you’re out on the road, the car is appropriate for your needs.

With a test drive, you can get a sense of how the car feels, how the car functions, whether it matches your requirements. Does it take hills well? How does it steer? Is there enough leg room? You wouldn’t necessarily know though if it were safe or legal.

Aston Martin

One of the developments which we’ve been involved in, as part of United Learning, is a system of assessment called Key Performance Indicators. These are essentially elements of knowledge that we want our students to be able to master in English. They provide the MOT of English assessment. Here are just three of the Year 7 Key Performance Indicators:

  • Use the appropriate structure, conventions and vocabulary for formal letter writing
  • Identify specific words, language techniques and features of organisation, commenting on why these have been used
  • Identify, define and accurately use the following in a range of writing: types of verb, types of noun, adjectives, adverbs, pronouns

They’re broken down into parts so that you can test students’ ability to apply them in isolation in a pass/fail way. What you might want to do here is set up a series of assessments which check whether a student can accurately identify, use and define the full range of different types of noun or use the appropriate structure and conventions in formal letter writing. What this can provide you with is a set of data which looks like this at whole year group, class and individual pupil level:

KPI Feedback Sheet

This data set can inform your planning because you can see that whilst this student is able, in a set of exam questions, to use the past tense consistently and accurately and to make use of brackets for parenthesis, their use of speech punctuation is not accurate. Other students in the same class may have the opposite set of problems, and so a bank of pre-developed resources could be drawn on to address these students’ needs. This kind of diagnostic assessment may sound potentially great – as long as the marking workload doesn’t prevent teachers from planning the in-class interventions which could make the difference.
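One way to picture the kind of data set this produces – sketched here as a minimal Python structure, with pupil names and KPI labels invented purely for illustration – is a pass/fail record per pupil per KPI, which a teacher can then filter to group pupils for a targeted intervention:

```python
# Hypothetical pupil-level KPI data: True = secure, False = needs work.
# Names and KPI labels are invented for illustration only.
results = {
    "Pupil A": {"past tense": True,  "brackets for parenthesis": True,  "speech punctuation": False},
    "Pupil B": {"past tense": False, "brackets for parenthesis": True,  "speech punctuation": True},
    "Pupil C": {"past tense": True,  "brackets for parenthesis": False, "speech punctuation": False},
}

def needs_intervention(results, kpi):
    """Return the pupils who have not yet passed the given KPI."""
    return [pupil for pupil, kpis in results.items() if not kpis[kpi]]

print(needs_intervention(results, "speech punctuation"))  # ['Pupil A', 'Pupil C']
```

The same structure aggregates naturally to class or year-group level by counting how many pupils fail each KPI, which is what makes this style of data useful for deciding which pre-developed resources to deploy.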

However, there are also possible risks to this kind of assessment model. The first of these is that, just like in the MOT/test drive model, with this isolated form of assessment you can find yourself assuming that, because students can do something in a decontextualised exercise, they can also do it in broader contexts – when they actually can’t. Just as there is a difference between a motor passing its MOT and performing how you want it to once it’s out on the road towing a caravan (if you like that kind of thing), there is also a great difference between adding speech punctuation to pre-written sentences which students know have been incorrectly punctuated and students writing a narrative from scratch which includes speech that they have correctly punctuated themselves. Equally though, if you prescribe a set of written aspects which need to be included in a piece of writing, you end up with quite sterile writing by numbers. This approach can also result in markers not paying attention to whether a piece of writing is effective as an example of the form or genre the students were trying to craft, because they are so focused on a checklist of small and specific details relating to, for example, punctuation. In the worst instances, students could produce work which demonstrates achievement in the assessed KPIs but which is basically substandard.

Another key risk with this form of assessment is that these elements and these elements alone can become the English curriculum. When teachers know that their students are being assessed in three weeks’ time and they are aware which Key Performance Indicators will be the focus of this assessment, there is a potential perverse incentive for them to teach to these KPIs even if they don’t know what the actual examination questions will be. It is possible to lose the wider scope of the curriculum or for wider expectations to dip.

This is why, in English, we are gradually developing the following set of assessment strategies which will include:

  1. Pretesting of the grammar and punctuation elements of the KPI framework in isolation to support teachers in identifying which students need further intervention in terms of their basic grammar knowledge. As far as possible, this will be done online so that teachers have the information which they need and can therefore spend time reshaping lessons to support these students rather than spending time marking the answers to the questions themselves.
  2. Periodic retesting of these grammar and punctuation elements, including the ability to add these aspects into, or edit the accuracy of, pre-produced texts. This will mean that we can check whether students are able to make use of these elements of grammar when they are required, without forcing them to apply them in an artificial way in their own writing unless they choose to because it’s appropriate.
  3. The use of comparative judgements. David Didau has written about our progress with this process here and here. We see this as providing teachers with a method of checking the quality of students’ writing more holistically, more reliably and more efficiently, whilst also enabling staff to set a standard within a year group rather than developing writing by numbers. Judgement questions will focus on the relative success of two students’ writing in a particular form or genre such as narrative, speech or letter writing.

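To give a flavour of the third strategy, here is a minimal sketch of how a pile of pairwise “which piece of writing is better?” decisions can be turned into a ranking. The script names and judgement data are invented, and this simple Bradley–Terry-style iteration is only illustrative of the kind of statistical model that underpins comparative judgement tools, not the exact method any particular tool uses:

```python
# Each judgement records that one script "beat" another in a paired
# comparison. Data and names are invented for illustration.
judgements = [
    ("amy", "ben"), ("amy", "cat"), ("cat", "ben"),
    ("dan", "ben"), ("amy", "dan"), ("dan", "cat"),
]

scripts = {s for pair in judgements for s in pair}
strength = {s: 1.0 for s in scripts}  # start every script equal

# Simple iterative Bradley-Terry update: a script's strength is its
# win count divided by the sum of 1/(its strength + opponent strength)
# over every comparison it appeared in.
for _ in range(100):
    new = {}
    for s in scripts:
        wins = sum(1 for w, _ in judgements if w == s)
        denom = sum(1 / (strength[s] + strength[o])
                    for w, l in judgements
                    for o in ((l,) if w == s else (w,) if l == s else ()))
        new[s] = wins / denom if denom else strength[s]
    total = sum(new.values())  # normalise so strengths stay comparable
    strength = {s: v * len(new) / total for s, v in new.items()}

ranking = sorted(strength, key=strength.get, reverse=True)
print(ranking)  # best-judged script first
```

The appeal for marking is that each individual decision is quick and holistic (“which is better?”), while the model does the work of combining many such decisions into a reliable rank order across a whole year group.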
The first two of these strategies will act like an MOT, whilst the third will be the test drive. Alongside this of course, there are a range of formative assessment strategies which teachers use both in the moment and between lessons, which I’ll write more about in future posts.

When you’re buying a car, you don’t have to make that decision between the two forms of check: both are done, and done separately. I wonder whether, in trying to combine both forms of assessment in our assessments of writing or reading in the past, we’ve ended up getting ourselves and our students in a muddle.

