Why not try…reading the questions for once?

This is the second in a sequence of three posts about the Key Stage 2 National Curriculum and associated tests for English. In the first post, I explored the controversy surrounding the curriculum as a whole. This time, I’m looking in more detail at the tests themselves, in particular the 2016 papers. I want to see what all the fuss was about so as to unpick what lessons there might be, if any, for secondary English teachers. If you happen to be reading this as a primary teacher, you could probably skip the next bit as it’s an outline of the tests which you’re likely to be familiar with.

Why not literally try pulling the papers apart? No, I do mean it literally. Not metaphorically. Literally. Take the staples out, separate the pages, scatter them over the floor of the room, then dance around singing a ballad about the war between the family of the lion and the family of the bear.

There are two tests which are taken at the end of Key Stage 2 – one of which is split into two papers:

  • Reading
  • Grammar, punctuation and spelling Paper 1: Grammar and Punctuation
  • Grammar, punctuation and spelling Paper 2: Spelling

Reading

The reading test is made up of approximately 30-40 questions each year. Each question is, as of 2016, linked to one of the aspects of the “Content Domain” drawn from the comprehension section of the National Curriculum and listed in the reading test specification. If you want to find out more about content domains and content samples, this article from Daisy Christodoulou is certainly worth a read.

Content Domain

Though the connections aren’t perfect, a useful way of getting your head round this as a secondary teacher is to consider how each aspect of the content domain ties in with the Assessment Objectives for GCSE English. The table below attempts to do just that.

[Table: AO and Content Domain Comparison]

Creating this grid made one thing clear: although there are undoubted connections between the skills required at Key Stage 2 and Key Stage 4, there is now (and rightly so, I think) a wider gap between the two levels than there was when we were all using the same Assessment Focuses.

If you’re eagle-eyed, you’ll notice that the percentages for reading for AQA GCSE English only add up to 50%. The other 50% comes from two writing questions – one a narrative or descriptive piece, the other a personal opinion piece. However, as this blog is only about the tests, I’m leaving those for now. I may look at them, alongside moderated, teacher-assessed writing, in a separate post in the future. What is worth noting here, though, is that it is the reading mark from Year 6 that the government uses to calculate the secondary Progress 8 estimate; the writing mark plays no part in this calculation. As a result, students’ outcomes at Key Stage 4 in both reading and writing are now predicted from just their reading response at Key Stage 2.

In addition to the information about the content domain, the test framework also establishes a range of ways in which the complexity of the questions will be varied. I think this is a really useful list for the design of any English comprehension test, and it matters here because it helps us gauge whether the test is more or less challenging year on year. The list obviously includes the difficulty level of the text. However, the level of challenge will also be varied through the dimensions below (I’ve sketched one way of recording them after the list):

The location of the information:

  • The number of pieces of information the question asks students to find and how close together they are.
  • Whether the question provides students with the location of the information (for example, paragraph 2).
  • Whether, in a multiple choice question, there are a number of close answers which act as distractors.

The complexity of the information in the relevant part of the text:

  • Whether the part of the text required to answer the question is lexico-grammatically complex.
  • Whether the information required to answer the question is abstract or concrete.
  • How familiar the information is to the students.

The level of skill required to answer the question:

  • Whether the task requires students to retrieve information directly from the text, to know the explicit meaning of a single word or multiple words, or to infer from a single piece or multiple pieces of information.

The complexity of the response strategy:

  • Whether the task requires a multiple choice answer, a single word or phrase from the text, or a longer response from the student.

The complexity of the language in the question and the language required in the answer:

  • Whether there are challenging words used in the question.
  • Whether students have to make use of technical terms in their answers that aren’t in the question or the text.

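If, as a secondary teacher, you wanted to audit a paper against these dimensions, you could record them question by question. Here’s a minimal sketch in Python of what one row of such an audit might look like; the field names are my own shorthand, not terms drawn from the test framework:

    from dataclasses import dataclass

    @dataclass
    class QuestionAudit:
        # One question's difficulty profile; all field names are illustrative.
        number: str                   # e.g. "12a"
        pieces_of_information: int    # how many separate details must be found
        location_given: bool          # does the question point to a paragraph?
        close_distractors: bool       # multiple choice with plausible near-misses?
        abstract_information: bool    # abstract rather than concrete content?
        skill: str                    # "retrieve", "word meaning" or "infer"
        response_type: str            # "multiple choice", "word/phrase" or "extended"
        technical_terms_needed: bool  # answer needs terms not in question or text

Tagging every question like this is what makes it possible to compare challenge year on year, rather than arguing from the texts alone.
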
We’ll come back to these later when we look at the concerns around this year’s test.

Grammar, Punctuation and Spelling

The Grammar and Punctuation Paper is worth 50 marks (there are approximately 50 questions) and it takes 45 minutes.

The Spelling Paper, meanwhile, is worth 20 marks (there are 20 questions) and takes approximately 15 minutes, depending on how long the administrator takes to read the questions.

This table provides a sense of the weighting for each of these elements in the overall test.

[Table: SPAG Weighting]

As with the reading test, there is a Specification for the Grammar, Punctuation and Spelling Test. Again, this establishes the content domain from which the questions will be developed.

If you’re a secondary teacher, it would certainly be worth reading this booklet – in particular, pages 8-12 for the grammar domain and pages 12-13 for spelling. These pages are important as they tell you what students ‘should’ have learnt in these areas before the end of KS2.

Remember, though: if the data you receive from a pupil’s primary school shows that they did well in these tests, it doesn’t mean they recalled everything in the domain, merely that they performed well on questions covering a supposedly representative sample of it. Nor does it mean they will have retained all of that knowledge over the summer break. If you don’t reinforce it, whether through similar isolated tests or through application in analysis and extended writing, they will most likely forget it or fail to make full use of it.

The table on Page 14 helpfully reminds us that students haven’t been assessed on their use of paragraphs in this test – that is instead done through the moderated teacher assessment of writing. This also serves to emphasise the point that students have not used the grammar or spelling from the test in context.

This is an isolated assessment of their knowledge, rather than the application of that knowledge.

Finally, there are some useful though controversial definitions, indicating what students ‘should’ have been taught about grammar. A number of these have proved contentious because, as we know, there isn’t always agreement over elements of grammar. I’m not going to go over this ground again by unpicking the 2016 grammar paper, as I covered it in the last post. The reading paper, however, caused a bit of a stir this year for a number of reasons, so I want to look at it in more detail.

Why not try asking then answering your own rhetorical question?

So, what was all the fuss about this year? Well, firstly, on 10th May, the TES published this article, citing a teacher who claimed the reading “paper…would have had no relevance to inner-city children or ones with no or little life skills.” If anyone has any recommendations of texts for pupils with no or few life skills which would also be suitable for a national reading test, please do leave a comment.

The texts chosen this year do appear challenging. There’s plenty of demanding vocabulary: ‘haze’, ‘weathered’ and ‘monument’ appear in the first text; ‘promptly’, ‘sedately’, ‘pranced’ and ‘skittishly’ in the second; and in the third we have ‘oasis’, ‘parched’, ‘receding’ and ‘rehabilitate’. The sentence structures add further complexity. In text two, we have: “A streak of grey cut across her vision, accompanied by a furious, nasal squeal: ‘Mmweeeh!'” and “There she dangled while Jemmy pranced skittishly and the warthog, intent on defending her young, let out enraged squeals from below. Five baby warthogs milled around in bewilderment, spindly tails pointing heavenwards.” Some teachers have run readability checks on the texts and claim they are pitched at a 13-15 year old reading age.
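
Those readability checks are usually automated formulas rather than anything mysterious. As a rough illustration – this is a minimal sketch, not whichever tool those teachers actually used – here is the Flesch-Kincaid grade-level calculation in Python, with a deliberately crude syllable counter; a UK reading age is often approximated as the US grade plus five:

    import re

    def count_syllables(word):
        # Crude vowel-group heuristic; real readability tools use dictionaries.
        groups = re.findall(r"[aeiouy]+", word.lower())
        count = len(groups)
        if word.lower().endswith("e") and count > 1:
            count -= 1  # discount a silent final 'e'
        return max(count, 1)

    def flesch_kincaid_grade(text):
        # Grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words)) - 15.59)

    sample = ("There she dangled while Jemmy pranced skittishly and the "
              "warthog, intent on defending her young, let out enraged "
              "squeals from below.")
    grade = flesch_kincaid_grade(sample)
    print(f"US grade: {grade:.1f}; approximate reading age: {grade + 5:.0f}")

A single long sentence like that one scores deceptively high on its own, which is one reason these formulas should only be applied to whole texts, and treated as a rough guide even then.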

The problem, though, is that, as we saw earlier, the difficulty level of the test isn’t set solely through the complexity of the texts as a whole. I’m currently working on a Question Level Analysis spreadsheet for the paper, which I hope to share in the final post in this series. In producing it, it’s become clear that, although the first two texts are more challenging in terms of raw vocabulary, the questions on these texts often direct children to simpler, shorter sections or even individual words. Children could pick up marks here without understanding every single word in the texts. I would imagine, though, that not all children realised this, hence the reported tears. The third text appears simpler in terms of vocabulary, but its questions are based on longer sections of the text and two require longer answers.
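
In case it’s useful, the underlying idea of a Question Level Analysis is simple enough to sketch in code. The data below is invented purely for illustration – it is not the real 2016 paper or mark scheme – but it shows the kind of breakdown I mean, here in Python with pandas:

    import pandas as pd

    # Invented example data: the question numbers, target texts, content
    # domain references and marks are illustrative, not the 2016 paper.
    questions = pd.DataFrame({
        "question": [1, 2, 3, 4, 5, 6],
        "text": ["one", "one", "two", "two", "three", "three"],
        "domain": ["2a", "2b", "2d", "2a", "2d", "2g"],
        "marks_available": [1, 1, 2, 1, 3, 3],
        "marks_scored": [1, 1, 1, 0, 2, 1],  # one (invented) pupil
    })

    # Where are marks being lost: by text, and by content domain reference?
    for group in ("text", "domain"):
        summary = questions.groupby(group)[["marks_available", "marks_scored"]].sum()
        summary["percent"] = 100 * summary["marks_scored"] / summary["marks_available"]
        print(summary)

Run over a whole cohort rather than one pupil, this sort of breakdown quickly shows whether marks were lost to the texts themselves or to particular question types.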

I don’t think the writers of the tests did this perfectly, but nor do I think they did a terrible job. I’ll look at some possible changes I’d like to see in the next post.

There’s some superficial truth to the claim that the contexts of at least two of the texts are fairly upper middle class: a rowing trip to an ancestral monument and a safari adventure on the back of a giraffe. Who knew you could ride a giraffe? Perhaps my life skills are limited.

Underneath the French-polish veneer, though, these are essentially adventure stories: one about discovering family identity, the other about ending up in a dangerous situation after ignoring the rules. These are fairly familiar themes to children, even if the specific contexts may be alien to some of them. I’d worry if we found ourselves suggesting that “inner city” children, or indeed working-class children, should only be tested using texts which describe their lived experiences.

In Reading Reconsidered, Lemov et al. point to work by Annie Murphy Paul, who argues that “The brain does not make much of a distinction between reading about an experience and encountering it in real life.” Reading helps students empathise. This is no surprise, but it’s worth the reminder: at both primary and secondary level, it’s our responsibility as teachers of language and literature to expose students to a range of experiences through the texts we direct and encourage them to read.

Comments

  1. Steve · May 25

    A good, measured article – or at least it reads that way to a Secondary Maths teacher who is a Parent Governor at a Primary School.

    A couple of points to pick up on:

    1) You are right to talk about difficulty (or otherwise) not just being determined by the text but also by the questions asked about that text. We need to be mindful that there is an explicit instruction in the test framework document about the suitability of the text itself (irrespective of the questions about that text).

    We are told that “Texts will be appropriate in terms of content and difficulty for pupils aged 11”, which I would take to mean that, irrespective of the questions asked, the reading age of the text should be appropriate for 11-year-old students. It is not clear, from the tweets, blogs etc. that have calculated reading ages, that this is the case.

    2) It is also explicitly stated that “The texts will be ordered by increasing reading demand within the reading booklet”, which again has probably not been met, judging by several blogs. Indeed, this article suggests that this compulsory design feature of the tests has not been met.

    Now these may be minor things – I am certainly no expert on designing assessments in English – but, if they were important enough to be placed in the test framework document, you would imagine that those who do know felt they were important enough points to make, and hence non-trivial.


