Why not try doing too many things and not enough things, both at the same time?

Back in September 2015, Ofsted published a report entitled ‘Key Stage 3: the wasted years?’ It was produced following Sir Michael Wilshaw’s statement “that primary schools had continued to improve but the performance of secondary schools had stalled …[and]… one of the major contributory factors to this was that, too often, the transition from primary to secondary school was poorly handled.”

Ofsted made these nine recommendations to secondary school leaders:

  1. Make Key Stage 3 a higher priority in all aspects of school planning, monitoring and evaluation.
  2. Ensure that not only is the curriculum offer at Key Stage 3 broad and balanced, but that teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.
  3. Ensure that transition from Key Stage 2 to 3 focuses as much on pupils’ academic needs as it does on their pastoral needs.
  4. Create better cross-phase partnerships with primary schools to ensure that Key Stage 3 teachers build on pupils’ prior knowledge, understanding and skills.
  5. Make sure that systems and procedures for assessing and monitoring pupils’ progress in Key Stage 3 are robust.
  6. Focus on the needs of disadvantaged pupils in Key Stage 3, including the most able, in order to close the achievement gap as quickly as possible.
  7. Evaluate the quality and effectiveness of homework in Key Stage 3 to ensure that it helps pupils to make good progress.
  8. Guarantee that pupils have access to timely and high quality careers education, information, advice and guidance from Year 8 onwards.
  9. Have literacy and numeracy strategies that ensure that pupils build on their prior attainment in Key Stage 2 in these crucial areas.

With the possible exception of recommendation 8, all of these involve key staff in secondary schools having an understanding of what has gone on at Key Stage 2. This will clearly have different ramifications for different members of a secondary school’s community. For English teachers in particular it means having an understanding of their students’ experience at KS2, including the taught curriculum and the National Curriculum tests.

Having looked at the KS2 English curriculum in Part 1 of this series and explored the reported issues with this year’s test papers in Part 2, in this post I want to offer some questions for secondary Senior Leaders and English teachers to consider. First of all, though, I have some questions for the powers that be.

Are you sure you’re not still trying to do too much with the Key Stage 2 tests? Purposes and uses.

The Bew Report of 2011 claimed of the assessment system in place prior to its publication that “There seems to be widespread concern…there are too many purposes, which can often conflict with one another. 71% of respondents to the online call for evidence believe strongly that the current system does not achieve effectively what they perceive to be the most important purpose.”

Bew referenced two papers, both by Dr Paul Newton, who was Head of Assessment Research at the QCA. In ‘Clarifying the Purposes of Educational Assessment,’ Newton argues that there are three primary categories of purpose for nationally designed assessments:

  • Assessment to reach a standards referenced judgement. For example, an exam to award a grade, a level or a pass/fail.
  • Assessment to provide evidence for a decision. For example, an A-Level qualification which provides evidence that the student is ready to begin studying a related subject at undergraduate level.
  • Assessment to have a specific impact on the behaviour of individuals or groups. For example, a science GCSE which helps to enforce the KS4 National Curriculum for science.

Newton maintains that each of these three areas of purpose needs to be considered carefully. “Where the three discrete meanings are not distinguished clearly, their distinct implications for assessment design may become obscured. In this situation, policy debate is likely to be unfocused and system design is likely to proceed ineffectively.”

In ‘Evaluating Assessment Systems,’ Newton distinguishes the purposes of assessment systems from their uses and identifies twenty-two categories of use for assessment on page 5 of the document.

He explains that an assessment’s reliability can deteriorate as more and more uses are added:

“…an end-of-key-stage test will be designed primarily to support an inference concerning a student’s ‘level of attainment at the time of testing’. Let’s call this the primary design inference. And let’s imagine, for the sake of illustration, that our assessment instrument – our key stage 2 science test – supports perfectly accurate design inferences. That is, a student who really is a level X on the day of the test will definitely be awarded a level X as an outcome of testing.

In fact, when the test result is actually used, the user is likely to draw a slightly (or even radically) different kind of inference, tailored to the specific context of use. Let’s call this a use-inference.

Consider, by way of example, some possible use inferences associated with the following result-based decisions/actions.

  1. A placement/segregation use. The inference made by a key stage 3 head of science – when allocating a student to a particular set on the basis of a key stage 2 result – may concern ‘level of attainment at the beginning of the autumn term’.
  2. A student monitoring use. The inference made by a key stage 3 science teacher – when setting a personal achievement target for a student on the basis of a key stage 2 result – may concern ‘level of attainment at the end of key stage 3’.
  3. A guidance use. The inference made by a personal tutor – when encouraging a student to take three single sciences at GCSE on the basis of a key stage 2 result – may concern ‘general aptitude for science’.
  4. A school choice use. The inference made by parents – when deciding which primary school to send their child to on the basis of its profile of aggregated results in English, maths and science – may concern ‘general quality of teaching’.
  5. A system monitoring use. The inference made by a politician – when judging the success of educational policy over a period of time on the basis of national trends in aggregated results in English, maths and science – may concern ‘overall quality of education’.

…when it comes to validation (establishing the accuracy of inferences from results for different purposes) the implication should be clear: accuracy needs to be established independently for each different use/inference.”

As far as I can see, the current Key Stage 2 tests are used, among many other things:

  • To gauge national school performance.
  • To measure individual school performance for accountability purposes.
  • To check individual pupil attainment at KS2.
  • To measure progress from KS1.
  • To establish progress expectations between KS2 and KS4.
  • To check if students are ‘secondary ready’ and therefore trigger the need for them to resit a similar test in Year 7.
  • To enforce the teaching of elements of the National Curriculum which it would be harder to enforce without the test due to ‘academy freedoms.’
  • To inform parents of individual students’ performance.
  • To enable potential parents to make informed decisions about school choice.

This is by no means an exhaustive list. In many cases, the Key Stage 2 data is arguably the most reliable source we have for these uses. However, I do wonder whether the system could be made more reliable and whether all these other uses are making the tests less reliable in terms of their primary use.

Are you sure you’re not trying to do too much with the Key Stage 2 tests? Assessing the writing. 

In this TES article, Michael Tidd outlines the issues primary teachers have found: either shoehorning grammatical and punctuation elements into students’ writing, or skewing the curriculum towards the specific types of writing most likely to feature the elements required for success in the moderation process.

This has come about as a result of the use of a secure fit approach to assessment. In her post ‘“Best fit” is not the problem’, Daisy Christodoulou outlines the problems with both best fit and secure fit assessment. She proposes other ways forward in her conclusion, advising that:

  • If you want to assess a specific and precise concept and ensure that pupils have learned it to mastery, test that concept itself in the most specific and precise way possible and mark for mastery – expect pupils to get 90% or 100% of questions correct.
  • If you want to assess performance on more open, real world tasks where pupils have significant discretion in how they respond to the task, you cannot mark it in a ‘secure fit’ or ‘mastery’ way without risking serious distortions of both assessment accuracy and teaching quality. You have to mark it in a ‘best fit’ way. If the pupil has discretion in how they respond, so should the marker.
  • Prose descriptors will be inaccurate and distort teaching whether they are used in a best fit or secure fit way. To avoid these inaccuracies and distortions, use something like comparative judgment which allows for performance on open tasks to be assessed in a best fit way without prose descriptors.

We use a similar process to this, as outlined here. One wonders whether a key problem for KS2 teachers is that the National Curriculum assessment model is trying to do too much and, to the detriment of the students, it tells KS3 teachers too little.
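For anyone unfamiliar with how comparative judgement produces a rank order without prose descriptors, here is a minimal, purely illustrative sketch in Python. It assumes a simple Bradley–Terry-style score update over teachers’ pairwise decisions; the pupil names, judgement data and function are invented, and it is not a description of the process we (or Christodoulou) actually use.

```python
import math

def comparative_judgement(scripts, judgements, rounds=200, lr=0.1):
    """Turn pairwise 'this piece is better' decisions into a score per script.

    scripts    : list of script identifiers
    judgements : list of (winner, loser) pairs from paired comparisons
    Returns a dict mapping each script to a quality score (higher = judged better).
    """
    scores = {s: 0.0 for s in scripts}
    for _ in range(rounds):
        for winner, loser in judgements:
            # Probability the current scores assign to the observed outcome
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Nudge both scores so the model better explains the judgement
            scores[winner] += lr * (1.0 - p_win)
            scores[loser] -= lr * (1.0 - p_win)
    return scores

# Invented example: four pieces of writing, six paired judgements
scripts = ["Amy", "Ben", "Cal", "Dee"]
judgements = [("Amy", "Ben"), ("Amy", "Cal"), ("Dee", "Ben"),
              ("Amy", "Dee"), ("Cal", "Ben"), ("Dee", "Cal")]

for script, score in sorted(comparative_judgement(scripts, judgements).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{script}: {score:.2f}")
```

The point is simply that a rank order of open writing tasks emerges from lots of quick relative decisions, with no prose descriptor being written or applied.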

Are you sure you’re trying to do enough with the Key Stage 2 tests? What’s missing from KS4 at KS2?

OK, so this sounds contradictory after my first two questions, but key elements of the KS4 curriculum are missing from the KS2 tests, which makes them less reliable when used to estimate likely performance at 16.

Firstly, in the 2016 Key Stage 2 reading test, there were only two opportunities for pupils to write more than one line of text as an answer. I understand that this may make the questions more reliable in themselves. However, at Key Stage 4, reading assessment requires students to write at far greater length. I’m certainly not arguing for children of ten to sit two-hour exams featuring questions that demand responses filling two and a half sides of A4. I’m merely questioning whether a third of a side of A4 enables the brightest ten-year-olds to demonstrate their full potential in reading. There are possible implications here for secondary teachers in building, over time, students’ stamina in producing extended responses to reading.

Secondly, again in the reading test, all of the texts are unseen. This is fine if we’re just using the test as a tool for estimating performance at KS4 in English Language, where the three texts examined are similarly unseen. However, the English Literature GCSE now has parity with English Language in terms of school performance measures. Wouldn’t it make sense, then, to include at least one extract from a full text that students had studied in advance? I’m sure there would be great controversy over which text(s) should be taught, but I think the benefits for children would outweigh the arguments between teachers. One key aspect of literary study is making links between texts and their cultural, social and historical context. This used to be an Assessment Focus in the previous framework – though it only ever featured in a limited manner in the tests and the mark scheme. Reinstating it as part of the content domain could serve to make the link to literary study at Key Stage 3 more effective, as well as slightly raising the status of the study of history, culture and society at Key Stage 2.

Are you sure that resits are a good idea?

I’m not going to focus on the potential emotional impact of the resit process, which has been written about here. I think we can deal with this additional, potentially high-stakes test and help students to deal with it too, in the same way Chris Curtis argues we can support students through the stresses of the grammar test and just as we help students through GCSEs. Instead, I want to focus on curriculum and teaching, as these will likely have the biggest impact on students in the long term (including on their emotions).

Imagine training two boys to do five lots of two minutes of keepy-uppies for a competition. One narrowly misses out on qualifying for the semi-finals and the other goes on to win.

Their coach carries on training the quarter-finalist with keepy-uppies, with a tiny bit of full football training mixed in, but moves the other boy on to playing football for a team, coaching him in set pieces, tackling and passing, with 30-, then 60-, then 90-minute practice matches every Saturday. The coach knows that both boys have the potential to be playing in the Premier League in five years’ time. Unfortunately for the first boy, keepy-uppies might be useful in terms of ball control, but they don’t properly prepare you for all aspects of the Premier League.

Likewise, the National Curriculum reading and SPAG tests are potentially useful gauges at 11 of certain isolated skills for English. However, the questions aren’t anywhere near as open as many schools’ Key Stage 3 tasks, which are designed to prepare students for the GCSE questions they’ll face in Year 11. In addition, as I’ve mentioned above, Key Stage 2 doesn’t focus on English Literature, which includes social, historical and cultural context. The students who are made to resit will need to improve their ability to do the Key Stage 2 style questions whilst also keeping up with the rest of their year in terms of these aspects of the Key Stage 3 curriculum.

All of this means we need to think strategically in order to limit the extent to which, whilst closing one set of gaps, we might open up a whole host of others, and this brings me on to my questions for secondary leaders and English teachers.

How can you ensure you are clear as to what your students should know and should be able to do based on their KS2 experience and outcomes?

  • Do the reading – check out the National Curriculum for KS2, the frameworks for the Reading and the Spelling, Punctuation and Grammar tests, and the framework for writing assessment so you fully understand the information you can gather from the children’s primary schools.
  • Find an effective way of communicating with the staff in your partner primary schools about the children and about their KS2 curriculum.
  • Get hold of the children’s work – though make sure you know under what conditions the work was produced. In my view, you need to know what they can do independently – though other views do exist.
  • Analyse the data. I’ve created this Question Level Analysis spreadsheet for the 2016 reading paper so that we can see which types of question students were most and least successful at (there’s a rough scripted sketch of the same idea after this list). I’ll write more about it, if it’s worthwhile, once we’ve used it with this year’s cohort.
  • Remember, though, that there is a gap between the end of primary and the start of secondary schooling, so…
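As promised above, for anyone who would rather script that question-level analysis than build a spreadsheet, here is a rough sketch of the same idea in Python (pandas). Everything in it is illustrative: the file names, column layout and domain codes are invented, and it is not the spreadsheet linked above.

```python
import pandas as pd

# Invented layout: one row per pupil, one column of marks per question (Q1, Q2, ...),
# plus a small lookup giving each question's maximum mark and content domain reference.
marks = pd.read_csv("ks2_reading_2016_marks.csv")          # hypothetical file of pupil marks
questions = pd.read_csv("ks2_reading_2016_questions.csv")  # columns: question, max_mark, domain

question_cols = [c for c in marks.columns if c.startswith("Q")]

# Facility: marks gained as a percentage of the marks available on each question
marks_available = questions.set_index("question")["max_mark"] * len(marks)
facility = (marks[question_cols].sum() / marks_available * 100).round(1)

summary = questions.set_index("question").assign(facility=facility)

# Which questions, and which content domain references, caused the most trouble?
print(summary.sort_values("facility"))
print(summary.groupby("domain")["facility"].mean().sort_values())
```

The output is just a facility figure per question and per content domain reference – enough to see where a cohort was most and least successful.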

How might you ensure you are clear as to what your students do know and are ignorant of, what they can and can’t do and what they’ve forgotten when they arrive with you in September?

  • In order to be able to make more valid inferences about your Year 7 students’ knowledge and abilities in September, you may want to pre-test. Clearly, this should give you information which you will use to inform your planning, rather than being testing for testing’s sake.

How could you build on what’s gone before?

  • Build up students’ writing stamina, including extended responses to reading. Look at crafting the small details as well as structuring a whole response like this or like this.
  • Explain and model the writing process and the thinking which goes on behind it.
  • Continue to develop the grammatical knowledge which the students already have, increasingly expecting them to apply it in analysis and to consider it when crafting their writing.
  • Use challenging texts – these children can read unseen texts with surprisingly sophisticated sentence structures and vocabulary.
  • Carry on building their general vocabulary and developing their use of technical terminology.
  • Keep them practising what they’ve done previously so they maintain or develop fluency.

I’ve only included a handful of ideas here – the list could clearly go on and on, but I realise you have other things to do.

How will you deal with the resits – if they happen?

Let’s consider this question sensibly and carefully, as quite a few people have already suggested that the resits will destroy students’ initial experiences of secondary school.

First of all, let’s return to the content domain defined in the framework for reading:

There was actually only one of these references (2e) which didn’t map straightforwardly into the GCSE Assessment Objectives when I produced this for the first post in this series:

This would suggest (unsurprisingly) that all of the KS2 domain is still relevant at KS3 and 4.

What about the grammar then? There must be a problem with that. Remember pages 8-12 of the Grammar, Punctuation and Spelling Framework which I mentioned in Part One? If not, or if you never looked at them in the first place, take a look at them. Now imagine that your students in Year 7 are so familiar with those terms that you could start properly teaching them to drop those terms into their analysis, or that you could use them when discussing slips in their writing. That might be nice, mightn’t it? There are some terms you might feel are less useful, some definitions you’d rather were changed and some terms you call something else, but having children arrive at secondary school knowing this stuff – that could be a game changer, couldn’t it?

So the reality is that the majority of this content will be relevant to our teaching at KS3 if we are following Ofsted’s sensible advice to ensure “teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.”

Well, if it’s not the content that will limit our students, then surely it will be the question types – drilling the students who are being forced to resit in responding to these question types will almost certainly be detrimental, won’t it? So let’s look at those again.

Question Types

 

They’re a mixture of multiple choice, ordering, matching and labelling, with short and long responses – hardly the question types of the devil. Although I’d want to shift the balance towards the extended responses, if students struggled with the basic questions, which were mostly about finding information and vocabulary, then that is where they need more practice and that is how we need to amend the curriculum they experience in Year 7. We keep our challenging texts, we keep our focus on grammar and on extended, independent writing, and we keep our drive to improve responses to reading and all of the other things I’ve mentioned, but we build in more work on knowledge of vocabulary, as this is where the biggest challenge lay in the reading test and, fortuitously, it will benefit these students in the longer term anyway.

When I started writing this, I didn’t expect to be in favour of the resits. In their proposed form, I’m still not, even though I think I’m now beginning to develop a clearer plan of how to deal with them.

If we are to have ‘retesting’, a better model, in my view, would be to test later – either towards the end of Year 7 or the beginning of Year 8 – and to test more of, or all of, the cohort. I’d also propose a literature element to the tests and a much stronger focus on decontextualised vocabulary testing.

These changes would act as a much firmer lever, I think, to achieve what Ofsted recommended in their Key Stage 3 report.
