Why not try doing too many things and not enough things, both at the same time?

Back in September 2015, Ofsted published a report entitled ‘Key Stage 3: the wasted years?’ It was produced following Sir Michael Wilshaw’s statement “that primary schools had continued to improve but the performance of secondary schools had stalled …[and]… one of the major contributory factors to this was that, too often, the transition from primary to secondary school was poorly handled.”

Ofsted made these nine recommendations to secondary school leaders:

  1. Make Key Stage 3 a higher priority in all aspects of school planning, monitoring and evaluation.
  2. Ensure that not only is the curriculum offer at Key Stage 3 broad and balanced, but that teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.
  3. Ensure that transition from Key Stage 2 to 3 focuses as much on pupils’ academic needs as it does on their pastoral needs.
  4. Create better cross-phase partnerships with primary schools to ensure that Key Stage 3 teachers build on pupils’ prior knowledge, understanding and skills.
  5. Make sure that systems and procedures for assessing and monitoring pupils’ progress in Key Stage 3 are robust.
  6. Focus on the needs of disadvantaged pupils in Key Stage 3, including the most able, in order to close the achievement gap as quickly as possible.
  7. Evaluate the quality and effectiveness of homework in Key Stage 3 to ensure that it helps pupils to make good progress.
  8. Guarantee that pupils have access to timely and high quality careers education, information, advice and guidance from Year 8 onwards.
  9. Have literacy and numeracy strategies that ensure that pupils build on their prior attainment in Key Stage 2 in these crucial areas.

With the possible exception of recommendation 8, all of these involve key staff in secondary schools having an understanding of what has gone on at Key Stage 2. This will clearly have different ramifications for different members of a secondary school’s community. For English teachers in particular it means having an understanding of their students’ experience at KS2, including the taught curriculum and the National Curriculum tests.

Having looked at the KS2 English curriculum in Part 1 of this series and explored the reported issues with this year’s test papers in Part 2, I want, in this post, to offer some questions for secondary senior leaders and English teachers to consider. First of all, though, I have some questions for the powers that be.

Are you sure you’re not still trying to do too much with the Key Stage 2 tests? Purposes and uses.

The Bew Report of 2011 claimed of the assessment system then in place: “There seems to be widespread concern…there are too many purposes, which can often conflict with one another. 71% of respondents to the online call for evidence believe strongly that the current system does not achieve effectively what they perceive to be the most important purpose.”

Bew referenced two papers, both by Dr Paul Newton, who was Head of Assessment Research at the QCA. In ‘Clarifying the Purposes of Educational Assessment,’ Newton argues that there are three primary categories of purpose for nationally designed assessments:

  • Assessment to reach a standards-referenced judgement. For example, an exam to award a grade, a level or a pass/fail.
  • Assessment to provide evidence for a decision. For example, an A-Level qualification which provides evidence that the student is ready to begin studying a related subject at undergraduate level.
  • Assessment to have a specific impact on the behaviour of individuals or groups. For example, a science GCSE which helps to enforce the KS4 National Curriculum for science.

Newton maintains that each of these three areas of purpose needs to be considered carefully. “Where the three discrete meanings are not distinguished clearly, their distinct implications for assessment design may become obscured. In this situation, policy debate is likely to be unfocused and system design is likely to proceed ineffectively.”

In ‘Evaluating Assessment Systems,’ Newton distinguishes the purposes of assessment systems from their uses and identifies twenty-two categories of use for assessment on page 5 of the document.

He explains that the accuracy of the inferences an assessment supports can deteriorate as more and more uses are added:

“…an end-of-key-stage test will be designed primarily to support an inference concerning a student’s ‘level of attainment at the time of testing’. Let’s call this the primary design inference. And let’s imagine, for the sake of illustration, that our assessment instrument – our key stage 2 science test – supports perfectly accurate design inferences. That is, a student who really is a level X on the day of the test will definitely be awarded a level X as an outcome of testing.

In fact, when the test result is actually used, the user is likely to draw a slightly (or even radically) different kind of inference, tailored to the specific context of use. Let’s call this a use-inference.

Consider, by way of example, some possible use inferences associated with the following result-based decisions/actions.

  1. A placement/segregation use. The inference made by a key stage 3 head of science – when allocating a student to a particular set on the basis of a key stage 2 result – may concern ‘level of attainment at the beginning of the autumn term’.
  2. A student monitoring use. The inference made by a key stage 3 science teacher – when setting a personal achievement target for a student on the basis of a key stage 2 result – may concern ‘level of attainment at the end of key stage 3’.
  3. A guidance use. The inference made by a personal tutor – when encouraging a student to take three single sciences at GCSE on the basis of a key stage 2 result – may concern ‘general aptitude for science’.
  4. A school choice use. The inference made by parents – when deciding which primary school to send their child to on the basis of its profile of aggregated results in English, maths and science – may concern ‘general quality of teaching’.
  5. A system monitoring use. The inference made by a politician – when judging the success of educational policy over a period of time on the basis of national trends in aggregated results in English, maths and science – may concern ‘overall quality of education’.

…when it comes to validation (establishing the accuracy of inferences from results for different purposes) the implication should be clear: accuracy needs to be established independently for each different use/inference.”

As far as I can see, the current Key Stage 2 tests are used, among other things:

  • To gauge national school performance.
  • To measure individual school performance for accountability purposes.
  • To check individual pupil attainment at KS2.
  • To measure progress from KS1.
  • To establish progress expectations between KS2 and KS4.
  • To check if students are ‘secondary ready’ and therefore trigger the need for them to resit a similar test in Year 7.
  • To enforce the teaching of elements of the National Curriculum which it would be harder to enforce without the test due to ‘academy freedoms.’
  • To inform parents of individual students’ performance.
  • To enable potential parents to make informed decisions about school choice.

This is by no means an exhaustive list. In many cases, the Key Stage 2 data is arguably the most reliable source we have for these uses. However, I do wonder whether the system could be made more reliable and whether all these other uses are making the tests less reliable in terms of their primary use.

Are you sure you’re not trying to do too much with the Key Stage 2 tests? Assessing the writing. 

In this TES article, Michael Tidd outlines the issues primary teachers have found: either jamming grammatical and punctuation elements into students’ writing, or jamming specific types of writing into the curriculum because these are most likely to feature the elements required to succeed in the moderation process.

This has come about as a result of the use of a secure fit approach to assessment. In her post ‘”Best fit” is not the problem’, Daisy Christodoulou outlines the problems with both best fit and secure fit assessment. She proposes other ways forward in her conclusion, advising that:

  • If you want to assess a specific and precise concept and ensure that pupils have learned it to mastery, test that concept itself in the most specific and precise way possible and mark for mastery – expect pupils to get 90% or 100% of questions correct.
  • If you want to assess performance on more open, real world tasks where pupils have significant discretion in how they respond to the task, you cannot mark it in a ‘secure fit’ or ‘mastery’ way without risking serious distortions of both assessment accuracy and teaching quality. You have to mark it in a ‘best fit’ way. If the pupil has discretion in how they respond, so should the marker.
  • Prose descriptors will be inaccurate and distort teaching whether they are used in a best fit or secure fit way. To avoid these inaccuracies and distortions, use something like comparative judgement, which allows performance on open tasks to be assessed in a best fit way without prose descriptors. (A sketch of how comparative judgement scoring can work follows this list.)
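
For readers curious how comparative judgement turns a pile of pairwise decisions into a scale, here’s a minimal sketch in Python. It assumes a simple Bradley-Terry model fitted by gradient ascent; the function names and the toy data are mine for illustration, not taken from any tool schools actually use.

```python
# A toy comparative-judgement scorer: fit Bradley-Terry strengths from
# pairwise "script A beat script B" judgements. All names and data are
# illustrative, not any real tool's API.
import math

def fit_bradley_terry(judgements, n_scripts, iters=2000, lr=0.01):
    """judgements: list of (winner, loser) index pairs."""
    theta = [0.0] * n_scripts  # log-strength per script
    for _ in range(iters):
        grad = [0.0] * n_scripts
        for w, l in judgements:
            # P(winner beats loser) under the current strengths
            p = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        for i in range(n_scripts):
            theta[i] += lr * grad[i]
        mean = sum(theta) / n_scripts   # anchor the scale at zero
        theta = [t - mean for t in theta]
    return theta

# Four scripts; script 0 wins most comparisons, script 3 loses most.
judgements = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 1), (2, 1)]
strengths = fit_bradley_terry(judgements, 4)
ranking = sorted(((s, i) for i, s in enumerate(strengths)), reverse=True)
for rank, (score, script) in enumerate(ranking, start=1):
    print(f"rank {rank}: script {script} (strength {score:.2f})")
```

The point is that judges only ever answer “which of these two pieces is better?”; the model then recovers a rank order and a scale, with no prose descriptors involved.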

We use a similar process to this, as outlined here. One wonders whether a key problem for KS2 teachers is that the National Curriculum assessment model is trying to do too much and, to the detriment of the students, telling KS3 teachers too little.

Are you sure you’re trying to do enough with the Key Stage 2 tests? What’s missing from KS4 at KS2?

Ok, so it sounds contradictory after my first two questions, but there are key elements of the KS4 curriculum missing from the KS2 tests, and this makes the tests less reliable as a means of estimating likely performance at 16.

Firstly, in the 2016 Key Stage 2 reading test, there were only two opportunities for pupils to write more than one line of text as an answer. I understand that this may make the questions more reliable in themselves. However, at Key Stage 4, reading assessment requires students to write at far greater length. I’m certainly not arguing for children of ten to sit two-hour exams featuring questions that demand responses filling two and a half sides of A4. I’m merely questioning whether a third of a side of A4 enables the brightest ten year olds to demonstrate their full potential in reading. There are possible implications here for secondary teachers in building, over time, students’ stamina in producing extended responses to reading.

Secondly, again in the reading test, all of the texts are unseen. This is fine if we’re just using the test as a tool for estimating performance at KS4 for English Language, where the three texts examined are similarly unseen. However, the English Literature GCSE now has parity with English Language in terms of school performance measures. Wouldn’t it make sense, then, to include at least one extract from a full text that students had studied in advance? I’m sure there would be great controversy over which text(s) should be taught, but I think the benefits for children would outweigh the arguments between teachers. One key aspect of literary study is that of making links between texts and their cultural, social and historical context. This used to be an Assessment Focus in the previous framework – though it only ever featured in a limited manner in the tests and the mark scheme. Reinstating it as part of the content domain could serve to make the link to literary study at Key Stage 3 more effective, as well as slightly raising the status of the study of history, culture and society at Key Stage 2.

Are you sure that resits are a good idea?

I’m not going to focus on the potential emotional impact of the resit process, which has been written about here. I think we can deal with this additional, potentially high-stakes test and help students to deal with it too, in the same way Chris Curtis argues we can support students through the stresses of the grammar test and just as we help students through GCSEs. Instead, I want to focus on curriculum and teaching, as these will likely have the biggest impact on students in the long term (including on their emotions).

Imagine training two boys to do five lots of two minutes of keepy-uppies for a competition. One narrowly misses out on qualifying for the semi-finals and the other goes on to win.

Their coach carries on training the quarter-finalist with keepy-uppies, with a tiny bit of full football training mixed in, but moves the other boy on to playing for a team: training him in set pieces, tackling and passing, with 30-, then 60-, then 90-minute practice matches every Saturday. The coach knows that both boys have the potential to be playing in the Premier League in five years’ time. Unfortunately for the first boy, keepy-uppies might be useful for ball control, but they don’t properly prepare you for all aspects of the Premier League.

Likewise, the National Curriculum reading and SPAG tests are potentially useful gauges at 11 of certain isolated skills in English. However, the questions aren’t anywhere near as open as many schools’ Key Stage 3 tasks, which are designed to prepare students for the GCSE questions they’ll face in Year 11. In addition, as I’ve mentioned above, Key Stage 2 doesn’t focus on English Literature, which includes social, historical and cultural context. The students who are made to resit will need to improve their ability to answer Key Stage 2 style questions whilst also keeping up with the rest of their year in these aspects of the Key Stage 3 curriculum.

All of this means we need to think strategically in order to limit the extent to which, whilst closing one set of gaps, we might open up a whole host of others. This brings me on to my questions for secondary leaders and English teachers.

How can you ensure you are clear as to what your students should know and should be able to do based on their KS2 experience and outcomes?

  • Do the reading – check out the National Curriculum for KS2, the frameworks for the Reading and the Grammar, Punctuation and Spelling tests, and the framework for writing assessment, so you fully understand the information you can gather from the children’s primary schools.
  • Find an effective way of communicating with the staff in your partner primary schools about the children and about their KS2 curriculum.
  • Get hold of the children’s work – though make sure you know under what conditions the work was produced. In my view, you need to know what they can do independently – though other views do exist.
  • Analyse the data. I’ve created this Question Level Analysis spreadsheet for the 2016 reading paper so that we can see which types of question students were most and least successful at (a minimal code sketch of the same idea follows this list). I’ll write more about it, if it’s worthwhile, once we’ve used it with this year’s cohort.
  • Remember, though, that there is a gap between the end of primary and the start of secondary schooling – which leads to the next question…
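
On the question-level analysis mentioned above: here’s a minimal sketch of the idea, assuming a simple CSV with one row per pupil and one column of marks per question. The file name, column names, domain tags and maximum marks are all invented for illustration – this is not the real spreadsheet.

```python
# Minimal question-level analysis: given one row per pupil and one column
# per question (marks scored), find where the cohort was strongest and
# weakest. File name, column names and domain tags are illustrative.
import pandas as pd

# e.g. columns: pupil, q1, q2, ... with marks scored per question
results = pd.read_csv("ks2_reading_2016.csv")

# Hypothetical mapping of questions to content-domain references (2a-2h)
domain = {"q1": "2a", "q2": "2b", "q3": "2d", "q4": "2b"}  # ...and so on
max_marks = {"q1": 1, "q2": 1, "q3": 2, "q4": 1}           # per question

question_cols = [c for c in results.columns if c.startswith("q")]
# Facility = average mark achieved as a fraction of the marks available
facility = {q: results[q].mean() / max_marks[q]
            for q in question_cols if q in max_marks}

by_question = pd.Series(facility).sort_values()
print("Hardest questions for this cohort:\n", by_question.head())

# Aggregate by content-domain reference to see which skills need work
by_domain = (pd.Series(facility)
             .groupby(pd.Series(domain))
             .mean()
             .sort_values())
print("\nAverage facility by domain reference:\n", by_domain)
```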

How might you ensure you are clear as to what your students do know and are ignorant of, what they can and can’t do and what they’ve forgotten when they arrive with you in September?

  • In order to make more valid inferences about your Year 7 students’ knowledge and abilities in September, you may want to pre-test. This should clearly give you information which you will use to inform your planning, rather than being testing for testing’s sake.

How could you build on what’s gone before?

  • Build up students’ writing stamina, including extended responses to reading. Look at crafting the small details as well as structuring a whole response like this or like this.
  • Explain and model the writing process and the thinking which goes on behind it.
  • Continue to develop the grammatical knowledge which the students already have, increasingly expecting them to apply it in analysis and to consider it when crafting writing.
  • Use challenging texts – these children can read unseen texts with surprisingly sophisticated sentence structures and vocabulary.
  • Carry on building their general vocabulary and developing their use of technical terminology.
  • Keep them practising what they’ve done previously so they maintain or develop fluency.

I’ve only included a handful of ideas here – the list could clearly go on and on, but I realise you have other things to do.

How will you deal with the resits – if they happen?

Let’s consider this question sensibly and carefully, as quite a few people have already suggested that the resits will destroy students’ initial experiences of secondary school.

First of all, let’s return to the content domain defined in the framework for reading:

There was actually only one of these references (2e) which didn’t map straightforwardly into the GCSE Assessment Objectives when I produced this for the first post in this series:

This would suggest (unsurprisingly) that all of the KS2 domain is still relevant at KS3 and 4.

What about the grammar then? There must be a problem with that. Remember pages 8-12 of the Grammar, Punctuation and Spelling framework which I mentioned in Part One. If not, or if you never looked at them in the first place, take a look at them. Now imagine that your students in Year 7 are so familiar with those terms that you could get on with teaching them, properly, to drop those terms into their analysis, or use the terms yourself when discussing slips in their writing. That might be nice, mightn’t it? There are some terms you might feel are less useful, some definitions you’d rather were changed and some terms you call something else, but having children arrive at secondary school knowing this stuff – that could be a game changer, couldn’t it?

So the reality is that the majority of this content will be relevant to our teaching at KS3 if we are following Ofsted’s sensible advice to ensure “teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.”

Well, if it’s not the content that will limit our students, then surely it will be the question types – drilling the students who are being forced to resit in responding to these question types will almost certainly be detrimental, won’t it? So let’s look at those again.

Question Types

They’re a mixture of multiple choice, ordering, matching and labelling, with short and long responses – hardly the question types of the devil. Though I’d want to shift the balance towards the extended responses, if students struggled with the basic questions – which were mostly about finding information and vocabulary – then this is where they need more practice, and this is how we need to amend the curriculum they experience in Year 7. We keep our challenging texts, we keep our focus on grammar and extended, independent writing, we keep our drive to improve responses to reading and all of the other things I’ve mentioned, but we build in more work on knowledge of vocabulary, as this is where the biggest challenge was in the reading test and, fortuitously, this will benefit these students in the longer term anyway.

When I started writing this, I didn’t expect to be in favour of the resits. In their proposed form, I’m still not, even though I think I’m now beginning to develop a clearer plan of how to deal with them.

If we are to have ‘retesting’, a better model, in my view, would be to test later – either towards the end of Year 7 or the beginning of Year 8 – and to test more of the cohort, or all of it. I’d also propose a literature element to the tests and a much stronger focus on decontextualised vocabulary testing.

These changes would act as a much firmer lever, I think, to achieve what Ofsted recommended in their Key Stage 3 report.

Why not try…reading the questions for once?

This is the second in a sequence of three posts about the Key Stage 2 National Curriculum and associated tests for English. In the first post, I explored the controversy surrounding the curriculum as a whole. This time, I’m looking in more detail at the tests themselves, in particular the 2016 papers. I want to see what all the fuss was about so as to unpick what lessons there might be, if any, for secondary English teachers. If you happen to be reading this as a primary teacher, you could probably skip the next bit as it’s an outline of the tests which you’re likely to be familiar with.

Why not literally try pulling the papers apart? No, I do mean it literally. Not metaphorically. Literally. Take the staples out, separate the pages, scatter them over the floor of the room, then dance around singing a ballad about the war between the family of the lion and the family of the bear.

There are two tests which are taken at the end of Key Stage 2 – one of which is split into two papers:

  • Reading
  • Grammar, punctuation and spelling Paper 1: Grammar and Punctuation
  • Grammar, punctuation and spelling Paper 2: Spelling

Reading

The reading test is made up of approximately 30-40 questions each year. Each question is, as of 2016, linked to one of the aspects of the “Content Domain” drawn from the comprehension section of the National Curriculum and listed in the reading test specification. If you want to find out more about content domains and content samples, this article from Daisy Christodoulou is certainly worth a read.

Content Domain

Though the connections aren’t perfect, a useful way of getting your head round this as a secondary teacher is to consider how each aspect of the content domain ties in with the Assessment Objectives for GCSE English. The table below attempts to do just that.

AO and Content Domain Comparison

What was clear from creating this grid was that, although there are undoubtedly clear connections between the skills required at Key Stage 2 and Key Stage 4, there is now (and rightly so, I think) a clearly wider gap between the two levels than there was when we were all using the same Assessment Focuses.

If you’re eagle-eyed, you’ll notice that the percentages for reading for AQA GCSE English only add up to 50%. The other 50% comes from the completion of two writing questions – one either a narrative or descriptive piece, the other a personal opinion piece. However, as this blog is only about the tests, I’m leaving those for now. I may look at them, alongside moderated, teacher-assessed writing, in a separate post in the future. What is worth noting here, though, is that it is the reading mark from Year 6 that is used by the government to calculate the secondary Progress 8 estimate. The writing mark is not used in this calculation. As a result, students’ outcomes at Key Stage 4 in both reading and writing are now predicted on the basis of just their reading response at Key Stage 2.

In addition to the information about the content domain, the test framework also establishes a range of ways in which the complexity of the questions will be varied. I think this is a really useful list for the design of any English comprehension test. It is important here, though, in terms of gauging whether the test is more or less challenging year on year. The list obviously includes the difficulty level of the text. However, the level of challenge will also be varied through:

The location of the information:

  • The number of pieces of information the question asks students to find and how close they are.
  • Whether the question provides students with the location of the information. For example, Paragraph 2.
  • Whether, in a multiple choice question, there are a number of close answers which act as distractors.

The complexity of the information in the relevant part of the text:

  • Whether the part of the text required to answer the question is lexico-grammatically complex.
  • Whether the information required to answer the question is abstract or concrete.
  • How familiar the information is to the students.

The level of skill required to answer the question:

  • Whether the task requires students to retrieve information directly from the text, to know the explicit meaning of a single word or multiple words, or to infer from a single piece or multiple pieces of information.

The complexity of the response strategy:

  • Whether the task requires a multiple choice answer, a single word or phrase from the text or a longer response from the student.

The complexity of the language in the question and the language required in the answer:

  • Whether there are challenging words used in the question.
  • Whether students have to make use of technical terms in their answers that aren’t in the question or the text.

We’ll come back to these later when we look at the concerns around this year’s test.

Grammar, Punctuation and Spelling

The Grammar and Punctuation Paper is worth 50 marks (there are approximately 50 questions) and it takes 45 minutes.

The Spelling Paper, meanwhile, is worth 20 marks (there are 20 questions) and takes approximately 15 minutes, depending on how long the administrator takes to read the questions.

This table provides a sense of the weighting for each of these elements in the overall test.

SPAG Weighting

As with the reading test, there is a Specification for the Grammar, Punctuation and Spelling Test. Again, this establishes the content domain from which the questions will be developed.

If you’re a secondary teacher, it would certainly be worth reading this booklet – in particular, pages 8-12 for the grammar domain and pages 12-13 for spelling. These are important pages, as they tell you what students ‘should’ have learnt in these areas before the end of KS2.

Remember, though: if you find out from the data you receive from their primary school that a pupil did well in these tests, it doesn’t mean that they recalled everything in the domain, merely that they performed well on questions covering a supposedly representative sample of the domain. Neither does it mean that they will have retained all of the knowledge over the summer break. If you don’t constantly reinforce this knowledge – through further practice in similar isolated tests, or through application during analysis or extended writing – they will most likely forget it or fail to make full use of it.
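
To make the sampling point concrete, here’s a toy simulation (all numbers invented): a test that samples 40 items from a much larger domain gives scores that only estimate a pupil’s true mastery, so two pupils with identical mastery can walk away with noticeably different marks.

```python
# Toy illustration: a test samples ~40 items from a much larger domain,
# so the score is only an estimate of domain mastery. Numbers invented.
import random

random.seed(42)

DOMAIN_SIZE = 400      # imagine 400 things a pupil could be asked
TEST_LENGTH = 40       # the test samples 40 of them

def simulate_score(mastery):
    """mastery: fraction of the domain the pupil has really secured."""
    known = set(random.sample(range(DOMAIN_SIZE), int(mastery * DOMAIN_SIZE)))
    test = random.sample(range(DOMAIN_SIZE), TEST_LENGTH)
    return sum(item in known for item in test) / TEST_LENGTH

for mastery in (0.6, 0.75, 0.9):
    scores = sorted(simulate_score(mastery) for _ in range(1000))
    lo, hi = scores[25], scores[974]  # middle 95% of simulated scores
    print(f"true mastery {mastery:.0%}: test scores mostly {lo:.0%}-{hi:.0%}")
```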

The table on page 14 helpfully reminds us that students haven’t been assessed on their use of paragraphs in this test – that is instead done through the moderated teacher assessment of writing. This also serves to emphasise the point that students have not used the grammar or spelling in the test in context.

This is an isolated assessment of their knowledge, rather than the application of that knowledge.

Finally, there are some useful though controversial definitions, indicating what the students ‘should’ have been taught about grammar. A number of these have proved contentious because, as we know, there isn’t always agreement over elements of grammar. I’m not going to go over this ground again by unpicking the 2016 grammar paper, as I think I covered it in the last post. However, the reading paper this year caused a bit of a stir for a number of reasons, so I want to look at that in more detail.

Why not try asking then answering your own rhetorical question?

So, what was all the fuss about this year? Well firstly, on 10th May, the TES published this article, citing a teacher who claimed the reading “paper…would have had no relevance to inner-city children or ones with no or little life skills.” If anyone has any recommendations of texts for pupils with no or few life skills which would also be suitable for a national reading test, please do leave a comment.

The texts chosen this year do appear challenging. There’s plenty of demanding vocabulary. Haze, weathered and monument appear in the first text. Promptly, sedately, pranced and skittishly appear in the second. In the third, we have oasis, parched, receding and rehabilitate. The sentence structures add further complexity to the vocabulary. In text two, we have: “A streak of grey cut across her vision, accompanied by a furious, nasal squeal: ‘Mmweeeh!'” and “There she dangled while Jemmy pranced skittishly and the warthog, intent on defending her young, let out enraged squeals from below. Five baby warthogs milled around in bewilderment, spindly tails pointing heavenwards.” Some teachers have carried out reading accessibility checks on the texts and claim they are pitched at a 13-15 year old age range.
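
Those accessibility checks will have used a readability formula of some kind; the reports don’t say which. One common choice is the Flesch-Kincaid grade level, which needs only sentence, word and syllable counts. A rough sketch follows – the syllable counter is a crude vowel-group heuristic, so treat the output as indicative at best.

```python
# Rough Flesch-Kincaid grade-level check of a passage. The syllable
# counter is a crude vowel-group heuristic, so results are indicative.
import re

def count_syllables(word):
    # Count runs of vowels as syllables; floor of one per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

passage = ("There she dangled while Jemmy pranced skittishly and the warthog, "
           "intent on defending her young, let out enraged squeals from below. "
           "Five baby warthogs milled around in bewilderment, spindly tails "
           "pointing heavenwards.")
grade = flesch_kincaid_grade(passage)
print(f"US grade {grade:.1f} (roughly age {grade + 5:.0f}-{grade + 6:.0f})")
```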

The problem here, though, is that, as we saw earlier, the difficulty level of the test isn’t set solely through the complexity of the texts as a whole. I’m currently working on a Question Level Analysis spreadsheet for the paper, which I hope to share in the final post in this series. In producing it, it’s become clear that, although the first two texts are more challenging in terms of raw vocabulary, the questions on these texts often direct children to simpler and shorter sections or even individual words. The children could pick up marks here without understanding every single word of the texts. I would imagine, though, that not all children realised this, hence the reported tears. As you move through the paper, the third text appears simpler in terms of vocabulary. Here, though, the questions are based on longer sections of the text and two require longer answers.

I don’t think the writers of the tests did this perfectly, but I don’t think they did a terrible job. I’ll look at some possible changes I’d like to see in the next post. 

There’s some truth, again superficial, to the claim that the contexts of at least two of the texts are fairly upper middle class: a rowing trip to an ancestral monument and a safari adventure on the back of a giraffe. Who knew you could ride a giraffe? Perhaps my life skills are limited.

Underneath the French-polish veneer, though, these are essentially adventure stories: one about discovering family identity, the other about ending up in a dangerous situation after ignoring the rules. These are fairly familiar themes to children, even if the specific contexts may be alien to some of them. I’d worry if we were to find ourselves suggesting that “inner city” children, or indeed working class children, should be tested using only texts which describe their lived experiences.

In Reading Reconsidered, Lemov et al point to work by Annie Murphy Paul in which she argues that “The brain does not make much of a distinction between reading about an experience and encountering it in real life.” Reading helps students empathise. This is no surprise, but it is worth a reminder: at secondary and primary level, it’s our responsibility as teachers of language and literature to expose students to a range of experiences through the texts we direct and encourage them to read.

Why not try finding out about the problem?

Why not try waking up screaming after a recurrent nightmare in which you ride a white camel whilst being pursued by bees who are suffering with CCD?

I’ve spent a good part of today exploring the 2016 Key Stage 2 National Curriculum Tests for Reading, Spelling and Grammar. 

This was in part because I was intrigued by the unrest about a number of changes to the curriculum, assessment and testing model this year. In particular, I was interested to see what all the fuss was about in terms of the level of challenge in the reading paper. Mostly, though, as a secondary English teacher and Vice Principal of an all-through academy, my motive was to get my head around what we might glean from the data we receive about our 2016 Year 7 cohort, so that we might address possible gaps in their knowledge, limit a dip at the start of the secondary phase and deal with the resits which some of them are likely to end up taking.

Wouldn’t it be good if we could drag something potentially positive out of an assessment which is viewed so negatively by some and distrusted by so many – something useful for the children we (both primary and secondary teachers) educate?

This post will be in three parts. In the first, I’ll focus on the current issues people take with the National Curriculum for English at Key Stage 2 and the three associated tests. The second post will look specifically at this year’s tests to see what all the fuss was about. In the third I’ll tentatively suggest some ways forward, with a particular focus on what secondary English teachers might do with the information from the tests, hopefully to the benefit of their students.  

Why not attend a ResearchED event dressed in a tweed jacket with leather elbow patches and chalk dust marks, convince everyone you’re a traditionalist by offering a reading of David Didau’s as yet unpublished, house-sized edu-bible, but secretly start a breakout session on guerrilla Brain Gym warfare?

In the limbo period between children taking the tests and the public release date of 20th May, I thought it’d be worthwhile finding out more about the controversy surrounding them. 

So, first, a bit of history…

When the National Curriculum was introduced to schools in 1988, it attempted to establish the knowledge and skills which children should learn between the start of their schooling and the age of 16. In order to do this, it formally separated education into Key Stages. These were based on the structures which were already in place in the schooling system:

  • Key Stage 1 – Infant school 
  • Key Stage 2 – Junior school
  • Key Stage 3 – Lower secondary
  • Key Stage 4 – Upper secondary

Kenneth Baker’s original consultation document proposed the following purposes for the curriculum:

  1. Setting standards for pupil attainment
  2. Supporting school accountability
  3. Improving continuity and coherence within the curriculum
  4. Aiding public understanding of the work of schools

The curriculum has been amended a number of times, with the reasons for these changes variously put down to streamlining, coherence, relevance and rigour.

The current National Curriculum, insofar as it is one, has very similar aims to those outlined in Baker’s initial consultation – though the means to the end are quite different now. It seems unlikely, for example, that the following would have been seen in the original National Curriculum:

“The national curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the national curriculum specifications. The national curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons to promote the development of pupils’ knowledge, understanding and skills as part of the wider school curriculum.”

The Curriculum and its associated tests have always been contentious, as outlined by Robert Peal in his polemic Progressively Worse. At different points in its history, the designers and redesigners of the curriculum have been accused of contributing to a “dumbing down” of education with the help of Mr Men, or of being overly elitist as a result of focusing “too much” on dead white males. At the moment (though some would argue differently) the complaints mainly swing towards the latter of these two. Let’s categorise some of the current debate before we look at the tests themselves.

Why not try sitting on the fence?

Content

The biggest current issue in terms of the content of both the KS2 English curriculum and the tests relates to grammar. 

This article, from 2013 in The Guardian, neatly summarises the points Michael Rosen makes against the National Curriculum’s treatment of grammar and the current testing methodology. Here, he states that he doesn’t disagree with the teaching of grammar in itself, but rather with the manner of teaching and testing which the curriculum prescribes.

On the flip side of the debate are Daisy Christodoulou and David Didau, who view the teaching of grammar and linguistic terminology at primary level as a gateway to success at secondary school and beyond, into adulthood.

Interestingly, there seems to be very little, if any, similar argument about the isolated teaching of spelling, and I doubt there would be if the government introduced an isolated vocabulary test. I know this is, in part, because there is far more consistent agreement about the spellings and meanings of words, as a result of something called a dictionary, but I can’t help feeling that teaching novices a set of rules and conventions they can later be taught to bend and break would help them in the long run.

Validity and Reliability 

Some commentators argue that the tests are neither valid (that they don’t assess a decent sample of the domain of each subject) nor reliable (that they don’t assess consistently or precisely enough). Page 31 of the Bew Report deals well with these and other issues.

Another argument against the National Curriculum tests is that they are unreliable because of issues with the accuracy of marking and faults in administration. The TES highlights these issues here. 

A number of anti-testers view teacher assessment as the answer to these problems. The NUT outline their case for a shift towards this kind of model in this document.

Teacher assessment can have its own pitfalls though, as Daisy Christodoulou identifies in this blog.  

What’s particularly concerning is that teacher assessment seems to be biased against poor and disadvantaged students.

High Stakes – Under Pressure

This aspect of the debate can be divided into two very closely related issues:

  1. The tests put pressure on schools and teachers to act in perverse ways. 
  2. The tests put undue pressure on children. 

An effective summary of the arguments relating to the former can be found here from Stephen Tierney. 

In terms of the latter, just read this report of children’s reactions to the tests on the day. 

Meanwhile, Martin Robinson offers some balance to this part of the debate in his piece about not panicking.

Who uses the data anyway?

A significant issue with the National Curriculum tests and KS2 teacher assessments is that they create a divide between primary and secondary professionals at the exact point where they need to be working together most for the benefit of children. Many primary teachers believe the data is not only not used by their secondary counterparts but actively replaced by information gleaned from other tests. Secondary teachers, meanwhile, feel that the data is unreliable due to inflation resulting from the high-stakes nature of the results. Both sides of this argument are explored really well here.

The writer, Michael Tidd, who is a middle school teacher, finishes off by saying, “If I see an anomalous result, or a child who appears not to be working at the expected level, then I would think it only normal to speak to the previous class teacher. If only the same happened more frequently between schools.”

Perhaps a good starting point in this process would be for secondary teachers to have a better understanding of the nature of the test papers and this will be the focus of my next post.