Reconsider Yourself at Home

“What an excellent example of the power of dress, young Oliver Twist was! Wrapped in the blanket which had hitherto formed his only covering, he might have been the child of a nobleman or a beggar; it would have been hard for the haughtiest stranger to have assigned him his proper station in society. But now that he was enveloped in the old calico robes which had grown yellow in the same service, he was badged and ticketed, and fell into his place at once — a parish child — the orphan of a workhouse — the humble, half-starved drudge — to be cuffed and buffeted through the world — despised by all, and pitied by none.”

Only a few paragraphs into his novel, Oliver Twist, Dickens establishes his protagonist as representative of those children who, through the circumstances of their birth, the state of Victorian society and the treatment of others, were destined for a life of economic, social and cultural poverty. 

But our Olivers, our Olivias, our Olgas and our Omars, they’re alright, aren’t they? We have higher expectations now. We have more schools than ever before judged to be good or outstanding by Ofsted. The government says so. Even Sir Michael Wilshaw says so – and he definitely has high expectations. 

Yet, according to Barnardo’s, the British children’s charity, “There are currently 3.7 million children living in poverty in the UK. That’s over a quarter of all children.” Worryingly, on the same page of Barnardo’s website, they tell us:

  • “Only 48 per cent of 5 year olds entitled to free school meals have a good level of development at the end of their reception year, compared to 65 per cent of all other pupils.  
  • Less than half of pupils entitled to free school meals (just 34 per cent) achieve 5 GCSEs at C or above, including English and Maths, this compares to 61 per cent of pupils who are not eligible.”

So, at both ends of our system of compulsory schooling, statistically there are still significant educational inequalities. Though many of these children won’t be bound by these statistics, in too many cases the underlying issues affect students as they move through adulthood and linger into old age. Although the numbers are fewer, too many, like Oliver, are still left “badged and ticketed” to fall into their “place” in society. 

In their 2013 paper for ASCL, “What is Preventing Social Mobility? A Review of the Evidence,” Francis and Wong identify the following two factors as playing a key role in generating the attainment and opportunity gaps between advantaged and disadvantaged students in the UK. 

1) The high level of educational and social segregation in our system.

2) The ability of those with greater financial and social capital to use it to secure advantage for their children.

Within these two areas, Francis and Wong list:

  • School (particularly teacher) quality and dis/advantage. 
  • Educational segregation through private and selective schools and through setting or streaming within schools. 
  • Identity and self-fulfilling prophecies. 
  • Curriculum. 
  • Work experience and school to work routes. 
  • Access to higher education. 

Addressing these issues requires action at system, school and teacher levels. 

Last week, I had the privilege of spending two days training with the Teach Like a Champion team – Doug Lemov, Erica Woolway and Maggie Johnson – as well as welcoming Doug to Swindon Academy, the school where I teach. I remember, on first reading Teach Like a Champion, being seriously impressed by the analysis Doug and his team had carried out in terms of what the most effective teachers in their school systems were doing to secure rapid progress, particularly for students from deprived backgrounds. If you’re unfamiliar with his work, this video is a useful starting point. 

When we were beginning to develop our teaching model at Swindon Academy, it struck us that there would be great benefit in having a shared language of teaching which we could use with both staff, during their coaching and CPD sessions, and students during their lessons. We could also see that the strategies tied in exceedingly well with the mastery curriculum model we were moving towards. Most importantly, we agreed with Lemov’s message that the strategies he identifies in the book are a toolkit to draw from, rather than a prescriptive list of methods which must be used robotically and unthinkingly in every lesson. Teachers must, whilst aligning themselves with the vision and ethos of their school, be seen as professional thinkers who make choices about the best strategies to support their students to make progress. 

I think this also links to what Doug alludes to in this post about school leaders making professional judgements in terms of what’s going to be of greatest benefit to the students of their school at that point on their developmental journey. It’s also why I’m really proud of what Doug says in the video at the end of this post about what he saw happening in our school. 

I’ll be writing another post soon about the ways in which we’re looking to weave in the strategies from Reading Reconsidered with the same sense of mission – not one born out of the pity which Dickens suggests people could/should have for Oliver, but rather one based on a belief in the benefits of engaging all of our students – whatever their background – in the richness of the English language and the wonder of English literature. 


 

The Higher You Build Your Barriers – Analyse This 2


In the previous post in this sequence, I established the premise that, in the literature classroom, reading is essentially an intellectual, emotional and/or behavioural reaction to text(s) and that, when we’re teaching students to study literature, we’re teaching them factual and/or procedural knowledge which will enable them to more successfully communicate these reactions.

Now I want to look at the potential barriers to students communicating a knowledgeable reaction in the form of an analytical piece of writing at KS3 and beyond. As this is such a huge topic, my aim, in this instance, is to categorise these barriers rather than list every possible permutation. I also don’t intend to explore any solutions here just yet. Instead, I’ll be saving these for a later post in this sequence. To help structure my thinking I’ll be splitting the issues up into two core categories.

Hands up please if you think I’ve missed something. 

Text based barriers relating to:

  • The mechanics of reading 
  • Emotional impact 
  • Behavioural impact 
  • Intellectual impact

Task based barriers relating to:

  • Question type
  • Mark scheme
  • Specimen exemplar responses from exam boards

Text based barriers:


At the most fundamental level, this set of barriers includes students having gaps in their phonic knowledge on arrival at secondary school, not having reached fluency in their decoding skills and making little use of expression or variation of tone in their reading. If these more basic problems are still lingering at the end of Key Stage 2, then they clearly need to be addressed early on during Key Stage 3 for students to make any sense of the more complex literary texts they’ll encounter during their GCSE years. 

Beyond the foundations, these mechanical barriers also encompass limited levels of perseverance with potentially unfamiliar or archaic language; the possibility that students may be reading a text written in a language in which they are not yet proficient; and difficulties caused by the complex syntactic sequencing often used in poetry and some (particularly older) prose texts. 

Barriers relating to students’ emotional, behavioural and intellectual reactions can, of course, be caused by a range of specific educational needs which make, for example, empathising with characters in a text or cognitively processing a text’s meaning much more challenging. 

In addition, some students have limited vocabulary with which to either comprehend or express subtly different feelings or actions. Comprehension and communication of comprehension can also be stifled if students don’t know much about the themes or the concepts which a text focuses upon. 

A lack of exposure to a range of cultural, social or emotional experiences inhibiting empathy with the narrator or character(s) can prevent or limit an emotional reaction. Conversely, students can be unwilling to open up about emotions or actions as a result of past social experiences they have had, such as mockery at the hands of their peers. 

When students have little knowledge of the possible effects of choices of form, structure and figurative or rhetorical language, it can prevent their reaction from going beyond the emotional or behavioural. This issue can also restrict their ability to express why they or others may have had these reactions to a text in the first place.

A lack of knowledge linked to social, historical and cultural contexts can prevent students expressing how or why texts are characteristic of their time or how they break away from traditions or conventions. It can also prevent students understanding why characters have acted in certain ways if they deviate from the manner in which they would act themselves as a result of differences in culture. 

Task based barriers:


As English teachers we are, in the majority of cases, graduates of English literature and/or language degree courses. Consequently, I’m sure we’d all like to think, we have a clear sense of how analytical writing should be structured and crafted. The ideal in our minds, most likely, takes the form of an academic essay – start to finish. 

A number of the examination questions which students have to answer for the latest GCSE exam specifications, though, require them to write something more like a mini-essay or ‘essay-let.’ This is in part because of the wording of the questions themselves and in part because of the time students are given to respond in the exams. 

At times, therefore, I think there is a mismatch between what we have in mind in terms of structuring academic writing and what is required from students for a successful response in the form of a high grade. This is more the case in English Language than in English Literature, but I believe the issue exists in both qualifications. 

To exemplify this, in AQA’s GCSE Specimen Literature Paper 2, students have to complete this question:

In both ‘Poem to my Sister’ and ‘To a Daughter Leaving Home’ the speakers describe feelings about watching someone they love grow up. What are the similarities and/or differences between the ways the poets present those feelings?

Students’ responses to this question are worth a maximum of eight marks from a paper worth 94 marks in total. The time allocated for the paper is 2 1/4 hours. If you were to allocate the same proportion of time to each question as the proportion of the overall marks it is worth, then this question should take just under 12 minutes. Although the two unseen poems referenced in this question are nowhere near as rich in language or as structurally dense as Ozymandias, by Shelley, or Exposure, by Owen, (two of the pre-studied poems the same paper may also include a question about) and although the students will have read the first poem in order to answer the previous question, 12 minutes seems very little to respond to a question which could, if more time were given, potentially lead into a full analytical, comparative essay. 
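For anyone who wants to check that arithmetic, here is the proportional-time sum as a minimal sketch (the marks and timing are taken from the specimen paper; the function name is my own):

```python
# Proportional time for the 8-mark poetry question on a 94-mark, 2 1/4 hour paper.
TOTAL_MARKS = 94
TOTAL_MINUTES = 2.25 * 60  # 135 minutes

def proportional_time(question_marks):
    """Minutes a question 'earns' if time is shared out in proportion to marks."""
    return TOTAL_MINUTES * question_marks / TOTAL_MARKS

print(round(proportional_time(8), 1))  # prints 11.5 - just under 12 minutes
```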

The time allocation for this question will clearly mean students produce a less than full response to these relatively simplistic poems. One wonders, therefore, whether it’s a worthwhile task or whether it’s actually been included in the paper to fulfil a government requirement, especially as it’s likely to lead to the teaching of a more simplistic form of response. 

Potentially exacerbating the issue of which structure to use for each question are the bullet points which exam boards provide in some of their tasks. These are, in the most part, designed to support students with basic prompts relating to the content of their responses. However, they can actually act as a barrier if students use them as a guide to structuring and organising their answer. 

In the same specimen AQA paper as the poetry question we’ve just looked at, students are assessed on their knowledge of a modern prose or drama text. One of the options for the question relating to An Inspector Calls is:

How and why does Sheila change in An Inspector Calls?

Write about:

  • How Sheila responds to her family and the Inspector. 
  • How Priestley presents Sheila by the way he writes. 

This question is odd for a number of reasons. Firstly, of the twenty-four questions in this section of the paper, this is one of only four that don’t mention the writer’s name. What’s stranger still is that this is the only question which, prior to the bullet points, treats the events of the text as if they’re reality and the character as a real person rather than a literary construct. The vast majority of questions in this paper begin with the stem, “How does (insert the writer’s name) present/explore…?” The three other questions which don’t mention the writer of the text ask about the importance of a particular feature, theme or character. 

This matters because a student could answer the question above about Sheila well within the terms of the question itself by giving a narrative-based response, but they would be penalised against the mark scheme and the bullet points, as they would be less likely to have discussed the effects of the writer’s choices. I may be wrong, but I think this would restrict them to level one of the mark scheme and no higher than five marks. 

Protecting students against this is presumably why the bullet points have been included. They’re designed to remind students of the other aspects of the mark scheme, but it is plausible that this reminder comes too late. It’s also quite possible that the bullet points in the Sheila question could actively promote a way of thinking which risks taking students further away from the original question. The first bullet could quite feasibly lead students to discuss Sheila’s separate responses to the other members of her family, without making these relevant by linking each back to the way it results in changes to her world view or sense of morality. The second bullet point finally suggests to students that they should view Sheila as a literary construct, crafted by Priestley. However, there is no reference to changes, alterations or shifts in her character in this last bullet point, and my concern is that this creates an unnecessary barrier to students crafting an effective response. The question itself has prompted one way of thinking and therefore writing. The bullet points suggest a different approach. 

The question itself should lead towards success within the terms of the mark scheme. In this, and other cases, it does not. 

One reason why teachers revert to teaching a PEE/PEEL/PEEZ style structure as a basic form for structuring the parts of a response is that you can feel like you’re wrangling the different parts of the acronym around to address the different parts of the mark scheme. This, you might think, guards against a dodgy question like the one about Sheila. 

There are four Assessment Objectives covered in the English Literature qualification:

AO1 – Read, understand and respond to texts:

  • Maintain a critical style and develop an informed, personal response.
  • Use textual references, including quotations, to support and illustrate interpretations. 

AO2 – Analyse the language, form and structure used by a writer to create meanings and effects, using relevant subject terminology where appropriate. 

AO3 – Show understanding of the relationships between texts and the contexts in which they were written. 

AO4 – Use a range of vocabulary and sentence structures for clarity, purpose and effect with accurate spelling and punctuation. 

PEE covers some of this, but not all. In the next post in this sequence, I’m going to look at where PEE comes from and pull some responses to literature from people who’ve not been taught such a structured response to see what they do. 

Thoughts, feelings and actions – Analyse This 1


Why do you read? Why did you read the book you were reading before you turned the lights out last night? Why did you read that poem at your grandfather’s funeral? Why did you bother to read the instructions for the flat pack cabin bed you struggled to put up at the weekend? 

The most beautiful, most ugly, most awe-inspiring, earth-shattering, heartbreaking, life-changing, even the most mind-numbingly functional texts we read impact on us because they make us think, feel and/or do something. 

When we teach literature, we’re essentially teaching students factual knowledge and procedures which may change both the way they react to the text(s) or the ways in which they subsequently express these reactions.

I watched, with interest, as Fiona Ritson (someone you should follow – @fkritson – if you’re an English teacher on Twitter, because she’s so helpful and so generous with her resources) collated a list of approaches to analytical writing. 

Here it is:


What was particularly interesting was that the overwhelming majority of responses to Fiona’s request related to procedures post-reading – the analytical writing element. I want to explore this aspect of literary analysis as it is really important. It’s where students win the game on match day. However, I do think there’s a risk we can spend too much time playing out set pieces for the end game rather than focusing on whether our students are match fit. Without working on conditioning students’ knowledge, none of the PEEs, PEAs, PEALs, PETALs, PEEDs or PEEZLs will help. It’s as easy peasy as that. 

Over the next few posts I want to explore:

  • A range of barriers which are faced by students when they’re reading and therefore writing analytically about thoughts, feelings and actions triggered by the texts they’ve read. 
  • Some possible solutions to these barriers. 
  • The key features of analytical writing.
  • The ways in which exam structures both aid and limit students in developing the procedural knowledge relating to analytical writing. 

English Subject Knowledge Reading

In ‘What Makes Great Teaching?’ Coe et al list six components of great teaching. The first of these, for which they say there is “strong evidence of impact on student outcomes”, is what they call “pedagogical subject knowledge.” They argue that, “The most effective teachers have deep knowledge of the subjects they teach, and when teachers’ knowledge falls below a certain level it is a significant impediment to students’ learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students’ own methods, and identify students’ common misconceptions.”

Over the last couple of days, I’ve been collating the following list of texts which English teachers have recommended as being useful for developing different areas of subject knowledge. At some point in the future, I intend to write about the other part of the claim above, relating to unpicking misunderstandings and analysing students’ thinking in English, but that’s a whole other job. If you have a further suggestion for the list below, please contact me on Twitter @nsmwells.

There are many people I’d like to thank for their help with the reading list and I’ve named them at the end of this post. The main person I’d like to thank though is our Sixth Form Study Supervisor who is intending to train as a teacher in 2017. If he hadn’t asked me for a list of books to read, then I wouldn’t have asked for everyone’s help to pull this together.

Rhetoric:

  • ‘The Elements of Eloquence: How to Turn the Perfect English Phrase’ by Mark Forsyth.
  • ‘You Talkin’ To Me? Rhetoric from Aristotle to Obama’ by Sam Leith.
  • ‘Trivium 21st C’ by Martin Robinson
  • ‘A Matter of Style’ by Matthew Clark

Grammar and Spelling

  • ‘Gwynne’s Grammar’ by N.M. Gwynne.
  • ‘Practical English Usage’ by Michael Swan
  • ‘It was the best of sentences, it was the worst of sentences’ by June Casagrande
  • ‘Rediscover Grammar’ by David Crystal
  • ‘Cambridge Grammar of English’ by Ron Carter and Michael McCarthy
  • ‘The Teacher’s Guide to Grammar’ by Deborah Cameron
  • ‘Discover Grammar’ by David Crystal
  • ‘How Language Works’ by David Crystal
  • ‘Language, the Basics’ by Robert Lawrence Trask
  • ‘English Grammar for Today: A New Introduction’ by Geoffrey Leech, M. Deuchar, and Robert Hoogenraad
  • ‘Spell it Out’ by David Crystal

Poetry:

  • ‘Poetics’ by Aristotle
  • ‘How to be Well Versed in Poetry’ by E.O. Parrott
  • ‘The Poetry Toolkit’ by Rhian Williams
  • ‘The Ode Less Travelled’ by Stephen Fry
  • ‘Poetry in the Making’ by Ted Hughes
  • ‘On Poetry’ by Glyn Maxwell
  • ‘A Linguistic Guide to English Poetry’ by Geoffrey Leech
  • ‘The Art of Poetry’ by Neil Bowen
  • ‘Does it have to Rhyme?’ by Sandy Brownjohn
  • ‘All the Fun’s in How You Say a Thing: An Explanation of Meter and Versification’ by Timothy Steele
  • ‘Articulate Energy’ by Donald Davie
  • ‘The Secret Life of Poems’ by Tom Paulin

Drama:

  • ‘The Empty Space’ by Peter Brook
  • ‘Modern Drama in Theory and Practice’ by JL Styan
  • ‘An Introduction to Greek Theatre’ by P.D. Arnott
  • ‘Greek Theatre Performance’ by David Wiles
  • ‘The Time-traveller’s Guide to British Theatre’ by Aleks Sierz & Lia Ghilardi
  • ‘How Plays Work’ by David Edgar

The Novel:

  • ‘The Art of Fiction’ by David Lodge
  • ‘How Fiction Works’ by James Wood
  • ‘How Novels Work’ by John Mullan
  • ‘How to Study a Novel’ by John Peck
  • ‘Reading Like a Writer’ by Francine Prose
  • ‘Faulks on Fiction’ by Sebastian Faulks

Shakespeare

  • ‘Shakespeare’s Restless World: An Unexpected History in Twenty Objects’ by Neil MacGregor.
  • ‘Shakespeare’s Words: A Glossary and Language Companion’ by David and Ben Crystal
  • ‘1599’ and ‘1606’ by James Shapiro
  • ‘Will in the World’ by Stephen Greenblatt
  • ‘Shakespeare the Basics’ by Sean McEvoy
  • ‘Teaching Shakespeare’ by Rex Gibson
  • ‘Shakespeare on Toast’ by Ben Crystal
  • ‘Soul of the Age’ by Jonathan Bate
  • ‘The Genius of Shakespeare’ by Jonathan Bate
  • ‘Shakespeare’s Wife’ by Germaine Greer
  • ‘Shakespeare’s Language’ by Frank Kermode
  • ‘The Wheel of Fire’ by G Wilson Knight
  • ‘Shakespearean Tragedy’ by A.C. Bradley
  • ‘Shakespeare: A Biography’ by Peter Ackroyd
  • ‘The Lodger: Shakespeare on Silver Street’ by Charles Nicholl
  • ‘In Search of Shakespeare’ by Michael Wood
  • ‘William Shakespeare: His Life and Work’ by Anthony Holden

Linguistics and Language Debates:

  • ‘The Language Wars’ by Henry Hitchings
  • ‘For Who The Bell Tolls’ by David Marsh
  • ‘English for the Natives’ by Harry Ritchie
  • ‘Accidence Will Happen: The Non-Pedantic Guide to English Usage’ by Oliver Kamm
  • ‘Doing English Language’ by Angela Goddard
  • ‘Knowing About Language: Linguistics and the Secondary English Classroom’ by Marcello Giovanelli & Dan Clayton
  • ‘Ways with Words’ by Shirley Brice Heath

History of Language:

  • ‘Mother Tongue’ by Bill Bryson
  • ‘The Story of English in 100 Words’ by David Crystal
  • ‘The Adventure of English’ by Melvyn Bragg

Literary History and Glossaries:

  • ‘A Little History of Literature’ by John Sutherland.
  • ‘The Seven Basic Plots’ by Christopher Booker
  • ‘The Western Canon: The Books and Schools of The Ages’ by Harold Bloom
  • ‘Literature, Criticism and Style’ by Stephen Croft
  • ‘Dictionary of Literary Terms and Literary Theory’ (Penguin Reference)
  • ‘Practical Criticism’ by John Peck
  • Routledge’s New Critical Idiom Series
  • ‘Beginning Theory’ by P Barry
  • ‘How To Read Literature’ by Terry Eagleton

Essay writing:

  • ‘The Art of Writing English Literature Essays’ by Neil Bowen

Creative Writing:

  • ‘Negotiating with the Dead’ by Margaret Atwood
  • ‘Gotham Writers Workshop Writing Fiction’ by The Gotham Writers Workshop
  • ‘Short Story: From First Draft to Final Product’ by Michael Milton
  • ‘Short Circuit: A Guide to the Art of the Short Story’ ed Vanessa Gebbie

Macbeth:

  • ‘Macbeth’ the Arden edition
  • ‘Sweet Violence: The Idea of the Tragic’ by Terry Eagleton
  • ‘1599 A Year in the Life of William Shakespeare’ by James Shapiro
  • ‘Shakespeare and Co’ by Stanley Wells
  • ‘The Tragedy of Macbeth’ by Nicholas Brooke
  • ‘William Shakespeare’s Macbeth’ by Harold Bloom
  • ‘Macbeth (New Casebooks)’ by Alan Sinfield
  • ‘Shakespeare: “Macbeth” (Casebook)’ by John Wain
  • ‘Macbeth: A Guide to the Play’ by H.R. Courson
  • ‘Macbeth: Shakespeare Handbooks’ by John Russell Brown
  • ‘Springboard Shakespeare: Macbeth’ by Ben Crystal
  • ‘Macbeth’ by Harold Bloom

A Christmas Carol:

  • ‘Dickens’ by Peter Ackroyd
  • ‘London Labour and the London Poor’ by Henry Mayhew
  • ‘Charles Dickens’ by George Orwell
  • ‘A Christmas Carol’ by Harold Bloom
  • ‘Victoria’s Heyday’ by J.B. Priestley
  • ‘The Victorian City: Everyday Life in Dickens’ London’ by Judith Flanders
  • ‘The Blackest Streets’ by Sarah Wise

With thanks to:

  • James Theobald
  • David Didau
  • @teacherwithbike
  • Emma Tomaz
  • Jack Richardson
  • Martin Galway
  • Dawn Jones
  • Sarah Ashton
  • @shadylady222
  • Amy Forrester
  • Dan Clayton
  • Marcello Giovanelli
  • Jess Droflet
  • Henry Wiggins
  • Tom Boulter
  • Kerry Puleyn
  • Jenn Ludgate
  • @Gwenelope
  • Samra Arshad
  • Matt Pinkett
  • David Bunker
  • Joe Kirby
  • Chris Curtis
  • Jo Facer
  • Tilly Riches
  • Fran Nantongwe
  • @EnglishTeach10x
  • Mark Roberts
  • Jemma Mitchell
  • Martin Robinson
  • @DRArleneHH
  • Susan Elkin
  • Sean Delahoy
  • David Varley
  • @heymrshallahan
  • Louisa Enstone
  • Diane Leedham
  • Michael Muralee
  • Charles Parker
  • Eliza O’Driscoll
  • KES Library

Why not try doing too many things and not enough things, both at the same time?

Back in September 2015, Ofsted published a report entitled ‘Key Stage 3: the wasted years?’ It was produced following Sir Michael Wilshaw’s statement “that primary schools had continued to improve but the performance of secondary schools had stalled …[and]… one of the major contributory factors to this was that, too often, the transition from primary to secondary school was poorly handled.”

Ofsted made these nine recommendations to secondary school leaders:

  1. Make Key Stage 3 a higher priority in all aspects of school planning, monitoring and evaluation.
  2. Ensure that not only is the curriculum offer at Key Stage 3 broad and balanced, but that teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.
  3. Ensure that transition from Key Stage 2 to 3 focuses as much on pupils’ academic needs as it does on their pastoral needs.
  4. Create better cross-phase partnerships with primary schools to ensure that Key Stage 3 teachers build on pupils’ prior knowledge, understanding and skills.
  5. Make sure that systems and procedures for assessing and monitoring pupils’ progress in Key Stage 3 are robust.
  6. Focus on the needs of disadvantaged pupils in Key Stage 3, including the most able, in order to close the achievement gap as quickly as possible.
  7. Evaluate the quality and effectiveness of homework in Key Stage 3 to ensure that it helps pupils to make good progress.
  8. Guarantee that pupils have access to timely and high quality careers education, information, advice and guidance from Year 8 onwards.
  9. Have literacy and numeracy strategies that ensure that pupils build on their prior attainment in Key Stage 2 in these crucial areas.

With the possible exception of recommendation 8, all of these involve key staff in secondary schools having an understanding of what has gone on at Key Stage 2. This will clearly have different ramifications for different members of a secondary school’s community. For English teachers in particular it means having an understanding of their students’ experience at KS2, including the taught curriculum and the National Curriculum tests.

Having looked at the KS2 English curriculum in Part 1 of this series and explored the reported issues with this year’s test papers in Part 2, in this post I want to offer some questions for secondary senior leaders and English teachers to consider. First of all, though, I have some questions for the powers that be.

Are you sure you’re not trying to do too much still with the Key Stage 2 tests? Purposes and uses. 

The Bew Report of 2011 claimed of the assessment system prior to its publication that, “There seems to be widespread concern…there are too many purposes, which can often conflict with one another. 71% of respondents to the online call for evidence believe strongly that the current system does not achieve effectively what they perceive to be the most important purpose.”

Bew referenced two papers, both by Dr Paul Newton, who was Head of Assessment Research at the QCA. In ‘Clarifying the Purposes of Educational Assessment,’ Newton argues that there are three primary categories of purpose for nationally designed assessments:

  • Assessment to reach a standards referenced judgement. For example, an exam to award a grade, a level or a pass/fail.
  • Assessment to provide evidence for a decision. For example, an A-Level qualification which provides evidence that the student is ready to begin studying a related subject at undergraduate level.
  • Assessment to have a specific impact on the behaviour of individuals or groups. For example, a science GCSE which helps to enforce the KS4 National Curriculum for science.

Newton maintains that each of these three areas of purpose need to be considered carefully. “Where the three discrete meanings are not distinguished clearly, their distinct implications for assessment design may become obscured. In this situation, policy debate is likely to be unfocused and system design is likely to proceed ineffectively.”

In ‘Evaluating Assessment Systems,’ Newton distinguishes the purposes of assessment systems from their uses and identifies twenty-two categories of use for assessment on page 5 of the document.

He explains that an assessment’s reliability can deteriorate as more and more uses are added:

“…an end-of-key-stage test will be designed primarily to support an inference concerning a student’s ‘level of attainment at the time of testing’. Let’s call this the primary design inference. And let’s imagine, for the sake of illustration, that our assessment instrument – our key stage 2 science test – supports perfectly accurate design inferences. That is, a student who really is a level X on the day of the test will definitely be awarded a level X as an outcome of testing.

In fact, when the test result is actually used, the user is likely to draw a slightly (or even radically) different kind of inference, tailored to the specific context of use. Let’s call this a use-inference.

Consider, by way of example, some possible use inferences associated with the following result-based decisions/actions.

  1. A placement/segregation use. The inference made by a key stage 3 head of science – when allocating a student to a particular set on the basis of a key stage 2 result – may concern ‘level of attainment at the beginning of the autumn term’.
  2. A student monitoring use. The inference made by a key stage 3 science teacher – when setting a personal achievement target for a student on the basis of a key stage 2 result – may concern ‘level of attainment at the end of key stage 3’.
  3. A guidance use. The inference made by a personal tutor – when encouraging a student to take three single sciences at GCSE on the basis of a key stage 2 result – may concern ‘general aptitude for science’.
  4. A school choice use. The inference made by parents – when deciding which primary school to send their child to on the basis of its profile of aggregated results in English, maths and science – may concern ‘general quality of teaching’.
  5. A system monitoring use. The inference made by a politician – when judging the success of educational policy over a period of time on the basis of national trends in aggregated results in English, maths and science – may concern ‘overall quality of education’.

…when it comes to validation (establishing the accuracy of inferences from results for different purposes) the implication should be clear: accuracy needs to be established independently for each different use/inference.”

As far as I can see, the current Key Stage 2 tests are used, among many other things:

  • To gauge national school performance.
  • To measure individual school performance for accountability purposes.
  • To check individual pupil attainment at KS2.
  • To measure progress from KS1.
  • To establish progress expectations between KS2 and KS4.
  • To check if students are ‘secondary ready’ and therefore trigger the need for them to resit a similar test in Year 7.
  • To enforce the teaching of elements of the National Curriculum which it would be harder to enforce without the test due to ‘academy freedoms.’
  • To inform parents of individual students’ performance.
  • To enable potential parents to make informed decisions about school choice.

This is by no means an exhaustive list. In many cases, the Key Stage 2 data is arguably the most reliable source we have for these uses. However, I do wonder whether the system could be made more reliable and whether all these other uses are making the tests less reliable in terms of their primary use.

Are you sure you’re not trying to do too much with the Key Stage 2 tests? Assessing the writing. 

In this TES article, Michael Tidd outlines the issues primary teachers have found, either in cramming grammatical and punctuation elements into students’ writing or in cramming specific types of writing into the curriculum because those types are most likely to feature the elements required to be successful in the moderation process.

This has come about as a result of the use of a secure fit approach to assessment. In her post ‘”Best fit” is not the problem’, Daisy Christodoulou outlines the problems with both best fit and secure fit assessment. She proposes other ways forward in her conclusion, advising that:

  • If you want to assess a specific and precise concept and ensure that pupils have learned it to mastery, test that concept itself in the most specific and precise way possible and mark for mastery – expect pupils to get 90% or 100% of questions correct.
  • If you want to assess performance on more open, real world tasks where pupils have significant discretion in how they respond to the task, you cannot mark it in a ‘secure fit’ or ‘mastery’ way without risking serious distortions of both assessment accuracy and teaching quality. You have to mark it in a ‘best fit’ way. If the pupil has discretion in how they respond, so should the marker.
  • Prose descriptors will be inaccurate and distort teaching whether they are used in a best fit or secure fit way. To avoid these inaccuracies and distortions, use something like comparative judgment which allows for performance on open tasks to be assessed in a best fit way without prose descriptors.

We use a similar process to this, as outlined here. One wonders whether a key problem for KS2 teachers is that the National Curriculum assessment model is trying to do too much and, to the detriment of the students, it tells KS3 teachers too little.
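For readers who haven’t met comparative judgement before, here is a minimal toy sketch of the idea (my own illustration, not Christodoulou’s or any particular tool’s implementation): judges repeatedly choose the better of two pieces of work, and the judgements are turned into a rank order. Real systems fit a statistical model such as Bradley-Terry; a simple win rate is enough to show the principle.

```python
from collections import defaultdict

# Hypothetical judgements: each tuple is (script shown on the left, script shown on
# the right, the one the judge preferred). In practice there would be many judges
# and many more pairings.
judgements = [
    ("script_A", "script_B", "script_A"),
    ("script_A", "script_C", "script_C"),
    ("script_B", "script_C", "script_C"),
    ("script_B", "script_A", "script_A"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for left, right, winner in judgements:
    appearances[left] += 1
    appearances[right] += 1
    wins[winner] += 1

# Rank scripts by the proportion of comparisons they won (best first).
ranking = sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)
print(ranking)  # ['script_C', 'script_A', 'script_B'] for the toy data above
```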

Are you sure you’re trying to do enough with the Key Stage 2 tests? What’s missing from KS4 at KS2?

Ok, so it sounds contradictory after my first two questions, but there are key elements of the KS4 curriculum missing in the KS2 tests which make them less reliable in terms of their use in estimating likely performance at 16.

Firstly, in the 2016 Key Stage 2 reading test, there were only two opportunities for pupils to write beyond one line of text as an answer. I understand that this may make the questions more reliable in themselves. However, at Key Stage 4, reading assessment requires students to write at far greater length. I’m certainly not arguing for children of ten to have exams lasting two hours, which feature questions to which they have to write responses which fill two and a half sides of A4. I’m merely questioning whether a third of a side of A4 enables the brightest ten year olds to demonstrate their full potential in reading. There are possible implications here for secondary teachers in building, over time, the stamina of students in producing extended responses to reading.

Secondly, again in the reading test, all of the texts are unseen. This is fine if we’re just using the test as a tool for estimating performance at KS4 in English Language, where the three texts examined are similarly unseen. However, the English Literature GCSE now has parity with English Language in terms of school performance measures. Wouldn’t it make sense, then, to include at least one extract from a full text that students had studied in advance? I’m sure there would be great controversy over which text(s) should be taught, but I think the benefits for children would outweigh the arguments between teachers. One key aspect of literary study is that of making links between texts and their cultural, social and historical context. This used to be an Assessment Focus in the previous framework – though it only ever featured in a limited manner in the tests and the mark scheme. Reinstating it as part of the content domain could serve to make the link to literary study at Key Stage 3 more effective as well as slightly raise the status of the study of history, culture and society at Key Stage 2.

Are you sure that resits are a good idea?

I’m not going to focus on the potential emotional impact of the resit process which has been written about here. I think we can deal with this additional, potentially high stakes test and help students to deal with it too, in the same way Chris Curtis argues we can support students through the stresses of the grammar test and just as we help students through GCSEs. Instead, I want to focus on curriculum and teaching as these will likely have the biggest impact on students in the long term (including on their emotions).

Imagine training two boys to do five lots of two minutes of keepy-uppies for a competition. One narrowly misses out on qualifying for the semi-finals of the competition and the other goes on to win.

Their coach carries on training the quarter-finalist with keepy-uppies, with a tiny bit of full football training mixed in, but moves the other on to playing football for a team, training him in set pieces, tackling and passing, with 30, then 60, then 90 minute practice matches every Saturday. The coach knows that both boys have the potential to be playing in the Premier League in five years’ time. Unfortunately for the first boy, keepy-uppies might be useful in terms of ball control, but they don’t prepare you properly for all aspects of the Premier League.

Likewise, the National Curriculum reading and SPAG tests are potentially useful gauges at 11 of certain isolated skills for English. However, the questions aren’t anywhere near as open as many schools’ Key Stage 3 tasks, which are designed to prepare students for the GCSE questions they’ll face in Year 11. In addition, as I’ve mentioned above, Key Stage 2 doesn’t focus on English Literature, which includes social, historical and cultural context. The students who are made to resit will need to improve their ability to do the Key Stage 2 style questions, whilst also keeping up with the rest of their year in terms of these aspects of the Key Stage 3 curriculum.

All of this means we need to think strategically in order to limit the extent to which, whilst closing one set of gaps, we might open up a whole host of others, and this brings me on to my questions for secondary leaders and English teachers.

How can you ensure you are clear as to what your students should know and should be able to do based on their KS2 experience and outcomes?

  • Do the reading – check out the National Curriculum for KS2, the frameworks for the Reading and the Spelling, Punctuation and Grammar tests, and the framework for writing assessment, so you fully understand the information you can gather from the children’s primary schools.
  • Find an effective way of communicating with the staff in your partner primary schools about the children and about their KS2 curriculum.
  • Get hold of the children’s work – though make sure you know under what conditions the work was produced. In my view, you need to know what they can do independently – though other views do exist.
  • Analyse the data. I’ve created this Question Level Analysis spreadsheet for the 2016 reading paper so that we can see which types of question students were most and least successful at. I’ll write more about it, if it’s worthwhile, once we’ve used it with this year’s cohort.
  • Remember though, that there is a gap between the end of primary and the start of secondary schooling, so…

How might you ensure you are clear as to what your students do know and are ignorant of, what they can and can’t do and what they’ve forgotten when they arrive with you in September?

  • In order to be able to make more valid inferences about your Year 7 students’ knowledge and abilities in September, you may want to pre-test. This should give you information which you will use to inform your planning, rather than being testing for testing’s sake.

How could you build on what’s gone before?

  • Build up students’ writing stamina, including extended responses to reading. Look at crafting the small details as well as structuring a whole response like this or like this.
  • Explain and model the writing process and the thinking which goes on behind it.
  • Continue to develop the grammatical knowledge which the students already have, increasingly expecting its application in analysis and consideration in crafting writing.
  • Use challenging texts – these children can read unseen texts with surprisingly sophisticated sentence structures and vocabulary.
  • Carry on building their general vocabulary and developing their use of technical terminology.
  • Keep them practising what they’ve done previously so they maintain or develop fluency.

I’ve only included a handful of ideas here – the list could clearly go on and on, but I realise you have other things to do.

How will you deal with the resits – if they happen?

Let’s consider this question sensibly and carefully, as quite a few people have already suggested that the resits will destroy students’ initial experiences of secondary school.

First of all, let’s return to the content domain defined in the framework for reading:

There was actually only one of these references (2e) which didn’t map straightforwardly into the GCSE Assessment Objectives when I produced this for the first post in this series:

This would suggest (unsurprisingly) that all of the KS2 domain is still relevant at KS3 and 4.

What about the grammar then? There must be a problem with that. Remember pages 8-12 of the Grammar, Punctuation and Spelling Framework which I mentioned in Part One? If not, or if you never looked at them in the first place, take a look at them. Now imagine that your students in Year 7 are so familiar with those terms that you could start teaching them to drop the terms into their analysis, or that you could use them when discussing slips in their writing. That might be nice, mightn’t it? There are some terms you might feel are less useful, some definitions you’d rather were changed, some terms you call something else, but having children arrive at secondary school knowing this stuff – that could be a game changer, couldn’t it?

So the reality is that the majority of this content will be relevant to our teaching at KS3 if we are following Ofsted’s sensible advice to ensure “teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.”

Well, if it’s not the content that will limit our students, then surely it will be the question types – drilling the students who are being forced to resit in responding to these question types will almost certainly be detrimental, won’t it? So let’s look at those again.

Question Types

 

They’re a mixture of multiple choice, ordering, matching and labelling, with short and long responses – hardly the question types of the devil. Though I’d want to shift the balance towards the extended responses, if students struggled with the basic questions, which were mostly about finding information and vocabulary, then this is where they need more practice and this is how we need to amend the curriculum they experience in Year 7. We keep our challenging texts, we keep our focus on grammar and extended, independent writing, we keep our drive to improve responses to reading and all of the other things I’ve mentioned, but we build in more work on knowledge of vocabulary, as this is where the biggest challenge was in the reading test and, fortuitously, this will benefit these students in the longer term anyway.

When I started writing this, I didn’t expect to be in favour of the resits. In the proposed form, I’m still not, even though I think I’m now beginning to develop a clearer plan of how to deal with them.

If we are to have ‘retesting’, a better model, in my view, would be to test later, either towards the end of Year 7 or the beginning of Year 8, and to test more or all of the cohort. I’d also propose a literature element to the tests and a much stronger focus on decontextualised vocabulary testing.

These changes would act as a much firmer lever, I think, to achieve what Ofsted recommended in their Key Stage 3 report.

Why not try…reading the questions for once?

This is the second in a sequence of three posts about the Key Stage 2 National Curriculum and associated tests for English. In the first post, I explored the controversy surrounding the curriculum as a whole. This time, I’m looking in more detail at the tests themselves, in particular the 2016 papers. I want to see what all the fuss was about so as to unpick what lessons there might be, if any, for secondary English teachers. If you happen to be reading this as a primary teacher, you could probably skip the next bit as it’s an outline of the tests which you’re likely to be familiar with.

Why not literally try pulling the papers apart? No, I do mean it literally. Not metaphorically. Literally. Take the staples out, separate the pages, scatter them over the floor of the room, then dance around singing a ballad about the war between the family of the lion and the family of the bear.

There are two tests which are taken at the end of Key Stage 2 – one of which is split into two papers:

  • Reading
  • Grammar, punctuation and spelling Paper 1: Grammar and Punctuation
  • Grammar, punctuation and spelling Paper 2: Spelling

Reading

The reading test is made up of approximately 30-40 questions each year. Each question is, as of 2016, linked to one of the aspects of the “Content Domain” drawn from the comprehension section of the National Curriculum and listed in the reading test specification. If you want to find out more about content domains and content samples, this article from Daisy Christodoulou is certainly worth a read.

Content Domain

Though the connections aren’t perfect, a useful way of getting your head round this as a secondary teacher is to consider how each aspect of the content domain ties in with the Assessment Objectives for GCSE English. The table below attempts to do just that.

AO and Content Domain Comparison

What was clear from creating this grid was that, although there are undoubtedly clear connections between the skills required at Key Stage 2 and Key Stage 4, there is now (and rightly so, I think) a wider gap between the two levels than there was when we were all using the same Assessment Focuses.

If you’re eagle eyed, you’ll notice that the percentages for reading for AQA GCSE English only add up to 50%. The other 50% comes from the completion of two writing questions – one either a narrative or descriptive piece, the other a personal opinion piece. However, as this blog is only about the tests, I’m leaving those for now. I may look at them, alongside moderated, teacher assessed writing, in a separate post in the future. What is worth noting here, though, is that it is the reading mark from Year 6 that is used by the government to calculate the secondary Progress 8 estimate. The writing mark is not used in this calculation. As a result, students’ outcomes at Key Stage 4 in reading and writing are now predicted based on just their reading response at Key Stage 2.

In addition to the information about the content domain, the test framework also establishes a range of ways in which the complexity of the questions will be varied. I think this is a really useful list for the design of any English comprehension test. It is important here though in terms of gauging whether the test is more or less challenging year on year. The list obviously includes the difficulty level of the text. However, the level of challenge will also be varied through:

The location of the information:

  • The number of pieces of information the question asks students to find and how close they are.
  • Whether the question provides students with the location of the information. For example, Paragraph 2.
  • Whether, in a multiple choice question, there are a number of close answers which act as distractors.

The complexity of the information in the relevant part of the text:

  • Whether the part of text required to answer the question is lexico-grammatically complex.
  • Whether the information required to answer the question is abstract or concrete.
  • How familiar the information is to the students.

The level of skill required to answer the question:

  • Whether the task requires students to retrieve information directly from the text, to know the explicit meaning of single word or multiple words or to infer from a single or multiple pieces of information.

The complexity of the response strategy:

  • Whether the task requires a multiple choice answer, a single word or phrase from the text or a longer response from the student.

The complexity of the language in the question and the language required in the answer:

  • Whether there are challenging words used in the question.
  • Whether students have to make use of technical terms in their answers that aren’t in the question or the text.

We’ll come back to these later when we look at the concerns around this year’s test.

Grammar, Punctuation and Spelling:

The Grammar and Punctuation Paper is worth 50 marks (there are approximately 50 questions) and it takes 45 minutes.

The Spelling Paper, meanwhile, is worth 20 marks (there are 20 questions) and takes approximately 15 minutes, depending on how long the administrator takes to read the questions.

This table provides a sense of the weighting for each of these elements in the overall test.

SPAG Weighting

As with the reading test, there is a Specification for the Grammar, Punctuation and Spelling Test. Again, this establishes the content domain from which the questions will be developed.

If you’re a secondary teacher, it would certainly be worth you reading this booklet. In particular, Pages 8-12 for the grammar domain and Pages 12-13 for spelling. These are important pages as they tell you what students ‘should’ have learnt in these three areas before the end of KS2.

Remember, though, if you find out from the data you receive from their primary school that a pupil did well in these tests, it doesn’t mean that they recalled everything in the domain, merely that they performed well on the questions covering a supposedly representative sample of the domain. Neither does it mean that they will have retained all of the knowledge over the summer break. If you don’t constantly reinforce this knowledge, through further practice in similar isolated tests or through application during analysis or extended writing, they will most likely forget it in the future or not make full use of it.

The table on Page 14 helpfully reminds us that students haven’t been assessed on their use of paragraphs in this test – this is instead done through the moderated teacher assessment of writing. This also serves to emphasise the point that students have not used the grammar or spelling in the test in context.

This is an isolated assessment of their knowledge, rather than the application of that knowledge.

Finally, there are some useful though controversial definitions, indicating what the students ‘should’ have been taught about grammar. A number of these have proved contentious because, as we know, there isn’t always agreement over elements of grammar. I’m not going to go over this ground again by unpicking the 2016 grammar paper, as I think I covered it in the last post. However, the reading paper this year caused a bit of a stir for a number of reasons, so I want to look at that in more detail. 

Why not try asking then answering your own rhetorical question?

So, what was all the fuss about this year? Well firstly, on 10th May, the TES published this article, citing a teacher who claimed the reading “paper…would have had no relevance to inner-city children or ones with no or little life skills.” If anyone has any recommendations of texts for pupils with no or few life skills which would also be suitable for a national reading test, please do leave a comment.

The texts chosen this year do appear challenging. There’s plenty of demanding vocabulary. Haze, weathered and monument appear in the first text. Promptly, sedately, pranced and skittishly appear in the second. In the third, we have oasis, parched, receding and rehabilitate. There’s also some complexity added to the vocabulary by the sentence structures. In text two, we have: “A streak of grey cut across her vision, accompanied by a furious, nasal squeal: ‘Mmweeeh!'” and “There she dangled while Jemmy pranced skittishly and the warthog, intent on defending her young, let out enraged squeals from below. Five baby warthogs milled around in bewilderment, spindly tails pointing heavenwards.” Some teachers have carried out reading accessibility checks on the texts and claim the texts are pitched at a 13-15 year old age range.
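For anyone curious about how those accessibility checks tend to work, here is a rough sketch of one common measure, the Flesch-Kincaid grade level (a US school grade, roughly age minus five). The syllable counter is a crude heuristic and the sentence splitting is naive, so treat anything it produces as indicative rather than definitive.

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count vowel groups, ignoring a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    if word.lower().endswith("e") and len(groups) > 1:
        return len(groups) - 1
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

sample = ("There she dangled while Jemmy pranced skittishly and the warthog, "
          "intent on defending her young, let out enraged squeals from below.")
print(round(flesch_kincaid_grade(sample), 1))
```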

The problem here, though, is that, as we looked at earlier, the difficulty level of the test isn’t just set through the complexity of the texts as a whole. I’m currently working on a Question Level Analysis spreadsheet for the paper, which I hope to share in the final post in this series. In the process of producing this, it’s become clear that, although the first two texts are more challenging in terms of raw vocabulary, the questions for these texts often direct children to simpler and shorter sections or even individual words. The children could pick up marks in the test here without understanding every single word in the texts as a whole. I would imagine, though, that not all children understood this, hence the reported tears. As you move through the paper, the third text appears simpler in terms of vocabulary. Here, though, the questions are based on longer sections of the text and two require longer answers. 
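For anyone who would rather script this than build a spreadsheet, here is a minimal sketch of the same kind of question level analysis. The question labels and marks below are invented for illustration; the idea is simply to calculate a facility value (the proportion of available marks gained) for each question, so the hardest question types stand out.

```python
import pandas as pd

# Hypothetical question-level data: one row per pupil, one column per question,
# values are the marks gained. max_marks holds the marks available per question.
results = pd.DataFrame({
    "Q1_retrieval": [1, 1, 0, 1],
    "Q2_vocabulary": [0, 1, 0, 0],
    "Q3_inference_extended": [1, 2, 0, 1],
})
max_marks = pd.Series({"Q1_retrieval": 1, "Q2_vocabulary": 1, "Q3_inference_extended": 3})

# Facility = mean mark gained as a proportion of the marks available, per question.
facility = (results.mean() / max_marks).sort_values()
print(facility.round(2))  # lowest facility first = the questions the cohort found hardest
```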

I don’t think the writers of the tests did this perfectly, but I don’t think they did a terrible job. I’ll look at some possible changes I’d like to see in the next post. 

There’s some truth, albeit a superficial one, to the claim that the contexts of at least two of the texts are fairly upper middle class: a rowing trip to an ancestral monument and a safari adventure on the back of a giraffe. Who knew you could ride a giraffe? Perhaps my life skills are limited.

Underneath the French polish veneer, though, these are essentially adventure stories: one about discovering family identity, the other about ending up in a dangerous situation after ignoring the rules. These are fairly familiar themes to children, even if the specific contexts may be alien to some of them. I’d worry if we were to find ourselves suggesting that “inner city” children or indeed working class children should be tested using only texts which describe their lived experiences. 

In Reading Reconsidered, Lemov et al point to research carried out by Annie Murphy Paul in which she argues that “The brain does not make much of a distinction between reading about an experience and encountering it in real life.” Reading helps students empathise. This is no surprise, but it is worth a reminder as, at secondary and primary level, it’s our responsibility as teachers of language and literature to expose students to a range of experiences through the texts we direct and encourage them to read. 

Why not try finding out about the problem?

Why not try waking up screaming after a recurrent nightmare in which you ride a white camel whilst being pursued by bees who are suffering with CCD?

I’ve spent a good part of today exploring the 2016 Key Stage 2 National Curriculum Tests for Reading, Spelling and Grammar. 

This was in part because I was intrigued by the unrest about the number of changes to the curriculum, assessment and testing model this year, and in particular I wanted to see what all the fuss was about in terms of the level of challenge in the reading paper. Mostly, though, as a secondary English teacher and Vice Principal of an all-through academy, my motive was to get my head around what we might glean from the data we receive about our 2016 Year 7 cohort when the results arrive, so that we can address possible gaps in their knowledge, limit a dip at the start of the secondary phase, and prepare for the resits which some of them are likely to end up taking.

Wouldn’t it be good if we could drag something potentially positive out of an assessment which is viewed so negatively by some and distrusted by so many – something useful for the children we (both primary and secondary teachers) educate?

This post will be in three parts. In the first, I’ll focus on the current issues people take with the National Curriculum for English at Key Stage 2 and the three associated tests. The second post will look specifically at this year’s tests to see what all the fuss was about. In the third I’ll tentatively suggest some ways forward, with a particular focus on what secondary English teachers might do with the information from the tests, hopefully to the benefit of their students.  

Why not attend a ResearchED event dressed in a tweed jacket with leather elbow patches and chalk dust marks, convince everyone you’re a traditionalist by offering a reading of David Didau’s as-yet-unpublished, house-sized edu-bible, but secretly start a breakout session on guerrilla Brain Gym warfare?

In the limbo period between children taking the tests and the public release date of 20th May, I thought it’d be worthwhile finding out more about the controversy surrounding them. 

So, first, a bit of history…

When the National Curriculum was introduced to UK schools in 1988, it attempted to establish the knowledge and skills which children should learn between the start of their schooling and the age of 16. In order to do this, it formally separated education into Key Stages, based on the structures already in place in the schooling system:

  • Key Stage 1 – Infant school 
  • Key Stage 2 – Junior school
  • Key Stage 3 – Lower secondary
  • Key Stage 4 – Upper secondary

Kenneth Baker’s original consultation document proposed the following purposes for the curriculum:

  1. Setting standards for pupil attainment
  2. Supporting school accountability
  3. Improving continuity and coherence within the curriculum
  4. Aiding public understanding of the work of schools

The curriculum has been amended a number of times, with reasons for these changes being put variously down to streamlining, coherence, relevance and rigour. 

The current National Curriculum, insofar as it is one, has very similar aims to those outlined in Baker’s initial consultation – though the means to those ends are now quite different. It seems unlikely, for example, that the following would have appeared in the original National Curriculum:

“The national curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the national curriculum specifications. The national curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons to promote the development of pupils’ knowledge, understanding and skills as part of the wider school curriculum.”

The Curriculum and its associated tests have always been contentious, as outlined by Robert Peal in his polemic Progressively Worse. At different points in its history, the designers and redesigners of the curriculum have been accused of contributing to a “dumbing down” of education with the help of Mr Men or being overly elitist as a result of focusing “too much” on dead white males. At the moment (though some would argue differently) the complaints mainly swing towards the latter of these two. Let’s categorise some of the current debate before we look at the tests themselves. 

Why not try sitting on the fence?

Content

The biggest current issue in terms of the content of both the KS2 English curriculum and the tests relates to grammar. 

This article, from 2013 in The Guardian, neatly summarizes the points Michael Rosen has to make against the National Curriculum’s treatment of grammar and the current testing methodology. Here, he states that he doesn’t disagree with the teaching of grammar in itself, but rather the manner of teaching and testing which the curriculum prescribes. 

On the flip side of the debate are Daisy Christodoulou and David Didau, who view the teaching of grammar and linguistic terminology at primary level as a gateway to success at secondary school and into adulthood.

Interestingly, there seems to be very little, if any, similar argument about the isolated teaching of spelling, and I doubt there would be if the government introduced an isolated vocabulary test. I know this is partly because there is far more consistent agreement about the spellings and meanings of words, as a result of something called a dictionary, but I can’t help feeling that teaching novices a set of rules and conventions they can later be taught to bend and break would help them in the long run.

Validity and Reliability 

Some commentators argue that the tests are neither valid (they don’t assess a decent sample of the domain of each subject) nor reliable (they don’t assess consistently or precisely enough). Page 31 of the Bew Report deals with these and related issues in more detail.

Another argument against the National Curriculum tests is that they are unreliable because of issues with the accuracy of marking and faults in administration. The TES highlights these issues here. 

A number of anti-testers view teacher assessment as the answer to these problems. The NUT outline their case for a shift towards this kind of model in this document.

Teacher assessment can have its own pitfalls though, as Daisy Christodoulou identifies in this blog.  

What’s particularly concerning is her point that teacher assessment appears to be biased against poor and disadvantaged students.

High stakes – Under Pressure

This aspect of the debate can be divided into two very closely related issues:

  1. The tests put pressure on schools and teachers to act in perverse ways. 
  2. The tests put undue pressure on children. 

An effective summary of the arguments relating to the former can be found here, from Stephen Tierney.

In terms of the latter, just read this report of children’s reactions to the tests on the day. 

Meanwhile, Martin Robinson offers some balance to this part of the debate in his piece about not panicking.


Who uses the data anyway?

A significant issue with the National Curriculum tests and KS2 teacher assessments is that they create a divide between primary and secondary professionals at exactly the point when they need to be working together most for the benefit of children. Many primary teachers believe the data is not only unused by their secondary counterparts but actively replaced by information gleaned from other tests. Secondary teachers, meanwhile, feel that the data is unreliable due to inflation resulting from the high-stakes nature of the results. Both sides of this argument are explored really well here.

The writer, Michael Tidd, who is a middle school teacher, finishes off by saying, “If I see an anomalous result, or a child who appears not to be working at the expected level, then I would think it only normal to speak to the previous class teacher. If only the same happened more frequently between schools.”

Perhaps a good starting point in this process would be for secondary teachers to have a better understanding of the nature of the test papers, and that will be the focus of my next post.