
Thoughts, feelings and actions – Analyse This 1


Why do you read? Why did you read the book you were reading before you turned the lights out last night? Why did you read that poem at your grandfather’s funeral? Why did you bother to read the instructions for the flat-pack cabin bed you struggled to put up at the weekend? 

The most beautiful, most ugly, most awe-inspiring, earth-shattering, heartbreaking, life-changing, even the most mind-numbingly functional texts we read impact on us because they make us think, feel and/or do something. 

When we teach literature, we’re essentially teaching students factual knowledge and procedures which may change both the way they react to the text(s) and the ways in which they subsequently express those reactions.

I watched, with interest, as Fiona Ritson (someone who you should follow @fkritson if you’re an English teacher on Twitter because she’s so helpful and so generous with her resources) collated a list of approaches to analytical writing. 

Here it is:


What was particularly interesting was that the overwhelming majority of responses to Fiona’s request related to procedures post-reading – the analytical writing element. I want to explore this aspect of literary analysis as it is really important. It’s where students win the game on match day. However, I do think there’s a risk we can spend too much time playing out set pieces for the end game rather than focusing on whether our students are match fit. Without working on conditioning students’ knowledge, none of the PEEs, PEAs, PEALs, PETALs, PEEDs or PEEZLs will help. It’s as easy peasy as that.

Over the next few posts I want to explore:

  • A range of barriers faced by students when they’re reading, and therefore when they’re writing analytically about the thoughts, feelings and actions triggered by the texts they’ve read. 
  • Some possible solutions to these barriers. 
  • The key features of analytical writing.
  • The ways in which exam structures both aid and limit students in developing the procedural knowledge relating to analytical writing. 

English Subject Knowledge Reading

In ‘What Makes Great Teaching?’, Coe et al list six components of great teaching. The first of these, for which they say there is “strong evidence of impact on student outcomes”, is what they call “pedagogical subject knowledge.” They argue that, “The most effective teachers have deep knowledge of the subjects they teach, and when teachers’ knowledge falls below a certain level it is a significant impediment to students’ learning. As well as a strong understanding of the material being taught, teachers must also understand the ways students think about the content, be able to evaluate the thinking behind students’ own methods, and identify students’ common misconceptions.”

Over the last couple of days, I’ve been collating the following list of texts which English teachers have recommended as being useful for developing different areas of subject knowledge. At some point in the future, I intend to write about the other part of the claim above, relating to unpicking misunderstandings and analysing students’ thinking in English, but that’s a whole other job. If you have a further suggestion for the list below, please contact me on Twitter @nsmwells.

There are many people I’d like to thank for their help with the reading list and I’ve named them at the end of this post. The main person I’d like to thank though is our Sixth Form Study Supervisor who is intending to train as a teacher in 2017. If he hadn’t asked me for a list of books to read, then I wouldn’t have asked for everyone’s help to pull this together.

Rhetoric:

  • ‘The Elements of Eloquence: How to Turn the Perfect English Phrase’ by Mark Forsyth.
  • ‘You Talkin’ To Me? Rhetoric from Aristotle to Obama’ by Sam Leith.
  • ‘Trivium 21st C’ by Martin Robinson
  • ‘A Matter of Style’ by Matthew Clark

Grammar and Spelling

  • ‘Gwynne’s Grammar’ by N.M. Gwynne.
  • ‘Practical English Usage’ by Michael Swan
  • ‘It was the best of sentences, it was the worst of sentences’ by June Casagrande
  • ‘The Teacher’s Guide to Grammar’ by Deborah Cameron
  • ‘Rediscover Grammar’ by David Crystal
  • ‘Cambridge Grammar of English’ by Ron Carter and Michael McCarthy
  • ‘Discover Grammar’ by David Crystal
  • ‘How Language Works’ by David Crystal
  • ‘Language: The Basics’ by R.L. Trask
  • ‘English Grammar for Today: A New Introduction’ by Geoffrey Leech, Margaret Deuchar and Robert Hoogenraad
  • ‘Spell it Out’ by David Crystal

Poetry:

  • ‘Poetics’ by Aristotle
  • ‘How to Be Well Versed in Poetry’ by E.O. Parrott
  • ‘The Poetry Toolkit’ by Rhian Williams
  • ‘The Ode Less Travelled’ by Stephen Fry
  • ‘Poetry in the Making’ by Ted Hughes
  • ‘On Poetry’ by Glyn Maxwell
  • ‘A Linguistic Guide to English Poetry’ by Geoffrey Leech
  • ‘The Art of Poetry’ by Neil Bowen
  • ‘Does it have to Rhyme?’ by Sandy Brownjohn
  • ‘All the Fun’s in How You Say a Thing: An Explanation of Meter and Versification’ by Timothy Steele
  • ‘Articulate Energy’ by Donald Davie
  • ‘The Secret Life of Poems’ by Tom Paulin

Drama:

  • ‘The Empty Space’ by Peter Brook
  • ‘Modern Drama in Theory and Practice’ by JL Styan
  • ‘An Introduction to Greek Theatre’ by P.D. Arnott
  • ‘Greek Theatre Performance’ by David Wiles
  • ‘The Time-Traveller’s Guide to British Theatre’ by Aleks Sierz & Lia Ghilardi
  • ‘How Plays Work’ by David Edgar

The Novel:

  • ‘The Art of Fiction’ by David Lodge
  • ‘How Fiction Works’ by James Wood
  • ‘How Novels Work’ by John Mullan
  • ‘How to Study a Novel’ by John Peck
  • ‘Reading Like a Writer’ by Francine Prose
  • ‘Faulks on Fiction’ by Sebastian Faulks

Shakespeare

  • ‘Shakespeare’s Restless World: An Unexpected History in Twenty Objects’ by Neil MacGregor.
  • ‘Shakespeare’s Words: A Glossary and Language Companion’ by David and Ben Crystal
  • ‘1599’ and ‘1606’ by James Shapiro
  • ‘Will in the World’ by Stephen Greenblatt
  • ‘Shakespeare the Basics’ by Sean McEvoy
  • ‘Teaching Shakespeare’ by Rex Gibson
  • ‘Shakespeare on Toast’ by Ben Crystal
  • ‘Soul of the Age’ by Jonathan Bate
  • ‘The Genius of Shakespeare’ by Jonathan Bate
  • ‘Shakespeare’s Wife’ by Germaine Greer
  • ‘Shakespeare’s Language’ by Frank Kermode
  • ‘The Wheel of Fire’ by G. Wilson Knight
  • ‘Shakespearean Tragedy’ by A.C. Bradley
  • ‘Shakespeare: A Biography’ by Peter Ackroyd
  • ‘The Lodger: Shakespeare on Silver Street’ by Charles Nicholl
  • ‘In Search of Shakespeare’ by Michael Wood
  • ‘William Shakespeare: His Life and Work’ by Anthony Holden

Linguistics and Language Debates:

  • ‘The Language Wars’ by Henry Hitchings
  • ‘For Who The Bell Tolls’ by David Marsh
  • ‘English for the Natives’ by Harry Ritchie
  • ‘Accidence Will Happen: The Non-Pedantic Guide to English Usage’ by Oliver Kamm
  • ‘Doing English Language’ by Angela Goddard
  • ‘Knowing About Language: Linguistics and the Secondary English Classroom’ by Marcello Giovanelli & Dan Clayton
  • ‘Ways with Words’ by Shirley Brice Heath

History of Language:

  • ‘Mother Tongue’ by Bill Bryson
  • ‘History of English in 100 Words’ by David Crystal
  • ‘Adventure of English’ by Melvyn Bragg

Literary History and Glossaries:

  • ‘A Little History of Literature’ by John Sutherland.
  • ‘The Seven Basic Plots’ by Christopher Booker
  • ‘The Western Canon: The Books and Schools of The Ages’ by Harold Bloom
  • ‘Literature, Criticism and Style’ by Stephen Croft
  • ‘Dictionary of Literary Terms and Literary Theory’ (Penguin Reference)
  • ‘Practical Criticism’ by John Peck
  • Routledge’s New Critical Idiom Series
  • ‘Beginning Theory’ by Peter Barry
  • ‘How To Read Literature’ by Terry Eagleton

Essay writing:

  • ‘The Art of Writing English Literature Essays’ by Neil Bowen

Creative Writing:

  • ‘Negotiating with the Dead’ by Margaret Atwood
  • ‘Gotham Writers Workshop Writing Fiction’ by The Gotham Writers Workshop
  • ‘Short Story: From First Draft to Final Product’ by Michael Milton
  • ‘Short Circuit: A Guide to the Art of the Short Story’ ed Vanessa Gebbie

Macbeth:

  • ‘Macbeth’ the Arden edition
  • ‘Sweet Violence: The Idea of the Tragic’ by Terry Eagleton
  • ‘1599 A Year in the Life of William Shakespeare’ by James Shapiro
  • ‘Shakespeare and Co’ by Stanley Wells
  • ‘The Tragedy of Macbeth’ by Nicholas Brooke
  • ‘William Shakespeare’s Macbeth’ by Harold Bloom
  • ‘Macbeth (New Casebooks)’ by Alan Sinfield
  • ‘Shakespeare: “Macbeth” (Casebook)’ by John Wain
  • ‘Macbeth: A Guide to the Play’ by H.R. Coursen
  • ‘Macbeth: Shakespeare Handbooks’ by John Russell Brown
  • ‘Springboard Shakespeare: Macbeth’ by Ben Crystal
  • ‘Macbeth’ by Harold Bloom

A Christmas Carol:

  • ‘Dickens’ by Peter Ackroyd
  • ‘London Labour and the London Poor’ by Henry Mayhew
  • ‘Charles Dickens’ by George Orwell
  • ‘A Christmas Carol’ by Harold Bloom
  • ‘Victoria’s Heyday’ by J.B. Priestley
  • ‘The Victorian City: Everyday Life in Dickens’ London’ by Judith Flanders
  • ‘The Blackest Streets’ by Sarah Wise

With thanks to:

  • James Theobald
  • David Didau
  • @teacherwithbike
  • Emma Tomaz
  • Jack Richardson
  • Martin Galway
  • Dawn Jones
  • Sarah Ashton
  • @shadylady222
  • Amy Forrester
  • Dan Clayton
  • Marcello Giovanelli
  • Jess Droflet
  • Henry Wiggins
  • Tom Boulter
  • Kerry Puleyn
  • Jenn Ludgate
  • @Gwenelope
  • Samra Arshad
  • Matt Pinkett
  • David Bunker
  • Joe Kirby
  • Chris Curtis
  • Jo Facer
  • Tilly Riches
  • Fran Nantongwe
  • @EnglishTeach10x
  • Mark Roberts
  • Jemma Mitchell
  • Martin Robinson
  • @DRArleneHH
  • Susan Elkin
  • Sean Delahoy
  • David Varley
  • @heymrshallahan
  • Louisa Enstone
  • Diane Leedham
  • Michael Muralee
  • Charles Parker
  • Eliza O’Driscoll
  • KES Library

Why not try doing too many things and not enough things, both at the same time?

Back in September 2015, Ofsted published a report entitled ‘Key Stage 3: the wasted years?’ It was produced following Sir Michael Wilshaw’s statement “that primary schools had continued to improve but the performance of secondary schools had stalled …[and]… one of the major contributory factors to this was that, too often, the transition from primary to secondary school was poorly handled.”

Ofsted made these nine recommendations to secondary school leaders:

  1. Make Key Stage 3 a higher priority in all aspects of school planning, monitoring and evaluation.
  2. Ensure that not only is the curriculum offer at Key Stage 3 broad and balanced, but that teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.
  3. Ensure that transition from Key Stage 2 to 3 focuses as much on pupils’ academic needs as it does on their pastoral needs.
  4. Create better cross-phase partnerships with primary schools to ensure that Key Stage 3 teachers build on pupils’ prior knowledge, understanding and skills.
  5. Make sure that systems and procedures for assessing and monitoring pupils’ progress in Key Stage 3 are robust.
  6. Focus on the needs of disadvantaged pupils in Key Stage 3, including the most able, in order to close the achievement gap as quickly as possible.
  7. Evaluate the quality and effectiveness of homework in Key Stage 3 to ensure that it helps pupils to make good progress.
  8. Guarantee that pupils have access to timely and high quality careers education, information, advice and guidance from Year 8 onwards.
  9. Have literacy and numeracy strategies that ensure that pupils build on their prior attainment in Key Stage 2 in these crucial areas.

With the possible exception of recommendation 8, all of these involve key staff in secondary schools having an understanding of what has gone on at Key Stage 2. This will clearly have different ramifications for different members of a secondary school’s community. For English teachers in particular it means having an understanding of their students’ experience at KS2, including the taught curriculum and the National Curriculum tests.

Having looked at the KS2 English curriculum in Part 1 of this series and explored the reported issues with this year’s test papers in Part 2, I want to look, in this post, at offering some questions to consider for secondary Senior Leaders and English teachers. First of all, though, I have some questions for the powers that be.

Are you sure you’re not still trying to do too much with the Key Stage 2 tests? Purposes and uses.

The Bew Report of 2011 claimed of the assessment system prior to its publication that, “There seems to be widespread concern…there are too many purposes, which can often conflict with one another. 71% of respondents to the online call for evidence believe strongly that the current system does not achieve effectively what they perceive to be the most important purpose.”

Bew referenced two papers, both by Dr Paul Newton, who was Head of Assessment Research at the QCA. In ‘Clarifying the Purposes of Educational Assessment,’ Newton argues that there are three primary categories of purpose for nationally designed assessments:

  • Assessment to reach a standards referenced judgement. For example, an exam to award a grade, a level or a pass/fail.
  • Assessment to provide evidence for a decision. For example, an A-Level qualification which provides evidence that the student is ready to begin studying a related subject at undergraduate level.
  • Assessment to have a specific impact on the behaviour of individuals or groups. For example, a science GCSE which helps to enforce the KS4 National Curriculum for science.

Newton maintains that each of these three areas of purpose needs to be considered carefully. “Where the three discrete meanings are not distinguished clearly, their distinct implications for assessment design may become obscured. In this situation, policy debate is likely to be unfocused and system design is likely to proceed ineffectively.”

In ‘Evaluating Assessment Systems,’ Newton distinguishes the purposes of assessment systems from their uses and identifies twenty-two categories of use for assessment on page 5 of the document.

He explains that an assessment’s reliability can deteriorate as more and more uses are added:

“…an end-of-key-stage test will be designed primarily to support an inference concerning a student’s ‘level of attainment at the time of testing’. Let’s call this the primary design inference. And let’s imagine, for the sake of illustration, that our assessment instrument – our key stage 2 science test – supports perfectly accurate design inferences. That is, a student who really is a level X on the day of the test will definitely be awarded a level X as an outcome of testing.

In fact, when the test result is actually used, the user is likely to draw a slightly (or even radically) different kind of inference, tailored to the specific context of use. Let’s call this a use-inference.

Consider, by way of example, some possible use inferences associated with the following result-based decisions/actions.

  1. A placement/segregation use. The inference made by a key stage 3 head of science – when allocating a student to a particular set on the basis of a key stage 2 result – may concern ‘level of attainment at the beginning of the autumn term’.
  2. A student monitoring use. The inference made by a key stage 3 science teacher – when setting a personal achievement target for a student on the basis of a key stage 2 result – may concern ‘level of attainment at the end of key stage 3’.
  3. A guidance use. The inference made by a personal tutor – when encouraging a student to take three single sciences at GCSE on the basis of a key stage 2 result – may concern ‘general aptitude for science’.
  4. A school choice use. The inference made by parents – when deciding which primary school to send their child to on the basis of its profile of aggregated results in English, maths and science – may concern ‘general quality of teaching’.
  5. A system monitoring use. The inference made by a politician – when judging the success of educational policy over a period of time on the basis of national trends in aggregated results in English, maths and science – may concern ‘overall quality of education’.

…when it comes to validation (establishing the accuracy of inferences from results for different purposes) the implication should be clear: accuracy needs to be established independently for each different use/inference.”

As far as I can see, the current Key Stage 2 tests are used, among many other things:

  • To gauge national school performance.
  • To measure individual school performance for accountability purposes.
  • To check individual pupil attainment at KS2.
  • To measure progress from KS1.
  • To establish progress expectations between KS2 and KS4.
  • To check if students are ‘secondary ready’ and therefore trigger the need for them to resit a similar test in Year 7.
  • To enforce the teaching of elements of the National Curriculum which it would be harder to enforce without the test due to ‘academy freedoms.’
  • To inform parents of individual students’ performance.
  • To enable potential parents to make informed decisions about school choice.

This is by no means an exhaustive list. In many cases, the Key Stage 2 data is arguably the most reliable source we have for these uses. However, I do wonder whether the system could be made more reliable, and whether all these other uses are making the tests less reliable in terms of their primary use.

Are you sure you’re not trying to do too much with the Key Stage 2 tests? Assessing the writing. 

In this TES article, Michael Tidd outlines the issues primary teachers have found: either shoehorning grammatical and punctuation elements into students’ writing, or shoehorning specific types of writing into the curriculum because they are most likely to feature the elements required to be successful in the moderation process.

This has come about as a result of the use of a secure fit approach to assessment. In her post ‘”Best fit” is not the problem’ Daisy Christodoulou outlines the problems with both best and secure fit assessment. She proposes other ways forward in her conclusion, advising that:

  • If you want to assess a specific and precise concept and ensure that pupils have learned it to mastery, test that concept itself in the most specific and precise way possible and mark for mastery – expect pupils to get 90% or 100% of questions correct.
  • If you want to assess performance on more open, real world tasks where pupils have significant discretion in how they respond to the task, you cannot mark it in a ‘secure fit’ or ‘mastery’ way without risking serious distortions of both assessment accuracy and teaching quality. You have to mark it in a ‘best fit’ way. If the pupil has discretion in how they respond, so should the marker.
  • Prose descriptors will be inaccurate and distort teaching whether they are used in a best fit or secure fit way. To avoid these inaccuracies and distortions, use something like comparative judgment which allows for performance on open tasks to be assessed in a best fit way without prose descriptors.
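To make the comparative judgement idea concrete, here is a minimal sketch of the underlying logic – not Daisy Christodoulou’s implementation or any specific tool, and all script names and judgements are invented. Judges repeatedly choose the better of two scripts, and the accumulated wins produce a rank order without any prose descriptors:

```python
from collections import Counter

# Comparative judgement sketch: instead of marking each script against a
# descriptor, a judge sees two scripts at a time and picks the better one.
scripts = ["Script A", "Script B", "Script C"]

# Each tuple is (winner, loser) from one paired judgement (hard-coded here).
judgements = [
    ("Script B", "Script A"),
    ("Script B", "Script C"),
    ("Script A", "Script C"),
    ("Script B", "Script A"),
    ("Script C", "Script A"),
]

# Count wins and total appearances for each script.
wins = Counter(winner for winner, _ in judgements)
appearances = Counter()
for winner, loser in judgements:
    appearances[winner] += 1
    appearances[loser] += 1

# Rank by win rate, best first.
ranking = sorted(scripts, key=lambda s: wins[s] / appearances[s], reverse=True)
print(ranking)
```

Real systems collect many judgements from many judges and fit a statistical model such as Bradley-Terry to them; a simple win rate is enough to show the principle that a rank order emerges from paired decisions alone.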

We use a similar process to this, as outlined here. One wonders whether a key problem for KS2 teachers is that the National Curriculum assessment model is trying to do too much and, to the detriment of the students, it tells KS3 teachers too little.

Are you sure you’re trying to do enough with the Key Stage 2 tests? What’s missing from KS4 at KS2?

OK, so this sounds contradictory after my first two questions, but there are key elements of the KS4 curriculum missing from the KS2 tests, which makes them less reliable for estimating likely performance at 16.

Firstly, in the 2016 Key Stage 2 reading test, there were only two opportunities for pupils to write beyond one line of text as an answer. I understand that this may make the questions more reliable in themselves. However, at Key Stage 4, reading assessment requires students to write at far greater length. I’m certainly not arguing for children of ten to have exams lasting two hours, which feature questions to which they have to write responses which fill two and a half sides of A4. I’m merely questioning whether a third of a side of A4 enables the brightest ten year olds to demonstrate their full potential in reading. There are possible implications here for secondary teachers in building, over time, the stamina of students in producing extended responses to reading.

Secondly, again in the reading test, all of the texts are unseen. This is fine if we’re just using the test as a tool for estimating performance at KS4 for English Language, where the three texts examined are similarly unseen. However, the English Literature GCSE now has parity with English Language in terms of school performance measures. Wouldn’t it make sense, then, to include at least one extract from a full text that students had studied in advance? I’m sure there would be great controversy over which text(s) should be taught, but I think the benefits for children would outweigh the arguments between teachers. One key aspect of literary study is that of making links between texts and their cultural, social and historical context. This used to be an Assessment Focus in the previous framework – though it only ever featured in a limited manner in the tests and the mark scheme. Reinstating it as part of the content domain could serve to make the link to literary study at Key Stage 3 more effective, as well as slightly raising the status of the study of history, culture and society at Key Stage 2.

Are you sure that resits are a good idea?

I’m not going to focus on the potential emotional impact of the resit process which has been written about here. I think we can deal with this additional, potentially high stakes test and help students to deal with it too, in the same way Chris Curtis argues we can support students through the stresses of the grammar test and just as we help students through GCSEs. Instead, I want to focus on curriculum and teaching as these will likely have the biggest impact on students in the long term (including on their emotions).

Imagine training two boys to do five lots of two minutes of keepy-uppies for a competition. One narrowly misses out on qualifying for the semi-finals of the competition and the other goes on to win.

Their coach carries on training the quarter-finalist with keepy-uppies with a tiny bit of full football training mixed in but moves the other on to playing football for a team, training them in set pieces, tackling and passing and with 30, then 60, then 90 minute practice matches every Saturday. The coach knows that both boys have the potential to be playing in the Premier League in five years’ time. Unfortunately for the first boy, keepy-uppies might be useful in terms of ball control, but they don’t prepare you for all aspects of the Premier League properly.

Likewise, the National Curriculum reading and SPAG tests are potentially useful gauges at 11 of certain isolated skills for English. However, the questions aren’t anywhere near as open as many schools’ Key Stage 3 tasks, which are designed to prepare students for the GCSE questions they’ll face in Year 11. In addition, as I’ve mentioned above, Key Stage 2 doesn’t focus on English Literature, which includes social, historical and cultural context. The students who are made to resit will need to improve their ability to answer the Key Stage 2-style questions, whilst also keeping up with the rest of their year in terms of these aspects of the Key Stage 3 curriculum.

All of this means we need to think strategically in order to limit the extent to which, whilst closing one set of gaps, we might open up a whole host of others. This brings me on to my questions for secondary leaders and English teachers.

How can you ensure you are clear as to what your students should know and should be able to do based on their KS2 experience and outcomes?

  • Do the reading – check out the National Curriculum for KS2, the frameworks for the Reading and Spelling Punctuation and Grammar tests and the framework for writing assessment so you fully understand the information you can gather from the children’s primary schools.
  • Find an effective way of communicating with the staff in your partner primary schools about the children and about their KS2 curriculum.
  • Get hold of the children’s work – though make sure you know under what conditions the work was produced. In my view, you need to know what they can do independently – though other views do exist.
  • Analyse the data. I’ve created this Question Level Analysis spreadsheet for the 2016 reading paper so that we can see which types of question students were most and least successful at. I’ll write more about it, if it’s worthwhile, once we’ve used it with this year’s cohort.
  • Remember though, that there is a gap between the end of primary and the start of secondary schooling, so…

How might you ensure you are clear as to what your students do know and are ignorant of, what they can and can’t do and what they’ve forgotten when they arrive with you in September?

  • In order to be able to make more valid inferences about your Year 7 students’ knowledge and abilities in September, you may want to pre-test. This should give you information to inform your planning, rather than being testing for testing’s sake.
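The question-level analysis mentioned above can be sketched in a few lines of Python. Everything here – pupil names, question labels, marks – is invented for illustration; the actual spreadsheet linked in this post will differ:

```python
# Question-level analysis sketch (illustrative data, not the real spreadsheet):
# for each question on the reading paper, work out the proportion of the
# available marks the cohort gained, then sort to find the hardest questions.

# marks[pupil][question] = marks awarded; max_marks[question] = marks available
marks = {
    "Pupil A": {"Q1": 1, "Q2": 0, "Q3": 2},
    "Pupil B": {"Q1": 1, "Q2": 1, "Q3": 0},
    "Pupil C": {"Q1": 0, "Q2": 1, "Q3": 1},
}
max_marks = {"Q1": 1, "Q2": 1, "Q3": 3}

def success_rates(marks, max_marks):
    """Return each question's cohort success rate: marks gained / marks available."""
    rates = {}
    for question, available in max_marks.items():
        gained = sum(pupil[question] for pupil in marks.values())
        rates[question] = gained / (available * len(marks))
    return rates

rates = success_rates(marks, max_marks)

# Hardest questions first - these are the areas to target in Year 7 planning.
for question in sorted(rates, key=rates.get):
    print(f"{question}: {rates[question]:.0%}")
```

Sorting by success rate rather than raw marks means questions worth different numbers of marks can be compared fairly across the cohort.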

How could you build on what’s gone before?

  • Build up students’ writing stamina, including extended responses to reading. Look at crafting the small details as well as structuring a whole response like this or like this.
  • Explain and model the writing process and the thinking which goes on behind it.
  • Continue to develop the grammatical knowledge which the students already have, increasingly expecting its application in analysis and consideration in crafting writing.
  • Use challenging texts – these children can read unseen texts with surprisingly sophisticated sentence structures and vocabulary.
  • Carry on building their general vocabulary and developing their use of technical terminology.
  • Keep them practising what they’ve done previously so they maintain or develop fluency.

I’ve only included a handful of ideas here – the list could clearly go on and on, but I realise you have other things to do.

How will you deal with the resits – if they happen?

Let’s consider this question sensibly and carefully, as quite a few people have already suggested that the resits will destroy students’ initial experiences of secondary school.

First of all, let’s return to the content domain defined in the framework for reading:

There was actually only one of these references (2e) which didn’t map straightforwardly into the GCSE Assessment Objectives when I produced this for the first post in this series:

This would suggest (unsurprisingly) that all of the KS2 domain is still relevant at KS3 and 4.

What about the grammar then? There must be a problem with that. Remember pages 8-12 of the Grammar, Punctuation and Spelling Framework which I mentioned in Part One. If not, or if you never looked at them in the first place, take a look at them. Now imagine that your students in Year 7 are so familiar with those terms that you could start teaching them properly to drop them into their analysis, or that you could use them when discussing slips in their writing. That might be nice, mightn’t it? There are some terms you might feel are less useful, some definitions you’d rather were changed, some terms you call something else – but having children arrive at secondary school knowing this stuff could be a game changer, couldn’t it?

So the reality is that the majority of this content will be relevant to our teaching at KS3 if we are following Ofsted’s sensible advice to ensure “teaching is of high quality and prepares pupils for more challenging subsequent study at Key Stages 4 and 5.”

Well, if it’s not the content that will limit our students, then surely it will be the question types – drilling the students who are being forced to resit in responding to these question types will almost certainly be detrimental, won’t it? So let’s look at those again.

Question Types

They’re a mixture of multiple choice, ordering, matching and labelling questions with short and long responses – hardly the question types of the devil. Though I’d want to shift the balance towards the extended responses, if students struggled with the basic questions – which were mostly about finding information and vocabulary – then this is where they need more practice, and this is how we need to amend the curriculum they experience in Year 7. We keep our challenging texts; we keep our focus on grammar and extended, independent writing; we keep our drive to improve responses to reading and all of the other things I’ve mentioned. But we build in more work on knowledge of vocabulary, as this is where the biggest challenge was in the reading test and, fortuitously, this will benefit these students in the longer term anyway.

When I started writing this, I didn’t expect to end up in favour of the resits. In the proposed form, I’m still not, even though I think I’m now beginning to develop a clearer plan of how to deal with them.

If we are to have retesting, a better model, in my view, would be to test later – either towards the end of Year 7 or the beginning of Year 8 – and to test more or all of the cohort. I’d also propose a literature element to the tests and a much stronger focus on decontextualised vocabulary testing.

These changes would act as a much firmer lever, I think, to achieve what Ofsted recommended in their Key Stage 3 report.


Why not try…reading the questions for once?

This is the second in a sequence of three posts about the Key Stage 2 National Curriculum and associated tests for English. In the first post, I explored the controversy surrounding the curriculum as a whole. This time, I’m looking in more detail at the tests themselves, in particular the 2016 papers. I want to see what all the fuss was about so as to unpick what lessons there might be, if any, for secondary English teachers. If you happen to be reading this as a primary teacher, you could probably skip the next bit as it’s an outline of the tests which you’re likely to be familiar with.

Why not literally try pulling the papers apart? No, I do mean it literally. Not metaphorically. Literally. Take the staples out, separate the pages, scatter them over the floor of the room, then dance around singing a ballad about the war between the family of the lion and the family of the bear.

There are two tests which are taken at the end of Key Stage 2 – one of which is split into two papers:

  • Reading
  • Grammar, punctuation and spelling Paper 1: Grammar and Punctuation
  • Grammar, punctuation and spelling Paper 2: Spelling

Reading

The reading test is made up of approximately 30-40 questions each year. Each question is, as of 2016, linked to one of the aspects of the “Content Domain” drawn from the comprehension section of the National Curriculum and listed in the reading test specification. If you want to find out more about content domains and content samples, this article from Daisy Christodoulou is certainly worth a read.

Content Domain

Though the connections aren’t perfect, a useful way of getting your head round this as a secondary teacher is to consider how each aspect of the content domain ties in with the Assessment Objectives for GCSE English. The table below attempts to do just that.

AO and Content Domain Comparison

What was clear from creating this grid was that, although there are undoubtedly clear connections between the skills required at Key Stage 2 and Key Stage 4, there is now more clearly (and rightly so I think) a wider gap between the two levels than there was when we were all using the same Assessment Focuses.

If you’re eagle-eyed, you’ll notice that the percentages for reading for AQA GCSE English only add up to 50%. The other fifty per cent comes from two writing questions – one a narrative or descriptive piece, the other a personal opinion piece. However, as this blog is only about the tests, I’m leaving those for now. I may look at them, alongside moderated, teacher-assessed writing, in a separate post in the future. What is worth noting here, though, is that it is the reading mark from Year 6 that is used by the government to calculate the secondary Progress 8 estimate. The writing mark is not used in this calculation. As a result, students’ outcomes at Key Stage 4 in both reading and writing are now predicted based on just their reading response at Key Stage 2.

In addition to the information about the content domain, the test framework also establishes a range of ways in which the complexity of the questions will be varied. I think this is a really useful list for the design of any English comprehension test. It is also important here in terms of gauging whether the test is more or less challenging year on year. The list obviously includes the difficulty level of the text. However, the level of challenge will also be varied through:

The location of the information:

  • The number of pieces of information the question asks students to find and how close together they are.
  • Whether the question provides students with the location of the information. For example, Paragraph 2.
  • Whether, in a multiple choice question, there are a number of close answers which act as distractors.

The complexity of the information in the relevant part of the text:

  • Whether the part of the text required to answer the question is lexico-grammatically complex.
  • Whether the information required to answer the question is abstract or concrete.
  • How familiar the information is to the students.

The level of skill required to answer the question:

  • Whether the task requires students to retrieve information directly from the text, to know the explicit meaning of a single word or multiple words, or to infer from a single piece or multiple pieces of information.

The complexity of the response strategy:

  • Whether the task requires a multiple choice answer, a single word or phrase from the text, or a longer response from the student.

The complexity of the language in the question and the language required in the answer:

  • Whether there are challenging words used in the question.
  • Whether students have to make use of technical terms in their answers that aren’t in the question or the text.

We’ll come back to these later when we look at the concerns around this year’s test.

Grammar, Punctuation and Spelling:

The Grammar and Punctuation Paper is worth 50 marks (there are approximately 50 questions) and takes 45 minutes.

The Spelling Paper, meanwhile, is worth 20 marks (there are 20 questions) and takes approximately 15 minutes, depending on how long the administrator takes to read the questions.

This table provides a sense of the weighting for each of these elements in the overall test.

SPAG Weighting
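Out of interest, the weighting in that table can be reproduced from the mark allocations alone. Here’s a quick sketch in Python – the figures are the 50 and 20 marks mentioned above; nothing else is taken from the official table:

```python
# Derive the percentage weighting of the two GPS papers from their
# mark allocations (50 + 20 = 70 marks in total).
papers = {"Grammar and Punctuation": 50, "Spelling": 20}
total = sum(papers.values())  # 70 marks overall

for name, marks in papers.items():
    # Format each paper's share of the total as a rounded percentage
    print(f"{name}: {marks}/{total} marks = {marks / total:.0%}")
```

So the Grammar and Punctuation paper carries roughly 71% of the marks and the Spelling paper roughly 29%.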

As with the reading test, there is a Specification for the Grammar, Punctuation and Spelling Test.  Again, this establishes the content domain from which the questions will be developed.

If you’re a secondary teacher, it would certainly be worth reading this booklet – in particular, Pages 8-12 for the grammar domain and Pages 12-13 for spelling. These pages are important as they tell you what students ‘should’ have learnt in these areas before the end of KS2.

Remember, though: if you find out from the data you receive from their primary school that a pupil did well in these tests, it doesn’t mean that they recalled everything in the domain, merely that they performed well on questions covering a supposedly representative sample of the domain. Neither does it mean that they will have retained all of that knowledge over the summer break. If you don’t constantly reinforce this knowledge, through further practice in similar isolated tests or through application during analysis or extended writing, they will most likely forget it or fail to make full use of it.

The table on Page 14 helpfully reminds us that students haven’t been assessed on their use of paragraphs in this test – this is instead done through the moderated teacher assessment of writing. This also serves to emphasise the point that students have not used the grammar or spelling in the test in context.

This is an isolated assessment of their knowledge, rather than the application of that knowledge.

Finally, there are some useful though controversial definitions, indicating what students ‘should’ have been taught about grammar. A number of these have proved contentious because, as we know, there isn’t always agreement over elements of grammar. I’m not going to go over this ground again by unpicking the 2016 grammar paper as I covered it in the last post. However, the reading paper this year caused a bit of a stir for a number of reasons, so I want to look at that in more detail.

Why not try asking then answering your own rhetorical question?

So, what was all the fuss about this year? Well firstly, on 10th May, the TES published this article, citing a teacher who claimed the reading “paper…would have had no relevance to inner-city children or ones with no or little life skills.” If anyone has any recommendations of texts for pupils with no or few life skills which would also be suitable for a national reading test, please do leave a comment.

The texts chosen this year do appear challenging. There’s plenty of demanding vocabulary. Haze, weathered and monument appear in the first text. Promptly, sedately, pranced and skittishly appear in the second. In the third, we have oasis, parched, receding and rehabilitate. There’s also some complexity added to the vocabulary by the sentence structures. In text two, we have: “A streak of grey cut across her vision, accompanied by a furious, nasal squeal: ‘Mmweeeh!'” and “There she dangled while Jemmy pranced skittishly and the warthog, intent on defending her young, let out enraged squeals from below. Five baby warthogs milled around in bewilderment, spindly tails pointing heavenwards.” Some teachers have carried out reading accessibility checks on the texts and claim the texts are pitched at a 13-15 year old age range.

The problem here, though, is that, as we saw earlier, the difficulty level of the test isn’t set solely through the complexity of the texts as a whole. I’m currently working on a Question Level Analysis spreadsheet for the paper, which I hope to share in the final post in this series. In producing it, it’s become clear that, although the first two texts are more challenging in terms of raw vocabulary, the questions for these texts often direct children to simpler and shorter sections, or even individual words. Children could pick up marks here without understanding every single word of the texts. I would imagine, though, that not all children realised this, hence the reported tears. As you move through the paper, the third text appears simpler in terms of vocabulary. Here, though, the questions are based on longer sections of the text and two require longer answers.
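The core of a question-level analysis like this is just a tally of marks awarded against marks available, grouped by content domain reference. A minimal sketch – the question-to-domain mapping and the scores below are entirely invented for illustration, though the domain codes (2a, 2b, 2d) loosely follow the reading test framework:

```python
# Hypothetical question-level analysis: tally one pupil's marks
# by content domain reference. Data is invented for illustration.
from collections import defaultdict

# (content domain reference, marks available, marks awarded)
responses = [
    ("2b retrieve and record", 1, 1),
    ("2a give/explain word meanings", 1, 0),
    ("2d inference", 2, 1),
    ("2b retrieve and record", 1, 1),
    ("2d inference", 3, 1),
]

# Accumulate [awarded, available] per domain
totals = defaultdict(lambda: [0, 0])
for domain, available, awarded in responses:
    totals[domain][0] += awarded
    totals[domain][1] += available

for domain, (awarded, available) in sorted(totals.items()):
    print(f"{domain}: {awarded}/{available} ({awarded / available:.0%})")
```

A grid like this makes it easy to see, for instance, that a pupil is secure on retrieval but dropping most of their inference marks – which is exactly the kind of gap a secondary teacher might want to know about in September.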

I don’t think the writers of the tests did this perfectly, but I don’t think they did a terrible job. I’ll look at some possible changes I’d like to see in the next post. 

There’s some truth, superficially at least, to the claim that the contexts of at least two of the texts are fairly upper middle class: a rowing trip to an ancestral monument and a safari adventure on the back of a giraffe. Who knew you could ride a giraffe? Perhaps my life skills are limited.

Underneath the French-polished veneer, though, these are essentially adventure stories: one about discovering family identity, the other about ending up in a dangerous situation after ignoring the rules. These are fairly familiar themes to children, even if the specific contexts may be alien to some of them. I’d worry if we were to find ourselves suggesting that “inner city” children, or indeed working class children, should be tested using only texts which describe their lived experiences.

In Reading Reconsidered, Lemov et al point to research carried out by Annie Murphy Paul in which she argues that “The brain does not make much of a distinction between reading about an experience and encountering it in real life.” Reading helps students empathise. This is no surprise, but it’s worth a reminder: at both primary and secondary level, it’s our responsibility as teachers of language and literature to expose students to a range of experiences through the texts we direct and encourage them to read.

img_3686

Why not try finding out about the problem?

Why not try waking up screaming after a recurrent nightmare in which you ride a white camel whilst being pursued by bees who are suffering with CCD?

I’ve spent a good part of today exploring the 2016 Key Stage 2 National Curriculum Tests for Reading, Spelling and Grammar. 

This was in part because I was intrigued by the unrest about a number of changes to the curriculum, assessment and testing model this year. In particular, I was interested to see what all the fuss was about in terms of the level of challenge in the reading paper. Mostly, though, as a secondary English teacher and Vice Principal of an all-through academy, my motive was to get my head around what we might glean from the data we receive about our 2016 Year 7 cohort when the results arrive, so that we might address possible gaps in their knowledge, limit a dip at the start of the secondary phase and prepare for the resits which some of them are likely to end up taking.

Wouldn’t it be good if we could drag something potentially positive out of an assessment which is viewed so negatively by some and distrusted by so many – something useful for the children we (both primary and secondary teachers) educate?

This post will be in three parts. In the first, I’ll focus on the current issues people take with the National Curriculum for English at Key Stage 2 and the three associated tests. The second post will look specifically at this year’s tests to see what all the fuss was about. In the third I’ll tentatively suggest some ways forward, with a particular focus on what secondary English teachers might do with the information from the tests, hopefully to the benefit of their students.  

Why not attend a ResearchED event dressed in a tweed jacket with leather elbow patches and chalk dust marks, convince everyone you’re a traditionalist by offering a reading of David Didau’s as yet unpublished, house sized edu-bible, but secretly start a breakout session on guerrilla Brain Gym warfare?

In the limbo period between children taking the tests and the public release date of 20th May, I thought it’d be worthwhile finding out more about the controversy surrounding them. 

So, first, a bit of history…

When the National Curriculum was introduced to UK schools in 1988, it attempted to establish the knowledge and skills which children should learn between the start of their schooling and the age of 16. In order to do this, it formally separated education into Key Stages. These were based on the structures which were already in place in the schooling system:

  • Key Stage 1 – Infant school 
  • Key Stage 2 – Junior school
  • Key Stage 3 – Lower secondary
  • Key Stage 4 – Upper secondary

Kenneth Baker’s original consultation document proposed the following purposes for the curriculum:

  1. Setting standards for pupil attainment
  2. Supporting school accountability
  3. Improving continuity and coherence within the curriculum
  4. Aiding public understanding of the work of schools

The curriculum has been amended a number of times, with reasons for these changes being put variously down to streamlining, coherence, relevance and rigour. 

The current National Curriculum, insofar as it is one, has very similar aims to those outlined in Baker’s initial consultation – though the means to the end are quite different now. It seems unlikely, for example, that the following would have been seen in the original National Curriculum:

“The national curriculum is just one element in the education of every child. There is time and space in the school day and in each week, term and year to range beyond the national curriculum specifications. The national curriculum provides an outline of core knowledge around which teachers can develop exciting and stimulating lessons to promote the development of pupils’ knowledge, understanding and skills as part of the wider school curriculum.”

The Curriculum and its associated tests have always been contentious, as outlined by Robert Peal in his polemic Progressively Worse. At different points in its history, the designers and redesigners of the curriculum have been accused of contributing to a “dumbing down” of education with the help of Mr Men or being overly elitist as a result of focusing “too much” on dead white males. At the moment (though some would argue differently) the complaints mainly swing towards the latter of these two. Let’s categorise some of the current debate before we look at the tests themselves. 

Why not try sitting on the fence?

Content

The biggest current issue in terms of the content of both the KS2 English curriculum and the tests relates to grammar. 

This article, from 2013 in The Guardian, neatly summarises the points Michael Rosen has to make against the National Curriculum’s treatment of grammar and the current testing methodology. Here, he states that he doesn’t disagree with the teaching of grammar in itself, but rather with the manner of teaching and testing which the curriculum prescribes.

On the flip side of the debate are Daisy Christodoulou and David Didau who view the teaching of grammar and linguistic terminology at primary level as a gateway to success at secondary school and beyond as an adult. 

Interestingly, there seems to be very little, if any, similar argument about the isolated teaching of spelling, and I doubt there would be if the government introduced an isolated vocabulary test. I know this is, in part, because there is far more consistent agreement about the spellings and meanings of words as a result of something called a dictionary, but I can’t help feeling that teaching novices a set of rules and conventions they can later be taught to bend and break would help them in the long run.

Validity and Reliability 

Some commentators argue that the tests are neither valid (that they don’t assess a decent sample of the domain of each subject) nor reliable (that they don’t assess consistently or precisely enough). Page 31 of the Bew Report deals with this and other issues in more detail.

Another argument against the National Curriculum tests is that they are unreliable because of issues with the accuracy of marking and faults in administration. The TES highlights these issues here. 

A number of anti-testers view teacher assessment as the answer to these problems. The NUT outline their case for a shift towards this kind of model in this document.

Teacher assessment can have its own pitfalls though, as Daisy Christodoulou identifies in this blog.  

What’s particularly concerning is that teacher assessment appears to be biased against poor and disadvantaged students.

High stakes – Under Pressure

This aspect of the debate can be divided into two very closely related issues:

  1. The tests put pressure on schools and teachers to act in perverse ways. 
  2. The tests put undue pressure on children. 

An effective summary of the arguments relating to the former can be found here from Stephen Tierney. 

In terms of the latter, just read this report of children’s reactions to the tests on the day. 

Meanwhile, Martin Robinson offers some balance to this part of the debate in his piece about not panicking.


Who uses the data anyway?

A significant issue with the National Curriculum tests and KS2 teacher assessments is that they create a divide between primary and secondary professionals at the exact point that they need to be working together most for the benefit of children. Many primary teachers believe the data is not only not used by their secondary counterparts but actively replaced by other information gleaned from other tests. Secondary teachers, meanwhile, feel that the data is unreliable due to inflation resulting from the high stakes nature of the results. Both sides of this argument are explored really well here.

The writer, Michael Tidd, who is a middle school teacher, finishes off by saying, “If I see an anomalous result, or a child who appears not to be working at the expected level, then I would think it only normal to speak to the previous class teacher. If only the same happened more frequently between schools.”

Perhaps a good starting point in this process would be for secondary teachers to have a better understanding of the nature of the test papers and this will be the focus of my next post.

Don’t evaluate a book by its cover. 

  
“Literature is the great garden that is always there and is open to everyone 24 hours a day. Who tends it? The old tour guides and sylviculturists, the wardens, the fuming parkies in their sweat-soaked serge: these have died off. If you do see an official, a professional, these days, then he’s likely to be a scowl in a labcoat, come to flatten a forest or decapitate a peak. The public wanders, with its oohs and ahs, its groans and jeers, its million opinions.” Martin Amis

In their own ways, what both Martin Amis and Stewart Lee’s comedy persona seem to be critiquing is the inability of the public to gauge and select quality in literature. Two weeks ago, Engchat focused on the way in which we teach this, under the umbrella heading of evaluation, in schools. A question arose around what evaluation actually is and why it’s often considered to be a higher level skill than analysis.

An answer to the first part of this question should be relatively easy – evaluation is forming an opinion, judging, assessing, appraising. It’s hopefully a lot more than groaning or jeering or oohing or ahing in Amis’ metaphorical garden of literature. 

The second part of the question is more challenging. Evaluation is different to, though often at its best interwoven with, analysis – a close and detailed examination to see either how something is made, how it works or to deepen understanding of its meaning. Evaluation in literary terms is not, in itself, more challenging than analysis. The level of challenge with either form of literary writing is set by the focus text and the question to which a student is writing a response. Whilst one evaluation question about a particular text might be a breeze for a student, the same student might be left to ooh and ah like Cantona by a different question about another text.

James Theobald has written excellently here about some of the key features of evaluative responses, including the need to judge the following four areas:

  • Outcomes
  • Impact
  • Alternative choices
  • Context

James’ post develops an analogy between the kinds of judgements made on bakery-based reality shows and those made by students of literature. The four key areas he identifies are, without a doubt, as relevant to literature as they are to flour-based products, and I completely agree that the ability to decide which baked treat or which literary work is best is reliant on secure knowledge.

I think there are other challenges though which are specific to literary evaluative writing and it’s these which I’d like to explore in this post. 

Challenge #1. “Question: How d’you really feel about…?”

A key issue with making sweeping statements about “evaluation skills” in English is that any such skills – if they even exist – are demonstrated through responses to a variety of examination question types.

Interestingly, none of the question types require students to merely give a set of statements about a text’s inherent or comparative quality. They are miniature academic essays rather than literary reviews or book group contributions.

Having gone through most of the specimen papers for the new GCSE and A-Level specifications, the questions I’d identify as evaluative tend to fall into, or between, five main categories. I’ve provided exemplification of each below.

1 Evaluate a text(s) in the light of a given viewpoint.

AQA Literature GCSE Paper 2

‘All animals are equal, but some are more equal than others.’ How far is this idea important in Animal Farm?

Write about:

  • what you think Orwell is saying about equality and inequality
  • how Orwell presents these ideas through the events of the novel.

2 Evaluate a viewpoint in the light of a given text(s).

AQA English GCSE Paper 1

Focus this part of your answer on the second part of the source, from line 19 to the end.

A student, having read this section of the text said: “The writer brings the very different characters to life for the reader. It is as if you are inside the coach with them.” To what extent do you agree?

In your response, you could:

  • write about your own impressions of the characters
  • evaluate how the writer has created these impressions

  • support your opinions with references to the text.

OCR English GCSE Paper 1

‘These texts are powerful because they show the importance of having freedom and strong personal beliefs.’ How far do you agree with this statement?

In your answer you should:

  • discuss what you learn about the importance of having freedom and strong personal beliefs
  • explain the impact of these ideas on you as a reader
  • compare the ways ideas about freedom and personal beliefs are presented.

OCR Literature Paper 1 

‘Money is the source of all Pip’s problems.’ How far do you agree with this view?

Explore at least two moments from the novel to support your ideas.

OCR English GCSE Paper 2

‘In these texts school is presented as a challenging place for the pupils.’ How far do you agree with this statement?

In your answer you should:

  • discuss your impressions of the pupils’ various experiences at school
  • explain what you find unusual about their school environment
  • compare the ways the writers present the pupils’ experiences of school.

Support your response with quotations from both extracts.

OCR Literature A-Level

‘A great surprise of the play is that Claudius has a conscience.’ How far and in what ways do you agree with this view?

AQA Language A-Level Paper 1

“Interaction with caregivers is the most important influence on a child’s language development.”

Referring to Data Set 1 in detail, and to relevant ideas from language study, evaluate this view of children’s language development.

3 Evaluate how far a text is typical of a particular genre. 

AQA Literature Specification B A Level Paper 2

Explore the significance of the crime elements in this extract. Remember to include in your answer relevant detailed analysis of the ways that Hill has shaped meanings.

4 Evaluate how far a text is influenced by its context.

EDUQAS Literature A Level

Consider the view that “spiritual or otherwise, Donne’s poems are consistently grounded in the physical world of his time.”

5 Evaluate the similarities and differences between two or more different texts or extracts from the same text (compare/contrast).

OCR GCSE Literature Paper 2

Compare how these poems present the effects of war on people’s lives.

OCR Literature A Level Paper 1

Discuss Milton’s portrayal of Adam and Eve’s actions and their consequences in the following extract from Paradise Lost Book 9.

In your answer explore the author’s use of language, imagery and verse form, and consider ways in which you find this extract characteristic of Paradise Lost Books 9 and 10.

Question level challenges emerge because: 

A generic approach to “evaluation skills” won’t work across all five different question types. Instead, teachers need to model the specific procedures needed to respond to the question type(s) their students are required to answer from Key Stage 3 onwards. Responding to each question type can, for novices, be broken down into a set of steps or procedures which, as students become increasingly expert, can be developed, manipulated or even subverted.

Sometimes, frustratingly, the mark scheme for a question can require students to do something very different to the expectations the question establishes. This is the case with the AQA English GCSE Paper 1 question above. The mark scheme for this question requires students to write in more of an analytical than evaluative manner. Though they’re asked to evaluate to what extent they agree with a statement, the exam board have said in training that the question is set up in such a way that students are more likely to do well if they simply agree. This is largely because of what I see as being a misuse of “evaluate” in the second bullet point which seems to me to be actually asking students to analyse. Issues like this can result in teachers focusing students on tricks to be successful when judged using the mark scheme rather than teaching students to write brilliant evaluative responses to the specific question posed. My advice would be to focus on the latter at Key Stage 3 and, pragmatically if you have to, focus on the former at Key Stage 4 or 5. As a result of this kind of issue, it’s really important that teachers are highly familiar with the mark schemes, that they keep up to date with exam board training and that, ideally, someone in the English faculty is a trained examiner.

The bullet points provided to supplement some of the questions can both help and hinder students. In the worst cases, they could actually lead students away from the evaluation of the original focus in the question. This can particularly be the case if they are used as a step by step structure by students, rather than a set of prompts to aid thinking. A good example of how this can go wrong is the OCR English GCSE question about whether the school experiences described in two texts are challenging. The bullet point prompts hint at aspects to be explored in the texts. However, a student using merely the prompts as a guide to responding could write very well about the bullet points without evaluating the statement at all. Presumably, these bullet points have been added to support students who would struggle with the question if they weren’t there. Irritatingly, this requires teachers to support students to jump an extra hurdle which was ironically put there to help them.

There are two other challenges which I think are so important that they deserve their own section.

Challenge #2. Question: How d’you like this knowledge that I brought?

I have my mum to thank in great part for my love of reading.

I well recall being read to every evening before bedtime and piling into my mum and dad’s room every morning with my older brother to have a story before any other part of our morning routine and working my way through all of the various middle class adventures of Peter and Jane and forcing our mother to read to us just before teatime until she was drifting off to sleep then waking her up to carry on and her listening to us reading our books from school and being taken to the library to max out our six book limit before heading off for rainy summer Wells holidays and Ladybird Books and Richard Scarry and Beatrix Potter and Roger Hargreaves and Helen Nicholl and Enid Blyton and Michael Bond and Roald Dahl and Richard Adams and Robert Arthur and Captain WE Johns and L Frank Baum and her encouraging me to read all kinds of things as a teenager, though mostly fantasy fiction and science fiction because that’s what I liked at the time, and her accepting that I was heading off to university to study a subject and read even more and at a much higher level because I loved it rather than because it led to a specific form of employment, oh, and funding much of it alongside my dad.

It’s hardly the first book of Aurora Leigh, but neither was it an upbringing in which being literate was a challenge.

My mum read too. She tells a tale of having read the whole of Heidi one Christmas Day as a child. My grandparents didn’t believe her and set her a quiz which successfully proved she hadn’t lied – good times. I also held a secret belief that I loved reading because (I was convinced) my mum had said she’d read War and Peace whilst she was pregnant with me. Turns out this was my younger brother. All I’ve actually got is a love of peanuts due to her cravings for monkey nuts whilst carrying me. Probably to escape from the cumbersome, Herculean challenge of bringing up three sons, mum moved on to reading mostly trashy historical novels about challenging romantic relationships, mostly set in the time of the industrial revolution, mostly (for some reason) set in docklands, mostly in Liverpool or Ireland. More recently, in her retirement, she’s joined a book group which has widened the scope of her reading once more, though I suspect it’s also to do with getting the latest gossip from the village.

Both my mum and I are therefore fairly widely (though not necessarily well) read in our own very different ways, which is a useful starting point for further study but isn’t enough for an evaluative question. Let’s take a look at this question to explore further what I mean.

Consider the view that “spiritual or otherwise, Donne’s poems are consistently grounded in the physical world of his time.”

I currently know relatively little about Donne and his poetry. In order to be able to answer this question more successfully, I’d need to know or be able to work out from my current knowledge the following, and more, at question, textual, contextual and procedural level:

Question level:

  • The meaning of “spiritual.”
  • The meaning of “physical world.”
  • The meaning of “consistently grounded in.”

Text level:

  • A range of quotations from Donne’s poems which support and refute the view expressed in the quotation – even though this is an open book exam.
  • A range of linguistic and structural terms which are relevant to Donne’s poetry.
  • How Donne uses this language and these structures to present aspects of spirituality and physicality.

Context level:

  • What spiritual means in today’s context and how this differs from what it may have meant to Donne in terms of religion, the church as an organisation and the afterlife.
  • How the spiritual world might have been viewed as different to the physical world in Donne’s time and how this may have impacted on his writing.
  • How the concept of a “physical world” might link to or have changed as a result of Renaissance learning.
  • How, when and why it’s possible for the spiritual and physical world to be seen as intertwined, particularly in relation to courtship and relationships.
  • The nature of people’s views of the physical and spiritual world at the time of Donne’s writing.
  • Whether there are other themes or aspects of context in which Donne’s poetry might be grounded.
  • Relevant views of critics or other readers. 


Procedural level:

  • How to express my own viewpoints in an academic manner, using appropriate vocabulary and structuring.
  • How to select and then integrate relevant and appropriate supporting evidence to back up my own views and potentially refute the views of others.
  • How to interweave knowledge of the text with knowledge of the influence of context.
  • How to explore and debate a range of other critical viewpoints.

It’s clear that James Theobald’s four areas (outcomes, impact, alternative choices and context) are all highly relevant here. Pulling apart a specific model of question in this way, though, can give even further clarity in identifying the knowledge required and the potential challenges students face in writing a response.

Challenge #3: I depend on me.

The final area of challenge I want to explore here stems from the fact that some of the evaluation questions, particularly, it seems, at GCSE level, are based on unseen texts. Unfortunately, this makes these questions something of a lottery, because success depends on students’ accumulated background knowledge. What it comes down to is the interplay between what students know, the lucky-dip contents of the text and their ability to express their judgements or evaluations.

This can lead to an overemphasis on teaching the procedural elements above, such as sentence starters to make students sound as if they are writing evaluatively. It doesn’t have to be this way though: we can shorten the odds in our students’ favour by increasing what they know, including through:

  • Focusing class time on really high-quality texts in order to build students’ knowledge.
  • Implementing a strategy to ensure that students read widely and often outside of the classroom.
  • Developing students’ vocabulary so that they are less likely to encounter unfamiliar words and more likely to be able to deduce the meaning of any new words which they do come across.
  • Providing high-quality models, explanations and modelling of approaches to responding to these question types, as well as opportunities to practise and receive feedback on how to improve.

Throw your hands up at me. 

If you feel I’ve missed anything, then please do comment on this post. I’d love to know what you think.

img_3596

Teacher Workload

The DfE has published three reports today on workload, relating to:

  • Marking
  • Planning and teaching resources
  • Data management

These are short reports, so I’d encourage you to take a look at them in full. Even so, to help my thinking and possibly yours, in the post below I’ve summarised the key principles established in each of them and outlined what I think are some of the key questions for Senior Leaders to consider in relation to each area. 

  

Marking:

The report maintains that “The Teachers’ Standards state…teachers should ‘give pupils regular feedback, both orally and through accurate marking, and encourage pupils to respond to the feedback’. There is not a requirement for pupils to provide a written response to feedback: it could simply be that pupils act on feedback in subsequent work.”

There are three central recommendations in the marking report. These are that marking should be meaningful, manageable and motivating. 

Meaningful: “marking varies by age group, subject, and what works best for the pupil and teacher in relation to any particular piece of work. Teachers are encouraged to adjust their approach as necessary and trusted to incorporate the outcomes into subsequent planning and teaching.”

Manageable: “marking practice is proportionate and considers the frequency and complexity of written feedback, as well as the cost and time-effectiveness of marking in relation to the overall workload of teachers. This is written into any assessment policy.”

Motivating: “Marking should help to motivate pupils to progress. This does not mean always writing in-depth comments or being universally positive: sometimes short, challenging comments or oral feedback are more effective. If the teacher is doing more work than their pupils, this can become a disincentive for pupils to accept challenges and take responsibility for improving their work.”

Questions to consider:

  • Does an expectation of the frequency of “deep marking,” for example that it should be fortnightly, ever impact negatively on the timing of feedback or on other aspects of practice?
  • Should you expect the same model of feedback from all teachers or should you allow them to vary their model of feedback in ways which might be more appropriate to the subject or task and more beneficial to students?
  • Do you ever look for quantity/regularity of marking over quality of feedback?
  • Do you ever expect teachers to mark to suit your monitoring rather than to impact on students?
  • Are your expectations manageable in different subject areas and will they remain manageable if changes are made to the school day? If not, how can you make them more manageable without making feedback less meaningful or motivating?
  • Do you have teachers who are repeatedly doing more work than their students?
  • Do any of your teachers accept work which students haven’t checked sufficiently themselves?

Data:

The report outlines the following four principles for data management. School leaders and their data processes should:

  1. Be streamlined: eliminate duplication – ‘collect once, use many times’
  2. Be ruthless: only collect what is needed to support outcomes for children. The amount of data collected should be proportionate to its usefulness. Always ask why the data is needed.
  3. Be prepared to stop activity: do not assume that collection or analysis must continue just because it always has
  4. Be aware of workload issues: consider not just how long it will take, but whether that time could be better spent on other tasks


Questions to consider:

  • Does the data you collect help you to progress as a school and help students and groups of students to make progress?
  • How do you know that the data you collect is accurate? Might collecting less make it more accurate and more useful?
  • Is the way you present data most helpful to the people who make an impact on student progress?
  • Are you getting the right data, to the right people, at the right time to make the right decisions?
  • Do you expect teachers to input any data more than once when it could be input just once and processed electronically into other forms?
  • Is there any data which you collect which isn’t used to have an impact on student progress or attainment?
  • Could the process of inputting data be made more efficient?
  • Is there any data which you could stop collecting without this having a negative impact on student progress or attainment?
  • Could you free up time for teachers to do things which would be more impactful by stopping the collation of any of your data?
  • Do you need to collect data as frequently as you currently do?
  • Do you need to provide any further training to any staff on the gathering, processing, analysis and use of data?

Planning:

The report establishes five principles of planning:

  1. Planning a sequence of lessons is more important than writing individual lesson plans
  2. Fully resourced schemes of work should be in place for all teachers to use each term
  3. Planning should not be done simply to please outside organisations
  4. Planning should take place in purposeful and well defined blocks of time
  5. Effective planning makes use of high quality resources

Questions to consider:

  • Do you expect teachers to produce detailed lesson plans which don’t benefit students?
  • Do you have collaboratively planned schemes of learning?
  • Are planning tools suitable for or flexible enough for different subject areas? 
  • Do you need to allow blocks of time for effective planning, perhaps instead of having smaller periods of PPA time?
  • Do your curriculum teams spend meeting time discussing curriculum planning rather than school business?
  • Do you and your curriculum leaders assign enough time to curriculum design and planning?
  • Do your curriculum teams have a shared understanding of what effective planning looks like in their subject area(s)?
  • How do you review time set aside for planning?
  • Do your teachers spend an unnecessary amount of time creating or searching for resources to suit the curriculum?
  • Do you ensure curriculum teams receive and/or share high quality curriculum training? 

Marking, marking, marking. A Triptych of Assessment Strategies.

Imagine that you’ve gone to buy a new car. You have certain requirements. Some of you will be limited by the need to find a “sensible” vehicle in which you can fit 2.4 children with the associated holiday luggage or family shopping when you’re rushing home to watch Strictly. Others of you will have an idealised (or even realistic) vision of yourself as a 007 Aston Martin type. Some of you might just want something to get you from A to B.

Whichever you are, now imagine that, when you went to buy the car, you were only able to make the purchasing decision based either on seeing the cars’ MOT certificates or on taking the cars for a test drive before you made your purchase. One or the other.

Would you choose just the MOT certificate or would you choose just the test drive?

The MOT will, of course, tell you whether the car can function in certain basic ways. Do the indicators work? Is the steering wheel strong enough? Are the oil levels right? Is the tyre tread depth legal? Many of these pass/fail judgements are taken in isolation. With the MOT certificate, you don’t find out if, once you’re out on the road, the car is appropriate for your needs.

With a test drive, you can get a sense of how the car feels, how the car functions, whether it matches your requirements. Does it take hills well? How does it steer? Is there enough leg room? You wouldn’t necessarily know though if it were safe or legal.

Aston Martin

One of the developments which we’ve been involved in, as part of United Learning, is a system of assessment called Key Performance Indicators. These are essentially elements of knowledge that we want our students to be able to master in English. They provide the MOT of English assessment. Here are just three of the Year 7 Key Performance Indicators:

  • Use the appropriate structure, conventions and vocabulary for formal letter writing
  • Identify specific words, language techniques and features of organisation, commenting on why these have been used
  • Identify, define and accurately use the following in a range of writing: types of verb, types of noun, adjectives, adverbs, pronouns

They’re broken down into parts so that you can test students’ ability to apply them in isolation in a pass/fail way. What you might want to do here is set up a series of assessments which check whether a student can accurately identify, use and define the full range of different types of noun or use the appropriate structure and conventions in formal letter writing. What this can provide you with is a set of data which looks like this at whole year group, class and individual pupil level:

KPI Feedback Sheet

This data set can inform your planning because you can see that whilst this student is able, in a set of exam questions, to use the past tense consistently and accurately and to make use of brackets for parenthesis, their use of speech punctuation is not accurate. Other students in the same class may have the opposite set of problems, and so a bank of pre-developed resources could be drawn on to address these students’ needs. This kind of diagnostic assessment sounds great – as long as the marking workload doesn’t prevent teachers from planning the in-class interventions which could make the difference.
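To make the shape of such a data set concrete, here is a minimal sketch of how pass/fail KPI results might be aggregated into class-level pass rates and a pupil-level intervention list. The pupil names and KPI labels are hypothetical, not taken from the actual framework:

```python
# Pass/fail KPI results per pupil (hypothetical names and KPI labels).
results = {
    "Pupil 1": {"past_tense": True, "brackets": True, "speech_punctuation": False},
    "Pupil 2": {"past_tense": True, "brackets": False, "speech_punctuation": True},
    "Pupil 3": {"past_tense": False, "brackets": True, "speech_punctuation": False},
}

def class_pass_rates(results):
    """Proportion of the class passing each KPI."""
    kpis = next(iter(results.values())).keys()
    return {k: sum(r[k] for r in results.values()) / len(results) for k in kpis}

def needs_intervention(results, kpi):
    """Pupils who have not yet passed a given KPI."""
    return [pupil for pupil, r in results.items() if not r[kpi]]

print(class_pass_rates(results))
print(needs_intervention(results, "speech_punctuation"))
```

The same aggregation can be run at year-group level, which is what makes it possible to target a pre-developed resource at exactly the pupils who need it.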

However, there are also risks to this kind of assessment model. The first is that, just as in the MOT/test drive model, with this isolated form of assessment you can find yourself assuming that students can do things when they are decontextualised which they actually can’t do in broader contexts. Just as there is a difference between a motor passing its MOT and performing how you want it to once it’s out on the road towing a caravan (if you like that kind of thing), there is also a great difference between adding speech punctuation to pre-written sentences which students know have been incorrectly punctuated and students writing a narrative from scratch which includes speech that they have correctly punctuated themselves.

Equally though, if you prescribe a set of written aspects which need to be included in a piece of writing, you end up with quite sterile writing by numbers. This approach can also result in markers not paying attention to whether a piece of writing is effective as an example of the form/genre the students were trying to craft, as they are so focused on a checklist of small and specific details relating to, for example, punctuation. In the worst instances, students could produce work which demonstrates achievement in the assessed KPIs but which is basically substandard. 

Another key risk with this form of assessment is that these elements and these elements alone can become the English curriculum. When teachers know that their students are being assessed in three weeks’ time and they are aware which Key Performance Indicators will be the focus of this assessment, there is a potential perverse incentive for them to teach to these KPIs even if they don’t know what the actual examination questions will be. It is possible to lose the wider scope of the curriculum or for wider expectations to dip.

This is why, in English, we are gradually developing the following set of assessment strategies which will include:

  1. Pretesting of the grammar and punctuation elements of the KPI framework in isolation to support teachers in identifying which students need further intervention in terms of their basic grammar knowledge. As far as possible, this will be done online so that teachers have the information they need and can spend their time reshaping lessons to support these students rather than marking the answers to the questions themselves.
  2. Periodic retesting of these grammar and punctuation elements, including the ability to add these aspects into or edit the accuracy of pre-produced texts. This will mean that we can check whether students will be able to demonstrate the ability to make use of the elements of grammar when they are required, but not by forcing them to apply them in an artificial way in their own writing unless they choose to because it’s appropriate.
  3. The use of comparative judgements. David Didau has written about our progress with this process here and here. We see this as providing teachers with a method of checking the quality of students’ writing more holistically, more reliably and more efficiently, whilst also enabling staff to set a standard within a year group rather than developing writing by numbers. Judgement questions will focus on the relative success of two students’ writing in a particular form or genre, such as narrative, speech or letter writing. 
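For anyone curious about the mechanics behind strategy 3, here is a minimal sketch of how a set of pairwise judgements can be turned into a rank order. The script IDs are hypothetical, and real comparative judgement engines typically fit a statistical model such as Bradley–Terry rather than using raw win rates, but the underlying idea is the same:

```python
from collections import defaultdict

# Each judgement records which of two anonymised scripts a teacher
# preferred, as a (winner, loser) pair. Script IDs are hypothetical.
judgements = [
    ("script_A", "script_B"),
    ("script_C", "script_A"),
    ("script_C", "script_B"),
    ("script_A", "script_B"),
]

def rank_scripts(judgements):
    """Rank scripts by the proportion of comparisons they won."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in judgements:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)

print(rank_scripts(judgements))  # best to worst
```

Because every script is judged against several others by several teachers, the resulting rank order reflects a shared standard across the year group rather than any one marker’s checklist.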

The first two of these strategies will act like an MOT, whilst the third will be the test drive. Alongside this of course, there are a range of formative assessment strategies which teachers use both in the moment and between lessons, which I’ll write more about in future posts.

When you’re buying a car, you don’t actually have to choose between the two forms of check: both are carried out, separately. I wonder whether, in trying to do both forms of assessment at once in our past assessments of writing and reading, we’ve ended up getting ourselves and our students in a muddle.

 

Fractal

“Somehow the vital connection is made.”

In my last post “Capiche?” I explored how there can often be a lack of clarity when we use the term understanding in education. This can lead to very different strategies with varying degrees of effectiveness being used with very different outcomes, all in the name of “deepening understanding.”

When I use the term understanding, I take it to mean the fractal links that we make between pieces of knowledge which are developed, strengthened, displayed and assessed through generating explanations, performances, products or creations.

Fractal

In this sense, I’m inclined to agree with people like Michael Fordham, here, who question whether it’s worth using the word understanding at all, arguing that the term confuses the issue. Essentially, what understanding boils down to is knowing some bits of stuff, knowing that there are connections between those bits of stuff and knowing how to express these connections. The more connections you’re able to express which are relevant to the particular field of study in the way which is appropriate to that area, the more you are said to understand. Fordham argues that it’s all just knowing – “knowing that,” “knowing links between” and “knowing how to.”

At the end of my last post, having confessed to a very limited understanding of Lego, I explained that my older brother had a propensity to nab my kits before I could use them and learn from them properly. This meant:

  • I had limited knowledge.
  • There were few connections between the elements of knowledge which I did have.
  • I wasn’t following any sets of instructions.
  • I wasn’t using worked examples.

In this post, I’ll explore how we can support students with similar issues in English through:

  • Key Stage 3 curriculum design
  • Teaching practices
  • Assessment methodologies.

“What do I do now?” The Key Stage 3 Curriculum

I suspect that, given the changes to national qualifications which have taken place this year, there are unlikely to be many English teachers who could confidently say that they have their curriculum nailed at the moment. Recent online discussions about English teaching have frequently focused on the substantial changes to GCSEs and A-Levels as well as the consequent amendments which people have made to their KS3 curricula.

In order to give you a flavour of the thinking behind our English curriculum, which I believe will be useful in the rest of this post, I’ve written previously on our school’s teaching and learning blog about what we mean by mastery and put together this booklet about overall mastery curriculum design. I’ve also written here about the process we used, with David Didau’s support, to generate our initial overview for the KS3 English curriculum which can be seen here:

ks3 curriculum-1

A number of teachers, when I’ve shown this to them, have questioned whether the pitch was too high – whether we were being overly ambitious. In planning and teaching the earlier units, we encountered a number of challenges with implementing a curriculum based on these texts, but pitch wasn’t one of them. In fact, these initial challenges actually related to us needing to more clearly define:

  1. What we want students to be able to do as a result of studying these texts
  2. What students need to know in order to be able to do these things
  3. How our assessment model enables us to see when students are successful in knowing what and knowing how so that we can do something in the classroom to support them
  4. The balance of time between studying such substantial and challenging literary texts and the other elements of an English curriculum

Hopefully, you can see how these link to my issues with Lego from earlier on.

In order to begin to resolve some of these issues, we started to think more clearly about planning with a set of end points in mind. The end point we had focused on originally, when planning with David, was that of exposing our students to the best literary fiction and non-fiction from our heritage and culture and this principle remains at the heart of the English curriculum we offer at all Key Stages. However, there are two other end points which we had not focused on as clearly as we might. The first of these is the long term end point of the terminal assessment of GCSE Language and Literature for all our students. The second is the medium term end point of each teaching cycle within a unit.

In terms of the former – the longer term end point – our curriculum, our teaching and our interventions need to prepare students to be successful in their lives, and one of the gateways to that is their qualification in English. In exploring the new AQA GCSE specifications we realised that, although we’d plotted a chronological sequence of inspiring texts, we hadn’t got the balance of exposure to non-fiction and opportunities for analysis of unseen texts right. We unpicked this through the production of this overview of the weightings of each element of the two specifications.

AQA Break Down

This kind of thinking led on to the creation of this second table (below) identifying the three overarching writing forms we want our students to be able to produce in the dark blue band, the knowledge students require in order to be able to produce them in the middle band and the kinds of high quality texts we need to expose them to in order for them to be successful in the lightest blue.

Overview

“Inbetweener” – Medium Term Planning

Just prior to the summer of 2015, we put in place a range of planning documents to support the medium term planning for our new scheme as well as knowledge organisers from a visit to Michaela Community School. The issue was that we ended up with too many different forms and, though we had the knowledge organisers, we’d designed them as an overview of knowledge from the whole of the text, rather than being more precise in identifying the specific knowledge which students need in the long term and for that unit. It was as if we were using instructions from five different Lego kits to build a model.

We have now streamlined our medium-term planning proformas and have a stronger model for the production of our knowledge organisers. I’ll be sharing some of these from the Rhetoric Unit in Year 7 on this page soon.

We also believe that designing or selecting model pieces of work for students will improve our planning process. To an extent, it will protect against teachers teaching to a set of assessment criteria. These models could also provide us with exemplars which make success more tangible to students and against which, ultimately, we can make comparative judgements of both the quality of their reading and writing combined.

“Today, Tomorrow, Sometime, Never” – The Problem of Time

The issue of time now arises. There are approximately 35 school weeks in the 2016-17 academic year. A number of these weeks are exam weeks at our school with limited teaching time. There will also be a number of other days with a collapsed timetable and therefore reduced teaching time. This leaves approximately 30 full teaching weeks. At Key Stage 3, this equates to approximately 150 lessons.

If you were to divide each year equally by the number of texts to study, it would allow approximately twenty-eight lessons (just over five weeks) for each text. However, within this time students are not merely studying each text’s plot, character and themes. They also require time for some explicit teaching of related grammar, punctuation, spelling and vocabulary and opportunities to write texts about and inspired by the texts. Furthermore, there need to be opportunities for them to respond to unseen poetry and non-fiction as well as to write narratives, descriptions and rhetorical texts.

As a result of this, we are now looking to move from five core literature texts in each year to three, supplementing these with related and unrelated non-fiction texts. The intention here is to increase capacity for more and even better teaching of the knowledge elements of English and to enhance opportunities for writing at length.

“My Favourite Game” – A model for teaching and assessment

As with our mastery curriculum, I’ve written previously about how our Teaching and Learning Model has developed over time and been codified in this document:

Codification Document

Within English, we have taken this and have been thinking through how each of the elements of “knowing that,” “knowing the links” and “knowing how to” can be built into our teaching and assessment. This has led to the creation of this set of strategies, which we have either already implemented or will be implementing over the course of the coming year. In future posts, I will expand on each of these in detail, but I’d certainly welcome feedback as to further avenues we might explore. What do you think we’ve missed in terms of helping students:

  • Encounter, develop and retain knowledge
  • Make connections between the elements of knowledge which they do have
  • Follow sets of instructions.
  • Use worked examples.

Teaching and Assessment Strategies

 

 

Capiche?

“They’ve told us that our students don’t show genuine understanding. It’s all just surface. They’ve said we’ve got to do something about it.”

“Yes. We need to use strategies that show we’re deepening their understanding.”

“Yes. They need to understand don’t they? But they need to understand deeper. Yes. Let’s do that.”

“Yes! Let’s. What? What are we doing?”

“Well. We’ll get them to understand.”

“Ok. Yes. I understand.”

I imagine that this is the kind of conversation which has gone on in more than no schools in the past. It’s the kind of conversation about education in schools which contains so many assumptions and has the potential to mean almost nothing – the kind of conversation which can result in either almost nothing happening or, more dangerously, lots of things happening that have very little impact on students. Neither of the people involved in the dialogue really grasps what the other is saying and neither knows what strategies are being discussed.

My next two posts will focus on understanding.

  • This post will look at what understanding is, if it exists at all, and how we check for it.
  • The next post will explore in more detail how we might go about developing students’ levels of understanding (or interconnected knowledge)  with a specific focus on the teaching of English.

Lego? Lego! Can’t hold me back any more.

I recently built this Lego model for my three-year-old boy.

2016-02-19 18.08.36

Awful isn’t it? However, it may surprise you to know that I was actually pretty proud of this dodgy space buggy because, tragically, it is genuinely better than I was capable of as a youngster. It’s well known in my family that I am “no good at Lego” – not the kind of Lego that you put together with instructions, but the kind of thing you build once you’ve finished with the original model.

As if to highlight this lack of Lego-ing skill, when I self-deprecatingly posted the photo above on Facebook, it took my younger brother only a matter of minutes to respond that, as a child, this had been about the limits of my space ship building capabilities:

Spaceship

So, how does this shared belief that I am “no good at Lego” connect with the problem of understanding understanding?

Hey Ho. Lego.

In order to comprehend understanding more fully, it is first useful to explore knowledge.

In terms of Lego and the image above, I can know that this piece is blue or this piece is grey. I can also know that this is a “flat piece” or this piece is a “block.” Likewise, I could know that this piece is a “2×4” or this is a “6×12” piece. I either know these things or I don’t. It is binary. This one’s blue. This one’s not blue. This one’s grey. This one’s not grey.

I don’t have to know what a grey, 6×12 flat piece is in order to select the blue, 2×4 block in the picture above. Among other things, I need to know what a block is, what blue is and what the numbers and symbol x stand for. At this level – the level of selecting one coloured piece from a pile – there is little interactivity between pieces of knowledge. 

I would doubt that anyone would argue that selecting one block demonstrates an understanding of Lego. So what would understanding look like? How would you check for understanding?

Would you check by asking someone to place a blue, 2×4 block in the exact centre of a grey, 6×12 flat piece? Would you check by asking someone to build an awful space buggy? Would you check by asking someone to build this, with instructions?

Spaceship 3

Or, would you check by asking them to build this without instructions?

Spaceship 2

Or, would you check if they could build the whole of Lego Land without instructions? Would that demonstrate an understanding of Lego? At what stage do we move from knowing to understanding?

I know that this bit is blue and I know that this bit is grey. I know that, when you connect this bit to these bits, it starts to look like something else. I know that, when I connect all of these bits together in the right way, it makes what looks like a whole world of stuff. Wow! I’ve made Lego Land.

To be successful in any of these tasks requires increasing amounts of interconnected knowledge. The more you are able to make links between the things that you know, the more you understand and the greater the level of challenge you can successfully take on to prove your level of understanding. You can understand a little bit or you can understand a lot – you have a lot or a little bit of interconnected knowledge. That is to say, what some people call understanding and others term interrelated knowledge, as Greg Ashman argues here, is like a fractal.

Perhaps more importantly, there is a tipping point at which, once you have a certain amount of interconnected knowledge, inferring and making new links becomes both easier and more accurate. In Lego terms, once you have created enough models of a similar kind from the instructions, you can begin to make models without the instructions which also look pretty convincing. In reading terms, once you know enough vocabulary, once you have been exposed to enough examples of high quality texts, once your teacher has expertly modelled analytical writing and succinctly explained how to craft a beautiful essay, then you stand a chance of producing one.

To unpick this further, let’s go back to my issue with being “no good at Lego.” How did this family in-joke come about?

Everything’s not so awesome after all.

In contrast to me, my older brother was the Yoda of Lego. I was Jar Jar Binks.

By the time I was old enough to play with the bricks, he had thousands of pieces sorted into similar colours and stored in a variety of recycled ice-cream tubs. I don’t recall ever being bought any Lego of my own, though I may be wrong. If I was, it wasn’t long before my kits were subsumed into these containers. As a result, I knew very well how to sort Lego blocks into their variety of shades but unfortunately was stuck at the level of binary knowledge. That’s a blue block. It goes in there. That’s a tyre. It goes in there. That’s a dodgy, Lego man toupee. It goes in there.

As the pieces were so swiftly swept into the tubs of doom, I had very little experience of building a kit into a castle or a space station or a fire station from its instructions. In retrospect, I can see that this meant the process of making a model from my imagination became much more challenging. When you have a kit in front of you and a clear set of instructions, you begin to build knowledge of common processes. You don’t need to pick the pieces out of the boxes. They’re all pre-selected in a bag and you just follow the steps.

If you follow the steps enough times, these common processes become interconnected knowledge and you start to develop what some people would describe as understanding. Without that knowledge, when you are trying to make something from scratch from your imagination, you’re working hard both to hold the image of the thing in your head and to make up the processes as you go. When I think about it like this, it is far less surprising that my ability to demonstrate an understanding of Lego was at a much lower level than my brother’s.

And there’s the final issue I want to address. My propensity to compare my own skills to my brother’s was actually damaging.

  • I had limited knowledge.
  • There were few connections between the elements of knowledge which I did have.
  • I wasn’t following any sets of instructions.
  • I wasn’t using worked examples.
  • I was unable to see that my inability to apply the limited knowledge I did have to demonstrate understanding was anything other than to do with the fact that I was “no good at Lego.” Things would always be this way.

In my next post, I’ll look at how we might address some similar issues in the English classroom, supporting and challenging students to develop connections between elements of knowledge so that they read and write better.