Real Assessment vs Gummy Assessment 

This post forms the basis for my presentation at TLT17. Elements of it are drawn from a number of the other posts on this blog.

I’d like to think my children have fairly high levels of cultural capital. My daughter and I have read the Narnia series and The Secret Garden, and we’re just finishing off Salman Rushdie’s Haroun and the Sea of Stories. We do ballet lessons, swimming lessons, orchestra, choir. We go to National Trust properties in the summer. We probably seem so middle class that, even after the first Channel 4 series of Great British Bake Off, you’d think we were still struggling to cope with the departure of Mary Berry – we do actually see it as being equivalent to the advent of anarchy.

And yet my cosy middle class life of easy aspirations has been polluted by this: my little boy has discovered Real Food Vs Gummy Food. This is a challenge in the YouTubes, the main variation of which entails two participants selecting one of two covered items of food. One is a real item of food, like a banana or a fried egg or even something more substantial like a roast dinner, and the other is a Haribo-style gummy version. One child has to eat the real version, the other has the gummy item. They score the food out of ten using a carefully crafted marking rubric and then start again with more food.

There are hundreds, possibly thousands, of videos in the YouTubes featuring children and adults taking part in this kind of wacky challenge. I have actually used some of the time I have carefully built up in my attempts at a work-life balance to save you the need to do so, as I think there is at least a tenuous link between this and what I really want to focus on here. Plus, it’s always good to start with an anecdote.

My contention here will be that we often use a gummy form of assessment rather than a real one. In my training and for the first fifteen years of teaching, my knowledge of assessment was limited to a few dubiously remembered quotations from Inside the Black Box by Black and Wiliam, a load of ‘assessment for learning’ strategies and APP. I think there were many who trained at around the same time as me who fell into the trap of fairly carelessly pulling our favourite squidgy bits of synthetic assessment from the Starmix pack rather than being able to use a more real form of assessment – one which was carefully interwoven with curriculum design and deliberate practice.

How did we come to a point where we fell into our bag of gummy assessment? It begins, I’m afraid, with a confession. I used to be a Local Authority National Strategies consultant for literacy. My PGCE training year at Warwick was built, in large part, around National Strategies materials. During my NQT year and second year of teaching, the department I was a part of worked closely with National Strategy advisers, and at the school I moved to in order to become Head of English, we continued to use National Strategy consultants and support materials. When I got the job as consultant, it was as if my whole teaching career had led me to the unquestioning peddling of other people’s materials. This was essentially a delivery model. Schools, especially schools in which students were underperforming, did most if not all of what they were told rather than developing an understanding of the principles behind curriculum, pedagogy and assessment design.

In my consultancy work, I was guilty of advising others to use more group work, more discovery learning, less teacher talk, heavy scaffolding, short extracts of texts rather than full novels, plays or non-fiction texts and, in terms of assessment, the monstrous behemoth of the Assessing Pupil Progress materials with their hundreds of different objectives and progress grids. I’d go into other teachers’ classrooms to teach one-off lessons to ‘demonstrate’ how to do this, as if it were possible to do in an individual lesson.

This was a delivery model. Sticking plasters for bumps and bruises which often generated allergic rashes. When we use this kind of assessment, we over-complicate matters for our teachers, and this has a knock-on impact on the experiences of our students in the classroom. We should be better than this. I hope we are becoming better than this because we’re taking a more coordinated approach to curriculum and assessment design.

Curriculum and Assessment

Have you ever tried connecting two hoses together while the water’s flowing? It’s a tricky, exceptionally splashy business that’s easy to get wrong. There’s a pivotal point in education where you can say the same of curriculum design and assessment. The issue, I think, is that you can’t focus on assessment whilst divorcing it from curriculum design. If you do, you’ll end up soaking wet.

Let me illustrate this with the example of swimming.

Both of our kids are making their way through the stages of Aqualetes, based on the nationally recognised Swim England Learn to Swim programme. As you watch the sessions unfold over time, you can see the way everything has been carefully sequenced – from the way the instructors get children used to having water in their hair, through doggy paddle, breaststroke, back and front crawl, to developing the children’s knowledge of survival strategies. I’m still not quite convinced by butterfly or backwards sculling, but the rest all makes sense.

The other week, I watched as one of the teachers called a boy back to the side of the pool, re-explained the correct leg movement for breaststroke, showing him with her arms, then gave him two more opportunities to show her that he could do it correctly. The boy set off the first time and you could tell from the reduction in his speed and slight awkwardness in his movement that he was really thinking carefully about the corrections he’d been coached to make. His legs were moving better, but the front half of his body was submerging more between each stroke. This corrective stage was affecting his fluency but he was trying to do exactly as he’d been told. The second time through, his performance improved. It wasn’t, by any means, perfect but it was more fluid and resembled breaststroke more closely. This was Stage 5 swimming and he was moving closer to Stage 5 performance.

Knowing what’s required in Stage 5 and what the child should have been able to do in the previous stages enables the teacher to isolate and identify where the issue is for the learner. Assessment is easier if you understand the sequencing of prior learning.

But assessment and curriculum alone are not enough for students to improve their performance in a discipline. Once an aspect of the curriculum has been grasped, whether it’s back crawl, or simultaneous equations, or the use of subordinating conjunctions, students need to continue deliberately practising these granular elements or steps within procedures, both to improve and to maintain them.

In their book Peak, Anders Ericsson and Robert Pool propose that:

“Some activities, such as playing pop music in pop music groups, solving crossword puzzles and folk dancing have no standard training approaches. Whatever methods there are seem slapdash and produce unpredictable results.

Other activities, like classical music performance, mathematics and ballet are blessed with highly developed, broadly accepted training methods. If one follows these methods carefully and diligently, one will almost surely become an expert.”

Peak by Anders Ericsson and Robert Pool

Some time ago, Bodil Isaksen wrote a blog entitled A Lesson is the Wrong Unit of Time, in which she argued that we fall into a trap of attempting to plan learning into chunks of an hour – or however long a school’s lessons are – because that feels convenient.

This isn’t how learning works. I think, though, that many schools – certainly all the schools I’ve worked in or with – fall into a similar trap with curriculum and assessment design, for which a half term is the wrong unit of time. How many of us have, in the past or present, looked at the shape of the year and decided that we’re going to have six units for our English curriculum because that’s the way the academic year is split up in the calendar and these are the points at which we will be reporting to parents? If we want the children we teach to move from being novices in our subjects towards becoming experts, then we need to accept that it’s more complex than this at the level of curriculum and assessment, but less complicated than we try to make it at the level of teaching.

Curriculum alone

Swindon Academy’s curriculum design follows a mastery model. Mastery is a term which has now become commonplace in education. It’s used to mean so many different things that it runs the risk of becoming meaningless, so it’s worth explaining what we mean here by a mastery curriculum. For us, mastery can be contrasted with other approaches, such as spiral curricula, which require pupils to move through the content at a pre-determined pace, often changing units after four weeks or half a term because it is time to move on rather than because the students have understood the content contained within the module. Our model is based on four essential principles:

  • It’s pitched to the top with clearly mapped, carefully sequenced schemes of learning
  • There’s a shared belief that the vast majority of students can meet our high expectations
  • We have a clear model of teaching
  • There is a process of assessing and closing the gaps

You might be forgiven for thinking that students taught in a system of mastery would never need to return, again and again, to content they had mastered. However, we were keenly aware in establishing our curriculum that, if we wanted students to be genuinely successful, they would need to retain this knowledge way beyond the first testing. Even wizened Jedi need to practise to maintain their level of skill.

Yoda

Likewise, elite athletes returning to their sport have to find different methods to regain their form, and some never do. Once achieved, mastery can fade – and it may or may not return.

Jess Ennis-Hill

Meanwhile, many of us manage, on the second, third, fourth or umpteenth attempt, to pass a driving test, to claim at that point that we have mastered driving, and then almost immediately to begin developing driving habits which suggest we had never mastered the procedures in the first place.


A question which is commonly heard in school staff rooms across the country is: ‘Why don’t our students remember what they’ve been taught? How come when it comes to the exam, they seem to forget so much?’ We also wonder why our students don’t use and apply the basic rules of spelling, grammar and numeracy we have taught them – especially when they are writing in subjects other than English or using mathematical processes outside of their maths lessons. To understand why this happens, there are two models of memory and the mind which we believe it’s important for every one of our teachers to know.

The first model of the mind is from Daniel Willingham, which he discusses at length in his book Why Don’t Students Like School? Willingham identifies that the crucial cognitive structures of the mind are working memory (a system which can become a bottleneck as it is largely fixed, limited and easily overloaded) and long-term memory (a system which is like an almost limitless storehouse).

Willingham Memory Model

To exemplify the difference between being able to recall knowledge as a single fact and having to work through an unnecessarily laborious process when facts aren’t stored in long-term memory, Willingham uses the mental calculation 18×7.

18x7
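To feel the bottleneck for yourself, try the calculation without pen and paper. Unless you happen to know 18×7 as a single stored fact, you have to break it into facts you do hold in long-term memory and juggle the intermediate results in working memory:

18 × 7 = (10 × 7) + (8 × 7) = 70 + 56 = 126

Each retrieved fact and each partial sum takes up scarce working-memory space, which is why a student who has to count out 8 × 7 on their fingers has so little capacity left for whatever the question was actually about.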

It’s worth bearing in mind that, as Willingham admits himself, this is a highly simplified model. A range of other models divide working memory into a set of subsystems. Alan Baddeley, for example, has developed a model which includes a phonological loop, which deals with spoken and written material, and a visuo-spatial sketchpad, which deals with visual and spatial information, as well as a buffer for episodes or events.

Baddeley Model

The central executive in this model monitors, evaluates and responds to information from three main sources:

  • The external environment (sensory information)
  • The internal environment (body states)
  • Previous representations of external and internal environments (carried in the pattern of connections in neural networks)

These alternative models have implications for the ways in which we differentiate learning experiences for students. We don’t currently have a clear map of the information processing pathways and there is evidence that the feedback and feed-forward pathways are more complex than the diagram here shows, but this is a useful representation for us to think about in terms of teaching and learning.

For teachers, a key learning point from both of these models is that if nothing has changed in long-term memory, then nothing has been learned and nothing can be recalled or applied. Our teaching should therefore minimise the chances of overloading students’ working memories and maximise the retention in their long-term memories. Willingham maintains that this requires deliberate, repeated practice. The models therefore have implications for curriculum design, lesson planning, pedagogy and the strategies which students need to develop in order to move towards independence.

The second model of memory I think teachers should be aware of stems from Robert Bjork’s work on learning and forgetting. Again, I’m sure many of you are familiar with his work, but just to quickly recap: the storage strength and retrieval strength of a memory explain why we remember some things better than others. Storage strength is how well learned something is. Retrieval strength is how accessible or retrievable it is.

I’ve adapted the following diagram and explanation from David Didau’s blog.

Bjork - Storage and Retrieval Strength

Making learning easier causes a boost in retrieval strength in the short term, leading to better performance. However, because the deeper processing that encourages long-term retention is missing, that retrieval strength quickly evaporates. The very weird fact of the matter is that, when you feel you’ve forgotten how to do something because the task you’ve taken on is difficult, you are actually creating the capacity for learning. If you don’t feel like you’ve forgotten, you limit your ability to learn.
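If you like to see an idea moving, here’s a deliberately crude toy model in Python – my sketch, with invented numbers, not Bjork’s mathematics. Retrieval strength decays between sessions; each practice session restores it, adds to storage strength in proportion to how much had been ‘forgotten’, and higher storage strength slows later forgetting. Run it and the spaced schedule ends the month with far higher retrieval strength than the massed one:

```python
import math

def simulate(days, practice_days, gain=2.0, decay=0.35):
    """Toy model of storage vs retrieval strength (illustrative only,
    not Bjork's formal account). Retrieval strength decays daily;
    practice restores it, and the storage-strength gain is bigger the
    more has been 'forgotten' (assuming retrieval still succeeds).
    Higher storage strength slows all subsequent forgetting."""
    storage, retrieval = 1.0, 1.0
    for day in range(days):
        if day in practice_days:
            storage += gain * (1 - retrieval)    # effortful retrieval pays more
            retrieval = 1.0                      # practice restores accessibility
        retrieval *= math.exp(-decay / storage)  # storage slows the decay
    return retrieval

massed = simulate(30, {0, 1, 2})    # three sessions on consecutive days
spaced = simulate(30, {0, 10, 20})  # the same three sessions, spread out
print(f"Retrieval strength after 30 days - massed: {massed:.2f}, spaced: {spaced:.2f}")
```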

So we actually want students to feel like they’ve forgotten some of their knowledge. When learning is difficult, students make more mistakes, and naturally they infer that what they’re doing must be wrong. In the short term, difficulties inhibit performance, causing more mistakes to be made and more apparent forgetting. However, it is this “forgetting” that actually benefits students in the long term – relearning forgotten material takes demonstrably less time with each iteration. I think this connects to Robert Coe’s suggestion that the best proxy for learning is students having to think hard about challenging subject content. This could have the following implications for our curriculum design (a term plan along these lines is sketched after the list):

  • We should space learning sessions on the same topic apart rather than massing them together
  • We should interleave topics so that they’re mixed together rather than studied in separate blocks
  • We should test students on material rather than having them simply restudy it
  • We ought to have learners generate target material through a puzzle or other kind of active process, rather than simply reading it passively
  • We should explore ways to make learning appropriately challenging rather than easy
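Here’s what the first three bullets could look like as a term plan, sketched in Python. The topic names and review gaps are invented for illustration; the point is the shape – each week introduces something new while earlier topics keep resurfacing as low-stakes quizzes:

```python
from collections import defaultdict

def interleaved_plan(topics, weeks, review_gaps=(1, 3, 6)):
    """Sketch of a spaced, interleaved scheme: each topic gets an
    introduction week, then resurfaces for retrieval quizzes after
    each gap in review_gaps (in weeks)."""
    plan = defaultdict(list)
    for week, topic in enumerate(topics):
        plan[week].append(f"teach {topic}")
        for gap in review_gaps:
            if week + gap < weeks:
                plan[week + gap].append(f"quiz {topic}")
    return plan

# Hypothetical English units for the first half of the year:
topics = ["poetry", "rhetoric", "the novel", "non-fiction", "drama", "Shakespeare"]
plan = interleaved_plan(topics, weeks=12)
for week in range(12):
    print(f"Week {week + 1:>2}: {', '.join(plan[week])}")
```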

Assessment alone

We’ve found, in introducing our mastery curriculum as well as our teaching and learning model, that it’s useful to have a shared vocabulary so that teachers can have efficient and effective conversations about their work and their impact. This should also be the case with assessment practices. The following terms (all of which I’ve learnt from reading the work of Daisy Christodoulou) will, I think, be key to developing a shared understanding of assessment practices:

Domain:

The domain is the entirety of the knowledge from which an exam/assessment could draw to test a student’s understanding/ability. At Key Stage 4 and 5, this is defined by the specification, though there are also elements of knowledge from previous Key Stages which aren’t listed in specifications but that still form part of the domain.

Sample:

The sample indicates the parts of the domain which are assessed in a specific task or exam. It’s rare we’d assess the whole of a domain as the assessment would be overly cumbersome. Well designed assessments are carefully thought through. Samples should represent the domain effectively so that valid inferences can be made based on the data gained from the assessment.

Validity:

The validity of an assessment relates to how useful it is in allowing us to make the inferences we’d wish to draw from it. “A test may provide good support for one inference, but weak support for another.” (Daniel Koretz, Measuring Up) We do not describe a test itself as valid or invalid, but rather the inferences which we draw from it.

Reliability:

Daisy argues that if an assessment is reliable, it would “show little inconsistency between one measurement and the next.”

Test reliability can be affected by:

Sampling:

  • Most tests don’t directly measure a whole domain; they only sample from it, as the domain is too big. If the sample is too narrow, the assessment can become unreliable (the simulation sketched after this section illustrates why).
  • If the sample is always the same, teachers will strategically teach to the test to seemingly improve student performance.

Marking:

  • Different markers may apply a mark scheme differently.
  • One marker’s standards may fluctuate during a marking period.
  • Teachers can consciously or subconsciously be biased towards individuals or groups of students.

Students:

  • Performance on a particular day can vary between the start and end of a test.
  • Students perform differently due to illness, time of day, whether they have eaten, emotional impact of life experiences.
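To see the sampling point in action, here’s a quick simulation – a sketch with invented numbers, not a psychometric tool. A student has secured 70% of a 200-item domain; we repeatedly draw tests of different lengths from that domain and watch how much the scores bounce around:

```python
import random

def score_spread(true_ability=0.7, domain_size=200, sample_size=20, trials=1000):
    """Illustrative sketch: the narrower the sample, the noisier the score.
    Each domain item is either secured (1) or not (0); each trial draws a
    fresh test from the domain and records the percentage score."""
    rng = random.Random(42)
    domain = [1 if rng.random() < true_ability else 0 for _ in range(domain_size)]
    scores = [100 * sum(rng.sample(domain, sample_size)) / sample_size
              for _ in range(trials)]
    mean = sum(scores) / trials
    sd = (sum((s - mean) ** 2 for s in scores) / trials) ** 0.5
    return mean, sd

for n in (5, 20, 80):
    mean, sd = score_spread(sample_size=n)
    print(f"{n:2} questions: mean score {mean:.0f}%, standard deviation {sd:.1f}")
```

The same pupil and the same knowledge, yet a five-question test swings wildly from sitting to sitting; only the longer samples give scores stable enough to hang inferences on.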

Difficulty model

In this form of assessment, students answer a series of questions of increasing difficulty. A high jump competition or a GCSE Maths exam paper are good examples of this model.

Quality model

Here, students perform a range of tasks and the marker judges how well they have performed, most often in relation to a set of criteria. Figure skating competitions and English and history GCSE papers use this model.

General issues which Christodoulou identifies with the most common assessment models:

  • A focus on the teaching and assessment of generic skills can lead to teachers paying insufficient attention to the knowledge required as a foundation for those skills. For example, vocabulary, number bonds, times tables, historical chronologies or relevant, subject specific facts can be overlooked in favour of how to evaluate or problem solve.
  • Generic skill teaching makes deliberate practice far more challenging as it focuses on larger-scale success as opposed to fine-grained assessment and training. For example, formative assessment in sport may take place during a match rather than a drill. Here, the coach may miss an issue which a student has with a specific aspect of the sport and then not address it.
  • Using only exam questions for assessment, especially though not exclusively for subjects whose exams are based on the quality model, can hide weaknesses which are at a smaller scale.

To more fully grasp this, take a look at these two videos, imagine you’re a cricket coach and think about which clip would be most useful to support your formative coaching of a batsman and which would be most helpful in picking a batsman for your team.

In the first of the two clips, as a coach, you can see the player’s ability to repeatedly respond to a specific situation. The ball lands in almost exactly the same spot every time. As with the swimming coach earlier on, you can provide feedback on the response and potentially provoke an immediate change in processing. However, this drill doesn’t provide you with information about how the player will respond to the same situation in match play. The second clip may or may not provide you with this, as you could watch hours of footage without seeing the same kind of ball being bowled down the pitch. When it does appear, however, there are factors which could impact on the player’s reaction other than the bowling motion, the ball’s movement through the air and the bounce off the pitch: the pattern of preceding balls, the length of time the batsman has been at the crease, the relationship between the batsman and his current partner, the quality of sledging the batsman has been exposed to. There are, therefore, times when we need to drill our students in the really granular elements of our subjects to be able to provide them with high-impact, immediate feedback and, I believe, times when we need to allow them to play something more akin to the full match.

This requires a greater understanding of assessment design and the relationship between the curriculum and the assessment.

When we teach, we teach the whole of a subject domain. A (hopefully representative) sample of the domain is used for summative assessments. If the domain is colour and you’ve taught the whole spectrum, then this sample could be a good one.

Domain and Sample

The sample in terminal exams won’t be the same year on year though – just as no one cricket match is the same as another. If your students came out of a GCSE which had covered this sample, and you hadn’t taught them much about light blue, then you might be quite relieved.

Domain and Sample 2

If you were to turn this into a school subject, Spanish, and imagine that light blue is the equivalent of the aspects of language relating to family, and you’d spent quite a lot of your curriculum time on family, then you could feel like kicking yourself – though you may well be wrong to do so.

When producing assessments, there is a need to consider how the sample you’ve selected might skew your outcomes, the inferences you draw from these assessment outcomes and the actions you take as a result of these inferences. Depending on the assessment format, this sample could be used to make valid summative inferences if you’ve just taught the blue, yellow and green elements.

Domain and Sample 3

This sample, meanwhile, may be less effective in making valid summative inferences if you’ve taught the blue, yellow and green elements, but could be used well if you’ve just taught yellow and green. Having said this, it doesn’t assess the aspects of purple students were taught last year to see how much they’ve retained.

Domain and Sample 4

Two added complications arise in a subject like English language. The first is that the primary domain in the GCSE papers is a set of procedures, and the second is that there is what could be described as a hidden domain. As English teachers in AQA centres, we know that the pattern of questions will always be the same on both papers. If you take each strip of colour below to be a separate question, you could teach the procedures students need to follow for these questions ad nauseam. This would cover the domain which will invariably be sampled in the paper.

Domain and Sample

The second English language paper, though, can feature texts from any number of different domains: geography, history, religion, philosophy, science. Much of the vocabulary, grammar and linguistic knowledge required is also hidden if you only go about teaching the procedures required in responding to the question types. Again, this highlights the need both to drill students in the granular details and to give them opportunities for match play.

Hidden Domain and Sample

Bringing the hoses back together

Typically, in my experience at least, schools will map their curriculum and assessment so that it looks something like this:

Typical map of assessment and curriculum

Chunks of time are allocated to specific blocks of the curriculum. Often these blocks are dealt with discretely, assessed separately, students forget content and, as they are not required to recall it frequently, or potentially at all, they are less successful when their abilities are sampled in a terminal examination.

An alternative model is to carefully sequence content and interleave both content and assessment so that students are having to more frequently recall elements of the subject. This would look a little more like the model below. Each module introduces new curricular content, but also involves further assessment of prior content to secure a greater chance of increasing storage strength and retrieval strength.

Enhanced map of curriculum and assessment

To support our implementation of these principles in school, we’ve identified two aspects we want to address: a teacher element and a student element.

What are our next steps for the teacher element?[1]

  1. Ensure the curriculum is effectively mapped out and sequenced, establishing the factual and procedural knowledge which students will learn. Divide the knowledge from the curriculum into that which students need in the long term and that which students need for a specific unit. Ensure the bulk of curriculum and prep/revision time is spent on students focusing on retaining the most important knowledge. Build space into the curriculum to assess retention of knowledge from previous units which students need in the long term.
  2. Establish when students will be assessed both summatively (whole Academy calendar) and formatively (faculty curriculum overviews). As far as possible, this should take into consideration the completion of teaching of all elements, and enough time between teaching and testing both for revision and to suggest that our inferences are based on learning rather than performance.
  3. Ensure that the purpose of each assessment is clear to all involved in its design, delivery, marking and provision of feedback. The format of the test should enable the function to be achieved. It should also ensure that the inferences drawn from the results are as valid as possible. The main purposes of our summative assessments include re-streaming students, reporting to parents, establishing attainment and progress over time in teaching groups and cohorts of students to report to governors. A key question for you here is whether your summative assessments are reliable enough to enable you to validly infer that certain students are working at “age related expectations” in your subject. Formative assessments should be used to identify potential gaps in knowledge, misconceptions or deficiencies in ability that can be subsequently addressed.
  4. Design assessments aligned with this timing and purpose. Using Christodoulou’s principles for summative and formative assessments will help here. Over time, two separate banks could be built up: one of summative and one of formative assessment tasks. For summative assessment, it’s also worth asking yourself the following questions, based on those found in Paul Bambrick-Santoyo’s book Driven by Data. Do assessments in each year:
    • Address the same standard of skill/content as the end of Key Stage assessment?
    • Match the end of Key Stage assessment in format?
    • Enable students to move beyond that year’s content/skill level?
    • Reassess previously taught content which is necessary to retain until the end of the Key Stage?
  5. Trial the use of comparative judgement in subjects where a substantial proportion of assessment uses the quality model (see the sketch after this list).
  6. Preview assessment tasks to ensure that:
  • Questions don’t provide clues as to the answer.
  • Questions are actually testing that students have learned or can apply the knowledge you wanted rather than something else.
  • Questions are worded accurately and any unnecessary information is removed.
  7. Review assessments after use to establish whether they provided you with information that enabled you to make the inferences you wished. Make amendments to assessment items, where required, if they are to be reused in the future.
  8. Standardise the conditions in which summative assessments take place and the ways in which they are marked.
  9. Ensure that, where data from assessments is used to make key decisions, the data is sufficiently reliable. For example, when moving students between sets, data from more than one assessment is utilised.
  10. Develop the teaching and learning review which forms part of each teacher’s CPD Booklet to ensure that teachers have action plans in place to address gaps in attainment.
  11. Establish procedures for Curriculum Leaders to review and summarise teachers’ action plans, sharing them with their Line Managers for quality assurance.
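On comparative judgement (step 5): instead of marking each script against a rubric, judges repeatedly decide which of two scripts is better, and a quality scale is fitted to the pattern of wins. Here’s a minimal sketch of that fitting step using the standard Bradley-Terry model – the pupil names are invented, and this isn’t the algorithm of any particular CJ platform:

```python
def bradley_terry(judgements, iterations=100):
    """Fit a quality scale to pairwise (winner, loser) judgements using
    the standard Bradley-Terry MM update (Hunter, 2004)."""
    scripts = {s for pair in judgements for s in pair}
    wins = {s: 0 for s in scripts}
    for winner, _ in judgements:
        wins[winner] += 1
    strength = {s: 1.0 for s in scripts}
    for _ in range(iterations):
        new = {}
        for s in scripts:
            # every comparison involving s contributes 1/(strength_a + strength_b)
            denom = sum(1.0 / (strength[a] + strength[b])
                        for a, b in judgements if s in (a, b))
            new[s] = wins[s] / denom if denom else strength[s]
        total = sum(new.values())
        strength = {s: len(scripts) * v / total for s, v in new.items()}
    return sorted(strength.items(), key=lambda kv: -kv[1])

# Six paired judgements over three pupils' essays (hypothetical):
judgements = [("Ava", "Ben"), ("Ava", "Cal"), ("Ben", "Cal"),
              ("Ben", "Ava"), ("Cal", "Ben"), ("Ava", "Cal")]
for script, quality in bradley_terry(judgements):
    print(f"{script}: {quality:.2f}")
```

The appeal for quality-model subjects is that judges tend to be more reliable at comparing two pieces of work than at pinning a single piece to an absolute rubric.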

The student element

Over the past two years, a number of our faculties have been trialling the use of knowledge organisers and low-stakes testing or quizzing as part of the process of curriculum design. Different models have emerged, sometimes with different purposes and using different frameworks. We want to make the use of knowledge organisers, self-testing and flashcards a core part of our students’ prep across subjects.

In order to secure the highest impact of this work, we need to evaluate the models currently in use to generate a set of shared principles and uses for these tools. We need to be sensibly consistent in our approach, keeping in mind the differences between the subjects that we teach. There are certainly potential benefits to the use of both knowledge organisers and quizzing, but we need to ensure these are harnessed effectively in each subject area.

Why should we bother with quizzing and knowledge organisers? Aren’t they just fads?

The term knowledge organiser could be a fad, but the idea of organising knowledge into schemas is certainly not – it has been going on for centuries.

As subject specialists, having carefully mapped our curriculum through from Key Stage 3 to Key Stage 5, it would be both wise and desirable to look for the most effective methods to ensure that students retain as much of the knowledge we are teaching them from one year to the next and, of course, into their lives beyond school. 

On a more pragmatic level, in order to support our students to do well in the new GCSE qualifications, we need to help them develop methods for retaining knowledge in the longer term. These qualifications are now more demanding. They require students to retain knowledge for longer, as they are based increasingly on terminal examinations rather than coursework, and they ask more of students in terms of problem solving.

Even if it weren’t for this, though, over the course of the last century, hundreds of cognitive science studies have ranked practice testing as one of the most effective methods of improving the retention of information and procedures in long-term memory. “In 2013, five cognitive scientists (Dunlosky, Rawson, Marsh, Nathan, Willingham 2013) collated hundreds of such studies and showed that practice testing has a higher utility for retention and learning than many other study techniques.”

The table below is taken from John Dunlosky’s “Strengthening the Student Toolbox”. In this paper, he argues that, “while some [study] strategies are broadly applicable, like practice testing and distributed practice, others do not provide much – if any – bang for the buck.” Low-stakes practice testing is one of the most effective study methods.

Dunlosky

Alongside this sits Cognitive Load Theory and the work of John Sweller. Our teaching and learning handbook outlines the idea that our working memories have limited capacity, coping with only approximately 7±2 items of information. Once we go beyond these limits, our thinking processes become bogged down. These ideas have been refined over the last couple of decades into a set of instructional principles called Cognitive Load Theory. In their book Efficiency in Learning, Sweller et al. argue that, “Taken together, the research on segmenting content tells us that:

  • Learning is more efficient when supporting knowledge, such as facts and concepts, is taught separately from main lesson content.
  • Teaching of process stages should be preceded by teaching the names and functions of components in the process.
  • Teaching of task steps should be segmented from teaching of supporting knowledge such as the reasons for the steps and/or concepts associated with the steps.”

Well-designed knowledge organisers or schemas and effective self-testing could therefore be useful in reducing the cognitive load on our students when they are applying knowledge in performance, production or problem solving.

Knowledge Organisers

In a blog post entitled “Knowledge Organisers: Fit for Purpose?”, Heather Fearn describes how she looked at lots of examples of knowledge organisers and found that there was often confusion over their purpose, which caused the documents to be muddled in design. As a result, they were confusing for students to use. She identifies three valid purposes:

  • A curriculum mapping tool for the teacher
  • A reference point for the pupil
  • A revision tool for the pupil and possibly parents

Given that we have Schemes of Learning for teachers to make use of and text books for students as a wider reference resource, I believe a useful definition of a knowledge organiser at Swindon Academy would be:

A structured, single A4 sheet which students, teachers and parents can use to create low-stakes practice quizzes (a sketch of how such a quiz could be generated follows the list below). The sheet identifies the raw knowledge which needs to be recalled swiftly in order to be successful in the assessment for a specific unit. This could include:

  • Definitions of terms, concepts or key ideas
  • Components of a process
  • People/Characters involved in a chronology
  • Processes/Chronologies/Narrative summaries
  • The steps in procedures
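To show how mechanically quizzes can fall out of a well-structured organiser, here’s a sketch in Python – the terms and definitions are invented stand-ins, not one of our actual organisers:

```python
import random

# A hypothetical section of a Year 7 English organiser: term -> definition.
organiser = {
    "simile": "a comparison using 'like' or 'as'",
    "metaphor": "a direct comparison stating one thing is another",
    "alliteration": "repetition of initial consonant sounds",
    "protagonist": "the leading character in a narrative",
}

def make_quiz(organiser, n=3, seed=None):
    """Turn one section of a knowledge organiser into a low-stakes
    retrieval quiz: show the definition, recall the key term."""
    rng = random.Random(seed)
    terms = rng.sample(list(organiser), min(n, len(organiser)))
    return [(f"Which term means: {organiser[term]}?", term) for term in terms]

for question, answer in make_quiz(organiser, seed=1):
    print(f"{question}  [answer: {answer}]")
```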

Use the following to check the formatting of your knowledge organisers.

  • Identify knowledge which will be required beyond the end of the unit and knowledge which is only required in the medium term.
  • Include the absolute key words so that peer markers of quizzes can check they have been used in answers.
  • If you have to write more than one sentence, consider your phrasing. This will make your own explanations clearer and more efficient when you speak.
  • Don’t have too many sections/categories – four or five are probably sufficient.
  • If including images, ensure these are the same format as those you will use in your actual lessons.
  • Spellcheck your knowledge organiser.
  • Don’t include ‘thoughts to consider’.
  • If it isn’t essential it shouldn’t be there.

Self-testing

In his blog, “One Scientific Insight for Curriculum Reform”, Joe Kirby of Michaela Community School poses the question: “what’s the optimal format and frequency of low-stakes testing or retrieval practice?” He cites various research papers from Roediger et al. In terms of format, he maintains that “Applied research suggests [well designed] multiple-choice questions are as effective as short-answer questions. The latest research study is as recent as March 2014, so this is a fast-evolving field, and one to keep an eye on.” With regard to frequency, he adds, shorter and more frequent quizzes outperform longer and less frequent ones. However, current research suggests that the impact on long-term memory is maximised if this testing is spaced and interwoven.

He then goes on to summarise the work of a number of cognitive psychologists from the book “Make It Stick” in the following set of principles for self-testing:

  • Use frequent quizzing: testing interrupts forgetting.
  • Roll questions on work from the previous term forward into each successive quiz.
  • Design quizzing to reach back to concepts and learning covered earlier in the term, so retrieval practice continues and learning is cumulative.
  • Frequent low-stakes quizzing in class helps the teacher verify that students are in fact learning as well as they appear to be and reveals the areas where extra attention is needed.
  • Cumulative quizzing is powerful for consolidating learning and connecting concepts from one stage of a course to new material encountered later.
  • Simply including one test of retrieval practice in a class yields a large improvement in final exam scores, and gains continue to increase as the frequency of testing increases.
  • Effortful retrieval makes for stronger learning and retention. The greater the effort to retrieve learning, provided that you succeed, the more learning is strengthened by retrieval.
  • In virtually all areas of learning, you build better mastery when you use testing as a tool.
  • One of the best habits to instil in a learner is regular self-quizzing.

What are our next steps for the student element?

  1. Design knowledge organisers which fit the definition above for Schemes of Learning in Years 7-9.
  2. Use the checklist above to review the knowledge organisers.
  3. Devise self-tests or drills which could be used to assess students’ retention of the knowledge. This should include:
  • Completion of a blanked out timeline
  • Matching definitions and key terms
  • Labelling key diagrams from the organiser
  • Answering questions based on the knowledge organiser
  • A crossword with definitions from the organiser as the clues
  • Translation exercises for MFL using vocabulary from the organiser
  • Short answer questions and multiple choice questions based on the knowledge from the organiser
  4. Generate a prep schedule for students for self-testing of the sections of each knowledge organiser. In the first week, students will produce flashcards based on the organiser and, in future weeks, students will use Look And Say and Cover and Write and Check (LASACAWAC) or an online quizzing platform for a specific proportion of their prep each week (see the sketch after this list).
  5. Ensure knowledge organisers are stuck into each prep book.
  6. Train students in how to use their knowledge organisers.
  7. Ensure that, as students move through the key stage, they are frequently testing themselves and being assessed in class on knowledge from previous units which they require at the end of the key stage.
  8. Add the schedule to E-Praise (the online homework record we use).
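For the prep schedule in step 4, the flashcard rotation could follow a simple Leitner-style system – a sketch with illustrative intervals, not a prescribed scheme; LASACAWAC remains the routine for each individual card:

```python
from collections import defaultdict

# Review interval per box, in days: new or incorrect cards daily,
# secure cards at growing gaps. (Illustrative numbers only.)
INTERVALS = {1: 1, 2: 3, 3: 7}

def review(card_boxes, results):
    """One self-testing session: a correct 'write and check' promotes the
    card to a longer interval; an error sends it back to box 1."""
    for card, correct in results.items():
        card_boxes[card] = min(card_boxes[card] + 1, max(INTERVALS)) if correct else 1
    return card_boxes

boxes = defaultdict(lambda: 1, {"photosynthesis": 2, "osmosis": 1})
boxes = review(boxes, {"photosynthesis": True, "osmosis": False})
for card, box in boxes.items():
    print(f"{card}: box {box}, review every {INTERVALS[box]} day(s)")
```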

Here are the knowledge organisers for terms one and two for Year 7.

[1] Some of the following are taken from Phil Stock’s sequence of blog posts “Principles of Assessment.”
