Thursday 31 December 2015

Two Stars and a Wish... For Better Feedback: Part 3

In this post, I want to take some time to consider the importance of peer feedback for exam classes.  I think it's important that, when planning peer/self assessment activities, we (as teachers) consider what we want students to gain from the activity. When it boils down to it, there are a few possible motivations: students marking their own work frees up your planning time; you need a fairly simple plenary task; you want them to see models of work produced by other students; or you want them to have a greater understanding of how they're being assessed. For me, it's always the last of these.

For exam classes, I often liken the exam to an elaborate tap dance: you have a short amount of time to demonstrate a variety of steps and the smallest mistake can cost you your grade.  Just like with the tap dance, practice is key, as is understanding precisely what the examiners wish to see.  You can use other analogies, of course, but the essence is simple: the more students understand what's being assessed, the easier they will find it to produce this in exam conditions.


I've tried a number of ways of helping students get to grips with mark schemes; often this means creating an 'understandable English' version that cuts through the vague examiner terminology that litters mark schemes in my subject.  I find that this is best done with students, rather than handing them a diluted version straight off.  After this, I then use various tools to ensure self/peer assessment is productive.

First and foremost is the checklist.  This works well when assessing writing skills, as students need to apply certain skills to 'check off' the examiner's list.  However, it's clear that these need to be differentiated.  As I teach mixed ability at KS4, I will often use this as a chance to stretch key students by giving them the checklist for the next band up, or giving them a choice between that of their target grade and the next one up.  This then gives students a clear set of criteria to check their work against.

Another tool that works well, especially for reading skills, is the 'humble highlighter' that I looked at in a previous post (http://goo.gl/fmL2Hw).  This means that students can identify where they've used certain skills, using steps as in the slide below.


It is also useful to model assessment to students, as effective training will help them develop a successful approach to assessing their own work.  As part of this, I often give them a selection of strengths/targets that they might look to apply for different grades, which they can use if they are struggling with the mark scheme criteria.

Another place that students can look for suitable strengths and targets is in my own marking: they can use this to identify whether the piece of work has met past targets, or whether it needs to develop further.  I've found the use of 'writing sprints' beneficial for this (an idea that was recently suggested by a colleague). In this, students write in five-minute chunks.  At the start of each 'sprint' they set themselves a single target for that five minutes.  At the end, they annotate where they've met the target and set themselves another target for the next paragraph.  This constant reflection also encourages them to proofread as they go along in the exam, rather than leaving it until the end, when they risk running out of time.


The final tool that I use to help develop student-led assessment is me.  By reading their feedback as I mark their books, I can check their understanding of the mark scheme.  One example is the use of 'ambitious' vocabulary: I recently marked a book where a student had said that they'd met their target of using it through words such as 'however' or 'definitely'. Whilst the student had improved the range of vocabulary used, it was clear that these choices would not be ambitious enough for the exams.  Checking their self assessment then allowed me to leave some questions for them about what ambitious vocabulary is, making links to some of the bonus spellings I'd given in a recent spelling test as examples.

Overall, I feel that this understanding of how to assess helps students in two ways.  Primarily, it means that they understand what examiners are looking for and, as a consequence, are better able to display these skills under exam conditions.  However, it also develops their independent revision skills, meaning that they now have a way of reflecting on their progress when revising outside of the classroom.

Sunday 6 December 2015

Two Stars and a Wish... for Better Feedback: Part 2

In my last post, I reflected on how I had used highlighters as a tool for self and peer assessment. These are great for getting students to highlight where specific skills have been applied, but even with that in place there is still a crucial barrier to effective peer assessment: untrained students.  This can be a particular problem if your students are always swapping work with the same person: if their peer assessor gives poor targets, they will consistently get a bad deal in comparison to other students, which can be especially problematic in lower sets or mixed ability classes.

In order to give every student more 'bang for their buck', I've taken to using gallery feedback when students peer assess extended pieces of writing.  However, I've adapted the method that I've shamelessly stolen from others on Twitter to support students in covering a wider range of skills for feedback, whilst keeping the great opportunities that it offers to students (in terms of reading other students' work as models and also in the engagement with mark schemes that it provides).

I also want to stress how I feel about allowing students to talk during this process; some teachers I know of have encouraged students to complete gallery feedback in silence, though I feel that, if feedback is to be effective, students should have the opportunity to discuss the targets they're giving.  I also believe that making the task an unstructured 'wander around the room' could be problematic, as it leaves students free to gather for a chat with friends and stops each child from getting the same feedback coverage.

My approach, then, has always been fairly systematic: students start the feedback with several post-its (one for each stage of the feedback) that they write their names on.  The naming gives a degree of accountability, as well as enabling students to follow up on any targets that they want further clarification on.  After this, students move one seat over and assess the first piece of work for a specific skill (range of vocabulary, for example), highlighting and annotating where the writing demonstrates that skill and using the post-it to write a strength and a target that is left with the piece of work.

Of course, this could get very tedious, and it would also be ineffective to keep the same assessment criteria when students move on to the next piece of work, as there is little value in the task if the student ends up with several post-its giving the same feedback.  It is for this reason that I change the criteria each time students move to the next seat, allowing them to assess a range of skills as well as ensuring that each student gets different areas to work on.

This will usually take a full lesson to do thoroughly, as I ask students to share the strengths and targets that they are giving along the way, to model the process for others.  It also means that, in the second part of the lesson, I can give students a chance to choose which target(s) they wish to work on by redrafting a section of their work and then self assessing their progress at the end of the lesson.

I've found that this is particularly good when preparing students for assessments/exams (at both KS3 and KS4), as it clearly demonstrates how they will eventually be assessed on a number of skills through one task.  On top of this, it allows them to see other examples as models (something I encourage by getting them to reflect on what they have learnt from reading the other examples before they begin their redraft).

If you have tried anything similar, please let me know your thoughts by commenting or tweeting me (@borismcdonald). I'm also going to be tweeting some examples of this feedback in the following week.