At a recent department meeting, I once again kicked up a bit of a stink. Unfortunately, I didn’t articulate myself particularly well and my point was obscured beneath my habitual layer of bluster, carefully draped over a layer of educational self-righteousness. So I wanted to make my point a little clearer and, finding myself stranded from work with nothing but a laptop (long story), I wanted to do something I’ve been meaning to do for a while: start a blog.
At the meeting, we discussed how we were going to give feedback to students following their regular internal assessments. The model agreed upon was that following the assessment, students would receive a sheet showing which marks they received on which topics. There would be a kind of “next step” box at the bottom where teachers would write a target for the student to pursue based on their performance. So if they got a question on ionic bonding wrong, they would be given a new, slightly different question on ionic bonding to do. Time in class would be spent on students “reflecting and perfecting” their work.
My objection to this starts with the purpose of these types of assessments. Generally we have three reasons for “exam-conditions” type internal assessments:
- To summatively assess how a student is performing for monitoring and tracking purposes
- To formatively give a student information about their performance, provide feedback and improve performance next time
- To boost retention using retrieval practice.
To my mind, number 3 is actually the most important. Based on it, I like to test hard and test often, but this is a discussion for another time.
Generally, we were discussing the assessment in terms of point 2. What does this assessment tell me about what the student does know and does not know? What can I therefore do to boost student knowledge?
My problem with this is as follows. My anecdotal experience is that at GCSE level there are many students who, in response to a question, demonstrate that they have knowledge of the content. However, they drop a mark or two because they just haven’t used the key phrase necessary. There are other students who, despite clearly not knowing what they are talking about, manage to score a mark by using a key word or phrase. I’ve put a couple of examples of this below. This makes the assessment invalid: the result does not correctly tell me what a student does or does not know.
I also think that the assessments are unreliable: if the same student took the same assessment the next day, they would not give the same word-for-word answers as they did the previous day. Assessment maestro Dylan Wiliam has written about this at length. The long and short of it is that short, teacher-crafted assessments can experience wild swings in reliability – you just need to look at the bonkers GCSE remark statistics for proof of this.
So if that’s true, what is the point in spending time on each individual student, painstakingly looking at the content of the questions they got right and the ones they got wrong, then asking them to improve their answers to those specific questions? Not only would their reflection prove little more than “given a specific piece of feedback on a specific piece of work, the student can make a specific improvement”, but you don’t even know whether you have pointed your feedback in the right direction.
So what do we do instead? I have been totally swayed by those who argue that the focus of feedback should be switched around. Instead of the focus being on what the teacher feeds back to the student, re-align it to what feedback the teacher receives from the students. On an individual level sure, I don’t really know whether or not this student understands ionic bonding. But if I have been through 30 papers then I can get a much better read on what the class as a whole struggles with. You then spend your “reflect and perfect” time working together as a class on the one or two issues that you think are most important.
This process might be tacit and expertise-based, but in my opinion would be a hell of a lot more useful and efficient than individual student feedback on individual work. It doesn’t mean that the specific feedback can’t or shouldn’t be delivered. But in terms of where we choose to position our focus and emphasis – whole class beats individual student every time.
The boring stuff:
A couple of examples of students who would drop marks despite knowing the content:
A classic example is in GCSE rates of reaction. The rate of a reaction depends on the frequency of the collisions of the particles taking part. In a question about the effect of temperature on the rate of a reaction, if a student writes “as the temperature of a reaction is increased, there are more collisions”, they drop a mark for not writing “more frequent collisions.” Is it clear that they know what they are talking about? Probably. Do they get the mark? No.
Compare that to a student who knows bugger-all about the effect of temperature on the rate of a reaction. However, they know that the key phrase whenever talking about rates is “there are more frequent collisions.” They might not even know what “rate of reaction” means, or what things are doing the colliding, and would still get one mark.