A little while back, I wrote about mock exams, assessment theory and advice following tests. You can read that post here, and I recommend you do before trying this one out. To summarise, I argued that:
- Tests are about sampling a domain
- We use the test to figure out what a student knows and what they don’t
- QLA doesn’t really help you with that
- AO analysis doesn’t really help you with that either
Broadly, if the questions in the test reveal that there is an area of content (a domain) your students don’t know, the response isn’t to focus on those questions, but to go back to that area of content (the domain) and reteach it.
My thoughts in that post were very general and theoretical. I didn’t go into detail about what to actually do beyond saying “go back to the domain.” In this post I want to go through what I actually do, and how I marry this bit of assessment theory with the increasingly popular idea of whole class feedback (WCF). In a nutshell, WCF is a way of checking students’ work and giving them feedback without faffing around with time-consuming written comments that don’t help anybody (for more on marking, see here). You take in the students’ work, read it all, jot down on a sheet of paper some common problems or things you want to address, bring those up in class, and expect students to amend and correct their work. Easy.
Into the classroom
So let’s say my students have just done a test. As I mark their answers right/wrong (no comments), I’m going to take notes on common errors – but not just any errors: the errors that I both can and want to do something about. Let’s look at some examples from a recent year 7 assessment on particles, with the first example being a question that starts like this:
The question goes on to ask this:
Notwithstanding the obvious typo, a number of my students gave really nice examples of control variables, like how long it was stirred for or the mass of sugar on each spatula. Despite that, they got this wrong, because the question says “described in the pictures” – and those variables are not mentioned.
So here’s the thing: the point of the question is to figure out students’ knowledge of this experiment and its variables. My students demonstrated that knowledge, but came away with no marks. If I’m honest, I don’t really want to spend lesson time talking to students about the ins and outs of reading the question at this point. They’re year 7, and I’m OK with what they know.
As such, this is a non-example. It’s not the kind of thing I want to go over in class so I don’t add it to my list.
Let’s look at this question next:
All my students bar one got this question wrong, despite the fact that almost all of them got the first three right. I scratched my beard, unsure what had happened. So I picked up the phone and called the maths department: “Hey, could you just let me know when you teach students about range? …OK…OK…right, not till the end of year 8. That explains it.”
This is what we call construct-irrelevant variation. The construct is what you are trying to measure – in this case, students’ ability to predict a state of matter based on melting/boiling points. This question tests something else as well – their knowledge of the mathematical range – which is irrelevant to the construct (their knowledge of states of matter). So in this case, not only am I not going to go over this in class, but I’m going to delete it from the test so that next year’s students don’t have the same problem.
Another non-example to be sure, but it’s important. What I’m getting at is that carefully thinking about your assessment data requires so much more than pretty colours on a spreadsheet, red pen comments on the paper or some time spent reviewing the questions in class. Let’s look at a positive example:
The correct answer to this is that solids can’t be compressed. A frustratingly large number of my students got this wrong by referencing things like:
“the particles have strong forces of attraction”
“the particles vibrate on the spot”
“there is no space between particles”
All of those things are true for solids, but they aren’t properties – which is what the question wants. There is a crucial distinction running the whole way through secondary science between the properties of an entire material and the explanation for why that material has those properties based on its structure. So solids can’t be compressed because there is no space between the particles. The former is the property, the latter the explanation based on the structure. A lot of my students didn’t get that. And even for the ones who did get it right, do I know that they fully understand the distinction between structure and property, or did they have a list of things in their head that they thought were right, one of which happened to be right, and they picked at random? I can’t possibly know, and it’s reasonable to assume that a misconception held by so many students is probably held by some of those who answered correctly too.
So, looking in the mirror, I realised that I just hadn’t emphasised this distinction enough. I went away and had a think, and came up with a diagram that looks like this:
Using an image like this will allow me to model exactly what I mean when I talk about the difference between structure and property. I won’t beam it up directly onto the board as I would rather draw it live, but you get the idea. I’ll follow this up with this one:
I’ve faded this one a bit, and turned it into an activity for the students to complete. Next up is this one:
Which has even less guidance. Students will complete it, I’ll circulate, and then we’ll review. Simple. Following this, students will get drill questions like:
These should really test their understanding. I’ll probably add some more questions on the end to try and challenge them a bit further and link this material to other topics they have learnt.
You get the idea: in many classrooms, the teacher would spend a couple of minutes going over the answer to that question, fielding a load of queries about “but I wrote…and I didn’t get the mark” or “but sir! David wrote exactly the same as me and got the mark and I didn’t”, and would move on to the next question, probably a little frustrated and exasperated. Little learning would take place. Instead, in a lesson where students don’t have the papers in front of them, I reteach the distinction using the sequence above.
One last example – with a question we’ve seen already:
Some students wrote “amount of water” for this one – which is incorrect. We don’t say “amount” as it isn’t specific enough; instead we say “volume of water.” I’m not all that fussed that they didn’t get this right, as it is something I mentioned a couple of times but it wasn’t core material (as it were). However, I do know I need to teach it properly at some point – I can’t just leave it as something I mention now and then and hope the students magically pick up. So I look at the next unit, find a practical with water in it, and write into my resources that I want to emphasise the distinction between “amount” and “volume.” When I come to prepare the unit after that, I’ll make sure there are a couple of questions on this (as well as some on structure and properties), and so on for each unit that follows.
In sum then, when going over a mock or any other piece of assessed work:
- Think hard about what the students’ responses tell you about their knowledge
- Split your findings into two piles: “things I care about” and “things I don’t care about”
- For things you don’t care about, either forget them or change the test for next time
- For things you do care about, split into two piles: “for reteaching” and “for building in”
- Reteach and build in!