It is a truth universally acknowledged that however daft a teaching idea is, as soon as you point this out online someone will tell you that it works just fine for them. Inevitably, you will be told that it is just a tool, and it depends on how you use it. You might also be told off for setting up false dichotomies, for not being interested in general student development, for not focussing enough on a vanishing minority of students and cases, or for committing emotional abuse (yes, really). If you look at some of the replies to the tweet below, you will see the full range of responses:

I suppose it is true that everything works somewhere. Humans are complicated beings, and you will always have things that by rights shouldn't work but, for some strange reason, in some strange case, end up working anyway. We have outliers and anomalies: it's the nature of dealing with large biological populations.

In addition to the strangeness of organic behaviour, a further factor is that when we talk about impact we have to talk in relative terms. So, for example, we know that for novice learners discovery-based learning is worse than fully guided instruction. As far as the research is concerned, that isn't controversial. But that doesn't mean that students in a discovery classroom will learn nothing. It just means that they will learn less than students in a fully guided classroom. There will be more misconceptions, greater fragmentation in students' mental models and just generally less stuff learnt. We don't normally get control conditions in our classrooms, so we can rarely compare teaching techniques against each other. We then end up thinking that this approach or technique is great when in reality it didn't work as well as something else would have done. We get further tricked by the engagement problem: sometimes it really looks and feels like our students are learning because of how engaged they are. But what if they are engaged precisely because they are not learning anything?

Often, as in the Twitter thread above, people will say that it's all about how you do it. If, for example, you do a marketplace activity like the one above but you plan it in such and such a way, then it definitely works. Following the logic above, the straightforward retort is that someone who spends an hour before a lesson planning a marketplace activity would have done better to spend that time thinking about their explanations or the quality of student practice. They spent a long time on the marketplace, and it got results: but not as good results as if they had done something else.

I have a bigger response, though, that runs a bit like this. If your chosen technique only works under really specific circumstances, with really specific contextual variables all lining up nicely in a perfect Goldilocks zone, is it a good idea to recommend it to others? If 95% of the time people do marketplace activities they don't work, does the fact that you once managed to get one to work mean that others should follow your lead? If it's more likely to fail than to succeed, maybe it should be ditched?

Group work is a great example of this. There is some evidence that group work is effective, provided a number of conditions are met*. The problem is that those conditions are incredibly hard to meet and require a huge amount of preparation in advance. Is it worth it, compared to proper explicit instruction? Our question has moved from "does this work somewhere?" to "how likely is this to work here?"

Often you see research trials coming back saying that a particular intervention didn't work because people didn't implement it properly. The researchers then conclude that next time, we need to do the implementation training better. But that treats the two as separate things when they are not: the intervention and its ease of implementation are tied together. If something is very difficult to implement, it's not a good intervention.

As an example then, let’s say I present evidence that shows retrieval practice is a better revision technique than highlighting. We can map possible responses and counters:

  1. Highlighting works for my students.
    Counter: that’s great, but your students are outliers. Most students are not outliers.
  2. Highlighting works for my students, and I can tell because they were really engaged.
    Counter: that’s all very well, but they probably weren’t learning anything.
  3. I’ve taught my students how to highlight using a really sophisticated method that works for them.
    Counter: that’s great, but how long did it take you? What could you have done with that time otherwise?
  4. I’ve taught my students how to highlight using a really sophisticated method that works for them.
    Counter: it’s way easier to just grab a retrieval roulette and do retrieval practice.

I’m quite a bit into photography, and plenty of analogies come to mind. The camera is just a tool, and the quality of the picture depends on the planning, knowledge, experience and vision of the photographer. But it would be silly to say that the quality of the outcome doesn’t depend at all on the tool, or that some tools aren’t better than others. Sure, Ansel Adams could probably have done some amazing photography with my dusty old Canon, but it would have required way more time, way more effort and probably way more luck than using the massive and expensive Hasselblad camera he actually shot with. I somehow doubt he would then turn around and say “well, I’ve managed to get some amazing pictures with a dusty old Canon, so given a choice between the dusty old Canon and the Hasselblad, people should choose the dusty old Canon.” If you have a choice between an easily used tool which all the evidence suggests should work and something that may work in really limited cases, why would you not choose the former?

Among others, here are some tools that we know work:

  1. Direct Instruction
  2. Explicit instruction
  3. Retrieval practice
  4. Teaching for knowledge

They work, and we have a moral duty to use them. So next time someone knocks your favourite teaching method and you are tempted to point out that it’s just a tool, remember: it might be a tool, but some tools are better than others.


*I wrote about this extensively here, and I’m pretty sure even this particular evidence base is a little shaky.